```
%load_ext autoreload
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import warnings
import numpy as np
from astropy.table import Table
import matplotlib.pyplot as plt
from matplotlib import rcParams
warnings.filterwarnings("ignore")
plt.rc('text', usetex=True)
rcParams.update({'axes.linewidth': 1.5})
rcParams.update({'xtick.direction': 'in'})
rcParams.update({'ytick.direction': 'in'})
rcParams.update({'xtick.minor.visible': 'True'})
rcParams.update({'ytick.minor.visible': 'True'})
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '8.0'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '4.0'})
rcParams.update({'xtick.minor.width': '1.5'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '8.0'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '4.0'})
rcParams.update({'ytick.minor.width': '1.5'})
rcParams.update({'axes.titlepad': '10.0'})
rcParams.update({'font.size': 25})
```
### Using the summary file for a galaxy's stellar mass, age, and metallicity maps
* For a given Illustris or TNG galaxy, we have measured basic properties of its stellar mass, age, and metallicity distributions.
* Each galaxy has data for the 3 primary projections: `xy`, `xz`, `yz`.
- We treat them as independent measurements; the result for each projection is saved in its own file.
* Each galaxy has three "components":
- `ins`: in-situ stellar component, meaning the stars formed in the halo of the main progenitor.
- `exs`: ex-situ stellar component, meaning the stars accreted from other halos.
- `gal`: the whole galaxy, a combination of `ins` and `exs`.
```
data_dir = '/Users/song/data/massive/simulation/riker/tng/sum'
```
* Naming of the file, for example: `tng100_z0.4_hres_158_181900_xy_sum.npy` (see the parsing sketch after this list)
- `tng100`: simulation used. `ori` stands for the original Illustris; `tng` for Illustris TNG. The number indicates the volume of the simulation, e.g. `tng100`, `tng300`.
- `z0.4`: the snapshot is at z=0.4.
- `hres`: high-resolution map with a 1 kpc pixel; there is also `lres` for the low-resolution map with a 5.33 kpc pixel.
- `158_181900`, in the format `INDEX_CATSHID`: `INDEX` is just the index in the dataset; `CATSHID` is the subhalo ID from the simulation, a unique number that identifies a halo or galaxy.
- `xy`: projection used in this file.
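A small illustration of pulling these fields out of a file name (plain `str.split`, not a `riker` helper):
```
# split an example file name into the fields described above
fname = 'tng100_z0.4_hres_158_181900_xy_sum.npy'
sim, snapshot, resolution, index, catsh_id, projection, _ = fname.split('_')
print(sim, snapshot, resolution, index, catsh_id, projection)
```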
* Let's try reading in the summary files for all 3 projections of one galaxy:
```
xy_sum = np.load(os.path.join(data_dir, 'tng100_z0.4_hres_15_31188_xy_sum.npy'))
xz_sum = np.load(os.path.join(data_dir, 'tng100_z0.4_hres_15_31188_xz_sum.npy'))
yz_sum = np.load(os.path.join(data_dir, 'tng100_z0.4_hres_15_31188_yz_sum.npy'))
```
* The structure of the summary file is just a dictionary that contains the results from different steps of analysis, each of which is also a dictionary of data.
* For each projection, there are 4 components:
- **info**: basic information about the galaxy from the simulation, including subhalo ID, stellar mass, halo mass (`M200c`), and average age and metallicity values. This can be used to identify and match galaxies. It also includes the size and pixel scale of the map.
- **geom**: basic geometry of the galaxy on the stellar mass map: `x`, `y` for the centroid; `ba` for the axis ratio; `pa` for the position angle in degrees (`theta` is the same, but in radians).
- **aper**: stellar mass, age, and metallicity profiles for each component measured in a series of apertures. More on this below.
- **prof**: 1-D profile information from the `Ellipse` profile fitting process. Explained in detail below.
```
# Take a look at the basic information
xy_sum['info']
# Basic geometric information of the galaxy
xy_sum['geom']
```
### Aperture measurements
* It is easier to see the content of the summary using an `astropy` table:
* Key information:
- `rad_inn`, `rad_out`, `rad_mid`: inner & outer edges, and the middle point of the radial bins.
- `maper_[gal/ins/exs]`: aperture masses enclosed within different radii (using elliptical apertures).
- `mprof_[gal/ins/exs]`: stellar mass in each radial bin.
- `age_[gal/ins/exs]_w`: mass-weighted stellar age.
- `met_[gal/ins/exs]_w`: mass-weighted stellar metallicity.
- There are also flags to indicate whether the age and metallicity values can be trusted. Any flag value >0 indicates an issue.
* Notice that the `age` and `met` values can be `NaN` in the outskirts.
* It is also possible that `mprof` is 0.0 in an outer bin for some compact galaxies. Both cases are easy to mask out, as sketched below.
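A minimal sketch of masking those problematic bins before plotting or fitting (it assumes the columns behave like NumPy arrays and uses only the column names listed above):
```
aper = xy_sum['aper']
# keep bins with finite age/metallicity and non-zero mass
good = (np.isfinite(aper['age_gal_w']) & np.isfinite(aper['met_gal_w'])
        & (aper['mprof_gal'] > 0.0))
rad_use = aper['rad_mid'][good]
age_use = aper['age_gal_w'][good]
```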
```
Table(xy_sum['aper'])
# Check the differences among the three projections in terms of mass profiles
# They are very similar, but not the same
plt.plot(np.log10(xy_sum['aper']['rad_mid']), np.log10(xy_sum['aper']['mprof_gal']), label='xy')
plt.plot(np.log10(xz_sum['aper']['rad_mid']), np.log10(xz_sum['aper']['mprof_gal']), label='xz')
plt.plot(np.log10(yz_sum['aper']['rad_mid']), np.log10(yz_sum['aper']['mprof_gal']), label='yz')
plt.legend(loc='best')
_ = plt.xlabel(r'$\log\ [\rm R/kpc]$')
_ = plt.ylabel(r'$\log_{10}\ [M_{\star}/M_{\odot}]$')
```
### Summary of `Ellipse` results.
* We use the `Ellipse` code to extract the shape and mass density profiles for each component. So, if everything goes well, the `prof` summary dictionary should contain 6 profiles.
- `[gal/ins/exs]_shape`: We first run `Ellipse` allowing the shape and position angle to change along the radius. From this step, we can estimate the profile of ellipticity and position angle.
- `[gal/ins/exs]_mprof`: Then we fix the isophotal shape at the flux-weighted value and run `Ellipse` to extract 1-D mass density profiles along the major axis.
- Each profile itself is a small table, which can be visualized more conveniently using `astropy.table`.
- For the `ins` and `exs` components, their mass density profiles are extracted using the average shape of the **whole galaxy**, which can be different from the average shape of these two components.
* In each profile, the useful columns are:
- `r_pix` and `r_kpc`: radius in pixel and kpc unit.
- `ell` and `ell_err`: ellipticity and its error
- `pa` and `pa_err`: position angle (in degrees) and its error. **Notice**: it sometimes needs to be normalized.
- `pa_norm`: position angle profile after normalization
- `growth_ori`: total stellar mass enclosed in each isophote. This comes from integrating the profile using the average value at each radius, so it can show small differences from the "aperture photometry" values.
- `intens` and `int_err`: average stellar mass per pixel at each radius. When divided by the pixel area in units of kpc^2, it gives the surface mass density profile and its associated error.
- The amplitudes of the first 4 Fourier deviations are available too: `a1/a2/a3/a4` & `b1/b2/b3/b4`. These values should be interpreted as relative deviations. Their errors are available as well. **Please be cautious when using these values.**
* In some rare cases, `Ellipse` can fail to produce a useful profile. This is usually caused by ongoing mergers and mostly happens for the `shape` profile.
- The corresponding field will be `None` when `Ellipse` fails. Please be careful!
- It is safe to assume these failed cases are "weird" and exclude them from most of the analysis, e.g. with a check like the one sketched below.
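A minimal check over the six expected profiles, skipping the ones where `Ellipse` failed (it assumes each profile behaves like a table-style array):
```
# list the profiles stored in `prof` and skip any that are None
for key, profile in xy_sum['prof'].items():
    if profile is None:
        print(key, ': Ellipse failed, skipping')
        continue
    print(key, ':', len(profile), 'isophotes')
```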
```
# Check out the 1-D mass density profile (fixed shape) for the whole galaxy
Table(xy_sum['prof']['gal_mprof'])
# Check the ellipticity profiles of the three projections
plt.plot(np.log10(xy_sum['prof']['gal_shape']['r_kpc']), xy_sum['prof']['gal_shape']['ell'], label='xy')
plt.plot(np.log10(xz_sum['prof']['gal_shape']['r_kpc']), xz_sum['prof']['gal_shape']['ell'], label='xz')
plt.plot(np.log10(yz_sum['prof']['gal_shape']['r_kpc']), yz_sum['prof']['gal_shape']['ell'], label='yz')
_ = plt.xlabel(r'$\log\ [\rm R/kpc]$')
_ = plt.ylabel(r'$e$')
# And the mass profiles derived using varied and fixed shape can be different
plt.plot(np.log10(xy_sum['prof']['gal_shape']['r_kpc']), np.log10(xy_sum['prof']['gal_shape']['intens']),
label='Varied Shape')
plt.plot(np.log10(xy_sum['prof']['gal_mprof']['r_kpc']), np.log10(xy_sum['prof']['gal_mprof']['intens']),
label='Fixed Shape')
_ = plt.xlabel(r'$\log\ [\rm R/kpc]$')
_ = plt.ylabel(r'$\log\ \mu_{\star}$')
```
### Organize the data better
* For a given project, you can reorganize information from the summary file into a structure that is easier to use.
* Using the 3-D galaxy shape project as an example:
```
xy_sum_for3d = {
# Subhalo ID
'catsh_id': xy_sum['info']['catsh_id'],
# Stellar mass of the galaxy
'logms': xy_sum['info']['logms'],
# Basic geometry of the galaxy
'aper_x0': xy_sum['geom']['x'],
'aper_y0': xy_sum['geom']['y'],
'aper_ba': xy_sum['geom']['ba'],
'aper_pa': xy_sum['geom']['pa'],
# Aperture mass profile
'aper_rkpc': xy_sum['aper']['rad_mid'],
# Total mass enclosed in the aperture
'aper_maper': xy_sum['aper']['maper_gal'],
# Stellar mass in the bin
'aper_mbins': xy_sum['aper']['mprof_gal'],
# 1-D profile with varied shape
'rkpc_shape': xy_sum['prof']['gal_shape']['r_kpc'],
# This is the surface mass density profile and its error
'mu_shape': xy_sum['prof']['gal_shape']['intens'] / (xy_sum['info']['pix'] ** 2.0),
'mu_err_shape': xy_sum['prof']['gal_shape']['int_err'] / (xy_sum['info']['pix'] ** 2.0),
# This is the ellipticity profiles
'e_shape': xy_sum['prof']['gal_shape']['ell'],
'e_err_shape': xy_sum['prof']['gal_shape']['ell_err'],
# This is the normalized position angle profile
'pa_shape': xy_sum['prof']['gal_shape']['pa_norm'],
'pa_err_shape': xy_sum['prof']['gal_shape']['pa_err'],
# Total mass enclosed by apertures
'maper_shape': xy_sum['prof']['gal_shape']['growth_ori'],
# 1-D profile using fixed shape
'rkpc_prof': xy_sum['prof']['gal_mprof']['r_kpc'],
# This is the surface mass density profile and its error
'mu_mprof': xy_sum['prof']['gal_mprof']['intens'] / (xy_sum['info']['pix'] ** 2.0),
'mu_err_mprof': xy_sum['prof']['gal_mprof']['int_err'] / (xy_sum['info']['pix'] ** 2.0),
# Total mass enclosed by apertures
'maper_mprof': xy_sum['prof']['gal_mprof']['growth_ori'],
# Fourier deviations
'a1_mprof': xy_sum['prof']['gal_mprof']['a1'],
'a1_err_mprof': xy_sum['prof']['gal_mprof']['a1_err'],
'a2_mprof': xy_sum['prof']['gal_mprof']['a2'],
'a2_err_mprof': xy_sum['prof']['gal_mprof']['a2_err'],
'a3_mprof': xy_sum['prof']['gal_mprof']['a3'],
'a3_err_mprof': xy_sum['prof']['gal_mprof']['a3_err'],
'a4_mprof': xy_sum['prof']['gal_mprof']['a4'],
'a4_err_mprof': xy_sum['prof']['gal_mprof']['a4_err'],
'b1_mprof': xy_sum['prof']['gal_mprof']['b1'],
'b1_err_mprof': xy_sum['prof']['gal_mprof']['b1_err'],
'b2_mprof': xy_sum['prof']['gal_mprof']['b2'],
'b2_err_mprof': xy_sum['prof']['gal_mprof']['b2_err'],
'b3_mprof': xy_sum['prof']['gal_mprof']['b3'],
'b3_err_mprof': xy_sum['prof']['gal_mprof']['b3_err'],
'b4_mprof': xy_sum['prof']['gal_mprof']['b4'],
'b4_err_mprof': xy_sum['prof']['gal_mprof']['b4_err']
}
```
### Visualizations
* `riker` provides a few routines to help you visualize the `aper` and `prof` results.
* You can `git clone git@github.com:dr-guangtou/riker.git` and then run `python3 setup.py develop` to install it.
* Unfortunately, it still depends on some of my personal code that is under development, but it is fine for just making a few plots.
```
from riker import visual
# Summary plot of the aperture measurements
_ = visual.show_aper(xy_sum['info'], xy_sum['aper'])
# Fake some empty stellar mass maps structure
maps_fake = {'mass_gal': None, 'mass_ins': None, 'mass_exs': None}
# Organize the data
ell_plot = visual.prepare_show_ellipse(xy_sum['info'], maps_fake, xy_sum['prof'])
# Show a summary plot for the 1-D profiles
_ = visual.plot_ell_prof(ell_plot)
# You can also visualize the 1-D profiles of Fourier deviations for a certain profile
_ = visual.plot_ell_fourier(xy_sum['prof']['ins_mprof'], show_both=True)
```
```
from timeit import default_timer as timer
from functools import partial
import yaml
import sys
import os
from estimagic.optimization.optimize import maximize
from scipy.optimize import root_scalar
from scipy.stats import chi2
import numdifftools as nd
import pandas as pd
import respy as rp
import numpy as np
sys.path.insert(0, "python")
from auxiliary import plot_bootstrap_distribution # noqa: E402
from auxiliary import plot_computational_budget # noqa: E402
from auxiliary import plot_smoothing_parameter # noqa: E402
from auxiliary import plot_score_distribution # noqa: E402
from auxiliary import plot_score_function # noqa: E402
from auxiliary import plot_likelihood # noqa: E402
```
# Maximum likelihood estimation
## Introduction
EKW models are calibrated to data on observed individual decisions and experiences under the hypothesis that the individual's behavior is generated from the solution to the model. The goal is to back out information on reward functions, preference parameters, and transition probabilities. This requires the full parameterization $\theta$ of the model.
Economists have access to information for $i = 1, ..., N$ individuals in each time period $t$. For every observation $(i, t)$ in the data, we observe action $a_{it}$, reward $r_{it}$, and a subset $x_{it}$ of the state $s_{it}$. Therefore, from an economist's point of view, we need to distinguish between two types of state variables $s_{it} = (x_{it}, \epsilon_{it})$. At time $t$, the economist and individual both observe $x_{it}$ while $\epsilon_{it}$ is only observed by the individual. In summary, the data $\mathcal{D}$ has the following structure:
\begin{align*}
\mathcal{D} = \{a_{it}, x_{it}, r_{it}: i = 1, ..., N; t = 1, ..., T_i\},
\end{align*}
where $T_i$ is the number of observations for which we observe individual $i$.
Likelihood-based calibration seeks to find the parameterization $\hat{\theta}$ that maximizes the likelihood function $\mathcal{L}(\theta\mid\mathcal{D})$, i.e. the probability of observing the given data as a function of $\theta$. As we only observe a subset $x_t$ of the state, we can determine the probability $p_{it}(a_{it}, r_{it} \mid x_{it}, \theta)$ of individual $i$ at time $t$ in $x_{it}$ choosing $a_{it}$ and receiving $r_{it}$ given parametric assumptions about the distribution of $\epsilon_{it}$. The objective function takes the following form:
\begin{align*}
\hat{\theta} \equiv \underset{\theta \in \Theta}{\text{argmax}} \underbrace{\prod^N_{i= 1} \prod^{T_i}_{t= 1}\, p_{it}(a_{it}, r_{it} \mid x_{it}, \theta)}_{\mathcal{L}(\theta\mid\mathcal{D})}.
\end{align*}
We will explore the following issues:
* likelihood function
* score function and statistic
    * asymptotic distribution
    * linearity
* confidence intervals
    * Wald
    * likelihood-based
    * bootstrap
* numerical approximations
    * smoothing of choice probabilities
    * grid search
Most of the material is from the following two references:
* Pawitan, Y. (2001). [In all likelihood: Statistical modelling and inference using likelihood](https://www.amazon.de/dp/0199671222/ref=sr_1_1?keywords=in+all+likelihood&qid=1573806115&sr=8-1). Clarendon Press, Oxford.
* Casella, G., & Berger, R. L. (2002). [Statistical inference](https://www.amazon.de/dp/0534243126/ref=sr_1_1?keywords=casella+berger&qid=1573806129&sr=8-1). Duxbury, Belmont, CA.
Let's get started!
```
options_base = yaml.safe_load(open(os.environ["ROBINSON_SPEC"] + "/robinson.yaml", "r"))
params_base = pd.read_csv(open(os.environ["ROBINSON_SPEC"] + "/robinson.csv", "r"))
params_base.set_index(["category", "name"], inplace=True)
simulate = rp.get_simulate_func(params_base, options_base)
df = simulate(params_base)
```
Let us briefly inspect the parameterization.
```
params_base
```
Several options need to be specified as well.
```
options_base
```
We can now look at the simulated dataset.
```
df.head()
```
## Likelihood function
We can now start exploring the likelihood function, which provides an order of preference on $\theta$. The likelihood function is a measure of information about the potentially unknown parameters of the model. The information will usually be incomplete, and the likelihood function also expresses this degree of incompleteness.
We will work with the sum of the individual log-likelihoods throughout, as the raw likelihood cannot be represented numerically without running into under- or overflow problems. Note that the criterion function of the ``respy`` package returns the average log-likelihood across the sample. Thus, we need to be careful to scale it up when computing some of the test statistics later in the notebook.
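Concretely, the scaling looks like this (a minimal sketch; it assumes that `options_base["simulation_agents"]` equals the number of individuals in the simulated sample):
```
# recover the total log-likelihood from respy's average log-likelihood
crit_func = rp.get_log_like_func(params_base, options_base, df)
average_loglike = crit_func(params_base)
total_loglike = options_base["simulation_agents"] * average_loglike
print(f"average: {average_loglike:8.4f}  total: {total_loglike:10.2f}")
```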
We will first trace out the likelihood over reasonable parameter values.
```
params_base["lower"] = [0.948, 0.0695, -0.11, 1.04, 0.0030, 0.005, -0.10]
params_base["upper"] = [0.952, 0.0705, -0.09, 1.05, 0.1000, 0.015, +0.10]
```
We plot the normalized likelihood, i.e. we divide the likelihood function by its maximum so that its peak value is one.
```
crit_func = rp.get_log_like_func(params_base, options_base, df)
rslts = dict()
for index in params_base.index:
upper, lower = params_base.loc[index][["upper", "lower"]]
grid = np.linspace(lower, upper, 20)
fvals = list()
for value in grid:
params = params_base.copy()
params.loc[index, "value"] = value
fval = options_base["simulation_agents"] * crit_func(params)
fvals.append(fval)
rslts[index] = fvals
```
Let's visualize the results.
```
plot_likelihood(rslts, params_base)
```
### Maximum likelihood estimate
So far, we looked at the likelihood function in its entirety. Going forward, we will take a narrower view and just focus on the maximum likelihood estimate. We restrict our attention to the discount factor $\delta$ and treat it as the only unknown parameter. We will use [estimagic](https://estimagic.readthedocs.io/) for all our estimations.
```
crit_func = rp.get_log_like_func(params_base, options_base, df)
```
However, we will make our life even easier and fix all parameters but the discount factor $\delta$.
```
constr_base = [
{"loc": "shocks_sdcorr", "type": "fixed"},
{"loc": "wage_fishing", "type": "fixed"},
{"loc": "nonpec_fishing", "type": "fixed"},
{"loc": "nonpec_hammock", "type": "fixed"},
]
```
We will start the estimation with a perturbation of the true value.
```
params_start = params_base.copy()
params_start.loc[("delta", "delta"), "value"] = 0.91
```
Now we are ready to deal with the selection and specification of the optimization algorithm.
```
algo_options = {"maxeval": 100}
algo_name = "nlopt_bobyqa"
results, params_rslt = maximize(
crit_func,
params_base,
algo_name,
algo_options=algo_options,
constraints=constr_base,
)
```
Let's look at the results.
```
params_rslt
fval = results["fitness"] * options_base["simulation_agents"]
print(f"criterion function at optimum {fval:5.3f}")
```
We need to set up a proper interface to use some other Python functionality going forward.
```
def wrapper_crit_func(crit_func, options_base, params_base, value):
params = params_base.copy()
params.loc["delta", "value"] = value
return options_base["simulation_agents"] * crit_func(params)
p_wrapper_crit_func = partial(wrapper_crit_func, crit_func, options_base, params_base)
```
We need to use the MLE repeatedly going forward.
```
delta_hat = params_rslt.loc[("delta", "delta"), "value"]
```
At the maximum, the second derivative of the log-likelihood is negative and we define the observed Fisher information as follows
\begin{align*}
I(\hat{\theta}) \equiv -\frac{\partial^2 \log L(\hat{\theta})}{\partial \theta^2}
\end{align*}
A larger curvature is associated with a sharper peak, thus indicating less uncertainty about $\theta$.
```
delta_fisher = -nd.Derivative(p_wrapper_crit_func, n=2)([delta_hat])
delta_fisher
```
### Score statistic and Score function
The score function is the first derivative of the log-likelihood:
\begin{align*}
S(\theta) \equiv \frac{\partial \log L(\theta)}{\partial \theta}
\end{align*}
#### Distribution
The asymptotic normality of the score statistic is of key importance in deriving the asymptotic normality of the maximum likelihood estimator. Here we simulate $1,000$ samples of $10,000$ individuals and compute the score function at the true values. I had to increase the number of simulated individuals as convergence to the asymptotic distribution just took way too long.
```
plot_score_distribution()
```
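The figure above is based on precomputed results. For reference, a single replication along these lines might look like the sketch below (the sample size is kept small so the cell runs quickly, and it is assumed that the `simulation_agents` option controls the number of simulated individuals):
```
# one replication of the score-at-the-truth computation (small sample;
# the stored results use many more individuals and replications)
options_small = options_base.copy()
options_small["simulation_agents"] = 1000
simulate_small = rp.get_simulate_func(params_base, options_small)
df_small = simulate_small(params_base)
crit_small = rp.get_log_like_func(params_base, options_small, df_small)
p_crit_small = partial(wrapper_crit_func, crit_small, options_small, params_base)
delta_true = params_base.loc[("delta", "delta"), "value"]
nd.Derivative(p_crit_small, n=1)([delta_true])
```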
#### Linearity
We seek linearity of the score function around the true value so that the log-likelihood is reasonably well approximated by a second-order Taylor polynomial.
\begin{align*}
\log L(\theta) \approx \log L(\hat{\theta}) + S(\hat{\theta})(\theta - \hat{\theta}) - \tfrac{1}{2} I(\hat{\theta})(\theta - \hat{\theta})^2
\end{align*}
Since $S(\hat{\theta}) = 0$, we get:
\begin{align*}
\log\left(\frac{L(\theta)}{L(\hat{\theta})}\right) \approx - \tfrac{1}{2} I(\hat{\theta})(\theta - \hat{\theta})^2
\end{align*}
Taking the derivative to work with the score function, the following relationship is approximately true if the usual regularity conditions hold:
\begin{align*}
- I^{-1/2}(\hat{\theta}) S(\theta) \approx I^{1/2}(\hat{\theta}) (\theta - \hat{\theta})
\end{align*}
```
num_points, index = 10, ("delta", "delta")
upper, lower = params_base.loc[index, ["upper", "lower"]]
grid = np.linspace(lower, upper, num_points)
fds = np.tile(np.nan, num_points)
for i, point in enumerate(grid):
fds[i] = nd.Derivative(p_wrapper_crit_func, n=1)([point])
norm_fds = fds * -(1 / np.sqrt(delta_fisher))
norm_grid = (grid - delta_hat) * (np.sqrt(delta_fisher))
```
In the best case we see a standard normal distribution of $I^{1/2} (\hat{\theta}) (\theta - \hat{\theta})$, and so it is common practice to evaluate the linearity over the interval from $-2$ to $2$.
```
plot_score_function(norm_grid, norm_fds)
```
Alternative shapes are possible.
<img src="material/fig-quadratic-approximation.png" width="700" >
### Confidence intervals
How do we communicate the statistical evidence using the likelihood? Several notions exist that place different demands on the score function. While the Wald intervals rely on both asymptotic normality and linearity, likelihood-based intervals only require asymptotic normality. In well-behaved problems, both measures of uncertainty agree.
#### Wald intervals
```
rslt = list()
rslt.append(delta_hat - 1.96 * 1 / np.sqrt(delta_fisher))
rslt.append(delta_hat + 1.96 * 1 / np.sqrt(delta_fisher))
"{:5.3f} / {:5.3f}".format(*rslt)
```
#### Likelihood-based intervals
```
def root_wrapper(delta, options_base, alpha, index):
crit_val = -0.5 * chi2.ppf(1 - alpha, 1)
params_eval = params_base.copy()
params_eval.loc[("delta", "delta"), "value"] = delta
likl_ratio = options_base["simulation_agents"] * (
crit_func(params_eval) - crit_func(params_base)
)
return likl_ratio - crit_val
brackets = [[0.75, 0.95], [0.95, 1.10]]
rslt = list()
for bracket in brackets:
root = root_scalar(
root_wrapper,
method="bisect",
bracket=bracket,
args=(options_base, 0.05, index),
).root
rslt.append(root)
print("{:5.3f} / {:5.3f}".format(*rslt))
```
## Bootstrap
We can now run a simple bootstrap to see how the asymptotic standard errors line up.
Here are some useful resources on the topic:
* Davison, A., & Hinkley, D. (1997). [Bootstrap methods and their application](https://www.amazon.de/dp/B00D2WQ02U/ref=sr_1_1?keywords=bootstrap+methods+and+their+application&qid=1574070350&s=digital-text&sr=1-1). Cambridge University Press, Cambridge.
* Hesterberg, T. C. (2015). [What teachers should know about the bootstrap: Resampling in the undergraduate statistics curriculum](https://amstat.tandfonline.com/doi/full/10.1080/00031305.2015.1089789#.XdZhBldKjIV), *The American Statistician, 69*(4), 371-386.
* Horowitz, J. L. (2001). [Chapter 52. The bootstrap](https://www.scholars.northwestern.edu/en/publications/chapter-52-the-bootstrap). In Heckman, J.J., & Leamer, E.E., editors, *Handbook of Econometrics, 5*, 3159-3228. Elsevier Science B.V.
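The distribution shown in the next cell was computed offline. A rough parametric-bootstrap sketch (only a handful of replications, and the `simulation_seed` option name is an assumption about the `respy` options, not something confirmed above) could look like this:
```
# simulate a fresh sample from the model for each replication and
# re-estimate delta; the stored results use many more replications
n_boot = 10
boot_deltas = []
for b in range(n_boot):
    options_boot = options_base.copy()
    options_boot["simulation_seed"] = 1000 + b  # assumed option name
    simulate_boot = rp.get_simulate_func(params_base, options_boot)
    df_boot = simulate_boot(params_base)
    crit_boot = rp.get_log_like_func(params_base, options_boot, df_boot)
    _, params_boot = maximize(
        crit_boot,
        params_base,
        "nlopt_bobyqa",
        algo_options={"maxeval": 100},
        constraints=constr_base,
    )
    boot_deltas.append(params_boot.loc[("delta", "delta"), "value"])
```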
```
plot_bootstrap_distribution()
```
We can now construct the bootstrap confidence interval.
```
fname = "material/bootstrap.delta_perturb_true.pkl"
boot_params = pd.read_pickle(fname)
rslt = list()
for quantile in [0.025, 0.975]:
rslt.append(boot_params.loc[("delta", "delta"), :].quantile(quantile))
print("{:5.3f} / {:5.3f}".format(*rslt))
```
### Numerical aspects
The shape and properties of the likelihood function are determined by several numerical tuning parameters, such as the quality of the numerical integration and the smoothing of the choice probabilities. Ideally, we would choose the "best" setting for every component, but that comes at the cost of an increased time to solution.
```
grid = np.linspace(100, 1000, 100, dtype=int)
rslts = list()
for num_draws in grid:
options = options_base.copy()
options["estimation_draws"] = num_draws
options["solution_draws"] = num_draws
start = timer()
rp.get_solve_func(params_base, options)
finish = timer()
rslts.append(finish - start)
```
We are ready to see how time to solution increases as we improve the quality of the numerical integration by increasing the number of Monte Carlo draws.
```
plot_computational_budget(grid, rslts)
```
We need to learn where to invest a limited computational budget. We focus on the following going forward:
* smoothing parameter for logit accept-reject simulator
* grid search across core parameters
#### Smoothing parameter
We now show the shape of the likelihood function for alternative choices of the smoothing parameter $\tau$. There exists no closed-form solution for the choice probabilities, so they are simulated. Applying a basic accept-reject (AR) simulator poses two challenges. First, zero simulated probabilities for low-probability events cause problems for the evaluation of the log-likelihood. Second, the choice probabilities are not smooth in the parameters but are a step function. This is why McFadden (1989) introduced a class of smoothed AR simulators. The logit-smoothed AR simulator is the most popular one and is also implemented in `respy`. The implementation requires specifying the smoothing parameter $\tau$. As $\tau \rightarrow 0$, the logit smoother approaches the original indicator function.
* McFadden, D. (1989). [A method of simulated moments for estimation of discrete response models without numerical integration](https://www.jstor.org/stable/1913621?seq=1#metadata_info_tab_contents). *Econometrica, 57*(5), 995-1026.
* Train, K. (2009). [Discrete choice methods with simulation](https://eml.berkeley.edu/books/train1201.pdf). Cambridge University Press, Cambridge.
```
rslts = dict()
for tau in [0.01, 0.001, 0.0001]:
index = ("delta", "delta")
options = options_base.copy()
options["estimation_tau"] = tau
crit_func = rp.get_log_like_func(params_base, options, df)
grid = np.linspace(0.948, 0.952, 20)
fvals = list()
for value in grid:
params = params_base.copy()
params.loc[index, "value"] = value
fvals.append(crit_func(params))
rslts[tau] = fvals - np.max(fvals)
```
Now we are ready to inspect the shape of the likelihood function.
```
plot_smoothing_parameter(rslts, params_base, grid)
```
#### Grid search
We can look at the interplay of several major numerical tuning parameters. We combine choices for `simulation_agents`, `solution_draws`, `estimation_draws`, and `tau` to see how the maximum of the likelihood function changes.
```
df = pd.read_pickle("material/tuning.delta.pkl")
df.loc[((10000), slice(None)), :]
```
# N-grams and Markov chains
By [Allison Parrish](http://www.decontextualize.com/)
Markov chain text generation is [one of the oldest](https://elmcip.net/creative-work/travesty) strategies for predictive text generation. This notebook takes you through the basics of implementing a simple and concise Markov chain text generation procedure in Python.
If all you want is to generate text with a Markov chain and you don't care about how the functions are implemented (or if you already went through this notebook and want to use the functions without copy-and-pasting them), you can [download a Python file with all of the functions here](https://gist.github.com/aparrish/14cb94ce539a868e6b8714dd84003f06). Just download the file, put it in the same directory as your code, type `from shmarkov import *` at the top, and you're good to go.
## Tuples: a quick introduction
Before we get to all that, I need to review a helpful Python data structure: the tuple.
Tuples (rhymes with "supple") are data structures very similar to lists. You can create a tuple using parentheses (instead of square brackets, as you would with a list):
```
t = ("alpha", "beta", "gamma", "delta")
t
```
You can access the values in a tuple in the same way as you access the values in a list: using square bracket indexing syntax. Tuples support slice syntax and negative indexes, just like lists:
```
t[-2]
t[1:3]
```
The difference between a list and a tuple is that the values in a tuple can't be changed after the tuple is created. This means, for example, that attempting to `.append()` a value to a tuple will fail:
```
t.append("epsilon")
t[2] = "bravo"
```
"So," you think to yourself. "Tuples are just like... broken lists. That's strange and a little unreasonable. Why even have them in your programming language?" That's a fair question, and answering it requires a bit of knowledge of how Python works with these two kinds of values (lists and tuples) behind the scenes.
Essentially, tuples are faster and smaller than lists. Because lists can be modified, potentially becoming larger after they're initialized, Python has to allocate more memory than is strictly necessary whenever you create a list value. If your list grows beyond what Python has already allocated, Python has to allocate more memory. Allocating memory, copying values into memory, and then freeing memory when it's no longer needed, are all (perhaps surprisingly) slow processes---slower, at least, than using data already loaded into memory when your program begins.
Because a tuple can't grow or shrink after it's created, Python knows exactly how much memory to allocate when you create a tuple in your program. That means: less wasted memory, and less wasted time allocating and deallocating memory. The cost of this decreased resource footprint is less versatility.
Tuples are often called an immutable data type. "Immutable" in this context simply means that it can't be changed after it's created.
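If you're curious, you can check the size difference yourself; the exact byte counts vary across Python versions and platforms, but the tuple will typically be a bit smaller:
```
import sys
# a tuple of the same values usually takes a bit less memory than a list
sys.getsizeof([1, 2, 3]), sys.getsizeof((1, 2, 3))
```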
For our purposes, the most important aspect of tuples is that–unlike lists–they can be *keys in dictionaries*. The utility of this will become apparent later in this tutorial, but to illustrate, let's start with an empty dictionary:
```
my_stuff = {}
```
I can use a string as a key, of course, no problem:
```
my_stuff["cheese"] = 1
```
I can also use an integer:
```
my_stuff[17] = "hello"
my_stuff
```
But I can't use a *list* as a key:
```
my_stuff[ ["a", "b"] ] = "asdf"
```
That's because a list, as a mutable data type, is "unhashable": since its contents can change, it's impossible to come up with a single value to represent it, as is required of dictionary keys. However, because tuples are *immutable*, you can use them as dictionary keys:
```
my_stuff[ ("a", "b") ] = "asdf"
my_stuff
```
This behavior is helpful when you want to make a data structure that maps *sequences* as keys to corresponding values. As we'll see below!
It's easy to make a list that is a copy of a tuple, and a tuple that is a copy of a list, using the `list()` and `tuple()` functions, respectively:
```
t = (1, 2, 3)
list(t)
things = [4, 5, 6]
tuple(things)
```
## N-grams
The first kind of text analysis that we'll look at today is an n-gram model. An n-gram is simply a sequence of units drawn from a longer sequence; in the case of text, the unit in question is usually a character or a word. For convenience, the unit of the n-gram is called its *level*; the length of the n-gram is called its *order*. For example, the following is a list of all unique character-level order-2 n-grams in the word `condescendences`:
co
on
nd
de
es
sc
ce
en
nc
And the following is an excerpt from the list of all unique word-level order-5 n-grams in *The Road Not Taken*:
Two roads diverged in a
roads diverged in a yellow
diverged in a yellow wood,
in a yellow wood, And
a yellow wood, And sorry
yellow wood, And sorry I
N-grams are used frequently in natural language processing and are a basic tool of text analysis. Their applications range from programs that correct spelling to creative visualizations to compression algorithms to stylometrics to generative text. They can be used as the basis of a Markov chain algorithm—and, in fact, that's one of the applications we'll be using them for later in this lesson.
### Finding and counting word pairs
So how would we go about writing Python code to find n-grams? We'll start with a simple task: finding *word pairs* in a text. A word pair is essentially a word-level order-2 n-gram; once we have code to find word pairs, we’ll generalize it to handle n-grams of any order.
To find word pairs, we'll first need some words!
```
text = open("genesis.txt").read()
words = text.split()
```
The data structure we want to end up with is a *list* of *tuples*, where the tuples have two elements, i.e., each successive pair of words from the text. There are a number of clever ways to go about creating this list. Here's one: imagine our starting list of strings, with their corresponding indices:
['a', 'b', 'c', 'd', 'e']
0 1 2 3 4
The first item of the list of pairs should consist of the elements at index 0 and index 1 from this list; the second item should consist of the elements at index 1 and index 2; and so forth. We can accomplish this using a list comprehension over the range of numbers from zero up until the end of the list minus one:
```
pairs = [(words[i], words[i+1]) for i in range(len(words)-1)]
```
(Why `len(words) - 1`? Because the final element of the list can only be the *second* element of a pair. Otherwise we'd be trying to access an element beyond the end of the list.)
The corresponding way to write this with a `for` loop:
```
pairs = []
for i in range(len(words)-1):
this_pair = (words[i], words[i+1])
pairs.append(this_pair)
```
In either case, the list of n-grams ends up looking like this. (I'm only showing the first 25 for the sake of brevity; remove `[:25]` to see the whole list.)
```
pairs[:25]
```
Now that we have a list of word pairs, we can count them using a `Counter` object.
```
from collections import Counter
pair_counts = Counter(pairs)
```
The `.most_common()` method of the `Counter` shows us the items in our list that occur most frequently:
```
pair_counts.most_common(10)
```
So the phrase "And God" occurs 21 times, by far the most common word pair in the text. In fact, "And God" comprises about 3% of all word pairs found in the text:
```
pair_counts[("And", "God")] / sum(pair_counts.values())
```
You can do the same calculation with character-level pairs with pretty much exactly the same code, owing to the fact that strings and lists can be indexed using the same syntax:
```
char_pairs = [(text[i], text[i+1]) for i in range(len(text)-1)]
```
The variable `char_pairs` now has a list of all pairs of *characters* in the text. Using `Counter` again, we can find the most common pairs of characters:
```
char_pair_counts = Counter(char_pairs)
char_pair_counts.most_common(10)
```
> What are the practical applications of this kind of analysis? For one, you can use n-gram counts to judge *similarity* between two texts. If two texts have the same n-grams in similar proportions, then those texts probably have similar compositions, meanings, or authorship. N-grams can also be a basis for fast text searching; [Google Books Ngram Viewer](https://books.google.com/ngrams) works along these lines.
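To make the similarity idea concrete, here is a tiny sketch (the helper functions below are mine, not part of this tutorial) that compares two strings by the cosine similarity of their character-pair counts:
```
import math
from collections import Counter

def pair_counts_for(some_text):
    # character pairs, counted, for any string
    return Counter([(some_text[i], some_text[i+1]) for i in range(len(some_text)-1)])

def cosine_similarity(counts_a, counts_b):
    # treat the two Counters as sparse vectors and compute their cosine similarity
    dot = sum(counts_a[pair] * counts_b[pair] for pair in set(counts_a) & set(counts_b))
    norm_a = math.sqrt(sum(v * v for v in counts_a.values()))
    norm_b = math.sqrt(sum(v * v for v in counts_b.values()))
    return dot / (norm_a * norm_b)

cosine_similarity(pair_counts_for("condescendences"), pair_counts_for("condescending"))
```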
### N-grams of arbitrary lengths
The step from pairs to n-grams of arbitrary lengths is only a matter of using slice indexes to get a slice of length `n`, where `n` is the length of the desired n-gram. For example, to get all of the word-level order 7 n-grams from the list of words in `genesis.txt`:
```
seven_grams = [tuple(words[i:i+7]) for i in range(len(words)-6)]
seven_grams[:20]
```
Two tricky things in this expression: in `tuple(words[i:i+7])`, I call `tuple()` to convert the list slice (`words[i:i+7]`) into a tuple. In `range(len(words)-6)`, the `6` is there because it's one fewer than the length of the n-gram. Just as with the pairs above, we need to stop counting before we reach the end of the list with enough room to make sure we're always grabbing slices of the desired length.
For the sake of convenience, here's a function that will return n-grams of a desired length from any sequence, whether list or string:
```
def ngrams_for_sequence(n, seq):
return [tuple(seq[i:i+n]) for i in range(len(seq)-n+1)]
```
Using this function, here are random character-level n-grams of order 9 from `genesis.txt`:
```
import random
genesis_9grams = ngrams_for_sequence(9, open("genesis.txt").read())
random.sample(genesis_9grams, 10)
```
Or all the word-level 5-grams from `frost.txt`:
```
frost_word_5grams = ngrams_for_sequence(5, open("frost.txt").read().split())
frost_word_5grams
```
All of the bigrams (ngrams of order 2) from the string `condescendences`:
```
ngrams_for_sequence(2, "condescendences")
```
This function works with non-string sequences as well:
```
ngrams_for_sequence(4, [5, 10, 15, 20, 25, 30])
```
And of course we can use it in conjunction with a `Counter` to find the most common n-grams in a text:
```
Counter(ngrams_for_sequence(3, open("genesis.txt").read())).most_common(20)
```
## Markov models: what comes next?
Now that we have the ability to find and record the n-grams in a text, it’s time to take our analysis one step further. The next question we’re going to try to answer is this: Given a particular n-gram in a text, what is most likely to come next?
We can imagine the kind of algorithm we’ll need to extract this information from the text. It will look very similar to the code to find n-grams above, but it will need to keep track not just of the n-grams but also a list of all units (word, character, whatever) that *follow* those n-grams.
Let’s do a quick example by hand. This is the same character-level order-2 n-gram analysis of the (very brief) text “condescendences” as above, but this time keeping track of all characters that follow each n-gram:
| n-grams | next? |
| ------- | ----- |
|co| n|
|on| d|
|nd| e, e|
|de| s, n|
|es| c, (end of text)|
|sc| e|
|ce| n, s|
|en| d, c|
|nc| e|
From this table, we can determine that the n-gram `co` is followed by `n` 100% of the time and the n-gram `on` is followed by `d` 100% of the time, while the n-gram `de` is followed by `s` 50% of the time and by `n` the other 50%. Likewise, the n-gram `es` is followed by `c` 50% of the time, and followed by the end of the text the other 50% of the time.
The easiest way to represent this model is with a dictionary whose keys are the n-grams and whose values are all of the possible "nexts." Here's what the Python code looks like to construct this model from a string. We'll use the special token `$` to represent the notion of the "end of text" in the table above.
```
src = "condescendences"
src += "$" # to indicate the end of the string
model = {}
for i in range(len(src)-2):
ngram = tuple(src[i:i+2]) # get a slice of length 2 from current position
next_item = src[i+2] # next item is current index plus two (i.e., right after the slice)
if ngram not in model: # check if we've already seen this ngram; if not...
model[ngram] = [] # value for this key is an empty list
model[ngram].append(next_item) # append this next item to the list for this ngram
model
```
The functions in the cell below generalize this to n-grams of arbitrary length (and use the special Python value `None` to indicate the end of a sequence). The `markov_model()` function creates an empty dictionary and takes an n-gram length and a sequence (which can be a string or a list) and calls the `add_to_model()` function on that sequence. The `add_to_model()` function does the same thing as the code above: iterates over every index of the sequence and grabs an n-gram of the desired length, adding keys and values to the dictionary as necessary.
```
def add_to_model(model, n, seq):
# make a copy of seq and append None to the end
seq = list(seq[:]) + [None]
for i in range(len(seq)-n):
# tuple because we're using it as a dict key!
gram = tuple(seq[i:i+n])
next_item = seq[i+n]
if gram not in model:
model[gram] = []
model[gram].append(next_item)
def markov_model(n, seq):
model = {}
add_to_model(model, n, seq)
return model
```
So, e.g., an order-2 character-level Markov model of `condescendences`:
```
markov_model(2, "condescendences")
```
Or an order 3 word-level Markov model of `genesis.txt`:
```
genesis_markov_model = markov_model(3, open("genesis.txt").read().split())
genesis_markov_model
```
We can now use the Markov model to make *predictions*. Given the information in the Markov model of `genesis.txt`, what words are likely to follow the sequence of words `and over the`? We can find out simply by getting the value for the key for that sequence:
```
genesis_markov_model[('and', 'over', 'the')]
```
This tells us that the sequence `and over the` is followed by `fowl` 50% of the time, `night,` 25% of the time and `cattle,` 25% of the time.
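To turn the raw list of "nexts" into explicit probabilities, you can count and normalize it:
```
from collections import Counter
next_counts = Counter(genesis_markov_model[('and', 'over', 'the')])
{word: count / sum(next_counts.values()) for word, count in next_counts.items()}
```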
### Markov chains: Generating text from a Markov model
The Markov models we created above don't just give us interesting statistical probabilities. They also allow us to generate a *new* text with those probabilities by *chaining together predictions*. Here's how we'll do it, starting with the order 2 character-level Markov model of `condescendences`: (1) start with the initial n-gram (`co`)—those are the first two characters of our output. (2) Now, look at the last *n* characters of output, where *n* is the order of the n-grams in our table, and find those characters in the "n-grams" column. (3) Choose randomly among the possibilities in the corresponding "next" column, and append that letter to the output. (Sometimes, as with `co`, there's only one possibility.) (4) If you chose "end of text," then the algorithm is over. Otherwise, repeat the process starting with (2). Here's a record of the algorithm in action:
co
con
cond
conde
conden
condend
condendes
condendesc
condendesce
condendesces
As you can see, we’ve come up with a word that looks like the original word, and could even be passed off as a genuine English word (if you squint at it). From a statistical standpoint, the output of our algorithm is nearly indistinguishable from the input. This kind of algorithm—moving from one state to the next, according to a list of probabilities—is known as a Markov chain.
Implementing this procedure in code is a little bit tricky, but it looks something like this. First, we'll make a Markov model of `condescendences`:
```
cmodel = markov_model(2, "condescendences")
cmodel
```
We're going to generate output as we go. We'll initialize the output to the characters we want to start with, i.e., `co`:
```
output = "co"
```
Now what we have to do is get the last two characters of the output, look them up in the model, and select randomly among the characters in the value for that key (which should be a list). Finally, we'll append that randomly-selected value to the end of the string:
```
ngram = tuple(output[-2:])
next_item = random.choice(cmodel[ngram])
output += next_item
print(output)
```
Try running the cell above multiple times: the `output` variable will get longer and longer—until you get an error. You can also put it into a `for` loop:
```
output = "co"
for i in range(100):
ngram = tuple(output[-2:])
next_item = random.choice(cmodel[ngram])
output += next_item
print(output)
```
The `TypeError` you see above is what happens when we stumble upon the "end of text" condition, which we'd chosen above to represent with the special Python value `None`. When this value comes up, it means that statistically speaking, we've reached the end of the text, and so can stop generating. We'll obey this directive by skipping out of the loop early with the `break` keyword:
```
output = "co"
for i in range(100):
ngram = tuple(output[-2:])
next_item = random.choice(cmodel[ngram])
if next_item is None:
break # "break" tells Python to immediately exit the loop, skipping any remaining values
else:
output += next_item
print(output)
```
Why `range(100)`? No reason, really—I just picked 100 as a reasonable upper bound on the number of times the Markov chain should attempt to append to the output. Because there's a loop in this particular model (`nd` -> `e`, `de` -> `n`, `en` -> `d`), any time you generate text from this Markov chain, it could potentially go on infinitely. Limiting the number to `100` makes sure that it doesn't ever actually do that. You should adjust the number based on what you need the Markov chain to do.
### A function to generate from a Markov model
The `gen_from_model()` function below is a more general version of the code that we just wrote that works with lists and strings and n-grams of any length:
```
import random
def gen_from_model(n, model, start=None, max_gen=100):
if start is None:
start = random.choice(list(model.keys()))
output = list(start)
for i in range(max_gen):
start = tuple(output[-n:])
next_item = random.choice(model[start])
if next_item is None:
break
else:
output.append(next_item)
return output
```
The `gen_from_model()` function's first parameter is the length of n-gram; the second parameter is a Markov model, as returned from `markov_model()` defined above, and the third parameter is the "seed" n-gram to start the generation from. The `gen_from_model()` function always returns a list:
```
gen_from_model(2, cmodel, ('c', 'o'))
```
So if you're working with a character-level Markov chain, you'll want to glue the list back together into a string:
```
''.join(gen_from_model(2, cmodel, ('c', 'o')))
```
If you leave out the "seed," this function will just pick a random n-gram to start with:
```
sea_model = markov_model(3, "she sells seashells by the seashore")
for i in range(12):
print(''.join(gen_from_model(3, sea_model)))
```
### Advanced Markov style: Generating lines
You can use the `gen_from_model()` function to generate word-level Markov chains as well:
```
genesis_word_model = markov_model(2, open("genesis.txt").read().split())
generated_words = gen_from_model(2, genesis_word_model, ('In', 'the'))
print(' '.join(generated_words))
```
This looks good! But there's a problem: the generation of the text just sorta... keeps going. Actually it goes on for exactly 100 words, which is also the maximum number of iterations specified in the function. We can make it go even longer by supplying a fourth parameter to the function:
```
generated_words = gen_from_model(2, genesis_word_model, ('In', 'the'), 500)
print(' '.join(generated_words))
```
The reason for this is that unless the Markov chain generator reaches the "end of text" token, it'll just keep going on forever. And the longer the text, the less likely it is that the "end of text" token will be reached.
Maybe this is okay, but the underlying text actually has some structure in it: each line of the file is actually a verse. If you want to generate individual *verses*, you need to treat each line separately, so that each line contributes its own end-of-text token. The following function does just this by creating a model, adding each sequence from a list to that model, and returning the combined model:
```
def markov_model_from_sequences(n, sequences):
model = {}
for item in sequences:
add_to_model(model, n, item)
return model
```
This function expects to receive a list of sequences (the sequences can be either lists or strings, depending on if you want a word-level model or a character-level model). So, for example:
```
genesis_lines = open("genesis.txt").readlines() # all of the lines from the file
# genesis_lines_words will be a list of lists of words in each line
genesis_lines_words = [line.strip().split() for line in genesis_lines] # strip whitespace and split into words
genesis_lines_model = markov_model_from_sequences(2, genesis_lines_words)
```
The `genesis_lines_model` variable now contains a Markov model with end-of-text tokens where they should be, at the end of each line. Generating from this model, we get:
```
for i in range(10):
print("verse", i, "-", ' '.join(gen_from_model(2, genesis_lines_model)))
```
Better—the verses are ending at appropriate places—but still not quite right, since we're generating from random keys in the Markov model! To make this absolutely correct, we'd want to *start* each line with an n-gram that also occurred at the start of each line in the original text file. To do this, we'll work in two passes. First, get the list of lists of words:
```
genesis_lines = open("genesis.txt").readlines() # all of the lines from the file
# genesis_lines_words will be a list of lists of words in each line
genesis_lines_words = [line.strip().split() for line in genesis_lines] # strip whitespace and split into words
```
Now, get the n-grams at the start of each line:
```
genesis_starts = [item[:2] for item in genesis_lines_words if len(item) >= 2]
```
Now create the Markov model:
```
genesis_lines_model = markov_model_from_sequences(2, genesis_lines_words)
```
And generate from it, picking a random "start" for each line:
```
for i in range(10):
start = random.choice(genesis_starts)
    generated = gen_from_model(2, genesis_lines_model, start)
print("verse", i, "-", ' '.join(generated))
```
### Putting it together
The `markov_generate_from_sequences()` function below wraps up everything above into one function that takes an n-gram length, a list of sequences (e.g., a list of lists of words for a word-level Markov model, or a list of strings for a character-level Markov model), and a number of lines to generate, and returns that many generated lines, starting the generation only with n-grams that begin lines in the source file:
```
def markov_generate_from_sequences(n, sequences, count, max_gen=100):
starts = [item[:n] for item in sequences if len(item) >= n]
model = markov_model_from_sequences(n, sequences)
return [gen_from_model(n, model, random.choice(starts), max_gen)
for i in range(count)]
```
Here's how to use this function to generate from a character-level Markov model of `frost.txt`:
```
frost_lines = [line.strip() for line in open("frost.txt").readlines()]
for item in markov_generate_from_sequences(5, frost_lines, 20):
print(''.join(item))
```
And from a word-level Markov model of Shakespeare's sonnets:
```
sonnets_words = [line.strip().split() for line in open("sonnets.txt").readlines()]
for item in markov_generate_from_sequences(2, sonnets_words, 14):
print(' '.join(item))
```
A fun thing to do is combine *two* source texts and make a Markov model from the combination. So for example, read in the lines of both *The Road Not Taken* and `genesis.txt` and put them into the same list:
```
frost_lines = [line.strip() for line in open("frost.txt").readlines()]
genesis_lines = [line.strip() for line in open("genesis.txt").readlines()]
both_lines = frost_lines + genesis_lines
for item in markov_generate_from_sequences(5, both_lines, 14, max_gen=150):
print(''.join(item))
```
The resulting text has properties of both of the underlying source texts! Whoa.
### Putting it all *even more together*
If you're really super lazy, the `markov_generate_from_lines_in_file()` function below does allll the work for you. It takes an n-gram length, an open filehandle to read from, the number of lines to generate, and the string `char` for a character-level Markov model or `word` for a word-level model. It returns the requested number of lines generated from a Markov model of the desired order and level.
```
def markov_generate_from_lines_in_file(n, filehandle, count, level='char', max_gen=100):
if level == 'char':
glue = ''
sequences = [item.strip() for item in filehandle.readlines()]
elif level == 'word':
glue = ' '
sequences = [item.strip().split() for item in filehandle.readlines()]
generated = markov_generate_from_sequences(n, sequences, count, max_gen)
return [glue.join(item) for item in generated]
```
So, for example, to generate twenty lines from an order-3 model of H.D.'s *Sea Rose*:
```
for item in markov_generate_from_lines_in_file(3, open("sea_rose.txt"), 20, 'char'):
print(item)
```
Or an order-3 word-level model of `genesis.txt`:
```
for item in markov_generate_from_lines_in_file(3, open("genesis.txt"), 5, 'word'):
print(item)
print("")
```
t = ("alpha", "beta", "gamma", "delta")
t
t[-2]
t[1:3]
t.append("epsilon")
t[2] = "bravo"
my_stuff = {}
my_stuff["cheese"] = 1
my_stuff[17] = "hello"
my_stuff
my_stuff[ ["a", "b"] ] = "asdf"
my_stuff[ ("a", "b") ] = "asdf"
my_stuff
t = (1, 2, 3)
list(t)
things = [4, 5, 6]
tuple(things)
text = open("genesis.txt").read()
words = text.split()
pairs = [(words[i], words[i+1]) for i in range(len(words)-1)]
pairs = []
for i in range(len(words)-1):
this_pair = (words[i], words[i+1])
pairs.append(this_pair)
pairs[:25]
from collections import Counter
pair_counts = Counter(pairs)
pair_counts.most_common(10)
pair_counts[("And", "God")] / sum(pair_counts.values())
char_pairs = [(text[i], text[i+1]) for i in range(len(text)-1)]
char_pair_counts = Counter(char_pairs)
char_pair_counts.most_common(10)
seven_grams = [tuple(words[i:i+7]) for i in range(len(words)-6)]
seven_grams[:20]
def ngrams_for_sequence(n, seq):
return [tuple(seq[i:i+n]) for i in range(len(seq)-n+1)]
import random
genesis_9grams = ngrams_for_sequence(9, open("genesis.txt").read())
random.sample(genesis_9grams, 10)
frost_word_5grams = ngrams_for_sequence(5, open("frost.txt").read().split())
frost_word_5grams
ngrams_for_sequence(2, "condescendences")
ngrams_for_sequence(4, [5, 10, 15, 20, 25, 30])
Counter(ngrams_for_sequence(3, open("genesis.txt").read())).most_common(20)
src = "condescendences"
src += "$" # to indicate the end of the string
model = {}
for i in range(len(src)-2):
ngram = tuple(src[i:i+2]) # get a slice of length 2 from current position
next_item = src[i+2] # next item is current index plus two (i.e., right after the slice)
if ngram not in model: # check if we've already seen this ngram; if not...
model[ngram] = [] # value for this key is an empty list
model[ngram].append(next_item) # append this next item to the list for this ngram
model
def add_to_model(model, n, seq):
# make a copy of seq and append None to the end
seq = list(seq[:]) + [None]
for i in range(len(seq)-n):
# tuple because we're using it as a dict key!
gram = tuple(seq[i:i+n])
next_item = seq[i+n]
if gram not in model:
model[gram] = []
model[gram].append(next_item)
def markov_model(n, seq):
model = {}
add_to_model(model, n, seq)
return model
markov_model(2, "condescendences")
genesis_markov_model = markov_model(3, open("genesis.txt").read().split())
genesis_markov_model
genesis_markov_model[('and', 'over', 'the')]
cmodel = markov_model(2, "condescendences")
cmodel
output = "co"
ngram = tuple(output[-2:])
next_item = random.choice(cmodel[ngram])
output += next_item
print(output)
output = "co"
for i in range(100):
ngram = tuple(output[-2:])
next_item = random.choice(cmodel[ngram])
output += next_item
print(output)
output = "co"
for i in range(100):
ngram = tuple(output[-2:])
next_item = random.choice(cmodel[ngram])
if next_item is None:
break # "break" tells Python to immediately exit the loop, skipping any remaining values
else:
output += next_item
print(output)
import random
def gen_from_model(n, model, start=None, max_gen=100):
if start is None:
start = random.choice(list(model.keys()))
output = list(start)
for i in range(max_gen):
start = tuple(output[-n:])
next_item = random.choice(model[start])
if next_item is None:
break
else:
output.append(next_item)
return output
gen_from_model(2, cmodel, ('c', 'o'))
''.join(gen_from_model(2, cmodel, ('c', 'o')))
sea_model = markov_model(3, "she sells seashells by the seashore")
for i in range(12):
print(''.join(gen_from_model(3, sea_model)))
genesis_word_model = markov_model(2, open("genesis.txt").read().split())
generated_words = gen_from_model(2, genesis_word_model, ('In', 'the'))
print(' '.join(generated_words))
generated_words = gen_from_model(2, genesis_word_model, ('In', 'the'), 500)
print(' '.join(generated_words))
def markov_model_from_sequences(n, sequences):
model = {}
for item in sequences:
add_to_model(model, n, item)
return model
genesis_lines = open("genesis.txt").readlines() # all of the lines from the file
# genesis_lines_words will be a list of lists of words in each line
genesis_lines_words = [line.strip().split() for line in genesis_lines] # strip whitespace and split into words
genesis_lines_model = markov_model_from_sequences(2, genesis_lines_words)
for i in range(10):
print("verse", i, "-", ' '.join(gen_from_model(2, genesis_lines_model)))
genesis_lines = open("genesis.txt").readlines() # all of the lines from the file
# genesis_lines_words will be a list of lists of words in each line
genesis_lines_words = [line.strip().split() for line in genesis_lines] # strip whitespace and split into words
genesis_starts = [item[:2] for item in genesis_lines_words if len(item) >= 2]
genesis_lines_model = markov_model_from_sequences(2, genesis_lines_words)
for i in range(10):
start = random.choice(genesis_starts)
generated = gen_from_model(2, genesis_lines_model, random.choice(genesis_starts))
print("verse", i, "-", ' '.join(generated))
def markov_generate_from_sequences(n, sequences, count, max_gen=100):
starts = [item[:n] for item in sequences if len(item) >= n]
model = markov_model_from_sequences(n, sequences)
return [gen_from_model(n, model, random.choice(starts), max_gen)
for i in range(count)]
frost_lines = [line.strip() for line in open("frost.txt").readlines()]
for item in markov_generate_from_sequences(5, frost_lines, 20):
print(''.join(item))
sonnets_words = [line.strip().split() for line in open("sonnets.txt").readlines()]
for item in markov_generate_from_sequences(2, sonnets_words, 14):
print(' '.join(item))
frost_lines = [line.strip() for line in open("frost.txt").readlines()]
genesis_lines = [line.strip() for line in open("genesis.txt").readlines()]
both_lines = frost_lines + genesis_lines
for item in markov_generate_from_sequences(5, both_lines, 14, max_gen=150):
print(''.join(item))
def markov_generate_from_lines_in_file(n, filehandle, count, level='char', max_gen=100):
if level == 'char':
glue = ''
sequences = [item.strip() for item in filehandle.readlines()]
elif level == 'word':
glue = ' '
sequences = [item.strip().split() for item in filehandle.readlines()]
generated = markov_generate_from_sequences(n, sequences, count, max_gen)
return [glue.join(item) for item in generated]
for item in markov_generate_from_lines_in_file(3, open("sea_rose.txt"), 20, 'char'):
print(item)
for item in markov_generate_from_lines_in_file(3, open("genesis.txt"), 5, 'word'):
print(item)
print("")
| 0.291283 | 0.987092 |
# Introduction to NumPy
The topic is very broad: datasets can come from a wide range of sources and a wide range of formats, including collections of documents, collections of images, collections of sound clips, collections of numerical measurements, or nearly anything else.
Despite this apparent heterogeneity, it will help us to think of all data fundamentally as arrays of numbers.
For this reason, efficient storage and manipulation of numerical arrays is absolutely fundamental to the process of doing data science.
NumPy (short for *Numerical Python*) provides an efficient interface to store and operate on dense data buffers.
In some ways, NumPy arrays are like Python's built-in ``list`` type, but NumPy arrays provide much more efficient storage and data operations as the arrays grow larger in size.
NumPy arrays form the core of nearly the entire ecosystem of data science tools in Python, so time spent learning to use NumPy effectively will be valuable no matter what aspect of data science interests you.
```
import numpy
numpy.__version__
```
By convention, you'll find that most people in the SciPy/PyData world will import NumPy using ``np`` as an alias:
```
import numpy as np
```
Throughout this chapter, and indeed the rest of the book, you'll find that this is the way we will import and use NumPy.
# Understanding Data Types in Python
Effective data-driven science and computation requires understanding how data is stored and manipulated.
Here we outline and contrast how arrays of data are handled in the Python language itself, and how NumPy improves on this.
Python offers several different options for storing data in efficient, fixed-type data buffers.
The built-in ``array`` module can be used to create dense arrays of a uniform type:
```
import array
L = list(range(10))
A = array.array('i', L)
A
type(A)
[x ** 2 for x in range(10)]
type(_)
```
Here ``'i'`` is a type code indicating the contents are integers.
Much more useful, however, is the ``ndarray`` object of the NumPy package.
While Python's ``array`` object provides efficient storage of array-based data, NumPy adds to this efficient *operations* on that data.
## Creating Arrays from Python Lists
First, we can use ``np.array`` to create arrays from Python lists:
```
np.array([1, 4, 2, 5, 3])
```
Remember that unlike Python lists, NumPy is constrained to arrays that all contain the same type.
If types do not match, NumPy will upcast if possible (here, integers are up-cast to floating point):
```
np.array([3.14, 4, 2, 3])
```
If we want to explicitly set the data type of the resulting array, we can use the ``dtype`` keyword:
```
np.array([1, 2, 3, 4], dtype='float32')
```
## Creating Arrays from Scratch
Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy:
```
np.zeros(10, dtype=int)
np.ones((3, 5), dtype=float)
np.full((3, 5), 3.14)
np.arange(0, 20, 2)
np.linspace(0, 1, 5)
np.random.random((3, 3))
np.random.normal(0, 1, (3, 3))
np.eye(3)
```
## NumPy Standard Data Types
NumPy arrays contain values of a single type, so it is worth having a look at those types and their limits (a short example follows the table):
| Data type | Description |
|---------------|-------------|
| ``bool_`` | Boolean (True or False) stored as a byte |
| ``int_`` | Default integer type (same as C ``long``; normally either ``int64`` or ``int32``)|
| ``intc`` | Identical to C ``int`` (normally ``int32`` or ``int64``)|
| ``intp`` | Integer used for indexing (same as C ``ssize_t``; normally either ``int32`` or ``int64``)|
| ``int8`` | Byte (-128 to 127)|
| ``int16`` | Integer (-32768 to 32767)|
| ``int32`` | Integer (-2147483648 to 2147483647)|
| ``int64`` | Integer (-9223372036854775808 to 9223372036854775807)|
| ``uint8`` | Unsigned integer (0 to 255)|
| ``uint16`` | Unsigned integer (0 to 65535)|
| ``uint32`` | Unsigned integer (0 to 4294967295)|
| ``uint64`` | Unsigned integer (0 to 18446744073709551615)|
| ``float_`` | Shorthand for ``float64``.|
| ``float16`` | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa|
| ``float32`` | Single precision float: sign bit, 8 bits exponent, 23 bits mantissa|
| ``float64`` | Double precision float: sign bit, 11 bits exponent, 52 bits mantissa|
| ``complex_`` | Shorthand for ``complex128``.|
| ``complex64`` | Complex number, represented by two 32-bit floats|
| ``complex128``| Complex number, represented by two 64-bit floats|
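If you're not sure which type to pick, NumPy's ``np.iinfo`` and ``np.finfo`` helpers report the limits of the integer and floating-point types. A minimal sketch (the values here are only for illustration):
```
import numpy as np

a = np.array([1, 2, 3], dtype=np.int16)
print(a.dtype)                   # int16
print(np.iinfo(np.int16))        # min/max representable by a 16-bit integer
print(np.finfo(np.float32).eps)  # machine epsilon of a 32-bit float
```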
## INTERMEZZO
```
[x**4 for i, x in enumerate(range(10, 0, -1))]
_
[ _**4 for (x, _, _) in [(1, 2, 3), (2, 3, 4)]]
[ tuple([x**4, y**3]) for (x, y, _) in [(1, 2, 3), (2, 3, 4)]]
a = (2, 3, 4)
a.append(5)   # AttributeError: tuples are immutable, so they have no append()
b = a + (5,)  # concatenation builds a new tuple instead
b
assert a != b
(1,2,3), [1, 2, 3]
tuple(range(100))
def A(a, b=0, c=1):
return a+b+c
A(1, 2,)
{1, 2, 23,}
L = [
'/my/path/to/an/interesting/file0',
'/my/path/to/an/interesting/file1',
'/my/path/to/an/interesting/file2',
'/my/path/to/an/interesting/file3',
'/my/path/to/an/interesting/file4',
'/my/path/to/an/interesting/file5',
]
L
[object(), 3, 3.14, 'hello world']
```
---
# The Basics of NumPy Arrays
Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas are built around the NumPy array.
- *Attributes of arrays*: Determining the size, shape, memory consumption, and data types of arrays
- *Indexing of arrays*: Getting and setting the value of individual array elements
- *Slicing of arrays*: Getting and setting smaller subarrays within a larger array
- *Reshaping of arrays*: Changing the shape of a given array
- *Joining and splitting of arrays*: Combining multiple arrays into one, and splitting one array into many
- *Universal functions and broadcasting*
## NumPy Array Attributes
First let's discuss some useful array attributes.
We'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array:
```
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array
```
Each array has attributes ``ndim`` (the number of dimensions), ``shape`` (the size of each dimension), ``size`` (the total size of the array) and ``dtype`` (the data type of the array):
```
print("x3 ndim: ", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
print("dtype:", x3.dtype)
```
## Array Indexing: Accessing Single Elements
In a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
```
x1
x1[0]
x1[-1] # To index from the end of the array, you can use negative indices.
```
In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
```
x2
x2[0, 0]
x2[2, -1]
```
Values can also be modified using any of the above index notation:
```
x2[0, 0] = 12
x2
```
Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.
```
x1[0] = 3.14159 # this will be truncated!
x1
```
## Array Slicing: Accessing Subarrays
Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the *slice* notation, marked by the colon (``:``) character.
The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array ``x``, use this:
``` python
x[start:stop:step]
```
If any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``.
### One-dimensional subarrays
```
x = np.arange(10)
x
x[:5] # first five elements
x[5:] # elements after index 5
x[4:7] # middle sub-array
x[::2] # every other element
x[1::2] # every other element, starting at index 1
```
A potentially confusing case is when the ``step`` value is negative.
In this case, the defaults for ``start`` and ``stop`` are swapped.
This becomes a convenient way to reverse an array:
```
x[::-1] # all elements, reversed
x[5::-2] # reversed every other from index 5
```
### Multi-dimensional subarrays
Multi-dimensional slices work in the same way, with multiple slices separated by commas:
```
x2
x2[:2, :3] # two rows, three columns
x2[:3, ::2] # all rows, every other column
x2[::-1, ::-1]
```
#### Accessing array rows and columns
One commonly needed routine is accessing of single rows or columns of an array:
```
print(x2[:, 0]) # first column of x2
print(x2[0, :]) # first row of x2
print(x2[0]) # equivalent to x2[0, :]
```
### Subarrays as no-copy views
One important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data.
This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.
```
x2
x2_sub = x2[:2, :2]
x2_sub
x2_sub[0, 0] = 99 # if we modify this subarray, the original array is changed too
x2
```
It is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method.
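For instance, here is a minimal sketch of that idiom (the array is created fresh just for illustration):
```
import numpy as np

x2 = np.random.randint(10, size=(3, 4))
x2_sub_copy = x2[:2, :2].copy()  # an independent copy, not a view
x2_sub_copy[0, 0] = 42           # modifying the copy...
print(x2)                        # ...leaves the original array untouched
```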
## Reshaping of Arrays
If you want to put the numbers 1 through 9 in a $3 \times 3$ grid:
```
np.arange(1, 10)
_.shape
__.reshape((3, 3))
x = np.array([1, 2, 3])
x
x.shape
x.reshape((1, 3)) # row vector via reshape
_.shape
x.shape # therefore `reshape` doesn't modify in place the array we are working on
x[np.newaxis, :] # row vector via newaxis
_.shape
x.shape
x.reshape((3, 1)) # column vector via reshape
_.shape
x.shape
x[:, np.newaxis] # column vector via newaxis
_.shape
x.shape
```
### Concatenation of arrays
``np.concatenate`` takes a tuple or list of arrays as its first argument:
```
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])
z = [99, 99, 99]
np.concatenate([x, y, z])
grid = np.array([[1, 2, 3],
[4, 5, 6]])
np.concatenate([grid, grid]) # concatenate along the first axis
np.concatenate([grid, grid], axis=1) # concatenate along the second axis (zero-indexed)
```
For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions:
```
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
[6, 5, 4]])
np.vstack([x, grid]) # vertically stack the arrays
y = np.array([[99],
[99]])
np.hstack([grid, y]) # horizontally stack the arrays
```
### Splitting of arrays
The opposite of concatenation is splitting; we can pass a list of indices giving the split points:
```
x = [1, 2, 3, 99, 99, 3, 2, 1]
x1, x2, x3 = np.split(x, [3, 5])
print(x1, x2, x3)
grid = np.arange(16).reshape((4, 4))
grid
np.vsplit(grid, [2])
np.hsplit(grid, [2])
```
# Computation on NumPy Arrays: Universal Functions
`Numpy` provides an easy and flexible interface to optimized computation with arrays of data.
The key to making it fast is to use *vectorized* operations, generally implemented through NumPy's *universal functions* (ufuncs).
## The Slowness of Loops
Python's default implementation (known as CPython) does some operations very slowly; this is due in part to the dynamic, interpreted nature of the language.
The relative sluggishness of Python generally manifests itself in situations where many small operations are being repeated – for instance looping over arrays to operate on each element.
For example, suppose we want to compute the reciprocal of each value in an array:
```
np.random.seed(0)
def compute_reciprocals(values):
output = np.empty(len(values))
for i in range(len(values)):
output[i] = 1.0 / values[i]
return output
values = np.random.randint(1, 10, size=5)
compute_reciprocals(values)
```
If we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so!
```
big_array = np.random.randint(1, 100, size=1000000)
%timeit compute_reciprocals(big_array)
```
It takes $2.63$ seconds to compute these million operations and to store the result.
It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop.
If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently.
## Introducing UFuncs
For many types of operations, NumPy provides a convenient interface into just this kind of compiled routine.
This is known as a *vectorized* operation.
This can be accomplished by performing an operation on the array, which will then be applied to each element.
```
%timeit (1.0 / big_array)
```
Vectorized operations in NumPy are implemented via *ufuncs*, whose main purpose is to quickly execute repeated operations on values in NumPy arrays.
Ufuncs are extremely flexible – before we saw an operation between a scalar and an array, but we can also operate between two arrays:
```
np.arange(5) / np.arange(1, 6)
```
And ufunc operations are not limited to one-dimensional arrays–they can also act on multi-dimensional arrays as well:
```
x = np.arange(9).reshape((3, 3))
2 ** x
```
_Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression._
### Array arithmetic
NumPy's ufuncs feel very natural to use because they make use of Python's native arithmetic operators:
```
x = np.arange(4)
print("x =", x)
print("x + 5 =", x + 5)
print("x - 5 =", x - 5)
print("x * 2 =", x * 2)
print("x / 2 =", x / 2)
print("x // 2 =", x // 2) # floor division
print("-x = ", -x)
print("x ** 2 = ", x ** 2)
print("x % 2 = ", x % 2)
-(0.5*x + 1) ** 2 # can be strung together also
```
### Trigonometric functions
`NumPy` provides a large number of useful ufuncs; among the most useful are the trigonometric functions. We'll start by defining an array of angles:
```
theta = np.linspace(0, np.pi, 3)
print("theta = ", theta)
print("sin(theta) = ", np.sin(theta))
print("cos(theta) = ", np.cos(theta))
print("tan(theta) = ", np.tan(theta))
```
### Exponents and logarithms
Other common `NumPy` ufuncs are the exponentials and logarithms (NumPy also provides specialized versions such as ``np.expm1`` and ``np.log1p`` that maintain precision for very small inputs):
```
x = [1, 2, 3]
print("x =", x)
print("e^x =", np.exp(x))
print("2^x =", np.exp2(x))
print("3^x =", np.power(3, x))
x = [1, 2, 4, 10]
print("x =", x)
print("ln(x) =", np.log(x))
print("log2(x) =", np.log2(x))
print("log10(x) =", np.log10(x))
```
### Specifying output
For large calculations, it is sometimes useful to be able to specify the array where the result of the calculation will be stored:
```
x = np.arange(5)
y = np.empty(5)
np.multiply(x, 10, out=y)
print(y)
y = np.zeros(10)
np.power(2, x, out=y[::2])
print(y)
```
### Outer products
Finally, any ufunc can compute the output of all pairs of two different inputs using the ``outer`` method:
```
x = np.arange(1, 6)
np.multiply.outer(x, x)
```
# Aggregations: Min, Max, and Everything In Between
## Summing the Values in an Array
As a quick example, consider computing the sum of all values in an array.
Python itself can do this using the built-in ``sum`` function:
```
L = np.random.random(100)
sum(L)
np.sum(L)
big_array = np.random.rand(1_000_000)
%timeit sum(big_array)
%timeit np.sum(big_array)
```
## Minimum and Maximum
Similarly, Python has built-in ``min`` and ``max`` functions:
```
min(big_array), max(big_array)
np.min(big_array), np.max(big_array)
%timeit min(big_array)
%timeit np.min(big_array)
big_array.min(), big_array.max(), big_array.sum()
```
### Multi dimensional aggregates
One common type of aggregation operation is an aggregate along a row or column:
```
M = np.random.random((3, 4))
M
M.sum() # By default, each NumPy aggregation function works on the whole array
M.min(axis=0) # specifying the axis along which the aggregate is computed
M.max(axis=1) # find the maximum value within each row
```
### Other aggregation functions
Additionally, most aggregates have a ``NaN``-safe counterpart that computes the result while ignoring missing values, which are marked by the special IEEE floating-point ``NaN`` value (see the short example after the table):
|Function Name | NaN-safe Version | Description |
|-------------------|---------------------|-----------------------------------------------|
| ``np.sum`` | ``np.nansum`` | Compute sum of elements |
| ``np.prod`` | ``np.nanprod`` | Compute product of elements |
| ``np.mean`` | ``np.nanmean`` | Compute mean of elements |
| ``np.std`` | ``np.nanstd`` | Compute standard deviation |
| ``np.var`` | ``np.nanvar`` | Compute variance |
| ``np.min`` | ``np.nanmin`` | Find minimum value |
| ``np.max`` | ``np.nanmax`` | Find maximum value |
| ``np.argmin`` | ``np.nanargmin`` | Find index of minimum value |
| ``np.argmax`` | ``np.nanargmax`` | Find index of maximum value |
| ``np.median`` | ``np.nanmedian`` | Compute median of elements |
| ``np.percentile`` | ``np.nanpercentile``| Compute rank-based statistics of elements |
| ``np.any`` | N/A | Evaluate whether any elements are true |
| ``np.all`` | N/A | Evaluate whether all elements are true |
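For example, a quick sketch of the difference between an ordinary aggregate and its ``NaN``-safe counterpart (the values are arbitrary):
```
import numpy as np

vals = np.array([1.0, 2.0, np.nan, 4.0])
print(np.sum(vals))      # nan -- a single NaN propagates through the ordinary aggregate
print(np.nansum(vals))   # 7.0 -- the NaN-safe version ignores the missing value
print(np.nanmean(vals))  # 2.333...
```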
# Computation on Arrays: Broadcasting
Another means of vectorizing operations is to use NumPy's *broadcasting* functionality.
Broadcasting is simply a set of rules for applying binary ufuncs (e.g., addition, subtraction, multiplication, etc.) on arrays of different sizes.
## Introducing Broadcasting
Recall that for arrays of the same size, binary operations are performed on an element-by-element basis:
```
a = np.array([0, 1, 2])
b = np.array([5, 5, 5])
a + b
```
Broadcasting allows these types of binary operations to be performed on arrays of different sizes:
```
a + 5
```
We can think of this as an operation that stretches or duplicates the value ``5`` into the array ``[5, 5, 5]``, and adds the results; the advantage of NumPy's broadcasting is that this duplication of values does not actually take place.
We can similarly extend this to arrays of higher dimensions:
```
M = np.ones((3, 3))
M
M + a
```
Here the one-dimensional array ``a`` is stretched, or broadcast across the second dimension in order to match the shape of ``M``.
More complicated cases can involve broadcasting of both arrays:
```
a = np.arange(3)
b = np.arange(3)[:, np.newaxis]
a, b
a + b
```
## Rules of Broadcasting
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays (a worked sketch follows the list):
- Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is *padded* with ones on its leading (left) side.
- Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
- Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
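Here is a small sketch that walks through the rules by hand (the shapes are chosen only for illustration):
```
import numpy as np

M = np.ones((2, 3))   # shape (2, 3)
a = np.arange(3)      # shape (3,)

# Rule 1: a is padded on the left            -> (1, 3)
# Rule 2: the size-1 dimension is stretched  -> (2, 3)
print((M + a).shape)  # (2, 3)

b = np.arange(2)      # shape (2,)
# Rule 1 pads b to (1, 2); the trailing dimensions are 3 and 2,
# neither is 1, so Rule 3 applies and NumPy raises an error:
try:
    M + b
except ValueError as err:
    print(err)
```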
### Centering an array
Imagine you have an array of 10 observations, each of which consists of 3 values, we'll store this in a $10 \times 3$ array:
```
X = np.random.random((10, 3))
Xmean = X.mean(0)
Xmean
X_centered = X - Xmean
X_centered.mean(0) # To double-check, verify that the centered array has a mean near zero
```
### Plotting a two-dimensional function
One place that broadcasting is very useful is in displaying images based on two-dimensional functions.
If we want to define a function $z = f(x, y)$, broadcasting can be used to compute the function across the grid:
```
steps = 500
x = np.linspace(0, 5, steps)  # x and y have 500 steps from 0 to 5
y = np.linspace(0, 5, steps)[:, np.newaxis]
z = np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(z, origin='lower', extent=[0, 5, 0, 5], cmap='viridis')
plt.colorbar();
```
# Comparisons, Masks, and Boolean Logic
Masking comes up when you want to extract, modify, count, or otherwise manipulate values in an array based on some criterion: for example, you might wish to count all values greater than a certain value, or perhaps remove all outliers that are above some threshold.
In NumPy, Boolean masking is often the most efficient way to accomplish these types of tasks.
## Comparison Operators as ufuncs
```
x = np.array([1, 2, 3, 4, 5])
x < 3 # less than
x > 3 # greater than
x != 3 # not equal
(2 * x) == (x ** 2)
```
Just as in the case of arithmetic ufuncs, these will work on arrays of any size and shape:
```
rng = np.random.RandomState(0)
x = rng.randint(10, size=(3, 4))
x
x < 6
```
### Counting entries
To count the number of ``True`` entries in a Boolean array, ``np.count_nonzero`` is useful:
```
np.count_nonzero(x < 6) # how many values less than 6?
np.sum(x < 6)
np.sum(x < 6, axis=1) # how many values less than 6 in each row?
np.any(x > 8) # are there any values greater than 8?
np.any(x < 0) # are there any values less than zero?
np.all(x < 10) # are all values less than 10?
np.all(x < 8, axis=1) # are all values in each row less than 8?
```
## Boolean Arrays as Masks
A more powerful pattern is to use Boolean arrays as masks, to select particular subsets of the data themselves:
```
x
x < 5
x[x < 5]
```
What is returned is a one-dimensional array filled with all the values that meet this condition; in other words, all the values in positions at which the mask array is ``True``.
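Masks combine naturally with aggregates, and Boolean masks can be combined with the bitwise operators ``&``, ``|``, and ``~``. A minimal sketch (the array is generated here just for illustration):
```
import numpy as np

rng = np.random.RandomState(0)
x = rng.randint(10, size=(3, 4))

small = x[x < 5]                 # one-dimensional array of the masked values
print(small.mean(), small.size)  # summarize only the selected values

print(x[(x > 2) & (x < 7)])      # combine two conditions element-wise
```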
# Fancy Indexing
We saw how to access and modify portions of arrays using simple indices (e.g., ``arr[0]``), slices (e.g., ``arr[:5]``), and Boolean masks (e.g., ``arr[arr > 0]``).
We'll look at another style of array indexing, known as *fancy indexing*, that is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars.
Fancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once:
```
rand = np.random.RandomState(42)
x = rand.randint(100, size=10)
x
[x[3], x[7], x[2]] # Suppose we want to access three different elements.
ind = [3, 7, 4]
x[ind] # Alternatively, we can pass a single list or array of indices
```
When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
```
ind = np.array([[3, 7],
[4, 5]])
x[ind]
```
Fancy indexing also works in multiple dimensions:
```
X = np.arange(12).reshape((3, 4))
X
```
Like with standard indexing, the first index refers to the row, and the second to the column:
```
row = np.array([0, 1, 2])
col = np.array([2, 1, 3])
X[row, col]
```
The pairing of indices in fancy indexing follows all the broadcasting rules that we've already seen:
```
X[row[:, np.newaxis], col]
```
Each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations:
```
row[:, np.newaxis] * col
```
Remember: _with fancy indexing that the return value reflects the **broadcasted shape of the indices**, rather than the shape of the array being indexed_.
## Combined Indexing
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
```
X
X[2, [2, 0, 1]] # combine fancy and simple indices
X[1:, [2, 0, 1]] # combine fancy indexing with slicing
mask = np.array([1, 0, 1, 0], dtype=bool)
X[row[:, np.newaxis], mask] # combine fancy indexing with masking
```
## Example: Selecting Random Points
One common use of fancy indexing is the selection of subsets of rows from a matrix.
For example, we might have an $N$ by $D$ matrix representing $N$ points in $D$ dimensions, such as the following points drawn from a two-dimensional normal distribution:
```
mean = [0, 0]
cov = [[1, 2],
[2, 5]]
X = rand.multivariate_normal(mean, cov, 100)
X.shape
plt.scatter(X[:, 0], X[:, 1]);
```
Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
```
indices = np.random.choice(X.shape[0], 20, replace=False)
indices
selection = X[indices] # fancy indexing here
selection.shape
```
Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
```
plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
# overplot large, empty circles at the locations of the selected points
plt.scatter(selection[:, 0], selection[:, 1],
            facecolors='none', edgecolors='black', s=200);
```
## Modifying Values with Fancy Indexing
Fancy indexing can also be used to modify parts of an array:
```
x = np.arange(10)
i = np.array([2, 1, 8, 4])
x[i] = 99
x
x[i] -= 10 # use any assignment-type operator for this
x
```
Notice, though, that repeated indices with these operations can cause some potentially unexpected results:
```
x = np.zeros(10)
x[[0, 0]] = [4, 6]
x
```
Where did the 4 go? The result of this operation is to first assign ``x[0] = 4``, followed by ``x[0] = 6``.
The result, of course, is that ``x[0]`` contains the value 6.
```
i = [2, 3, 3, 4, 4, 4]
x[i] += 1
x
```
You might expect that ``x[3]`` would contain the value 2, and ``x[4]`` would contain the value 3, as this is how many times each index is repeated. Why is this not the case?
Conceptually, this is because ``x[i] += 1`` is meant as a shorthand of ``x[i] = x[i] + 1``. ``x[i] + 1`` is evaluated, and then the result is assigned to the indices in x.
With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
```
x = np.zeros(10)
np.add.at(x, i, 1)
x
```
The ``at()`` method does an in-place application of the given operator at the specified indices (here, ``i``) with the specified value (here, 1).
Another method that is similar in spirit is the ``reduceat()`` method of ufuncs, which you can read about in the NumPy documentation.
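As a quick, hedged illustration of ``reduceat()``: it applies the reduction over the slices between consecutive indices (the indices below are arbitrary):
```
import numpy as np

x = np.arange(8)                      # [0 1 2 3 4 5 6 7]
# sums of x[0:2], x[2:5], and x[5:]
print(np.add.reduceat(x, [0, 2, 5]))  # [ 1  9 18]
```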
## Example: Binning Data
You can use these ideas to efficiently bin data to create a histogram by hand.
For example, imagine we have 100 values and would like to quickly find where they fall within an array of bins.
We could compute it using ``ufunc.at`` like this:
```
np.random.seed(42)
x = np.random.randn(100)
# compute a histogram by hand
bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)
# find the appropriate bin for each x
i = np.searchsorted(bins, x)
# add 1 to each of these bins
np.add.at(counts, i, 1)
# The counts now reflect the number of points
# within each bin–in other words, a histogram:
line, = plt.plot(bins, counts);
line.set_drawstyle("steps")
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
```
Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?
If you dig into the ``np.histogram`` source code (you can do this in IPython by typing ``np.histogram??``), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large...
```
x = np.random.randn(1000000)
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
```
What this comparison shows is that algorithmic efficiency is almost never a simple question. An algorithm efficient for large datasets will not always be the best choice for small datasets, and vice versa.
The key to efficiently using Python in data-intensive applications is knowing about general convenience routines like ``np.histogram`` and when they're appropriate, but also knowing how to make use of lower-level functionality when you need more pointed behavior.
# Sorting Arrays
Up to this point we have been concerned mainly with tools to access and operate on array data with NumPy.
This section covers algorithms related to sorting values in NumPy arrays.
## Fast Sorting in NumPy: ``np.sort`` and ``np.argsort``
Although Python has built-in ``sort`` and ``sorted`` functions to work with lists, NumPy's ``np.sort`` function turns out to be much more efficient and useful.
To return a sorted version of the array *without modifying the input*, you can use ``np.sort``:
```
x = np.array([2, 1, 4, 3, 5])
np.sort(x)
x
```
A related function is ``argsort``, which instead returns the *indices* of the sorted elements:
```
i = np.argsort(x)
i
```
The first element of this result gives the index of the smallest element, the second value gives the index of the second smallest, and so on.
These indices can then be used (via fancy indexing) to construct the sorted array if desired:
```
x[i]
```
### Sorting along rows or columns
```
rand = np.random.RandomState(42)
X = rand.randint(0, 10, (4, 6))
X
np.sort(X, axis=0) # sort each column of X
np.sort(X, axis=1) # sort each row of X
```
Keep in mind that this treats each row or column as an independent array, and any relationships between the row or column values will be lost!
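If you do need to keep each row's values together, one common idiom (sketched here as an aside, not something covered above) is to reorder whole rows using ``np.argsort`` on a single column together with fancy indexing:
```
import numpy as np

rand = np.random.RandomState(42)
X = rand.randint(0, 10, (4, 6))

order = np.argsort(X[:, 0])  # row order that sorts the first column
print(X[order])              # whole rows are moved, so row contents stay intact
```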
## Partial Sorts: Partitioning
Sometimes we're not interested in sorting the entire array, but simply want to find the *k* smallest values in the array. ``np.partition`` takes an array and a number *K*; the result is a new array with the smallest *K* values to the left of the partition, and the remaining values to the right, in arbitrary order:
```
x = np.array([7, 2, 3, 1, 6, 5, 4])
np.partition(x, 3)
```
Note that the first three values in the resulting array are the three smallest in the array, and the remaining array positions contain the remaining values.
*Within the two partitions, the elements have arbitrary order.*
Similarly to sorting, we can partition along an arbitrary axis of a multidimensional array:
```
np.partition(X, 2, axis=1)
```
The result is an array where the first two slots in each row contain the smallest values from that row, with the remaining values filling the remaining slots.
Finally, just as there is a ``np.argsort`` that computes indices of the sort, there is a ``np.argpartition`` that computes indices of the partition.
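A minimal sketch of ``np.argpartition`` (the values are arbitrary):
```
import numpy as np

x = np.array([7, 2, 3, 1, 6, 5, 4])
i = np.argpartition(x, 3)  # indices that partition x around its 4th-smallest value
print(x[i[:3]])            # the three smallest values, in arbitrary order
```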
## Example: k-Nearest Neighbors
Let's quickly see how we might use this ``argsort`` function along multiple axes to find the nearest neighbors of each point in a set.
We'll start by creating a random set of 50 points on a two-dimensional plane:
```
X = rand.rand(50, 2)
plt.scatter(X[:, 0], X[:, 1], s=100);
# compute the distance between each pair of points
dist_sq = np.sum((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2, axis=-1)
dist_sq.shape, np.all(dist_sq.diagonal() == 0)
```
With the pairwise squared distances computed, we can now use ``np.argsort`` to sort along each row.
The leftmost columns will then give the indices of the nearest neighbors:
```
nearest = np.argsort(dist_sq, axis=1)
nearest[:,0]
```
Notice that the first column is just 0 through 49 in order: this is because each point's closest neighbor is itself.
If we're simply interested in the nearest $k$ neighbors, all we need is to partition each row so that the smallest $k + 1$ squared distances come first, with larger distances filling the remaining positions of the array:
```
K = 2
nearest_partition = np.argpartition(dist_sq, K + 1, axis=1)
plt.scatter(X[:, 0], X[:, 1], s=100)
# draw lines from each point to its two nearest neighbors
for i in range(X.shape[0]):
for j in nearest_partition[i, :K+1]:
plt.plot(*zip(X[j], X[i]), color='black')
```
At first glance, it might seem strange that some of the points have more than two lines coming out of them: this is due to the fact that if point A is one of the two nearest neighbors of point B, this does not necessarily imply that point B is one of the two nearest neighbors of point A.
You might be tempted to do the same type of operation by manually looping through the data and sorting each set of neighbors individually. The beauty of our approach is that *it's written in a way that's agnostic to the size of the input data*: we could just as easily compute the neighbors among 100 or 1,000,000 points in any number of dimensions, and the code would look the same.
```
# A side note on function annotations: annotations can be arbitrary expressions
def A(a: int) -> (3 if 0 else 4):
    return 4
A(3)
A.__annotations__  # annotations are stored in a dict on the function object
type(_)
def B(f):          # any function can inspect another function's annotations
    print(f.__annotations__)
B(A)
```
import numpy
numpy.__version__
import numpy as np
import array
L = list(range(10))
A = array.array('i', L)
A
type(A)
[x ** 2 for x in range(10)]
type(_)
np.array([1, 4, 2, 5, 3])
np.array([3.14, 4, 2, 3])
np.array([1, 2, 3, 4], dtype='float32')
np.zeros(10, dtype=int)
np.ones((3, 5), dtype=float)
np.full((3, 5), 3.14)
np.arange(0, 20, 2)
np.linspace(0, 1, 5)
np.random.random((3, 3))
np.random.normal(0, 1, (3, 3))
np.eye(3)
[x**4 for i, x in enumerate(range(10, 0, -1))]
_
[ _**4 for (x, _, _) in [(1, 2, 3), (2, 3, 4)]]
[ tuple([x**4, y**3]) for (x, y, _) in [(1, 2, 3), (2, 3, 4)]]
a = (2, 3, 4)
a.append(5)
b = a + (5,)
b
assert a != b
(1,2,3), [1, 2, 3]
tuple(range(100))
def A(a, b=0, c=1):
return a+b+c
A(1, 2,)
{1, 2, 23,}
L = [
'/my/path/to/an/interesting/file0',
'/my/path/to/an/interesting/file1',
'/my/path/to/an/interesting/file2',
'/my/path/to/an/interesting/file3',
'/my/path/to/an/interesting/file4',
'/my/path/to/an/interesting/file5',
]
L
[object(), 3, 3.14, 'hello world']
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array
print("x3 ndim: ", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
print("dtype:", x3.dtype)
x1
x1[0]
x1[-1] # To index from the end of the array, you can use negative indices.
x2
x2[0, 0]
x2[2, -1]
x2[0, 0] = 12
x2
x1[0] = 3.14159 # this will be truncated!
x1
If any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``.
### One-dimensional subarrays
A potentially confusing case is when the ``step`` value is negative.
In this case, the defaults for ``start`` and ``stop`` are swapped.
This becomes a convenient way to reverse an array:
### Multi-dimensional subarrays
Multi-dimensional slices work in the same way, with multiple slices separated by commas:
#### Accessing array rows and columns
One commonly needed routine is accessing of single rows or columns of an array:
### Subarrays as no-copy views
One important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data.
This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.
It is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method.
## Reshaping of Arrays
If you want to put the numbers 1 through 9 in a $3 \times 3$ grid:
### Concatenation of arrays
``np.concatenate`` takes a tuple or list of arrays as its first argument:
For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions:
### Splitting of arrays
The opposite of concatenation is splitting, we can pass a list of indices giving the split points:
# Computation on NumPy Arrays: Universal Functions
`Numpy` provides an easy and flexible interface to optimized computation with arrays of data.
The key to making it fast is to use *vectorized* operations, generally implemented through NumPy's *universal functions* (ufuncs).
## The Slowness of Loops
Python's default implementation (known as CPython) does some operations very slowly, this is in part due to the dynamic, interpreted nature of the language.
The relative sluggishness of Python generally manifests itself in situations where many small operations are being repeated – for instance looping over arrays to operate on each element.
For example, pretend to compute the reciprocal of values contained in a array:
If we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so!
It takes $2.63$ seconds to compute these million operations and to store the result.
It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop.
If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently.
## Introducing UFuncs
For many types of operations, NumPy provides a convenient interface into just this kind of compiled routine.
This is known as a *vectorized* operation.
This can be accomplished by performing an operation on the array, which will then be applied to each element.
Vectorized operations in NumPy are implemented via *ufuncs*, whose main purpose is to quickly execute repeated operations on values in NumPy arrays.
Ufuncs are extremely flexible – before we saw an operation between a scalar and an array, but we can also operate between two arrays:
And ufunc operations are not limited to one-dimensional arrays–they can also act on multi-dimensional arrays as well:
_Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression._
### Array arithmetic
NumPy's ufuncs feel very natural to use because they make use of Python's native arithmetic operators:
### Trigonometric functions
`NumPy` provides a large number of useful ufuncs, we'll start by defining an array of angles:
### Exponents and logarithms
Another common `NumPy` ufunc are the exponentials (that are useful for maintaining precision with very small inputs)
### Specifying output
For large calculations, it is sometimes useful to be able to specify the array where the result of the calculation will be stored:
### Outer products
Finally, any ufunc can compute the output of all pairs of two different inputs using the ``outer`` method:
# Aggregations: Min, Max, and Everything In Between
## Summing the Values in an Array
As a quick example, consider computing the sum of all values in an array.
Python itself can do this using the built-in ``sum`` function:
## Minimum and Maximum
Similarly, Python has built-in ``min`` and ``max`` functions:
### Multi dimensional aggregates
One common type of aggregation operation is an aggregate along a row or column:
### Other aggregation functions
Additionally, most aggregates have a ``NaN``-safe counterpart that computes the result while ignoring missing values, which are marked by the special IEEE floating-point ``NaN`` value
|Function Name | NaN-safe Version | Description |
|-------------------|---------------------|-----------------------------------------------|
| ``np.sum`` | ``np.nansum`` | Compute sum of elements |
| ``np.prod`` | ``np.nanprod`` | Compute product of elements |
| ``np.mean`` | ``np.nanmean`` | Compute mean of elements |
| ``np.std`` | ``np.nanstd`` | Compute standard deviation |
| ``np.var`` | ``np.nanvar`` | Compute variance |
| ``np.min`` | ``np.nanmin`` | Find minimum value |
| ``np.max`` | ``np.nanmax`` | Find maximum value |
| ``np.argmin`` | ``np.nanargmin`` | Find index of minimum value |
| ``np.argmax`` | ``np.nanargmax`` | Find index of maximum value |
| ``np.median`` | ``np.nanmedian`` | Compute median of elements |
| ``np.percentile`` | ``np.nanpercentile``| Compute rank-based statistics of elements |
| ``np.any`` | N/A | Evaluate whether any elements are true |
| ``np.all`` | N/A | Evaluate whether all elements are true |
# Computation on Arrays: Broadcasting
Another means of vectorizing operations is to use NumPy's *broadcasting* functionality.
Broadcasting is simply a set of rules for applying binary ufuncs (e.g., addition, subtraction, multiplication, etc.) on arrays of different sizes.
## Introducing Broadcasting
Recall that for arrays of the same size, binary operations are performed on an element-by-element basis:
Broadcasting allows these types of binary operations to be performed on arrays of different sizes:
We can think of this as an operation that stretches or duplicates the value ``5`` into the array ``[5, 5, 5]``, and adds the results; the advantage of NumPy's broadcasting is that this duplication of values does not actually take place.
We can similarly extend this to arrays of higher dimensions:
Here the one-dimensional array ``a`` is stretched, or broadcast across the second dimension in order to match the shape of ``M``.
More complicated cases can involve broadcasting of both arrays:
## Rules of Broadcasting
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays:
- Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is *padded* with ones on its leading (left) side.
- Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
- Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
### Centering an array
Imagine you have an array of 10 observations, each of which consists of 3 values, we'll store this in a $10 \times 3$ array:
### Plotting a two-dimensional function
One place that broadcasting is very useful is in displaying images based on two-dimensional functions.
If we want to define a function $z = f(x, y)$, broadcasting can be used to compute the function across the grid:
# Comparisons, Masks, and Boolean Logic
Masking comes up when you want to extract, modify, count, or otherwise manipulate values in an array based on some criterion: for example, you might wish to count all values greater than a certain value, or perhaps remove all outliers that are above some threshold.
In NumPy, Boolean masking is often the most efficient way to accomplish these types of tasks.
## Comparison Operators as ufuncs
Just as in the case of arithmetic ufuncs, these will work on arrays of any size and shape:
### Counting entries
To count the number of ``True`` entries in a Boolean array, ``np.count_nonzero`` is useful:
## Boolean Arrays as Masks
A more powerful pattern is to use Boolean arrays as masks, to select particular subsets of the data themselves:
What is returned is a one-dimensional array filled with all the values that meet this condition; in other words, all the values in positions at which the mask array is ``True``.
# Fancy Indexing
We saw how to access and modify portions of arrays using simple indices (e.g., ``arr[0]``), slices (e.g., ``arr[:5]``), and Boolean masks (e.g., ``arr[arr > 0]``).
We'll look at another style of array indexing, known as *fancy indexing*, that is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars.
Fancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once:
When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
Fancy indexing also works in multiple dimensions:
Like with standard indexing, the first index refers to the row, and the second to the column:
The pairing of indices in fancy indexing follows all the broadcasting rules that we've already seen:
each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations
Remember: _with fancy indexing that the return value reflects the **broadcasted shape of the indices**, rather than the shape of the array being indexed_.
## Combined Indexing
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
## Example: Selecting Random Points
One common use of fancy indexing is the selection of subsets of rows from a matrix.
For example, we might have an $N$ by $D$ matrix representing $N$ points in $D$ dimensions, such as the following points drawn from a two-dimensional normal distribution:
Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
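A sketch of the whole example with synthetic points (the mean and covariance here are arbitrary choices):
```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
mean = [0, 0]
cov = [[1, 2],
       [2, 5]]
X = rng.multivariate_normal(mean, cov, 100)          # 100 points in 2 dimensions

indices = rng.choice(X.shape[0], 20, replace=False)  # 20 distinct row indices
selection = X[indices]                               # fancy indexing picks those rows

plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
plt.scatter(selection[:, 0], selection[:, 1],
            facecolor='none', edgecolor='b', s=200)  # circle the selected points
plt.show()
```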
## Modifying Values with Fancy Indexing
Fancy indexing can also be used to modify parts of an array:
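For instance, on a small made-up array:
```
import numpy as np

x = np.arange(10)
i = np.array([2, 1, 8, 4])

x[i] = 99      # assign to several positions at once
print(x)

x[i] -= 10     # in-place operators also work through fancy indexing
print(x)
```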
Notice, though, that repeated indices with these operations can cause some potentially unexpected results:
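A small sketch of the surprise:
```
import numpy as np

x = np.zeros(10)
x[[0, 0]] = [4, 6]   # index 0 appears twice in the index list
print(x)             # x[0] ends up as 6
```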
Where did the 4 go? The result of this operation is to first assign ``x[0] = 4``, followed by ``x[0] = 6``.
The result, of course, is that ``x[0]`` contains the value 6.
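Repeated indices combined with augmented assignment are even more surprising; for example:
```
import numpy as np

x = np.zeros(10)
i = [2, 3, 3, 4, 4, 4]
x[i] += 1
print(x)    # x[3] is 1 and x[4] is 1, not 2 and 3
```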
You might expect that ``x[3]`` would contain the value 2, and ``x[4]`` would contain the value 3, as this is how many times each index is repeated. Why is this not the case?
Conceptually, this is because ``x[i] += 1`` is meant as a shorthand of ``x[i] = x[i] + 1``. ``x[i] + 1`` is evaluated, and then the result is assigned to the indices in x.
With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
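If you do want the operation applied repeatedly, ufuncs provide the ``at()`` method; a sketch using the same ``x`` and ``i`` as above:
```
import numpy as np

x = np.zeros(10)
i = [2, 3, 3, 4, 4, 4]
np.add.at(x, i, 1)   # the addition is applied once per occurrence of each index
print(x)             # now x[3] is 2 and x[4] is 3
```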
The ``at()`` method does an in-place application of the given operator at the specified indices (here, ``i``) with the specified value (here, 1).
Another method that is similar in spirit is the ``reduceat()`` method of ufuncs, which you can read about in the NumPy documentation.
## Example: Binning Data
You can use these ideas to efficiently bin data to create a histogram by hand.
For example, imagine we have 1,000 values and would like to quickly find where they fall within an array of bins.
We could compute it using ``ufunc.at`` like this:
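A sketch of this approach with synthetic data (the values and bin edges are arbitrary):
```
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)        # 1,000 synthetic values

bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)

i = np.searchsorted(bins, x)     # find the appropriate bin index for each value
np.add.at(counts, i, 1)          # add 1 to the count of each of those bins
```
Timing this against ``np.histogram`` on a small array (for instance with ``%timeit``) is what motivates the comparison below.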
Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?
If you dig into the ``np.histogram`` source code (you can do this in IPython by typing ``np.histogram??``), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large...
What this comparison shows is that algorithmic efficiency is almost never a simple question. An algorithm efficient for large datasets will not always be the best choice for small datasets, and vice versa.
The key to efficiently using Python in data-intensive applications is knowing about general convenience routines like ``np.histogram`` and when they're appropriate, but also knowing how to make use of lower-level functionality when you need more pointed behavior.
# Sorting Arrays
Up to this point we have been concerned mainly with tools to access and operate on array data with NumPy.
This section covers algorithms related to sorting values in NumPy arrays.
## Fast Sorting in NumPy: ``np.sort`` and ``np.argsort``
Although Python has built-in ``sort`` and ``sorted`` functions to work with lists, NumPy's ``np.sort`` function turns out to be much more efficient and useful.
To return a sorted version of the array *without modifying the input*, you can use ``np.sort``:
A related function is ``argsort``, which instead returns the *indices* of the sorted elements:
The first element of this result gives the index of the smallest element, the second value gives the index of the second smallest, and so on.
These indices can then be used (via fancy indexing) to construct the sorted array if desired:
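A compact sketch of both functions:
```
import numpy as np

x = np.array([2, 1, 4, 3, 5])

print(np.sort(x))    # sorted copy; x itself is unchanged

i = np.argsort(x)
print(i)             # indices of the elements in sorted order
print(x[i])          # fancy indexing with those indices yields the sorted array
```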
### Sorting along rows or columns
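The ``axis`` argument selects the direction of the sort; a sketch with a random array:
```
import numpy as np

rng = np.random.default_rng(42)
X = rng.integers(0, 10, (4, 6))

print(np.sort(X, axis=0))   # sort each column independently
print(np.sort(X, axis=1))   # sort each row independently
```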
Keep in mind that this treats each row or column as an independent array, and any relationships between the row or column values will be lost!
## Partial Sorts: Partitioning
Sometimes we're not interested in sorting the entire array, but simply want to find the *k* smallest values in the array. ``np.partition`` takes an array and a number *K*; the result is a new array with the smallest *K* values to the left of the partition, and the remaining values to the right, in arbitrary order:
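For example, with ``K = 3``:
```
import numpy as np

x = np.array([7, 2, 3, 1, 6, 5, 4])
print(np.partition(x, 3))   # the three smallest values occupy the first three slots
```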
Note that the first three values in the resulting array are the three smallest in the array, and the remaining array positions contain the remaining values.
*Within the two partitions, the elements have arbitrary order.*
Similarly to sorting, we can partition along an arbitrary axis of a multidimensional array:
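For instance, moving the two smallest values of each row into the first two slots:
```
import numpy as np

rng = np.random.default_rng(42)
X = rng.integers(0, 10, (4, 6))
print(np.partition(X, 2, axis=1))   # the two smallest values in each row come first
```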
The result is an array where the first two slots in each row contain the smallest values from that row, with the remaining values filling the remaining slots.
Finally, just as there is a ``np.argsort`` that computes indices of the sort, there is a ``np.argpartition`` that computes indices of the partition.
## Example: k-Nearest Neighbors
Let's quickly see how we might use this ``argsort`` function along multiple axes to find the nearest neighbors of each point in a set.
We'll start by creating a random set of 10 points on a two-dimensional plane:
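A sketch of this setup and of the pairwise squared distances computed with broadcasting:
```
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((10, 2))      # 10 synthetic points in two dimensions

# differences has shape (10, 10, 2); summing over the last axis gives a (10, 10)
# matrix of squared distances between every pair of points
differences = X[:, np.newaxis, :] - X[np.newaxis, :, :]
dist_sq = np.sum(differences ** 2, axis=-1)

print(dist_sq.diagonal())    # each point is at zero distance from itself
```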
With the pairwise squared distances computed, we can now use ``np.argsort`` to sort along each row.
The leftmost columns will then give the indices of the nearest neighbors:
Notice that the first column is in order because each point's closest neighbor is itself.
If we're simply interested in the nearest $k$ neighbors, all we need is to partition each row so that the smallest $k + 1$ squared distances come first, with larger distances filling the remaining positions of the array:
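Continuing the sketch (the setup is repeated here so the block runs on its own); the plotted lines connect each point to its two nearest neighbors, which is what the next paragraph refers to:
```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
X = rng.random((10, 2))
dist_sq = np.sum((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2, axis=-1)

nearest = np.argsort(dist_sq, axis=1)   # full sort of every row
print(nearest[:, 0])                    # 0, 1, 2, ...: each point is closest to itself

K = 2
nearest_partition = np.argpartition(dist_sq, K + 1, axis=1)   # partial sort per row

plt.scatter(X[:, 0], X[:, 1], s=100)
for i in range(X.shape[0]):
    for j in nearest_partition[i, :K + 1]:
        # draw a line from point i to each of its K nearest neighbors
        plt.plot(*zip(X[j], X[i]), color='black')
plt.show()
```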
At first glance, it might seem strange that some of the points have more than two lines coming out of them: this is due to the fact that if point A is one of the two nearest neighbors of point B, this does not necessarily imply that point B is one of the two nearest neighbors of point A.
You might be tempted to do the same type of operation by manually looping through the data and sorting each set of neighbors individually. The beauty of our approach is that *it's written in a way that's agnostic to the size of the input data*: we could just as easily compute the neighbors among 100 or 1,000,000 points in any number of dimensions, and the code would look the same.
<h1>Guided Webshell Investigation - MDE Microsoft Sentinel Enrichments</h1>
<p><b>Notebook Version:</b> 1.0<br>
<b>Python Version:</b> Python 3.6<br>
<b>Data Sources Required:</b> MDE SecurityAlert, W3CIIS Log (or similar web logging)</p>
<p>This notebook investigates Microsoft Defender for Endpoint (MDE) webshell alerts. The notebook will guide you through steps to collect MDE alerts for webshell activity and link them to server access logs to identify potential attackers.</p>
<p><b>Configuration Required!</b></p>
<p>This Notebook presumes you have Microsoft Sentinel Workspace settings configured in a config file. If you do not have this in place please <a href ="https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html#">read the docs</a> and <a href="https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb">use this notebook</a> to test.</p>
<h2>How to use:</h2>
<p>This notebook provides a step-by-step investigation to understand MDE webshell alerts on your server. While our example uses IIS logging this notebook can be converted to support any web log type.</p>
<p>After configuration you can investigate two scenarios: a webshell file alert or a webshell command execution alert. Each requires different data, so the notebook branches at Step 3 to support both.</p>
<p>Below you'll find a more detailed description of the two types of investigation</p>
<ul>
<h4><u>Shell File Alert</u></h4>
<p>This alert type will fire when a file that is suspected to be a webshell appears on disk. For this investigation we will start with a known filename that is a suspected shell (e.g. Setconfigure.aspx) and we will try to understand how this webshell was placed on the server.</p>
<h4><u>Shell Command Execution Alert</u></h4>
<p>This alert type will fire when a command is executed on your web server that is suspicious. For this investigation we start with the command line that was executed and the time window that execution took place.</p>
</ul>
<p><b>For both of the above alert types this notebook will allow you to find the following information:</b></p>
<ul>
<li>The attacker IP</li>
<li>The attacker User Agent</li>
<li>The website name the attacker interacted with</li>
<li>The location of the shell on your server</li>
</ul>
<p>Once we have that information this notebook will allow you to investigate the attacker IP, User Agent or both to discover:
<ul>
<li>The files the attacker accessed prior to the installation of the shell</li>
<li>The first time the attacker accessed your server</li>
</ul>
<hr>
<h1>Notebook Initialization</h1>
<p>This cell:
<ul>
<li>Checks for the correct Python version</li>
<li>Checks versions and optionally installs required packages</li>
<li>Imports the required packages into the notebook</li>
<li>Sets a number of configuration options.</li>
</ul>
This should complete without errors. If you encounter errors or warnings look at the following two notebooks:</p>
<ul>
<li><a href="https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/TroubleShootingNotebooks.ipynb">TroubleShootingNotebooks</a></li>
<li><a href="https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb">ConfiguringNotebookEnvironment</a></li>
</ul>
You may also need to do some additional configuration to successfully use functions such as Threat Intelligence service lookup and Geo IP lookup. See the <a href="https://app.reviewnb.com/Azure/Azure-Sentinel-Notebooks/commit/0ba12819a4bf3d9e1c167ca3b1a9738d6df3be35/#Configuration">Configuration</a> section at the end of the notebook and the <a href="https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb">ConfiguringNotebookEnvironment</a> notebook.
```
from pathlib import Path
from IPython.display import display, HTML
REQ_PYTHON_VER = "3.6"
REQ_MSTICPY_VER = "1.0.0"
display(HTML("<h3>Starting Notebook setup...</h3>"))
# If not using Azure Notebooks, install msticpy with
# %pip install msticpy
from msticpy.nbtools import nbinit
nbinit.init_notebook(
namespace=globals()
);
import ipywidgets as widgets
from ipywidgets import HBox
try:
pick_time_range = widgets.Dropdown(
options=['30d', '60d', '90d'],
description="Time range",
disabled=False,
)
workspaces_available = WorkspaceConfig().list_workspaces()
if not workspaces_available:
def_config = WorkspaceConfig()
workspaces_available = {
def_config["workspace_id"]: def_config["workspace_name"]
}
target_workspace = widgets.Dropdown(
options=workspaces_available.keys(),
description="Workspace")
display(HBox([pick_time_range, target_workspace]))
except RuntimeError:
md("""You do not have any Workspaces configured in your config files.
Please run the https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb
to setup these files before proceeding""" ,'bold')
# This time period is used to determine how far back the analytic looks, e.g. 1h, 3d, 7d, 1w
# If you experience timeout errors or the notebook returns too much data, try lowering this
time_range = pick_time_range.value
workspace_name = target_workspace.value
# Collect Microsoft Sentinel Workspace Details from our config file and use them to connect
import warnings
try:
# Update to WorkspaceConfig(workspace="WORKSPACE_NAME") to get alerts from a Workspace other than your default one.
# Run WorkspaceConfig().list_workspaces() to see a list of configured workspaces
ws_config = WorkspaceConfig(workspace=workspace_name)
ws_id = ws_config['workspace_id']
ten_id = ws_config['tenant_id']
md("Workspace details collected from config file")
with warnings.catch_warnings():
warnings.simplefilter(action="ignore")
qry_prov = QueryProvider(data_environment='LogAnalytics')
qry_prov.connect(connection_str=ws_config.code_connect_str)
except RuntimeError:
md("""You do not have any Workspaces configured in your config files.
Please run the https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb
to setup these files before proceeding""" ,'bold')
# This cell will collect an alert summary to help you decide which investigation to launch
alert_summary_query = f'''
let timeRange = {time_range};
SecurityAlert
| where ProviderName =~ "MDATP"
| where DisplayName has_any("Possible IIS web shell", "Possible IIS compromise", "Suspicious processes indicative of a web shell", "A suspicious web script was created", "Possible web shell installation")
| extend AlertType = iff(DisplayName has_any("Possible IIS web shell", "Possible IIS compromise", "Suspicious processes indicative of a web shell", "A suspicious web script was created"), "Webshell Command Alerts", "Webshell File Alerts")
| summarize count(AlertType) by AlertType
| project AlertType, NumberOfAlerts=count_AlertType
'''
display(HTML('<h2>Alert Summary</h2><p>The following alert types have been found on your server:'))
alertout = qry_prov.exec_query(alert_summary_query)
display(alertout)
if (not isinstance(alertout, pd.DataFrame)) or alertout.empty:
print("No MDE Webshell alerts found.")
print("If you think that this is not correct, please check the time range you are using.")
```
<div style="border-left: 6px solid #ccc; border-left-width: 6px; border-left-style: solid; padding: 0.01em 16px; border-color:#0080ff; background-color:#e6f2ff;">
<h1>Before you continue!</h1>
<p>Now it's time to select which type of investigation you would like to try. Above we have provided a summary of the high-level alert types present on your server; if the above table is blank, no alerts were found.</p>
<p><b><i>If the table is empty, this notebook has no alerts to work with and will produce errors in subsequent cells.</i></b></p>
<p>If you have alerts you have a couple of different options.<br> You can <b>click the links</b> to jump to the start of the investigation. </p>
<p><b><a href="#Step-3:-Begin-File-Investigation">Shell file alert investigation</a>:</b> If you would like to conduct an investigation into an ASPX file that has been detected by Microsoft Defender ATP, please run the code block beneath "Begin File Investigation".</p>
<p><b><a href="#Step-3:-Begin-Command-Investigation">Shell command alert investigation</a>:</b> If you would like to conduct an investigation into suspicious command execution on your web server, please run the code block below "Begin Command Investigation".</p>
</div>
<h2>Step 3: Begin File Investigation</h2>
<p>We can now begin our investigation into a webshell file that has been placed on a system in your network. We'll start by collecting relevant events from MDE.</p>
```
# First the notebook collects alerts from MDE with the following query
display(HTML('<h3>Collecting relevant alerts from MDE</h3>'))
mde_events_query = f'''
let timeRange = {time_range};
let scriptExtensions = dynamic([".php", ".jsp", ".js", ".aspx", ".asmx", ".asax", ".cfm", ".shtml"]);
SecurityAlert
| where TimeGenerated > ago(timeRange)
| where ProviderName == "MDATP"
| where DisplayName =~ "Possible web shell installation"
| extend alertData = parse_json(Entities)
| mvexpand alertData
| where alertData.Type == "file"
| where alertData.Name has_any(scriptExtensions)
| extend filename = alertData.Name, directory = alertData.Directory
| project TimeGenerated, filename, directory
'''
aspx_data = qry_prov.exec_query(mde_events_query)
if (not isinstance(aspx_data, pd.DataFrame)) or aspx_data.empty:
print("No MDE Webshell alerts found. Please try a different time range.")
raise ValueError("No MDE Webshell alerts found. Cannot continue")
shells = aspx_data['filename']
# Everything below is presentational
pick_shell = widgets.Dropdown(
options=shells,
description="Webshells",
disabled=False,
)
if isinstance(aspx_data, pd.DataFrame) and not aspx_data.empty:
display(HTML('<p>Below you can see the filename, the directory it was found in, and the time it was found.</p><p>Please select a webshell to investigate before you continue:</p>'))
display(aspx_data)
display(pick_shell)
display(HTML('<hr>'))
display(HTML('<h1>Collect Enrichment Events</h1>'))
display(HTML('<p>Now we will enrich this webshell event with additional information before continuing to find the attacker.</p>'))
else:
md_warn('No relevant alerts were found in your MDE logs, try expanding your timeframe in the config.')
# Now collect enrichments from the W3CIIS log table
dfindex = pick_shell.index
filename = aspx_data.loc[[dfindex]]['filename'].values[0]
directory = aspx_data.loc[[dfindex]]['directory'].values[0]
timegenerated = aspx_data.loc[[dfindex]]['TimeGenerated'].values[0]
# Check the directory matches
directory_split = directory.split("\\")
first_directory = directory_split[-1]
# This query will collect file accessed on the server within the same time window
iis_query = f'''
let scriptExtensions = dynamic([".php", ".jsp", ".js", ".aspx", ".asmx", ".asax", ".cfm", ".shtml"]);
W3CIISLog
| where TimeGenerated >= datetime("{timegenerated}") - 10s
| where TimeGenerated <= datetime("{timegenerated}") + 10s
| where csUriStem has_any(scriptExtensions)
| extend splitUriStem = split(csUriStem, "/")
| extend FileName = splitUriStem[-1] | extend firstDir = splitUriStem[-2]
| where FileName == "{filename}" and firstDir == "{first_directory}"
| summarize StartTime=min(TimeGenerated), EndTime=max(TimeGenerated) by AttackerIP=cIP, AttackerUserAgent=csUserAgent, SiteName=sSiteName, ShellLocation=csUriStem
| order by StartTime asc
'''
iis_data = qry_prov.exec_query(iis_query)
if isinstance(iis_data, pd.DataFrame) and not iis_data.empty:
    display(HTML('<div style="border-left: 6px solid #ccc; border-left-width: 6px; border-left-style: solid; padding: 0.01em 16px; border-color:#00cc69; background-color:#e6fff3;"><h3>Enrichment complete!<br> Please <a href="#Step-4:-Find-the-Attacker">click here</a> to continue your investigation.</h3><br></div><hr>'))
elif isinstance(iis_data, pd.DataFrame) and iis_data.empty:
    md_warn('No events were found in W3CIISLog')
else:
    md_warn('The query failed, it may have timed out')
```
<h2>Step 3: Begin Command Investigation</h2>
<p>To begin the investigation into a command that has been executed by a webshell on your network, we will begin by collecting MDE data.</p>
```
command_investigation_query = f'''
let timeRange = {time_range};
let alerts = SecurityAlert
| where TimeGenerated > ago(timeRange)
| extend alertData = parse_json(Entities), recordGuid = new_guid();
let shellAlerts = alerts
| where ProviderName =~ "MDATP"
| mvexpand alertData
| where alertData.Type == "file" and alertData.Name == "w3wp.exe"
| distinct SystemAlertId
| join kind=inner (alerts) on SystemAlertId;
let alldata = shellAlerts
| mvexpand alertData
| extend Type = alertData.Type;
let filedata = alldata
| extend id = tostring(alertData.$id)
| extend ImageName = alertData.Name
| where Type == "file" and ImageName != "w3wp.exe"
| extend imagefileref = id;
let commanddata = alldata
| extend CommandLine = tostring(alertData.CommandLine)
| extend creationtime = tostring(alertData.CreationTimeUtc)
| where Type =~ "process"
| where isnotempty(CommandLine)
| extend imagefileref = tostring(alertData.ImageFile.$ref);
let hostdata = alldata
| where Type =~ "host"
| project HostName = tostring(alertData.HostName), DnsDomain = tostring(alertData.DnsDomain), SystemAlertId
| distinct HostName, DnsDomain, SystemAlertId;
filedata
| join kind=inner (
commanddata
) on imagefileref
| join kind=inner (hostdata) on SystemAlertId
| project DisplayName, recordGuid, TimeGenerated, ImageName, CommandLine, HostName, DnsDomain
'''
cmd_data = qry_prov.exec_query(command_investigation_query)
if isinstance(cmd_data, pd.DataFrame) and not cmd_data.empty:
display(HTML('''<h2>Step 3.1: Select a command to investigate</h2>
<p>Below you will find the suspicious commands that were executed. Matching GUIDs indicate that the events were linked and likely executed within seconds of each other;
for the purposes of the investigation you can select either one, as the default time windows are wide enough to encapsulate both events. There is a full breakdown of the fields below.</p>
<ul>
<li>DisplayName: The MDE alert display name</li>
<li>recordGuid: A GUID used to track previously linked events</li>
<li>TimeGenerated: The time the log entry was made</li>
<li>ImageName: The executing process image name</li>
<li>CommandLine: The command line that was executed</li>
<li>HostName: The host name of the impacted machine</li>
<li>DnsDomain: The domain of the impacted machine</li>
</ul>
<p>Note: The GUID generated here will change with each execution and is used only by the notebook.</p>'''))
command = cmd_data['recordGuid']
pick_cmd = widgets.Dropdown(
options=command,
description="Commands",
disabled=False,
)
display(HTML('<h3>Select the GUID associated with the command you wish to investigate.</h3>'))
display(pick_cmd)
display(HTML('<hr><h2>Step 3.2: Execute to Collect Events</h2><p>Please select an access threshold; by default the notebook will look for files on the server that have been accessed by 3 or fewer IP addresses.</p>'))
access_threshold = widgets.IntSlider(
value=3,
min=0,
max=15,
step=1,
description="Access Threshold",
disabled=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
display(access_threshold)
else:
if cmd_data.empty:
md_warn('No events were found in SecurityAlert. Continuing will result in errors.')
else:
md_warn('The query failed, it may have timed out. Continuing will result in errors.')
dfindex = pick_cmd.index
imagename = cmd_data.loc[[dfindex]]['ImageName'].values[0]
commandline = cmd_data.loc[[dfindex]]['CommandLine'].values[0]
creationtime = cmd_data.loc[[dfindex]]['TimeGenerated'].values[0]
# Retrieves access to script files on the web server using logs stored in W3CIIS.
# Checks for how many unique client IP addresses access the file, uses access_threshold
script_data_query = f'''
let scriptExtensions = dynamic([".php", ".jsp", ".aspx", ".asmx", ".asax", ".cfm", ".shtml"]);
let alldata = W3CIISLog
| where TimeGenerated >= datetime("{creationtime}") - 30s
| where TimeGenerated <= datetime("{creationtime}") + 30s
| where csUriStem has_any(scriptExtensions)
| extend splitUriStem = split(csUriStem, "/")
| extend FileName = splitUriStem[-1] | extend firstDir = splitUriStem[-2]
| summarize StartTime=min(TimeGenerated), EndTime=max(TimeGenerated) by AttackerIP=cIP, AttackerUserAgent=csUserAgent, csUriStem, filename=tostring(FileName), tostring(firstDir)
| order by StartTime asc;
let fileprev = W3CIISLog
| summarize accessCount=dcount(cIP) by csUriStem;
alldata
| join (
fileprev
) on csUriStem
| extend ShellLocation = csUriStem
| project-away csUriStem, csUriStem1
| where accessCount <= {access_threshold}
'''
aspx_data = qry_prov.exec_query(script_data_query)
if isinstance(aspx_data, pd.DataFrame) and not aspx_data.empty:
display(HTML('<h2>Step 3.3: File to investigate</h2><p>The files in the drop down below were accessed on the web server (and therefore appear in the W3CIISLog table) within 30 seconds of the command executing.</p><p>Only files accessed by no more than the threshold number of client IP addresses selected above are shown.</p>'))
shells = aspx_data['ShellLocation']
pick_shell = widgets.Dropdown(
options=shells,
description="Webshells",
disabled=False,
)
aspx_data_display = aspx_data
aspx_data_display = aspx_data_display.drop(['AttackerIP', 'AttackerUserAgent', 'firstDir', 'EndTime'], axis=1)
aspx_data_display.rename(columns={'filename':'ShellName', 'StartTime':'AccessTime'}, inplace=True)
display(HTML('Please select which file you would like to investigate:'))
display(aspx_data_display)
display(pick_shell)
display(HTML('<hr><h2>Step 3.4: Enrich</h2>'))
else:
if aspx_data.empty:
md_warn('No events were found in W3CIISLog. Continuing will result in errors.')
else:
md_warn('The query failed, it may have timed out. Continuing will result in errors.')
if isinstance(aspx_data, pd.DataFrame) and not aspx_data.empty:
dfindex = pick_shell.index
filename = aspx_data.loc[[dfindex]]['filename'].values[0]
timegenerated = aspx_data.loc[[dfindex]]['StartTime'].values[0]
#Check the directory matches
first_directory = aspx_data.loc[[dfindex]]['firstDir'].values[0]
iis_query = f'''
let scriptExtensions = dynamic([".php", ".jsp", ".js", ".aspx", ".asmx", ".asax", ".cfm", ".shtml"]);
W3CIISLog
| where TimeGenerated >= datetime("{timegenerated}") - 30s
| where TimeGenerated <= datetime("{timegenerated}") + 30s
| where csUriStem has_any(scriptExtensions)
| extend splitUriStem = split(csUriStem, "/")
| extend FileName = splitUriStem[-1] | extend firstDir = splitUriStem[-2]
| where FileName == "{filename}" and firstDir == "{first_directory}"
| summarize StartTime=min(TimeGenerated), EndTime=max(TimeGenerated) by AttackerIP=cIP, AttackerUserAgent=csUserAgent, SiteName=sSiteName, ShellLocation=csUriStem
| order by StartTime asc
'''
iis_data = qry_prov.exec_query(iis_query)
else:
md_warn('There is no data in an object that should have data. A previous step has likely failed, we cannot continue.')
if isinstance(iis_data, pd.DataFrame) and not iis_data.empty:
display(HTML('<div style="border-left: 6px solid #ccc; border-left-width: 6px; border-left-style: solid; padding: 0.01em 16px; border-color:#00cc69; background-color:#e6fff3;"><h3>Enrichment complete!<br> Please <a href="#Step-4:-Find-the-Attacker">click here</a> to continue your investigation.</h3><br></div><hr>'))
else:
if iis_data.empty:
md_warn('No events were found in W3CIISLog. Continuing will result in errors.')
else:
md_warn('The query failed, it may have timed out. Continuing will result in errors.')
```
<h2>Step 4: Find the Attacker</h2>
```
attackerip = iis_data['AttackerIP']
attackerua = iis_data['AttackerUserAgent']
pick_ip = widgets.Dropdown(
options=attackerip,
description="IP Addresses",
disabled=False,
)
pick_ua = widgets.Dropdown(
options=attackerua,
description="User Agents",
disabled=False,
)
pick_window = widgets.Dropdown(
options=['30m','1h','5h','7h', '1d', '3d','7d'],
description="Window",
disabled=False,
)
pick_investigation = widgets.Dropdown(
options=['Investigate Both', 'Investigate IP','Investigate Useragent'],
description="What should we investigate?",
disabled=False,
)
display(HTML('<h2>Candidate Attacker IP Addresses</h2>'))
md('The following attacker IP addresses accessed the webshell during the alert window; continue to Step 5 to choose which to investigate.')
display(iis_data)
display(HTML('<hr><br><h2>Step 5: Select Investigation Parameters</h2>'))
display(HTML('<h3>Attacker To Investigate</h3><p>Now it is time to home in on our attacker. If you have multiple attacker indicators you can repeat from this step.</p><p>Select the parameters to investigate; the default selection is the earliest access within the alert window:</p>'))
display(HBox([pick_ip, pick_ua, pick_investigation]))
widgets.jslink((pick_ip, 'index'), (pick_ua, 'index'))
display(HTML('<h3>Previous file access window</h3><p>To determine what files were accessed immediately before the shell, please pick the window we\'ll use to look back:</p>'))
display(pick_window)
display(HTML('<hr><h2>Step 6: Collect Attacker Enrichments</h2><p>Finally execute the below cell to collect additional details about the attacker.</p>'))
queryWindow = pick_window.value # Lookback window
investigation_param = pick_investigation.index # 0 = both, 1 = ip, 2 = ua
dfindex = pick_ip.index # contains dataframe index (int)
attackerip = str(pick_ip.value)
attackerua = iis_data.loc[[dfindex]]['AttackerUserAgent'].values[0]
attackertime = iis_data.loc[[dfindex]]['StartTime'].values[0]
sitename = iis_data.loc[[dfindex]]['SiteName'].values[0]
shell_location = iis_data.loc[[dfindex]]['ShellLocation'].values[0]
access_data = ['','']
first_server_access_data = ['','']
def iis_access_ip():
iis_access_ip = f'''
let scriptExtensions = dynamic([".php", ".jsp", ".js", ".aspx", ".asmx", ".asax", ".cfm", ".shtml"]);
W3CIISLog
| where TimeGenerated >= datetime("{attackertime}") - {queryWindow}
| where TimeGenerated <= datetime("{attackertime}")
| where sSiteName == "{sitename}"
| where cIP == "{attackerip}"
| order by TimeGenerated desc
| project TimeAccessed=TimeGenerated, SiteName=sSiteName, ServerIP=sIP, FilesTouched=csUriStem, AttackerIP=cIP
| where FilesTouched has_any(scriptExtensions)
| order by TimeAccessed asc
'''
#Find the first time the attacker accessed the webserver
first_server_access_ip = f'''
W3CIISLog
| where TimeGenerated > ago(30d)
| where sSiteName == "{sitename}"
| where cIP == "{attackerip}"
| order by TimeGenerated asc
| take 1
| project TimeAccessed=TimeGenerated, Site=sSiteName, FileAccessed=csUriStem
| order by TimeAccessed asc
'''
access_data = qry_prov.exec_query(iis_access_ip)
first_server_access_data = qry_prov.exec_query(first_server_access_ip)
return access_data, first_server_access_data
def iis_access_ua():
iis_access_ua = f'''
let scriptExtensions = dynamic([".php", ".jsp", ".js", ".aspx", ".asmx", ".asax", ".cfm", ".shtml"]);
W3CIISLog
| where TimeGenerated >= datetime("{attackertime}") - {queryWindow}
| where TimeGenerated <= datetime("{attackertime}")
| where sSiteName == "{sitename}"
| where csUserAgent == "{attackerua}"
| order by TimeGenerated desc
| project TimeAccessed=TimeGenerated, SiteName=sSiteName, ServerIP=sIP, FilesTouched=csUriStem, AttackerIP=cIP, AttackerUserAgent=csUserAgent
| where FilesTouched has_any(scriptExtensions)
| order by TimeAccessed asc
'''
#Find the first time the attacker accessed the webserver
first_server_access_ua = f'''
W3CIISLog
| where TimeGenerated > ago(30d)
| where sSiteName == "{sitename}"
| where csUserAgent == "{attackerua}"
| order by TimeGenerated asc
| take 1
| project TimeAccessed=TimeGenerated, Site=sSiteName, FileAccessed=csUriStem
| order by TimeAccessed asc
'''
access_data = qry_prov.exec_query(iis_access_ua)
first_server_access_data = qry_prov.exec_query(first_server_access_ua)
return access_data, first_server_access_data
first_shell_index = None
if investigation_param == 1:
display(HTML('<p>Querying for attacker IP</p>'))
result = iis_access_ip()
access_data[0] = result[0]
first_server_access_data[0] = result[1]
first_shell_index = access_data[0][access_data[0].FilesTouched==shell_location].first_valid_index()
elif investigation_param == 2:
display(HTML('<p>Querying for attacker UA</p>'))
result = iis_access_ua()
access_data[1] = result[0]
first_server_access_data[1] = result[1]
first_shell_index = access_data[1][access_data[1].FilesTouched==shell_location].first_valid_index()
elif investigation_param == 0:
display(HTML('<p>Querying for attacker IP and UA</p>'))
result_ip = iis_access_ip()
result_ua = iis_access_ua()
access_data[0] = result_ip[0]
access_data[1] = result_ua[0]
first_server_access_data[0] = result_ip[1]
first_server_access_data[1] = result_ua[1]
first_shell_index = access_data[0][access_data[0].FilesTouched==shell_location].first_valid_index()
first_shell_index_ua = access_data[1][access_data[1].FilesTouched==shell_location].first_valid_index()
display(HTML('<div style="border-left: 6px solid #ccc; border-left-width: 6px; border-left-style: solid; padding: 0.01em 16px; border-color:#00cc69; background-color:#e6fff3;"><h3>Enrichment complete!</h3><p>Continue to generate your report</p><br></div><hr><h2>Step 7: Generate Report</h2>'))
attackerua = attackerua.replace("+", " ")
display(HTML(f'''
<h2> Attack Summary</h2>
<div style="border-left: 6px solid #ccc; border-left-width: 6px; border-left-style: solid; padding: 0.01em 16px; border-color:#1aa3ff; background-color: #f2f2f2;">
<p></p>
<p><b>Attacker IP: </b>{attackerip}</p>
<p><b>Attacker user agent: </b>{attackerua}</p>
<p><b>Webshell installed: </b>{shell_location}</p>
<p><b>Victim site: </b>{sitename}</p>
<br>
</div>
'''))
look_back = 0
# No results
if first_shell_index is None:
first_shell_index = 0
# Our default look back is 5 files, if there are not 5 files we take what we can get
elif first_shell_index < 5:
look_back = first_shell_index
else:
look_back = 5
if investigation_param == 1:
display(HTML('<h2>File history</h2>'))
if first_shell_index > 0:
print('The files the attacker IP \"'+attackerip+'\" accessed prior to the webshell installation were:')
display(access_data[0][first_shell_index-look_back:first_shell_index+1])
else:
print(f'No files were accessed by the attacker prior to webshell install; try expanding the query window (currently: {queryWindow})')
display(HTML('<h2>Earliest access</h2><p>In the last 30 days the earliest known access to the server from the attacker IP was:</p>'))
display(first_server_access_data[0])
elif investigation_param == 2:
display(HTML('<h2>File history</h2>'))
if first_shell_index > 0:
print('The files the attacker UA \"'+attackerua+'\" accessed prior to the webshell installation were:')
display(access_data[1][first_shell_index-look_back:first_shell_index+1])
else:
print(f'No files were accessed by the attacker prior to webshell install; try expanding the query window (currently: {queryWindow})')
display(HTML('<h2>Earliest access</h2><p>In the last 30 days the earliest known access to the server from the attacker UA was:</p>'))
display(first_server_access_data[1])
elif investigation_param == 0:
look_back_ua = 0
if first_shell_index_ua is None:
first_shell_index_ua = 0
elif first_shell_index_ua < 5:
look_back_ua = first_shell_index_ua
else:
look_back_ua = 5
display(HTML('<h2>File history</h2>'))
if first_shell_index > 0 or first_shell_index_ua > 0:
print('The files the attacker IP \"'+attackerip+'\" accessed prior to the webshell installation were:')
display(access_data[0][first_shell_index-look_back:first_shell_index+1])
print('The files the attacker UA \"'+attackerua+'\" accessed prior to the webshell installation were:')
display(access_data[1][first_shell_index_ua-look_back_ua:first_shell_index_ua+1])
else:
print(f'No files were accessed by the attacker prior to webshell install; try expanding the query window (currently: {queryWindow})')
display(HTML('<h2>Earliest access</h2><p>In the last 30 days the earliest known access to the server from the attacker IP was:</p>'))
display(first_server_access_data[0])
display(HTML('<p>In the last 30 days the earliest known access to the server from the attacker UA was:</p>'))
display(first_server_access_data[1])
```
***Re-Co***
*housing cooperative society*
Av. de Morges 63
1004 Lausanne
info@re-co.ch
-----
**General Assembly**
15 January 2022, 17:00-19:00
Av. de Morges 63
1004 Lausanne
**Present**
The list of founders present is appended to these minutes.
-----
# Minutes of the constitutive assembly of the housing cooperative society ***Re-Co***, whose registered office is in Lausanne
### 1. Agenda
*The agenda was sent to all participants on Monday 13 December 2021.*
> <sup>1</sup> Adoption of the agenda
> <sup>2</sup> General presentation of *Re-Co*
> <sup>3</sup> Adoption of the articles of association:
> - Source of the articles and provision for an operational charter
> - Adoption
> - Election of the board and of the presidency
>
> <sup>4</sup> Membership fees and cooperative shares
> <sup>5</sup> Election of the auditing body
> <sup>6</sup> 2022 activities
> <sup>7</sup> Acknowledgements
>
> Lucas Uhlmann outlines how the constitutive assembly of Re-Co will proceed.
> The constitutive assembly is not an ordinary general assembly.
> This assembly serves to formally establish the cooperative.
### 2. General presentation
> Lucas Uhlmann recalls the objectives and principles gathered on the re-co.ch website.
> The planned overall organization is described: besides the general assembly, the board and the auditing body, Re-Co will be made up of operational sections responsible respectively for
> - Communication
> - Finances
> - Architecture
>
> All members of Re-Co are cooperators and as such hold at least one share of the cooperative society.
### 3. Articles of association
> Re-Co has chosen to adopt in full the model articles of association of the association romande des maîtres d'ouvrage d'utilité publique.
> The assembly decides to supplement these articles with an operational charter within one year. The charter will be submitted to the next general assembly.
>
> Following this discussion, the articles of association are approved by the assembly.
>
> Participants can only be formally admitted as members once the cooperative has been entered in the commercial register, and under the conditions defined by the articles of association.
**Board 2022**
> The seven founding members of Re-Co are appointed members of the board:
>- Antoine Girardin
>- Grégoire Henrioud
>- Mélanie Rouge
>- Marie Sigrist
>- Lucas Uhlmann
>- Elliot Vaucher
>- Marie-Pascale Wellinger
>
> The board members are elected by acclamation; Lucas Uhlmann is elected president.
> It is decided that for 2022 only the board members will be entered in the commercial register.
### 4. Membership fee and cooperative share
> Lucas Uhlmann resumes chairing the meeting.
> The board reiterates its intention to waive a membership fee, as stated in the articles of association.
> In the spirit of making participation in Re-Co accessible, the cooperative share is set at CHF 100.-
> The board reserves the right to specify the number of shares required to move into a dwelling, in accordance with the articles of association.
### 6. Election of the auditing body
> The board waives the election of an auditing body by way of opting out.
> The declaration of waiver is appended to these minutes.
### 7. Acknowledgements
> The president thanks the members for their presence and for their contributions to the cooperative, which made its foundation possible.
> The meeting is adjourned at 20:00.
---
**The above-mentioned articles of association were adopted at the constitutive assembly of 15 January 2022.**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# configure df options
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', None)
pd.options.display.float_format = '{:,.5f}'.format
import warnings
warnings.filterwarnings("ignore")
```
### Read samples from one station
```
station_code = 'SONDOC'
value_field = 'max'
```
#### Read true raw samples
```
df = pd.read_csv('../../dataset/final/bentre-cleaned.csv', parse_dates=['date'])
df.set_index('date', inplace=True)
df = df[df['code'] == station_code]
# How samples distributed
df.groupby(df.index.year).count()
```
From 2002 to 2010, the samples are complete for the dry seasons: 181 days for normal years and 182 for leap years (January to June).
2011 and 2018 have samples from January to May (151 days).
2012 to 2016 have fewer samples, with missing dates.
2017 has no samples at all.
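The gaps can be inspected directly from the daily index (a small sketch; it assumes `df` from the cell above, already filtered to the station, is still in scope):
```
# count how many calendar days are missing per year for this station
full_range = pd.date_range(df.index.min(), df.index.max(), freq='D')
missing = full_range.difference(df.index)
print(f'{len(missing)} missing days in total')
print(pd.Series(missing.year).value_counts().sort_index())
```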
#### Reread prepared train samples
```
train_df = pd.read_csv(f'../../dataset/final/tops/{station_code}.csv', parse_dates=['date'])
# set index to time-series based 'date'
train_df.set_index('date', inplace=True)
train_df.index
# set the index frequency to D (daily); this fails if the dataset has missing/discontinuous timestamps
train_df.index.freq = 'D'
train_df.index
train_df.info()
# sort by date index
train_df.sort_index(inplace=True)
train_df.head()
train_df.tail()
```
### ARIMA Self Help
```
from statsmodels.tsa.arima_model import ARIMA
help(ARIMA)
```
#### Reread prepared test samples
Use 2011
```
test_year = 2011
test_df = pd.read_csv(f'../../dataset/final/stations/{station_code}.csv', parse_dates=['date'])
test_df.set_index('date', inplace=True)
test_df = test_df[test_df.index.year == test_year]
test_df = test_df[f'{test_year}-01-01':f'{test_year}-05-31']
test_df.index
test_df.index.freq = 'D'
test_df.index
```
#### Resample train and test datasets to monthly data
Train dataset
```
train_data = train_df[value_field].resample('MS').mean()
train_data.index
train_data.head()
train_data.tail()
train_df[value_field].plot(label='Daily train data', legend=True)
train_data.plot(figsize=(20, 10), label='Monthly mean train data', legend=True);
```
Test dataset
```
test_data = test_df[value_field].resample('MS').mean()
test_data.index
test_data.head()
test_data.tail()
test_df[value_field].plot(label='Daily test data', legend=True)
test_data.plot(figsize=(20, 10), label='Monthly mean test data', legend=True);
```
Train vs. test datasets
```
train_data.plot(legend=True, label='TRAIN')
test_data.plot(legend=True, label='TEST', figsize=(20,10));
```
### Try out some simpler models
```
# seasonal adjustment
adjustment = 'additive'
#adjustment = 'multiplicative'
# Annual
season_length = 12 #test_data.shape[0] # same length of test data
season_length
```
1. Holt-Winters method via Exponential Smoothing
```
from statsmodels.tsa.holtwinters import ExponentialSmoothing
hw_model = ExponentialSmoothing(train_data,
trend=adjustment, seasonal=adjustment,
seasonal_periods=season_length).fit()
hw_prediction = hw_model.forecast(len(test_data))
hw_prediction.head(10)
hw_prediction.tail(10)
# plot prediction vs. true values
train_data.plot(legend=True, label='TRAIN')
test_data.plot(legend=True, label='TEST')
hw_prediction.plot(legend=True, label='PREDICTION', figsize=(20, 10));
# plot prediction vs. true values on test set (zoomed version)
train_data.plot(legend=True, label='TRAIN')
test_data.plot(legend=True, label='TEST', figsize=(12,8))
hw_prediction.plot(legend=True, label='PREDICTION', xlim=[f'{test_year - 1}-01-01', f'{test_year}-05-31']);
```
#### Evaluating Prediction against test set
```
# Option 1: use scikit-learn's implementations
from sklearn.metrics import mean_squared_error, mean_absolute_error
test_data.describe()
hw_prediction.describe()
```
The mean of the test data is 6.60, while the mean of the prediction is 1.66.
```
mae = mean_absolute_error(test_data, hw_prediction)
mae
mse = mean_squared_error(test_data, hw_prediction)
mse
rmse = np.sqrt(mse)
rmse
```
##### Holt-Winters prediction is GOOD ENOUGH
RMSE = 1.40 vs. test data STD = 2.62: the error (RMSE) is about 53% of the test STD => GOOD ENOUGH
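As a quick check, the same ratio can be computed directly (a minimal sketch; it assumes `rmse` and `test_data` from the cells above are still in scope):
```
# ratio of the prediction error to the natural spread of the test data
print(f'RMSE / test STD = {rmse / test_data.std():.2f}')
```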
2. Other simple model goes here
### ARIMA models
1. Using AR component
```
from statsmodels.tsa.ar_model import AR, ARResults
model = AR(train_data)
ARfit = model.fit(method='mle', ic='t-stat')
lags = ARfit.k_ar
print(f'Lag: {lags}')
print(f'Coefficients:\n{ARfit.params}')
# general formula to calculate time periods for obtaining predictions
start = len(train_data)
end = start + len(test_data) - 1
start
end
ARprediction = ARfit.predict(start=start, end=end).rename(f'AR({lags}) Prediction')
ARprediction.head()
ARprediction.tail()
test_data.plot(legend=True)
ARprediction.plot(legend=True,figsize=(12,6));
```
The AR-only prediction is not bad, since it captures the mean of the test data.
#### Evaluating models
```
# Option 2: Use statsmodels implementations
from statsmodels.tools.eval_measures import mse, rmse, meanabs, aic, bic
mae = meanabs(test_data, ARprediction)
mae
# Akaike information criterion (AIC)
# we seldom compute AIC alone as it is built into many of the statsmodels tools we use
# Bayesian information criterion (BIC)
# we seldom compute BIC alone as it is built into many of the statsmodels tools we use
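# (assumption, for illustration only) the fitted results object typically exposes these directly, e.g.:
# print(ARfit.aic, ARfit.bic)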
```
2. Pyramid ARIMA aka Auto-ARIMA
```
from pmdarima import auto_arima
help(auto_arima)
stepwise_fit = auto_arima(train_data,
start_p=0, start_q=0,
max_p=6, max_q=6,
m=12, # 12 month season
start_P=0,
seasonal=True,
d=None,
D=1,
trace=True,
error_action='ignore', # we don't want to know if an order does not work
suppress_warnings=True, # we don't want convergence warnings
stepwise=True) # set to stepwise
stepwise_fit.summary()
from statsmodels.tsa.stattools import adfuller
def adf_test(series, title=''):
"""
Pass in a time series and an optional title, returns an ADF report
"""
print(f'Augmented Dickey-Fuller Test: {title}')
result = adfuller(series.dropna(),autolag='AIC') # .dropna() handles differenced data
labels = ['ADF test statistic','p-value','# lags used','# observations']
out = pd.Series(result[0:4],index=labels)
for key,val in result[4].items():
out[f'critical value ({key})']=val
print(out.to_string()) # .to_string() removes the line "dtype: float64"
if result[1] <= 0.05:
print("Strong evidence against the null hypothesis")
print("Reject the null hypothesis")
print("Data has no unit root and is stationary")
else:
print("Weak evidence against the null hypothesis")
print("Fail to reject the null hypothesis")
print("Data has a unit root and is non-stationary")
adf_test(train_data)
# apply differencing (here k_diff=5) and re-check stationarity
from statsmodels.tsa.statespace.tools import diff
diff_df = diff(train_data, k_diff=5)
diff_df.head()
adf_test(diff_df)
# fit model
model = ARIMA(train_data, order=(5,0,1))
results = model.fit()
results.summary()
# predict
predictions = results.predict(start=start, end=end,
dynamic=False,
typ='levels' # linear: return in differences; levels: return in original form
).rename('ARIMA Predictions')
predictions.head()
predictions.tail()
# plot
test_data.plot(legend=True)
predictions.plot(legend=True,figsize=(12,6));
```
3. Seasonal ARIMA
```
from statsmodels.tsa.statespace.sarimax import SARIMAX
model = SARIMAX(train_data,order=(5,0,1),seasonal_order=(1,1,1,12)) # seasonal
#model = SARIMAX(train_data,order=(3,1,1)) # non seasonal
results = model.fit()
results.summary()
predictions = results.predict(start=start, end=end,
dynamic=False, # dynamic=False means that forecasts at each point are generated using the full history up to that point (all lagged values).
typ='levels' # typ='levels' predicts the levels of the original endogenous variables. If we'd used the default typ='linear' we would have seen linear predictions in terms of the differenced endogenous variables
).rename('SARIMAX Predictions')
predictions.head()
predictions.tail()
test_df = pd.read_csv(f'../../dataset/final/stations/{station_code}.csv', parse_dates=['date'])
test_df.set_index('date', inplace=True)
test_df.index.freq = 'D'
test_df.index
test_df.sort_index(inplace=True)
test_df = test_df[(test_df.index.year >= 2011) & (test_df.index.year <= test_year)]
test_df = test_df[f'2011-01-01':f'{test_year}-05-31']
test_data = test_df[value_field].resample('MS').mean()
test_data.head()
test_data.plot(legend=True)
predictions.plot(legend=True,figsize=(12,6));
```
MSE/RMSE for ARIMA/SARIMAX
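The corresponding error metrics for the SARIMAX forecast can be computed in the same way as before (a minimal sketch; it assumes `test_data` and the SARIMAX `predictions` from the cells above cover the same five months):
```
from statsmodels.tools.eval_measures import mse, rmse
import numpy as np

# compare on raw values to avoid index-alignment surprises
sarimax_mse = mse(np.asarray(test_data), np.asarray(predictions))
sarimax_rmse = rmse(np.asarray(test_data), np.asarray(predictions))
print(f'SARIMAX MSE: {sarimax_mse:.3f}, RMSE: {sarimax_rmse:.3f}, test data STD: {test_data.std():.3f}')
```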
| 0.678859 | 0.834609 |
# Milestone 1
## DSCI 525 Web and Cloud Computing
## Group 16
### Authors: Jared Splinter, Steffen Pentelow, Chen Zhao, Ifeanyi Anene
This notebook downloads various observed and simulated rainfall data sets from New South Wales, Australia over the period of 1889 - 2014. The data are then combined and basic exploratory data analyses are conducted using both Python and R programming languages.
## Import packages
```
import re
import os
import zipfile
import requests
from urllib.request import urlretrieve
import json
import rpy2.rinterface
import dask.dataframe as dd
import pandas as pd
from memory_profiler import memory_usage
import pyarrow.dataset as ds
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.feather as feather
import rpy2.rinterface
import rpy2_arrow.pyarrow_rarrow as pyra
%load_ext rpy2.ipython
%load_ext memory_profiler
%%R
library(dplyr)
library(arrow)
```
## Data Download
The following code chunk downloads the data used in the subsequent analyses. The data are downloaded from 'figshare.com'. The file 'data.zip' is saved to a local directory called 'data'.
```
%%time
%%memit
# Print out time and memory taken for downloading data
# This code is adapted from DSCI 525 lecture demonstration notebook (Gittu George, 2021,
# https://github.ubc.ca/MDS-2020-21/DSCI_525_web-cloud-comp_students/blob/master/Lectures/Lecture_1_2.ipynb)
url = f"https://api.figshare.com/v2/articles/14096681"
headers = {"Content-Type": "application/json"}
output_directory = "../data/"
response = requests.request("GET", url, headers=headers)
data = json.loads(response.text)
files = data["files"]
for file in files:
if file["name"] in "data.zip":
os.makedirs(output_directory, exist_ok=True)
urlretrieve(file["download_url"], output_directory + file["name"])
```
After it has been downloaded locally, 'data.zip' is extracted and stored in the 'data' directory.
```
%%time
%%memit
# Print out time and memory taken to extract data
with zipfile.ZipFile(os.path.join(output_directory, "data.zip"), "r") as f:
f.extractall(output_directory)
```
### Discussion:
Loading each .csv file into memory, concatenating it with the other .csv files, then writing back to disk is computationally and memory intensive. To load each .csv file, it must be deserialized (a process that takes some time with large files), then stored in RAM while being manipulated. Assuming a computer has sufficient RAM to hold all of the files, they can be concatenated together (e.g., using Pandas or Numpy), but then must be serialized again to be saved as a large .csv file. It would be better to be able to concatenate the files directly on disk or at least have them saved in a format where they could be read and written directly to and from disk without serialization/deserialization.
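One way to avoid holding everything in RAM at once is to stream the files directly on disk. The sketch below illustrates the idea only (it is not the approach used in the next cell, it assumes all files share an identical header, and it ignores the extra `model` column added in the real combining step; `csv_paths` and `out_path` are hypothetical names):
```
import shutil

def stream_concat(csv_paths, out_path):
    """Append CSV files that share a header into one output file without loading them into RAM."""
    with open(out_path, "w") as out:
        for i, path in enumerate(csv_paths):
            with open(path) as f:
                header = f.readline()
                if i == 0:
                    out.write(header)   # keep the shared header only once
                shutil.copyfileobj(f, out)  # stream the remaining rows straight to disk
```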
## Combining Data
The following code chunk combines all of the unzipped rainfall data .csv files into a single file called 'combined_data.csv'. This process is accomplished by creating a pandas dataframe called `full_df`, then one by one loading each .csv file and concatenating it with `full_df`. This requires that all of the .csv files be read into a pandas dataframe variable and held in RAM at once. In this case, this requires that almost 7 GB of data be held in RAM and manipulated. Some computers will not be able to perform this data combining operation because they do not have sufficient RAM. Even for systems which have sufficient RAM, performing simple operations (such as concatenation) on a variable of this size is time consuming. To demonstrate this, below the code chunk, we have included screenshots of the time and memory usage for the execution of this data combining operation. To summarize, the time taken to complete this operation on each system is listed below (along with some general hardware specifications):
1. Wall time: 7min 9s; Peak memory: 6891.53 MiB
- Processor: i7-10510U (4 cores, up to 4.90 GHz)
- RAM: 16 GB
2. Wall time: 9min 46s; Peak memory: 3097.45 MiB
- Processor: i5
- RAM: 8 GB
3. Wall time: 6min 5s; Peak memory: 7265.16 MiB
- Processor: i7-8700K (6 cores, up to 3.70 GHz)
- RAM: 16 GB
4. Wall time: 23min 43s; Peak memory: 2541.36 MiB
- Current Computer
- Processor i5-5300U (4 cores)
- RAM: 8 GB
```
%%time
%%memit
# Print out time and memory taken to merge and save csv files
file_names = os.listdir(output_directory)
file_names = [file for file in file_names if file[-4:] == ".csv"]
cols = ["lat_min", "lat_max", "lon_min", "lon_max", "rain (mm/day)"]
full_df = pd.DataFrame(columns=["model"] + cols)
full_df.index.rename("time", inplace=True)
for file in file_names:
result = re.search("^.*(?=_daily)", file)
if result:
model_name = result.group(0)
full_df = pd.concat(
[
full_df,
pd.read_csv(output_directory + file, index_col=0).assign(
model=model_name
),
]
)
full_df.to_csv(output_directory + "combined_data.csv")
```
> **NOTE:** This notebook was last run with an i5-5300U CPU @ 2.30 GHz (4 CPUs) and 8 GB of RAM, and thus the run times in this notebook are longer. The runs of the above code cell on other machines are shown below.
1. Processor: i7-10510U (4 cores, up to 4.90 GHz); RAM: 16 GB

2. Processor: 2.3 GHz Quad-Core Intel Core i5; RAM: 8GB

3. Processor: i7-8700K (6 cores, up to 3.70 GHz); RAM: 16 GB

## Task 5. Load the combined CSV to memory and perform a simple EDA
### 1. Investigate at least 2 approaches and perform a simple EDA
```
full_df.head()
full_df.info()
full_df.dtypes
```
#### Method 1: Loading in Chunks
```
%%time
%%memit
import dask.dataframe as dd
### Code adapted from DSCI 525 Lecture ipynb notebook (Gittu George, 2021)
counts = pd.Series(dtype=int)
for chunk in pd.read_csv("../data/combined_data.csv", chunksize=10_000_000):
counts = counts.add(chunk["model"].value_counts(), fill_value=0)
print(counts.astype(int))
```
#### Method 2: Using Dask
```
%%time
%%memit
### Code adapted from DSCI 525 Lecture ipynb notebook (Gittu George, 2021)
dask_df = dd.read_csv("../data/combined_data.csv")
print(dask_df["model"].value_counts().compute())
```
#### Method 3: Loading just the columns we want
```
%%time
%%memit
# The only column we want is the model column
model_df = pd.read_csv("../data/combined_data.csv", usecols=["model"])
print(model_df["model"].value_counts())
```
### 2. Observations discussion.
- Loading just the column we want seems to have the shortest CPU times (user 30.9 s, sys: 2.3 s, total: 33.2 s) and wall time (33.9 s).
- **NOTE:** These numbers come from a previous run; in the current run, Dask had the shortest run time, with a wall time of 1min 43s, 3 seconds faster than loading just the column.
- Loading the combined data using Dask has a shorter wall time (40.6 s) than loading in chunks, however, it has longer CPU times (user 1min 24s, sys: 18.6 s, total: 1min 43s) than loading in chunks (user 59.6 s, sys: 7.12 s, total: 1min 6s).
- Loading just the column we want has the minimum peak memory and increment used (1166.77 MiB and 780.00 MiB), whilst loading in chunks has the maximum peak memory and increment (1873.48 MiB, increment: 1458.30 MiB).
- It is also worth noting that the memory usage reported by full_df.info() was 3.3+ GB. Thus, all of these loading methods saved us considerable memory.
- In conclusion, loading just the column we want gives us the optimal time and space savings.
- **NOTE:** In the latest run, Dask had a slightly faster wall time; however, loading just the column we want is still better in terms of memory. Since the wall times of Dask and the column-only method are similar, loading just the column remains the preferred approach.
## Task 6. Perform a simple EDA in R
### 1. Store data in different format
Here we will write the data in 2 more formats to compare the running time and occupied storage across formats. The formats covered in this section include:
- csv format
- feather format
- parquet format
```
%%time
dataset = ds.dataset("../data/combined_data.csv", format="csv")
table = dataset.to_table()
```
#### Method 1: Feather Format
```
%%time
feather.write_feather(table, "../data/example.feather")
```
#### Method 2: Parquet format
```
%%time
pq.write_table(table, "../data/example.parquet")
```
#### Check the size of data in all different formats
```
%%sh
du -sh ../data/combined_data.csv
du -sh ../data/example.feather
du -sh ../data/example.parquet
```
### Discussion:
- The parquet file format takes the least storage, while the feather file is twice as big as the parquet file. The csv file is the largest, which we may not want to store on local machines.
- It takes less time to write the feather file than it takes to write the parquet file.
- The parquet file takes more time to write but uses less memory when loaded, because the data goes through more layers of encoding and compression while being saved. If we are limited by storage capacity, the parquet file may be ideal; if we are more concerned about write time, the feather format may be more suitable (see the sketch below).
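As a small illustration of the columnar layout (a hedged sketch; it assumes the `example.parquet` file written above exists), a single column can be read back without parsing the rest of the file:
```
import pyarrow.parquet as pq

# read only the `model` column from the parquet file written above
model_only = pq.read_table("../data/example.parquet", columns=["model"])
print(model_only.num_rows, model_only.column_names)
```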
### 2. Transfer the Dataframe from Python to R and perform EDA
Here we will experiment with 3 exchange approaches to transfer the loaded dataset from Python to R and perform EDA. In the end, we will pick the most appropriate approach. The exchange approaches in this section include:
- Arrow exchange
- feather file exchange
- parquet file exchange
#### Arrow exchange and EDA
```
%%time
r_table = pyra.converter.py2rpy(table)
%%time
%%R -i r_table
start_time <- Sys.time()
head_df <- head(r_table)
glimpse_df <- glimpse(r_table)
model_count <- r_table %>% collect() %>% count(model)
end_time <- Sys.time()
print(class(r_table))
print(head_df)
print(glimpse_df)
print(model_count)
print(end_time - start_time)
```
#### Feather file exchange and EDA
```
%%time
%%R
start_time <- Sys.time()
r_table <- arrow::read_feather("../data/example.feather")
head_df <- head(r_table)
glimpse_df <- glimpse(r_table)
model_count <- r_table %>% count(model)
end_time <- Sys.time()
print(class(r_table))
print(head_df)
print(glimpse_df)
print(model_count)
print(end_time - start_time)
```
#### Parquet file exchange and EDA
```
%%time
%%R
start_time <- Sys.time()
r_table <- arrow::read_parquet("../data/example.parquet")
head_df <- head(r_table)
glimpse_df <- glimpse(r_table)
model_count <- r_table %>% count(model)
end_time <- Sys.time()
print(class(r_table))
print(head_df)
print(glimpse_df)
print(model_count)
print(end_time - start_time)
```
### Discussion:
- Although the running time for the Arrow exchange and EDA seems to be the shortest, we also need to count the time taken to convert the Arrow table from Python to R
- The total time consumption for Arrow exchange and EDA, feather file loading and EDA, and parquet file loading and EDA is roughly the same
- Considering the size of the data we are working with all 3 methods tested run quickly
- Arrow exchange and the functions associated are still in development. Some functions and applications are limited and may be unstable. Further development is likely needed.
- Exchanging data by writing it to parquet from Python and then reading it in R is an efficient way to deal with large data, and it is widely used in industry.
### 3. Summary
Based on our observations, we will pick the parquet file format to transfer data from Python to R for the following reasons:
1. The Arrow memory format is a unified way to represent data in memory for efficient analytic operations. It saves time and storage on serialization and deserialization. Parquet is a columnar file format that can write Arrow data to disk and handle big data efficiently.
2. Earlier we concluded that a parquet file can store the data using much less space and memory. This can speed up the process considerably when working with the data.
3. Although it takes more time to write a parquet file compared to a feather file, the parquet file applies more layers of encoding and compression. For data of the size used in this exercise, the extra write time is not significant. Moreover, since the parquet file takes the least amount of space and memory compared to the other methods, it is the fastest when wrangling data.
4. Earlier we concluded that the total time consumption for Arrow exchange and EDA, feather file loading and EDA, and parquet file loading and EDA is roughly the same given the size of the data we are working with. However, parquet still manages to be the quickest. When working with large data, these small differences could be really important.
5. Although feather is another good option for working with large data and is very fast, it has only recently been developed. The package is not yet mature and some functions may be unstable.
In conclusion, when dealing with big data that we want to store long-term, we will pick the parquet file format, as it saves storage and is very efficient for analyzing big data thanks to its columnar format.
# Final Thoughts and Conclusions
> The challenge of working with large data is apparent. Loading in such a large data set proved to be a challenge on every laptop: taking up incredible amounts of space, pushing our laptops' CPUs to the limit, and causing long code run times.
>
>In particular, in one situation, while re-running this notebook the memory on my laptop was completely used up and the kernel crashed. I needed to delete every picture from my laptop to free up enough storage. A few times a process would crash and progress would be lost. On another occasion, errors kept recurring in a code cell: one about insufficient memory allocation and one about a process being in use elsewhere. After some searching online it became apparent that my laptop was not capable of handling everything at once and a restart was needed. There is not much more demoralizing than realizing you need to load a massive data set again and wait for the code to run because there is not enough memory, or because of a Windows error. Perhaps that is what I deserve for having a laptop that runs Windows.
>
> It also became clear throughout this milestone that with large data a simple csv file and traditional data wrangling methods become very slow and ineffective. Fortunately, there are other methods such as Dask, feather/parquet files, and Arrow exchange that can make working with such a large data set much easier and vastly improve run times. However, even the most efficient methods explored here can still take considerable time and memory to run.
>
> Finally, the importance of a good computer increases when working with large data. Among the machines tested in our group, those with better processors and more RAM were able to run code much faster. If one is working with large data frequently it may be a good investment to get a powerful computer with lots of RAM. Loading and working with large data directly by loading into memory only has so many workarounds to speed up processes. In this notebook, we have explored working with large data directly and we can now see the need and benefits that cloud computing may be able to provide in working with large data.
| 0.319546 | 0.824214 |
# This Colab notebook must be run on a **P100** GPU instance, otherwise it will crash. Use Cell-1 to verify that the runtime has a **P100** GPU instance
Cell-1: Ensure the required GPU instance (P100)
```
# no. of sockets, i.e. available slots for physical processors
!lscpu | grep 'Socket(s):'
# no. of cores per physical processor
!lscpu | grep 'Core(s) per socket:'
# no. of threads per core
!lscpu | grep 'Thread(s) per core'
# GPU count and name
!nvidia-smi -L
# use this command to see GPU activity while doing deep learning tasks; for this command ('nvidia-smi') and the one above to work, go to 'Runtime > Change runtime type > Hardware Accelerator > GPU'
!nvidia-smi
```
Cell-2: Add Google Drive
```
from google.colab import drive
drive.mount('/content/gdrive')
```
Cell-3: Install Required Dependencies
```
!pip install efficientnet_pytorch==0.7.0
!pip install albumentations==0.4.5
!pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html -q
```
Cell-4: Run this cell to generate the current fold's weights (estimated training time for this fold is around 6 hours 50 minutes)
```
import sys
sys.path.insert(0, "/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/src_wd")
from dataset import *
from model import *
from trainer import *
from utils import *
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import DataLoader
config = {
'n_folds': 5,
'random_seed': 29,
'run_fold': 2,
'batch_size': 66,
'n_core': 0,
'model_name': 'efficientnet-b3',
'global_dim': 1536,
'weight_saving_path': '/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/train_wd_effnet_b3/weights/',
'resume_checkpoint_path': None,
'lr': 0.01,
'total_epochs': 100,
}
if __name__ == '__main__':
set_random_state(config['random_seed'])
imgs = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_imgs.npy')
labels = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_labels.npy')
labels = labels - 1
skf = StratifiedKFold(n_splits=config['n_folds'], shuffle=True, random_state=config['random_seed'])
for fold_number, (train_index, val_index) in enumerate(skf.split(X=imgs, y=labels)):
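        # NOTE: the fold to train is hard-coded to 0 below; config['run_fold'] is not applied here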
if fold_number != 0:
continue
train_dataset = ZCDataset(
imgs[train_index],
labels[train_index],
test=False
)
train_loader = DataLoader(
train_dataset,
batch_size=config['batch_size'],
shuffle=True,
num_workers=config['n_core'],
drop_last=True,
pin_memory=True,
)
val_dataset = ZCDataset(
imgs[val_index],
labels[val_index],
test=True,
)
val_loader = DataLoader(
val_dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=config['n_core'],
pin_memory=True,
)
del imgs,labels
model = CNN_Model(config['model_name'], config['global_dim'])
args = {
'model': model,
'Loaders': [train_loader,val_loader],
'metrics': {'Loss':AverageMeter, 'f1_score':PrintMeter, 'rmse':PrintMeter},
'checkpoint_saving_path': config['weight_saving_path'],
'resume_train_from_checkpoint': False,
'resume_checkpoint_path': config['resume_checkpoint_path'],
'lr': config['lr'],
'fold': fold_number,
'epochsTorun': config['total_epochs'],
'batch_size': config['batch_size'],
'test_run_for_error': False,
'problem_name': 'zindi_cigar',
}
Trainer = ModelTrainer(**args)
Trainer.fit()
```
| 0.352982 | 0.629732 |
```
from keras.models import load_model
from astropy.io import fits
import numpy as np
from keras.utils.np_utils import to_categorical
import matplotlib.pyplot as plt
import json
basedir = "/scratch/dgandhi/desi/time-domain-bkup/tuning_batch_v2/cnn/categorical/"
models = ["batch(07-18_14:44:48)/iter(17)_run(07-18_14:44:49_511884)/weights/weights.Ep73-ValAcc0.83.hdf5",
"batch(07-18_14:44:48)/iter(67)_run(07-18_14:44:54_886971)/weights/weights.Ep72-ValAcc0.84.hdf5",
"batch(07-19_11:36:51)/iter(92)_run(07-19_11:36:55_775852)/weights/weights.Ep75-ValAcc0.87.hdf5",
"batch(07-19_11:36:51)/iter(64)_run(07-19_11:36:54_667053)/weights/weights.Ep73-ValAcc0.86.hdf5",
"batch(07-19_11:36:51)/iter(45)_run(07-19_11:36:53_892865)/weights/weights.Ep70-ValAcc0.85.hdf5",
"batch(07-19_11:36:51)/iter(27)_run(07-19_11:36:53_132799)/weights/weights.Ep73-ValAcc0.85.hdf5",
"batch(07-20_15:55:00)/iter(47)_run(07-20_15:55:02_573722)/weights/weights.Ep58-ValAcc0.87.hdf5"
]
h = fits.open('/scratch/dgandhi/desi/time-domain-bkup/cnn-data/hosts_data.fits')
standardized_hosts = h[0].data
rmags_hosts = h[2].data
h.close()
h = fits.open('/scratch/dgandhi/desi/time-domain-bkup/cnn-data/sne_ia_data.fits')
standardized_ia = h[0].data
rfr_ia = h[2].data
rmags_ia = h[4].data
h.close()
h = fits.open('/scratch/dgandhi/desi/time-domain-bkup/cnn-data/sne_iip_data.fits')
standardized_iip = h[0].data
rfr_iip = h[2].data
rmags_iip = h[4].data
h.close()
print(standardized_hosts.shape)
print(rmags_hosts.shape)
print(standardized_ia.shape)
print(rfr_ia.shape)
print(standardized_iip.shape)
print(rfr_iip.shape)
index_begin_test_set = 85000
index_end_test_set = min(len(standardized_hosts),len(standardized_ia),len(standardized_iip))
test_hosts = standardized_hosts[index_begin_test_set:index_end_test_set]
test_ia = standardized_ia[index_begin_test_set:index_end_test_set]
test_iip = standardized_iip[index_begin_test_set:index_end_test_set]
len_each_test_class = len(test_hosts)
print(test_hosts.shape, test_ia.shape, test_iip.shape)
x_test = np.concatenate([test_hosts, test_ia, test_iip]).reshape(-1,400,1)
y_test = np.concatenate([np.zeros(len(test_hosts)), np.ones(len(test_ia)), 1+np.ones(len(test_iip))])
brightest = np.concatenate(
[rmags_hosts[index_begin_test_set:index_end_test_set] <= np.median(rmags_hosts),
rfr_ia[index_begin_test_set:index_end_test_set] >=0.5,
rfr_iip[index_begin_test_set:index_end_test_set] >= 0.5])
model1 = load_model(basedir + models[-1])
y_pred = model1.predict(x_test)
print(y_pred)
y_pred_labels = np.argmax(y_pred, axis=1)
print(y_pred_labels)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test,y_pred_labels))
print(accuracy_score(y_test[brightest],y_pred_labels[brightest]))
```
## Looking at Recall from Class of SNEs
```
y_pred_class_hosts = y_pred_labels[:len_each_test_class]
y_pred_class_ia = y_pred_labels[len_each_test_class:2*len_each_test_class]
y_pred_class_iip = y_pred_labels[2*len_each_test_class:]
y_pred_class_ia_predicted_hosts = y_pred_class_ia == 0
y_pred_class_ia_predicted_ia = y_pred_class_ia == 1
y_pred_class_ia_predicted_iip = y_pred_class_ia == 2
rfr_ia_test = rfr_ia[index_begin_test_set:index_end_test_set]
(counts_rfr_ia_total, bins_rfr_ia_total, patches_rfr_ia_total) = plt.hist(rfr_ia_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_rfr_ia_phost, bins_rfr_ia_phost, patches_rfr_ia_phost) = plt.hist(rfr_ia_test[y_pred_class_ia_predicted_hosts], label="Predicted as host", bins=bins_rfr_ia_total, histtype='step', color='red')
(counts_rfr_ia_pia, bins_rfr_ia_pia, patches_rfr_ia_pia) = plt.hist(rfr_ia_test[y_pred_class_ia_predicted_ia], label="Predicted as IA", bins=bins_rfr_ia_total, histtype='step', color='green')
(counts_rfr_ia_piip, bins_rfr_ia_piip, patches_rfr_ia_piip) = plt.hist(rfr_ia_test[y_pred_class_ia_predicted_iip], label="Predicted as IIP", bins=bins_rfr_ia_total, histtype='step', color='blue')
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IAs with Prediction Counts over Flux Ratio')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
rmags_ia_test = rmags_ia[index_begin_test_set:index_end_test_set]
(counts_mags_ia_total, bins_mags_ia_total, patches_mags_ia_total) = plt.hist(rmags_ia_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_mags_ia_phost, bins_mags_ia_phost, patches_mags_ia_phost) = plt.hist(rmags_ia_test[y_pred_class_ia_predicted_hosts], label="Predicted as host", bins=bins_mags_ia_total, histtype='step', color='red')
(counts_mags_ia_pia, bins_mags_ia_pia, patches_mags_ia_pia) = plt.hist(rmags_ia_test[y_pred_class_ia_predicted_ia], label="Predicted as IA", bins=bins_mags_ia_total, histtype='step', color='green')
(counts_mags_ia_piip, bins_mags_ia_piip, patches_mags_ia_piip) = plt.hist(rmags_ia_test[y_pred_class_ia_predicted_iip], label="Predicted as IIP", bins=bins_mags_ia_total, histtype='step', color='blue')
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IAs with Prediction Counts over Flux Magnitudes')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
y_pred_class_iip_predicted_hosts = y_pred_class_iip == 0
y_pred_class_iip_predicted_ia = y_pred_class_iip == 1
y_pred_class_iip_predicted_iip = y_pred_class_iip == 2
rfr_iip_test = rfr_iip[index_begin_test_set:index_end_test_set]
(counts_rfr_iip_total, bins_rfr_iip_total, patches_rfr_iip_total) = plt.hist(rfr_iip_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_rfr_iip_phost, bins_rfr_iip_phost, patches_rfr_iip_phost) = plt.hist(rfr_iip_test[y_pred_class_iip_predicted_hosts], label="Predicted as host", bins=bins_rfr_iip_total, histtype='step', color='red')
(counts_rfr_iip_pia, bins_rfr_iip_pia, patches_rfr_iip_pia) = plt.hist(rfr_iip_test[y_pred_class_iip_predicted_ia], label="Predicted as IA", bins=bins_rfr_iip_total, histtype='step', color='blue')
(counts_rfr_iip_piip, bins_rfr_iip_piip, patches_iip_piip) = plt.hist(rfr_iip_test[y_pred_class_iip_predicted_iip], label="Predicted as IIP", bins=bins_rfr_iip_total, histtype='step', color='green')
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IIPs with Prediction Counts over Flux Ratio')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
rmags_iip_test = rmags_iip[index_begin_test_set:index_end_test_set]
(counts_mags_iip_total, bins_mags_iip_total, patches_mags_iip_total) = plt.hist(rmags_iip_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_mags_iip_phost, bins_mags_iip_phost, patches_mags_iip_phost) = plt.hist(rmags_iip_test[y_pred_class_iip_predicted_hosts], label="Predicted as host", bins=bins_mags_iip_total, histtype='step', color='red')
(counts_mags_iip_pia, bins_mags_iip_pia, patches_mags_iip_pia) = plt.hist(rmags_iip_test[y_pred_class_iip_predicted_ia], label="Predicted as IA", bins=bins_mags_iip_total, histtype='step', color='blue')
(counts_mags_iip_piip, bins_mags_iip_piip, patches_mags_iip_piip) = plt.hist(rmags_iip_test[y_pred_class_iip_predicted_iip], label="Predicted as IIP", bins=bins_mags_iip_total, histtype='step', color='green')
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IIPs with Prediction Counts over Flux Magnitudes')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rfr_ia_bins = [(bins_rfr_ia_total[i]+bins_rfr_ia_total[i+1])/2 for i in range(len(bins_rfr_ia_total)-1)]
ia_efficiency_host_over_total = counts_rfr_ia_phost / counts_rfr_ia_total
ia_efficiency_ia_over_total = counts_rfr_ia_pia / counts_rfr_ia_total
ia_efficiency_iip_over_total = counts_rfr_ia_piip / counts_rfr_ia_total
plt.step(med_rfr_ia_bins, ia_efficiency_host_over_total, label="Predicted as Host", color="red")
plt.step(med_rfr_ia_bins, ia_efficiency_ia_over_total, label="Predicted as IA", color="green")
plt.step(med_rfr_ia_bins, ia_efficiency_iip_over_total, label="Predicted as IIP", color="blue")
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Recall of IA, with False Negatives, Over Flux Ratio Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rfr_iip_bins = [(bins_rfr_iip_total[i]+bins_rfr_iip_total[i+1])/2 for i in range(len(bins_rfr_iip_total)-1)]
iip_efficiency_host_over_total = counts_rfr_iip_phost / counts_rfr_iip_total
iip_efficiency_ia_over_total = counts_rfr_iip_pia / counts_rfr_iip_total
iip_efficiency_iip_over_total = counts_rfr_iip_piip / counts_rfr_iip_total
plt.step(med_rfr_iip_bins, iip_efficiency_host_over_total, label="Predicted as Host", color="red")
plt.step(med_rfr_iip_bins, iip_efficiency_ia_over_total, label="Predicted as IA", color="blue")
plt.step(med_rfr_iip_bins, iip_efficiency_iip_over_total, label="Predicted as IIP", color="green")
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Recall of IIP, with False Negatives, Over Flux Ratio Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
```
## Looking at Precision from Predicted Classes
```
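# Precision view: for each predicted class, split the magnitude distribution by true class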
y_predicted_hosts = (y_pred_labels == 0)
y_predicted_hosts_class_hosts = y_predicted_hosts[:len_each_test_class]
y_predicted_hosts_class_ia = y_predicted_hosts[len_each_test_class:2*len_each_test_class]
y_predicted_hosts_class_iip = y_predicted_hosts[2*len_each_test_class:]
rmags_predicted_hosts_class_hosts = rmags_hosts[index_begin_test_set:index_end_test_set][y_predicted_hosts_class_hosts.astype(bool)]
rmags_predicted_hosts_class_ia = rmags_ia[index_begin_test_set:index_end_test_set][y_predicted_hosts_class_ia.astype(bool)]
rmags_predicted_hosts_class_iip = rmags_iip[index_begin_test_set:index_end_test_set][y_predicted_hosts_class_iip.astype(bool)]
(counts_mags_pred_hosts_total, bins_mags_pred_hosts_total, patches_mags_pred_hosts_total) = \
plt.hist(np.concatenate([rmags_predicted_hosts_class_hosts,
rmags_predicted_hosts_class_ia,
rmags_predicted_hosts_class_iip]),
histtype='step', color="black", alpha=0.5, bins=20, label="Total Counts")
(counts_mags_pred_hosts_class_hosts, bins_mags_pred_hosts_class_hosts, patches_mags_pred_hosts_class_hosts) =\
plt.hist(rmags_predicted_hosts_class_hosts, histtype='step', color="green",
bins=bins_mags_pred_hosts_total, label="Actual Class: Hosts")
(counts_mags_pred_hosts_class_ia, bins_mags_pred_hosts_class_ia, patches_mags_pred_hosts_class_ia) =\
plt.hist(rmags_predicted_hosts_class_ia, histtype='step', color="red",
bins=bins_mags_pred_hosts_total, label="Actual Class: SNE IA")
(counts_mags_pred_hosts_class_iip, bins_mags_pred_hosts_class_iip, patches_mags_pred_hosts_class_iip) =\
plt.hist(rmags_predicted_hosts_class_iip, histtype='step', color="blue",
bins=bins_mags_pred_hosts_total, label="Actual Class: SNE IIP")
plt.title("Predicted Hosts with Class Counts over Magnitudes")
plt.ylabel("Counts")
plt.xlabel("Flux Magnitudes")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rmags_pred_host_bins = [(bins_mags_pred_hosts_total[i]+bins_mags_pred_hosts_total[i+1])/2 for i in range(len(bins_mags_pred_hosts_total)-1)]
# Note: a bin with zero total counts makes the ratios below divide by zero (producing NaN/inf values).
pred_host_class_host_efficiency_over_total = counts_mags_pred_hosts_class_hosts / counts_mags_pred_hosts_total
pred_host_class_ia_efficiency_over_total = counts_mags_pred_hosts_class_ia / counts_mags_pred_hosts_total
pred_host_class_iip_efficiency_over_total = counts_mags_pred_hosts_class_iip / counts_mags_pred_hosts_total
plt.step(med_rmags_pred_host_bins, pred_host_class_host_efficiency_over_total, label="Actual: Host", color="green")
plt.step(med_rmags_pred_host_bins, pred_host_class_ia_efficiency_over_total, label="Actual: IA", color="red")
plt.step(med_rmags_pred_host_bins, pred_host_class_iip_efficiency_over_total, label="Actual: IIP", color="blue")
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Precision of Hosts, with False Positives, Over Magnitude Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
y_predicted_ia = (y_pred_labels == 1)
y_predicted_ia_class_hosts = y_predicted_ia[:len_each_test_class]
y_predicted_ia_class_ia = y_predicted_ia[len_each_test_class:2*len_each_test_class]
y_predicted_ia_class_iip = y_predicted_ia[2*len_each_test_class:]
rmags_predicted_ia_class_hosts = rmags_hosts[index_begin_test_set:index_end_test_set][y_predicted_ia_class_hosts.astype(bool)]
rmags_predicted_ia_class_ia = rmags_ia[index_begin_test_set:index_end_test_set][y_predicted_ia_class_ia.astype(bool)]
rmags_predicted_ia_class_iip = rmags_iip[index_begin_test_set:index_end_test_set][y_predicted_ia_class_iip.astype(bool)]
(counts_mags_pred_ia_total, bins_mags_pred_ia_total, patches_mags_pred_ia_total) = \
plt.hist(np.concatenate([rmags_predicted_ia_class_hosts,
rmags_predicted_ia_class_ia,
rmags_predicted_ia_class_iip]),
histtype='step', color="black", bins=20, alpha=0.5, label="Total Counts")
(counts_mags_pred_ia_class_hosts, bins_mags_pred_ia_class_hosts, patches_mags_pred_ia_class_hosts) = \
plt.hist(rmags_predicted_ia_class_hosts, histtype='step', color="red",
bins=bins_mags_pred_ia_total, label="Actual Class: Hosts")
(counts_mags_pred_ia_class_ia, bins_mags_pred_ia_class_ia, patches_mags_pred_ia_class_ia) = \
plt.hist(rmags_predicted_ia_class_ia, histtype='step', color="green",
bins=bins_mags_pred_ia_total, label="Actual Class: SNE IA")
(counts_mags_pred_ia_class_iip, bins_mags_pred_ia_class_iip, patches_mags_pred_ia_class_iip) = \
plt.hist(rmags_predicted_ia_class_iip, histtype='step', color="blue",
bins=bins_mags_pred_ia_total, label="Actual Class: SNE IIP")
plt.title("Predicted SNE IAs with Class Counts over Magnitudes")
plt.ylabel("Counts")
plt.xlabel("Flux Magnitudes")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rmags_pred_ia_bins = [(bins_mags_pred_ia_total[i]+bins_mags_pred_ia_total[i+1])/2 for i in range(len(bins_mags_pred_ia_total)-1)]
pred_ia_class_host_efficiency_over_total = counts_mags_pred_ia_class_hosts / counts_mags_pred_ia_total
pred_ia_class_ia_efficiency_over_total = counts_mags_pred_ia_class_ia / counts_mags_pred_ia_total
pred_ia_class_iip_efficiency_over_total = counts_mags_pred_ia_class_iip / counts_mags_pred_ia_total
plt.step(med_rmags_pred_ia_bins, pred_ia_class_host_efficiency_over_total, label="Actual: Host", color="red")
plt.step(med_rmags_pred_ia_bins, pred_ia_class_ia_efficiency_over_total, label="Actual: IA", color="green")
plt.step(med_rmags_pred_ia_bins, pred_ia_class_iip_efficiency_over_total, label="Actual: IIP", color="blue")
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Precision of Predicted IA, with False Positives, Over Magnitude Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
y_predicted_iip = (y_pred_labels == 2)
y_predicted_iip_class_hosts = y_predicted_iip[:len_each_test_class]
y_predicted_iip_class_ia = y_predicted_iip[len_each_test_class:2*len_each_test_class]
y_predicted_iip_class_iip = y_predicted_iip[2*len_each_test_class:]
rmags_predicted_iip_class_hosts = rmags_hosts[index_begin_test_set:index_end_test_set][y_predicted_iip_class_hosts.astype(bool)]
rmags_predicted_iip_class_ia = rmags_ia[index_begin_test_set:index_end_test_set][y_predicted_iip_class_ia.astype(bool)]
rmags_predicted_iip_class_iip = rmags_iip[index_begin_test_set:index_end_test_set][y_predicted_iip_class_iip.astype(bool)]
(counts_mags_pred_iip_total, bins_mags_pred_iip_total, patches_mags_pred_iip_total) = \
plt.hist(np.concatenate([rmags_predicted_iip_class_hosts,
rmags_predicted_iip_class_ia,
rmags_predicted_iip_class_iip]),
histtype='step', color="black", bins=20, alpha=0.5, label="Total Counts")
(counts_mags_pred_iip_class_hosts, bins_mags_pred_iip_class_hosts, patches_mags_pred_iip_class_hosts) = \
plt.hist(rmags_predicted_iip_class_hosts, histtype='step', color="red",
bins=bins_mags_pred_iip_total, label="Actual Class: Hosts")
(counts_mags_pred_iip_class_ia, bins_mags_pred_iip_class_ia, patches_mags_pred_iip_class_ia) = \
plt.hist(rmags_predicted_iip_class_ia, histtype='step', color="blue",
bins=bins_mags_pred_iip_total, label="Actual Class: SNE IA")
(counts_mags_pred_iip_class_iip, bins_mags_pred_iip_class_iip, patches_mags_pred_iip_class_iip) = \
plt.hist(rmags_predicted_iip_class_iip, histtype='step', color="green",
bins=bins_mags_pred_iip_total, label="Actual Class: SNE IIP")
plt.title("Predicted SNE IIPs with Class Counts over Magnitudes")
plt.ylabel("Counts")
plt.xlabel("Flux Magnitudes")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rmags_pred_iip_bins = [(bins_mags_pred_iip_total[i]+bins_mags_pred_iip_total[i+1])/2 for i in range(len(bins_mags_pred_iip_total)-1)]
pred_iip_class_host_efficiency_over_total = counts_mags_pred_iip_class_hosts / counts_mags_pred_iip_total
pred_iip_class_ia_efficiency_over_total = counts_mags_pred_iip_class_ia / counts_mags_pred_iip_total
pred_iip_class_iip_efficiency_over_total = counts_mags_pred_iip_class_iip / counts_mags_pred_iip_total
plt.step(med_rmags_pred_iip_bins, pred_iip_class_host_efficiency_over_total, label="Actual: Host", color="red")
plt.step(med_rmags_pred_iip_bins, pred_iip_class_ia_efficiency_over_total, label="Actual: IA", color="blue")
plt.step(med_rmags_pred_iip_bins, pred_iip_class_iip_efficiency_over_total, label="Actual: IIP", color="green")
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Precision of Predicted IIP, with False Positives, Over Magnitude Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
```
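As a quick cross-check on the per-class precision read off the histograms above, scikit-learn can compute the same quantities without any binning. This is a minimal sketch, assuming the `y_test` and `y_pred_labels` arrays constructed earlier in this notebook:
```
from sklearn.metrics import confusion_matrix, precision_score

# Rows = true class (host, IA, IIP), columns = predicted class
print(confusion_matrix(y_test, y_pred_labels))
# Per-class precision over the whole test set (host, IA, IIP)
print(precision_score(y_test, y_pred_labels, average=None))
```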
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split,cross_val_score,GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier,GradientBoostingClassifier,AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from sklearn.preprocessing import StandardScaler,MinMaxScaler
from sklearn.naive_bayes import GaussianNB
from imblearn.under_sampling import NearMiss
from keras.models import Sequential
from keras.layers import Dense
from sklearn import metrics
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from pandas_profiling import ProfileReport
data=pd.read_csv("train_ctrUa4K.csv")
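# Fill missing categorical values with the most frequent value (mode) of each column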
for column in ('Gender','Married','Dependents','Self_Employed'):
data[column].fillna(data[column].mode()[0],inplace=True)
for column in ('LoanAmount','Loan_Amount_Term','Credit_History'):
data[column].fillna(data[column].mean(),inplace=True)
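# One-hot encode the categorical variables and drop the original columns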
for variable in ('Gender','Married','Dependents','Education','Self_Employed','Property_Area'):
data[variable].fillna("Missing",inplace=True)
dummies=pd.get_dummies(data[variable],prefix=variable)
data=pd.concat([data,dummies],axis=1)
data.drop([variable],axis=1,inplace=True)
data['Loan_Status']=data.Loan_Status.map({'Y':0,'N':1})
Y=data['Loan_Status']
data.drop(['Loan_Status'],axis=1,inplace=True)
X=data[data.iloc[:,1:23].columns]
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,random_state=100,test_size=0.2)
scaler=StandardScaler()
scaled_X_train=scaler.fit_transform(X_train)
scaled_X_test=scaler.transform(X_test)
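# Grid search over GradientBoostingClassifier hyperparameters (n_estimators, max_features, max_depth)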
def gbm_param(X, y, nfolds):
n_estimators=[150,200,500,1000,1500,2000]
max_features=[1,2,3]
max_depth=[1,2,3,4,5,6,7,8,9,10]
param_grid = {'n_estimators': n_estimators,'max_features':max_features,'max_depth':max_depth}
grid_search_gbm = GridSearchCV(GradientBoostingClassifier(learning_rate= 0.05), param_grid, cv=nfolds)
grid_search_gbm.fit(X,y)
return grid_search_gbm.best_params_
gbm_param(scaled_X_train,Y_train,3)
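# Fit a GradientBoostingClassifier (hyperparameters presumably taken from the grid search above) and report test-set metrics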
GBM_model=GradientBoostingClassifier(n_estimators=150,max_features=3,max_depth=1,learning_rate=0.05)
GBM_model.fit(scaled_X_train,Y_train)
GBM_pred=GBM_model.predict(scaled_X_test)
print("Recall for gradient boosting model:",metrics.recall_score(Y_test,GBM_pred))
print("Precision for gradient boosting model:",metrics.precision_score(Y_test,GBM_pred))
print("Accuracy for gradient boosting model:",metrics.accuracy_score(Y_test,GBM_pred))
print("F-score for gradient boosting model:",metrics.f1_score(Y_test,GBM_pred))
print("Log-loss for gradient boosting model:",metrics.log_loss(Y_test,GBM_pred))
```
# LGBMRegressor with StandardScaler
This code template performs regression analysis using a simple LGBMRegressor together with the StandardScaler feature-rescaling technique.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import lightgbm as ltb
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.preprocessing import StandardScaler
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
file_path =" "
```
List of features required for model training.
```
features = []
```
Target feature for prediction.
```
target = ' '
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It lowers the computational cost of modelling and, in some cases, improves the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string categories or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and convert string-class columns in the dataset by one-hot encoding them.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Rescaling
For rescaling the data StandardScaler function of Sklearn is used.
Standardize features by removing the mean and scaling to unit variance.
#### Scale function
Reference URL to StandardScaler API :
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
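Concretely, each feature column $x$ is transformed as $z = (x - \mu)/\sigma$, where $\mu$ and $\sigma$ are the mean and standard deviation of that column estimated from the data.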
```
X_Scaled=StandardScaler().fit_transform(X)
X=pd.DataFrame(X_Scaled,columns=X.columns)  # use the (possibly one-hot encoded) column names rather than the pre-encoding list
X.head(3)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)
```
### Model
LightGBM is a gradient boosting framework that uses tree based learning algorithms. It is designed to be distributed and efficient with the following advantages:
- Faster training speed and higher efficiency.
- Lower memory usage.
- Better accuracy.
- Support of parallel, distributed, and GPU learning.
- Capable of handling large-scale data.
#### Model Tuning Parameters:
> <b>boosting_type</b> (str, optional (default='gbdt')) – ‘gbdt’, traditional Gradient Boosting Decision Tree. ‘dart’, Dropouts meet Multiple Additive Regression Trees. ‘goss’, Gradient-based One-Side Sampling. ‘rf’, Random Forest
> <b>num_leaves</b> (int, optional (default=31)) – Maximum tree leaves for base learners.
> <b>max_depth</b> (int, optional (default=-1)) – Maximum tree depth for base learners, <=0 means no limit.
> <b>learning_rate</b> (float, optional (default=0.1)) – Boosting learning rate. You can use callbacks parameter of fit method to shrink/adapt learning rate in training using reset_parameter callback. Note, that this will ignore the learning_rate argument in training.
> <b>min_split_gain</b> (float, optional (default=0.)) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
> <b>min_child_samples</b> (int, optional (default=20)) – Minimum number of data needed in a child (leaf).
```
# Build Model here
model= ltb.LGBMRegressor()
model.fit(X_train,y_train)
```
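The cell above trains the model with the library defaults. If you want to experiment with the tuning parameters listed earlier, they can be passed directly to the constructor; the values below are simply the documented defaults, shown for illustration rather than as tuned settings:
```
model = ltb.LGBMRegressor(boosting_type='gbdt',   # traditional gradient-boosted decision trees
                          num_leaves=31,          # maximum leaves per base learner
                          max_depth=-1,           # no depth limit
                          learning_rate=0.1,
                          min_split_gain=0.0,
                          min_child_samples=20)
model.fit(X_train, y_train)
```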
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the fraction of the variance in the target that is explained by the model.
> **mae**: The **mean absolute error** function measures the average absolute difference between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, which penalizes the model more heavily for large errors.
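For reference, with targets $y_i$, predictions $\hat{y}_i$, and mean target $\bar{y}$: $R^2 = 1 - \frac{\sum_i (y_i-\hat{y}_i)^2}{\sum_i (y_i-\bar{y})^2}$, $\mathrm{MAE} = \frac{1}{n}\sum_i |y_i-\hat{y}_i|$, and $\mathrm{MSE} = \frac{1}{n}\sum_i (y_i-\hat{y}_i)^2$.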
```
y_pred=model.predict(X_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Feature Importances
Feature importance refers to techniques that assign a score to each feature based on how useful it is for making the prediction.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
#### Prediction Plot
First, we plot the actual observations: the first 20 test-set target values against their record number.
We then overlay the model's predictions for the same 20 test records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Surya Kiran , Github: [Profile](https://github.com/surya2365)
```
import pandas as pd
import pathlib
from matplotlib import pyplot as plt
from collections import defaultdict
import re
import numpy as np
import matplotlib.transforms as transforms
import matplotlib as mpl
# mpl.rcParams['figure.dpi'] = 300
# mpl.rcParams['figure.figsize'] = [40, 20]
mpl.rcParams['figure.dpi'] = 300
plt.rcParams['axes.axisbelow'] = True
lthmp_path = pathlib.Path('../../experiments/LTHMP2020_results')
sorted_results = defaultdict(list)
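# Group each run's metrics table by IRL configuration (runs with seed 9 are skipped)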
for run_collection in lthmp_path.glob('*'):
for res in run_collection.glob('*'):
path_str = str(res)
pkl_dir = res/'metrics.pkl'
if not pkl_dir.exists():
print('{} does not have metrics pkl!'.format(res))
continue
if '-seed-9' in path_str:
continue
metric_pkl = pd.read_pickle(str(pkl_dir.resolve()))
if '-irl-0' in path_str:
sorted_results['no_irl'].append(metric_pkl)
if 'irl-1500' in path_str:
if 'preirl-100' in path_str:
sorted_results['pre-irl-1500'].append(metric_pkl)
elif 'preirl-0' in path_str:
sorted_results['irl-1500'].append(metric_pkl)
if 'irl-5000' in path_str:
if 'preirl-100' in path_str:
sorted_results['pre-irl-2500'].append(metric_pkl)
elif 'preirl-0' in path_str:
sorted_results['irl-2500'].append(metric_pkl)
sorted_results.keys()
for key,val in sorted_results.items():
print(key, len(val))
for key,val in sorted_results.items():
goals_reached = [tab['goal_reached'].sum() for tab in val]
print(key, 'mean', np.mean(goals_reached), np.std(goals_reached))
plt.bar(key, np.mean(goals_reached), yerr=np.std(goals_reached), capsize=3)
plt.ylabel('average num goal reached')
plt.grid(axis='y')
for key,val in sorted_results.items():
goals_reached = [tab['goal_reached'].sum() for tab in val]
print(key, 'mean', np.mean(goals_reached), np.std(goals_reached))
plt.scatter([key]*len(goals_reached), goals_reached)
plt.ylabel('average num goal reached')
plt.grid(axis='y')
for key,val in sorted_results.items():
print(val[0])
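# Mean trajectory length of successful runs per configuration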
for key,val in sorted_results.items():
traj_length = []
for tab in val:
success_tab = tab[tab['goal_reached'] == True]
traj_length.append(success_tab['trajectory_length'].mean())
print(key, 'mean', np.mean(traj_length), np.std(traj_length))
plt.bar(key, np.mean(traj_length), yerr=np.std(traj_length), capsize=3)
plt.ylabel('successful trajectory length')
plt.grid(axis='y')
for key,val in sorted_results.items():
    collisions = [tab['count_collisions'].sum() for tab in val]
    print(key, 'mean collisions', np.mean(collisions), np.std(collisions))
for key,val in sorted_results.items():
smooth = []
for tab in val:
for tup in tab['compute_trajectory_smoothness']:
smooth.append(tup[1])
print(key, 'mean', np.mean(smooth), np.std(smooth))
for key,val in sorted_results.items():
ddr = [tab['compute_distance_displacement_ratio'].mean() for tab in val]
    print(key, 'mean ddr', np.mean(ddr), np.std(ddr))
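# Proxemic intrusion counts (intimate / personal / social zones) per configuration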
for key,val in sorted_results.items():
int_totals = []
pers_totals = []
social_totals = []
for tab in val:
intimate = []
personal = []
social = []
for tup in tab['proxemic_intrusions']:
intimate.append(tup[0])
personal.append(tup[1])
social.append(tup[2])
int_sum = sum(intimate)
pers_sum = sum(personal)
social_sum = sum(social)
int_totals.append(int_sum)
pers_totals.append(pers_sum)
social_totals.append(social_sum)
print(key, 'mean int', np.mean(int_totals), np.std(int_totals))
print(key, 'mean pers', np.mean(pers_totals), np.std(pers_totals))
print(key, 'mean social ', np.mean(social_totals), np.std(social_totals))
offset = lambda p: transforms.ScaledTranslation(p/72.,0, plt.gcf().dpi_scale_trans)
trans = plt.gca().transData
plt.errorbar([key], np.mean(int_totals), yerr=np.std(int_totals) , fmt='o', transform=trans+offset(-5), color='red', capsize=2)
plt.errorbar([key], np.mean(pers_totals) , yerr=np.std(pers_totals), fmt='o', transform=trans+offset(-5), color='orange', capsize=2)
# plt.errorbar([key], np.mean(social_totals) , yerr=np.std(social_totals), fmt='o', transform=trans+offset(-5), color='blue')
plt.ylabel('num proxemic intrusions.')
plt.legend(['Intimate space', 'personal space'], loc=9)
plt.grid(axis='y')
for key,val in sorted_results.items():
int_totals = []
pers_totals = []
social_totals = []
for tab in val:
intimate = []
personal = []
social = []
for tup in tab['anisotropic_intrusions']:
intimate.append(tup[0])
personal.append(tup[1])
social.append(tup[2])
int_sum = sum(intimate)
pers_sum = sum(personal)
social_sum = sum(social)
int_totals.append(int_sum)
pers_totals.append(pers_sum)
social_totals.append(social_sum)
print(key, 'mean int', np.mean(int_totals), np.std(int_totals))
print(key, 'mean pers', np.mean(pers_totals), np.std(pers_totals))
print(key, 'mean social ', np.mean(social_totals), np.std(social_totals))
offset = lambda p: transforms.ScaledTranslation(p/72.,0, plt.gcf().dpi_scale_trans)
trans = plt.gca().transData
plt.errorbar([key], np.mean(int_totals), yerr=np.std(int_totals) , fmt='.', transform=trans+offset(-5), color='red', capsize=5)
plt.errorbar([key], np.mean(pers_totals) , yerr=np.std(pers_totals), fmt='.', transform=trans+offset(-5), color='orange', capsize=5)
plt.errorbar([key], np.mean(social_totals) , yerr=np.std(social_totals), fmt='.', transform=trans+offset(-5), color='blue', capsize=5)
```
# 5.12 Densely Connected Networks (DenseNet)
The cross-layer connection design in ResNet has inspired several follow-up works. In this section we introduce one of them: the densely connected network (DenseNet) [1]. Its main difference from ResNet is shown in Figure 5.10.
<div align=center>
<img width="400" src="../img/chapter05/5.12_densenet.svg"/>
</div>
<div align=center>Figure 5.10 The main difference between ResNet (left) and DenseNet (right) in cross-layer connections: addition versus concatenation</div>
In Figure 5.10, some adjacent operations are abstracted into module $A$ and module $B$. The main difference from ResNet is that in DenseNet the output of module $B$ is not added to the output of module $A$, but is concatenated with it along the channel dimension, so the output of module $A$ can be passed directly to the layers after module $B$. In this design, module $A$ is directly connected to all the layers after module $B$, which is why the connection is called "dense".
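As a minimal illustration of the difference (a sketch using two arbitrary feature maps of the same spatial size), addition keeps the channel count unchanged, while concatenation adds the channel counts together:
```
import tensorflow as tf

a = tf.random.uniform((1, 8, 8, 16))
b = tf.random.uniform((1, 8, 8, 16))
print((a + b).shape)                                       # (1, 8, 8, 16): ResNet-style addition
print(tf.keras.layers.concatenate([a, b], axis=-1).shape)  # (1, 8, 8, 32): DenseNet-style concatenation
```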
The main building blocks of DenseNet are dense blocks and transition layers. The former defines how inputs and outputs are concatenated, while the latter controls the number of channels so that it does not become too large.
## 5.12.1 Dense Blocks
DenseNet uses the modified "batch normalization, activation, and convolution" structure from ResNet. We first implement this structure in the `BottleNeck` class below. In the forward computation, the input and output of each block are concatenated along the channel dimension.
```
import tensorflow as tf
class BottleNeck(tf.keras.layers.Layer):
def __init__(self, growth_rate, drop_rate):
super(BottleNeck, self).__init__()
self.bn1 = tf.keras.layers.BatchNormalization()
self.conv1 = tf.keras.layers.Conv2D(filters=4 * growth_rate,
kernel_size=(1, 1),
strides=1,
padding="same")
self.bn2 = tf.keras.layers.BatchNormalization()
self.conv2 = tf.keras.layers.Conv2D(filters=growth_rate,
kernel_size=(3, 3),
strides=1,
padding="same")
self.dropout = tf.keras.layers.Dropout(rate=drop_rate)
self.listLayers = [self.bn1,
tf.keras.layers.Activation("relu"),
self.conv1,
self.bn2,
tf.keras.layers.Activation("relu"),
self.conv2,
self.dropout]
def call(self, x):
y = x
for layer in self.listLayers.layers:
y = layer(y)
y = tf.keras.layers.concatenate([x,y], axis=-1)
return y
```
A dense block consists of multiple `BottleNeck` units, each using the same number of output channels.
```
class DenseBlock(tf.keras.layers.Layer):
def __init__(self, num_layers, growth_rate, drop_rate=0.5):
super(DenseBlock, self).__init__()
self.num_layers = num_layers
self.growth_rate = growth_rate
self.drop_rate = drop_rate
self.listLayers = []
for _ in range(num_layers):
self.listLayers.append(BottleNeck(growth_rate=self.growth_rate, drop_rate=self.drop_rate))
def call(self, x):
for layer in self.listLayers.layers:
x = layer(x)
return x
```
In the example below, we define a dense block with 2 convolution blocks of 10 output channels each. Using an input with 3 channels, we get an output with $3+2\times 10=23$ channels. The number of channels of the convolution blocks controls the growth of the output channels relative to the input channels, and is therefore also called the growth rate.
```
blk = DenseBlock(2, 10)
X = tf.random.uniform((4, 8, 8,3))
Y = blk(X)
print(Y.shape)
```
## 5.12.2 Transition Layers
Since each dense block increases the number of channels, stacking too many of them produces an overly complex model. A transition layer is used to control model complexity. It reduces the number of channels with a $1\times1$ convolution layer and halves the height and width with a stride-2 average pooling layer, further reducing model complexity. (Note that the implementation below uses a stride-2 `MaxPool2D` layer for the pooling step.)
```
class TransitionLayer(tf.keras.layers.Layer):
def __init__(self, out_channels):
super(TransitionLayer, self).__init__()
self.bn = tf.keras.layers.BatchNormalization()
self.conv = tf.keras.layers.Conv2D(filters=out_channels,
kernel_size=(1, 1),
strides=1,
padding="same")
self.pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2),
strides=2,
padding="same")
def call(self, inputs):
x = self.bn(inputs)
x = tf.keras.activations.relu(x)
x = self.conv(x)
x = self.pool(x)
return x
```
We apply a transition layer with 10 output channels to the output of the dense block in the previous example. The number of output channels is reduced to 10, and the height and width are both halved.
```
blk = TransitionLayer(10)
blk(Y).shape
```
## 5.12.3 The DenseNet Model
Now we construct the DenseNet model. DenseNet first uses the same single convolution layer and max pooling layer as ResNet. Then, analogous to the 4 residual blocks that ResNet uses afterwards, DenseNet uses 4 dense blocks. As with ResNet, we can set how many convolution layers each dense block uses; here we set it to 4, consistent with the ResNet-18 in the previous section. The number of channels of the convolution layers in the dense blocks (i.e., the growth rate) is set to 32, so each dense block adds 128 channels.
ResNet reduces the height and width between modules with residual blocks of stride 2. Here we instead use transition layers, which halve the height and width and also halve the number of channels.
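With the settings used below (64 initial channels, growth rate 32, 4 convolution blocks per dense block, and a compression rate of 0.5 in each transition layer), the channel count evolves as $64 \rightarrow 192 \rightarrow 96 \rightarrow 224 \rightarrow 112 \rightarrow 240 \rightarrow 120 \rightarrow 248$: each dense block adds $4 \times 32 = 128$ channels, and each transition layer halves the result.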
```
class DenseNet(tf.keras.Model):
def __init__(self, num_init_features, growth_rate, block_layers, compression_rate, drop_rate):
super(DenseNet, self).__init__()
self.conv = tf.keras.layers.Conv2D(filters=num_init_features,
kernel_size=(7, 7),
strides=2,
padding="same")
self.bn = tf.keras.layers.BatchNormalization()
self.pool = tf.keras.layers.MaxPool2D(pool_size=(3, 3),
strides=2,
padding="same")
self.num_channels = num_init_features
self.dense_block_1 = DenseBlock(num_layers=block_layers[0], growth_rate=growth_rate, drop_rate=drop_rate)
self.num_channels += growth_rate * block_layers[0]
self.num_channels = compression_rate * self.num_channels
self.transition_1 = TransitionLayer(out_channels=int(self.num_channels))
self.dense_block_2 = DenseBlock(num_layers=block_layers[1], growth_rate=growth_rate, drop_rate=drop_rate)
self.num_channels += growth_rate * block_layers[1]
self.num_channels = compression_rate * self.num_channels
self.transition_2 = TransitionLayer(out_channels=int(self.num_channels))
self.dense_block_3 = DenseBlock(num_layers=block_layers[2], growth_rate=growth_rate, drop_rate=drop_rate)
self.num_channels += growth_rate * block_layers[2]
self.num_channels = compression_rate * self.num_channels
self.transition_3 = TransitionLayer(out_channels=int(self.num_channels))
self.dense_block_4 = DenseBlock(num_layers=block_layers[3], growth_rate=growth_rate, drop_rate=drop_rate)
self.avgpool = tf.keras.layers.GlobalAveragePooling2D()
self.fc = tf.keras.layers.Dense(units=10,
activation=tf.keras.activations.softmax)
def call(self, inputs):
x = self.conv(inputs)
x = self.bn(x)
x = tf.keras.activations.relu(x)
x = self.pool(x)
x = self.dense_block_1(x)
x = self.transition_1(x)
x = self.dense_block_2(x)
x = self.transition_2(x)
x = self.dense_block_3(x)
        x = self.transition_3(x)
x = self.dense_block_4(x)
x = self.avgpool(x)
x = self.fc(x)
return x
def densenet():
return DenseNet(num_init_features=64, growth_rate=32, block_layers=[4,4,4,4], compression_rate=0.5, drop_rate=0.5)
mynet=densenet()
```
We print the output shape of each submodule to make sure the network is built correctly:
```
X = tf.random.uniform(shape=(1, 96, 96 , 1))
for layer in mynet.layers:
X = layer(X)
print(layer.name, 'output shape:\t', X.shape)
```
## 5.12.4 Getting the data and training the model
Because this is a relatively deep network, this section reduces the input height and width from 224 to 96 to keep the computation manageable. (Note that the training cell below actually keeps the Fashion-MNIST images at their native 28×28; the 96×96 input is only used in the shape check above.)
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train.reshape((60000, 28, 28, 1)).astype('float32') / 255
x_test = x_test.reshape((10000, 28, 28, 1)).astype('float32') / 255
mynet.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
history = mynet.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = mynet.evaluate(x_test, y_test, verbose=2)
```
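If you actually want to feed 96×96 inputs as described, one option is to resize the images inside a `tf.data` pipeline. The sketch below is not part of the original notebook, and the helper name `make_dataset` is made up for illustration.
```
import tensorflow as tf

# Sketch: upsample Fashion-MNIST images to 96x96 inside a tf.data pipeline
# so that the training input matches the 96x96 shape check above.
def make_dataset(images, labels, batch_size=64):
    ds = tf.data.Dataset.from_tensor_slices((images, labels))
    ds = ds.map(lambda x, y: (tf.image.resize(x, (96, 96)), y))
    return ds.shuffle(1024).batch(batch_size)

# train_ds = make_dataset(x_train, y_train)
# history = mynet.fit(train_ds, epochs=5)
```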
# Lasso Regression with StandardScaler & Polynomial Features
This code template is for regression analysis using Lasso Regression in a pipeline with the feature transformation technique Polynomial Features and the feature rescaling technique StandardScaler. Lasso, which stands for Least Absolute Shrinkage and Selection Operator, is a type of linear regression that uses shrinkage.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of the features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and then use the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill any null values (with the mean for numeric columns and the mode otherwise) and encode string categorical columns via one-hot encoding (`pd.get_dummies`).
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Linear Model trained with L1 prior as regularizer (aka the Lasso)
The Lasso is a linear model that estimates sparse coefficients. It is useful in some contexts due to its tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features upon which the given solution is dependent. For this reason Lasso and its variants are fundamental to the field of compressed sensing.
#### Model Tuning Parameter
> **alpha** -> Constant that multiplies the L1 term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised.
> **selection** -> If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
> **tol** -> The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
> **max_iter** -> The maximum number of iterations.
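As a hedged illustration of how these parameters could be set explicitly inside the pipeline (the values below are arbitrary examples, not tuned recommendations):
```
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Example values only; tune alpha, tol and max_iter for your own dataset.
tuned_model = make_pipeline(StandardScaler(),
                            PolynomialFeatures(degree=2),
                            Lasso(alpha=0.5, max_iter=10000, tol=1e-4,
                                  selection='random', random_state=123))
```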
#### Scaling
Standardize features by removing the mean and scaling to unit variance.
The standard score of a sample x is calculated as:
`z = (x - u) / s`
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) for parameters.
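For instance, a minimal sketch of the standard-score transformation on a toy column (values chosen only for illustration):
```
import numpy as np
from sklearn.preprocessing import StandardScaler

X_demo = np.array([[1.0], [2.0], [3.0]])       # mean u = 2, population std s ~ 0.816
print(StandardScaler().fit_transform(X_demo))  # z = (x - u) / s -> roughly [-1.22, 0, 1.22]
```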
#### Feature Transformation
Polynomial Features is a technique to generate polynomial and interaction features.
Polynomial features are features created by raising existing features to an exponent. PolynomialFeatures function generates a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, if an input sample is two dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2].
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) for the parameters
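As a quick illustration of the degree-2 expansion described above (toy values):
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_demo = np.array([[2, 3]])   # one sample [a, b] = [2, 3]
print(PolynomialFeatures(degree=2).fit_transform(X_demo))
# [[1. 2. 3. 4. 6. 9.]]  ->  [1, a, b, a^2, a*b, b^2]
```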
```
model=make_pipeline(StandardScaler(),PolynomialFeatures(),Lasso(random_state=123))
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the proportion of the variance in the target that is explained by our model.
> **mae**: The **mean absolute error** function measures the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual observations for the first 20 test records, with the record number on the x-axis and the true target value (y_test) on the y-axis.
We then overlay the model's predictions for the same 20 test records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Akshar Nerkar , Github: [Profile](https://github.com/Akshar777)
# 1-3.1 Intro Python
## Functions Arguments & Parameters
- **Creating a simple Function with a parameter**
- Exploring Functions with `return` values
- Creating Functions with multiple parameters
- Sequence in python
-----
><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
- **create functions with a parameter**
- create functions with a `return` value
- create functions with multiple parameters
- use knowledge of sequence in coding tasks
- Use coding best practices
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concept</B></font>
## Calling Functions with Arguments
Functions are used for code tasks that are intended to be reused
Python allows us to create **User Defined Functions** and provides many **Built-in Functions** such as **`print()`**
- **`print()`** can be called using arguments (or without) and sends text to standard output, such as the console.
- **`print()`** uses **Parameters** to define the variable Arguments that can be passed to the Function.
- **`print()`** defines multiple string/numbers parameters which means we can send a long list of Arguments to **`print()`**, separated by commas.
- **`print()`** can also be called directly with just its name and empty parentheses and it will return a blank line to standard output
#
<font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
```
print('Hello World!', 'I am sending string arguments to print ')
student_age = 17
student_name = "Hiroto Yamaguchi"
print(student_name,'will be in the class for',student_age, 'year old students.')
print("line 1")
print("line 2")
# line 3 is an empty return - the default when no arguments
print()
print("line 4")
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
## Passing Arguments to `print()`
### Many Arguments can be passed to print
- update the print statement to use **`print()`** with **8** or more arguments
```
#[ ] increase the number of arguments used in print() to 8 or more
student_age = 17
student_name = "Hiroto Yamaguchi"
student_name_2 = "Kyle Heinze"
student_age_2 = "15"
print(student_name,'will be in the class for',student_age, 'year old students.',student_name_2,'will be in the class for',student_age_2,'year old students.')
```
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concept</B></font>
## Create a simple Function
Creating user defined functions is at the core of computer programming. Functions enable code reuse and make code easier to develop and maintain.
### basics of a user defined function
- define a function with **`def`**
- use indentation (4 spaces)
- define parameters
- optional parameters
- **`return`** values (or none)
- function scope (basics defaults)
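These pieces come together as in the short sketch below (the function name and values are made up for illustration; `return` values are explored in more detail later):
```python
# one required parameter, one optional parameter with a default, and a return value
def make_greeting(name, greeting="Hello"):
    return greeting + ", " + name + "!"

print(make_greeting("Hiroto"))             # uses the default greeting
print(make_greeting("Hiroto", "Welcome"))  # overrides the default
```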
### `def some_function():`
use the **`def`** statement when creating a **function**
- use a function name that **starts with a letter** or underscore (usually a lower-case letter)
- function names can contain **letters, numbers or underscores**
- parenthesis **()** follow the function name
- a colon **:** follows the parenthesis
- the code for the function is indented under the function definition (use 4 spaces for this course)
```python
def some_function():
    # code the function tasks indented here
    pass    # placeholder so the empty function is valid Python
```
The **end of the function** is denoted by returning to **no indentation**
#
<font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
```
# defines a function named say_hi
def say_hi():
print("Hello there!")
print("goodbye")
# define three_three
def three_three():
print(33)
```
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concept</B></font>
## Call a function by name
Call a simple function using the function name followed by parenthesis. For instance, calling print is
**`print()`**
##
<font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
```
# Program defines and calls the say_hi & three_three functions
# [ ] review and run the code
def say_hi():
print("Hello there!")
print("goodbye")
# end of indentation ends the function
# define three_three
def three_three():
print(33)
# calling the functions
say_hi()
print()
three_three()
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
## Define and call a simple function `yell_it()`
### `yell_it()` prints the phrase with "!" concatenated to the end
- takes no arguments
- indented function code does the following
 - define a variable called **`phrase`** and initialize it with a short *phrase*
- prints **`phrase`** as all upper-case letters followed by "!"
- call `yell_it` at the bottom of the cell after the function **`def`** (**Tip:** no indentation should be used)
```
#[ ] define (def) a simple function called yell_it() and call the function
def yell_it():
phrase = 'This is a short phrase'
print(phrase.upper() + "!")
yell_it()
```
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concept</B></font>
## Functions that have Parameters
**`print()`** and **`type()`** are examples of built-in functions that have **Parameters** defined
**`type()`** has a parameter for a **Python Object** and sends back the *type* of the object
an **Argument** is a value given for a parameter when calling a function
- **`type`** is called providing an **Argument** - in this case the string *"Hello"*
```python
type("Hello")
```
## Defining Function Parameters
- Parameters are defined inside of the parenthesis as part of a function **`def`** statement
- Parameters are typically copies of objects that are available for use in function code
```python
def say_this(phrase):
print(phrase)
```
## Function can have default Arguments
- Default Arguments are used if no argument is supplied
- Default arguments are assigned when creating the parameter list
```python
def say_this(phrase = "Hi"):
print(phrase)
```
##
<font size="6" color="#00A0B2" face="verdana"> <B>Example</B></font>
```
# yell_this() yells the string Argument provided
def yell_this(phrase):
print(phrase.upper() + "!")
# call function with a string
yell_this("It is time to save the notebook")
# use a default argument
def say_this(phrase = "Hi"):
print(phrase)
say_this()
say_this("Bye")
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 3</B></font>
## Define `yell_this()` and call with variable argument
- define variable **`words_to_yell`** as a string gathered from user `input()`
- Call **`yell_this()`** with **`words_to_yell`** as argument
- get user input() for the string words_to_yell
```
# [ ] define yell_this()
def yell_this(words_to_yell):
print(words_to_yell.upper() + "!")
# [ ] get user input in variable words_to_yell
words_to_yell = input('Yell this: ')
# [ ] call yell_this function with words_to_yell as argument
yell_this(words_to_yell)
```
[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
# 1-2 Intro Python Practice
## Strings: input, testing, formatting
<font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
- gather, store and use string `input()`
- format `print()` output
- test string characteristics
- format string output
- search for a string in a string
## input()
getting input from users
```
# [ ] get user input for a variable named remind_me
remind_me = input("Enter remind me: ")
# [ ] print the value of the variable remind_me
print(remind_me)
# use string addition to print "remember: " before the remind_me input string
print("remember: " + remind_me)
```
### Program: Meeting Details
#### [ ] get user **input** for meeting subject and time
`what is the meeting subject?: plan for graduation`
`what is the meeting time?: 3:00 PM on Monday`
#### [ ] print **output** with descriptive labels
`Meeting Subject: plan for graduation`
`Meeting Time: 3:00 PM on Monday`
```
# [ ] get user input for 2 variables: meeting_subject and meeting_time
meeting_subject = input("what is the meeting subject?: ")
meeting_time = input("what is the meeting time?: ")
# [ ] use string addition to print meeting subject and time with labels
print("Meeting Subject: " + meeting_subject)
print("Meeting Time: " + meeting_time)
```
## print() formatting
### combining multiple strings separated by commas in the print() function
```
# [ ] print the combined strings "Wednesday is" and "in the middle of the week"
print("Wednesday is", "in the middle of the week")
# [ ] print combined string "Remember to" and the string variable remind_me from input above
print("Remember to", remind_me)
# [ ] Combine 3 variables from above with multiple strings
print("Meeting about", meeting_subject, "at", meeting_time, "- remember to", remind_me)
```
### print() quotation marks
```
# [ ] print a string sentence that will display an Apostrophe (')
print("I don't like eggplants")
# [ ] print a string sentence that will display a quote(") or quotes
print('"what is going on?"')
```
## Boolean string tests
### Vehicle tests
#### get user input for a variable named vehicle
print the following tests results
- check True or False if vehicle is All alphabetical characters using .isalpha()
- check True or False if vehicle is only All alphabetical & numeric characters
- check True or False if vehicle is Capitalized (first letter only)
- check True or False if vehicle is All lowercase
- **bonus** print description for each test (e.g.- `"All Alpha: True"`)
```
# [ ] complete vehicle tests
vehicle = input("enter a vehicle: ")
print("All Alpha:", vehicle.isalpha())
print("All Alpha & Numeric:", vehicle.isalnum())
print("Capitalized:", vehicle.istitle())
print("All Lowercase:", vehicle.islower())
# [ ] print True or False if color starts with "b"
color = input("enter a color: ")
print(color.startswith("b"))
```
## String formatting
```
# [ ] print the string variable capitalize_this, capitalizing only the first letter
capitalize_this = "the TIME is Noon."
print(capitalize_this.capitalize())
# print the string variable swap_this in swapped case
swap_this = "wHO writes LIKE tHIS?"
print(swap_this.swapcase())
# print the string variable whisper_this in all lowercase
whisper_this = "Can you hear me?"
print(whisper_this.lower())
# print the string variable yell_this in all UPPERCASE
yell_this = "Can you hear me Now!?"
print(yell_this.upper())
#format input using .upper(), .lower(), .swapcase(), .capitalize()
format_input = input('enter a string to reformat: ')
print(format_input.upper())
print(format_input.lower())
print(format_input.swapcase())
print(format_input.capitalize())
```
### input() formatting
```
# [ ] get user input for a variable named color
# [ ] modify color to be all lowercase and print
# [ ] get user input using variable remind_me and format to all **lowercase** and print
# [ ] test using input with mixed upper and lower cases
# [] get user input for the variable yell_this and format as a "YELL" to ALL CAPS
```
## "in" keyword
### boolean: short_str in long_str
```
# [ ] get user input for the name of some animals in the variable animals_input
# [ ] print true or false if 'cat' is in the string variable animals_input
# [ ] get user input for color
# [ ] print True or False for starts with "b"
# [ ] print color variable value exactly as input
# test with input: "Blue", "BLUE", "bLUE"
```
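As a quick reminder of how the `in` test behaves (illustrative strings only):
```
long_str = "The quick brown fox"
print("fox" in long_str)   # True
print("cat" in long_str)   # False
```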
## Program: guess what I'm reading
### short_str in long_str
1. **[ ]** get user **`input`** for a single word describing something that can be read
save in a variable called **can_read**
e.g. - "website", "newspaper", "blog", "textbook"
2. **[ ]** get user **`input`** for 3 things can be read
save in a variable called **can_read_things**
3. **[ ]** print **`true`** if the **can_read** string is found
**in** the **can_read_things** string variable
*example of program input and output*
[view the example](https://1drv.ms/i/s!Am_KPRosgtaij7A_G6RtDlWZeYA3ZA)
```
# project: "guess what I'm reading"
# 1[ ] get 1 word input for can_read variable
# 2[ ] get 3 things input for can_read_things variable
# 3[ ] print True if can_read is in can_read_things
# [] challenge: format the output to read "item found = True" (or false)
# hint: look print formatting exercises
```
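One possible solution sketch, using the variable names from the task description (the prompt wording is made up):
```
# 1. one word describing something that can be read
can_read = input("enter something that can be read: ")
# 2. three things that can be read
can_read_things = input("enter 3 things that can be read: ")
# 3. True if can_read is found in can_read_things
print("item found =", can_read in can_read_things)
```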
## Program: Allergy Check
1. **[ ]** get user **`input`** for categories of food eaten in the last 24 hours
save in a variable called **input_test**
*example input*
[view example input](https://1drv.ms/i/s!Am_KPRosgtaij65qzFD5CGvv95-ijg)
2. **[ ]** print **`True`** if "dairy" is in the **input_test** string
**[ ]** Test the code so far
3. **[ ]** modify the print statement to output similar to below
*example output*
[view example output](https://1drv.ms/i/s!Am_KPRosgtaij65rET-wmlpCdMX7CQ)
Test the code so far trying input including the string "dairy" and without
4. **[ ]** repeat the process checking the input for "nuts", **challenge** add "Seafood" and "chocolate"
**[ ]** Test your code
5. **[ ] challenge:** make your code work for input regardless of case, e.g. - print **`True`** for "Nuts", "NuTs", "NUTS" or "nuts"
```
# Allergy check
# 1[ ] get input for test
# 2/3[ ] print True if "dairy" is in the input or False if not
# 4[ ] Check if "nuts" are in the input
# 4+[ ] Challenge: Check if "seafood" is in the input
# 4+[ ] Challenge: Check if "chocolate" is in the input
```
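A possible solution sketch for the allergy check, including the case-insensitive challenge (prompt wording is made up):
```
# normalize to lowercase so "Nuts", "NuTs" and "NUTS" all match
input_test = input("enter the food categories you ate in the last 24 hours: ").lower()
for allergen in ["dairy", "nuts", "seafood", "chocolate"]:
    print(allergen, "found =", allergen in input_test)
```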
[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
# Assignment 3 - Building a Custom Visualization
---
In this assignment you must choose one of the options presented below and submit a visual as well as your source code for peer grading. The details of how you solve the assignment are up to you, although your assignment must use matplotlib so that your peers can evaluate your work. The options differ in challenge level, but there are no grades associated with the challenge level you chose. However, your peers will be asked to ensure you at least met a minimum quality for a given technique in order to pass. Implement the technique fully (or exceed it!) and you should be able to earn full grades for the assignment.
Ferreira, N., Fisher, D., & Konig, A. C. (2014, April). [Sample-oriented task-driven visualizations: allowing users to make better, more confident decisions.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Ferreira_Fisher_Sample_Oriented_Tasks.pdf)
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 571-580). ACM. ([video](https://www.youtube.com/watch?v=BI7GAs-va-Q))
In this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Ferreira_Fisher_Sample_Oriented_Tasks.pdf) the authors describe the challenges users face when trying to make judgements about probabilistic data generated through samples. As an example, they look at a bar chart of four years of data (replicated below in Figure 1). Each year has a y-axis value, which is derived from a sample of a larger dataset. For instance, the first value might be the number votes in a given district or riding for 1992, with the average being around 33,000. On top of this is plotted the 95% confidence interval for the mean (see the boxplot lectures for more information, and the yerr parameter of barcharts).
<br>
<img src="readonly/Assignment3Fig1.png" alt="Figure 1" style="width: 400px;"/>
<h4 style="text-align: center;" markdown="1"> Figure 1 from (Ferreira et al, 2014).</h4>
<br>
A challenge that users face is that, for a given y-axis value (e.g. 42,000), it is difficult to know which x-axis values are most likely to be representative, because the confidence levels overlap and their distributions are different (the lengths of the confidence interval bars are unequal). One of the solutions the authors propose for this problem (Figure 2c) is to allow users to indicate the y-axis value of interest (e.g. 42,000) and then draw a horizontal line and color bars based on this value. So bars might be colored red if they are definitely above this value (given the confidence interval), blue if they are definitely below this value, or white if they contain this value.
<br>
<img src="readonly/Assignment3Fig2c.png" alt="Figure 1" style="width: 400px;"/>
<h4 style="text-align: center;" markdown="1"> Figure 2c from (Ferreira et al. 2014). Note that the colorbar legend at the bottom as well as the arrows are not required in the assignment descriptions below.</h4>
<br>
<br>
**Easiest option:** Implement the bar coloring as described above - a color scale with only three colors, (e.g. blue, white, and red). Assume the user provides the y axis value of interest as a parameter or variable.
**Harder option:** Implement the bar coloring as described in the paper, where the color of the bar is actually based on the amount of data covered (e.g. a gradient ranging from dark blue for the distribution being certainly below this y-axis, to white if the value is certainly contained, to dark red if the value is certainly not contained as the distribution is above the axis).
**Even Harder option:** Add interactivity to the above, which allows the user to click on the y axis to set the value of interest. The bar colors should change with respect to what value the user has selected.
**Hardest option:** Allow the user to interactively set a range of y values they are interested in, and recolor based on this (e.g. a y-axis band, see the paper for more details).
---
*Note: The data given for this assignment is not the same as the data used in the article and as a result the visualizations may look a little different.*
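For reference, the solution below relies on the normal-approximation 95% confidence interval for a sample mean. A minimal standalone sketch of that calculation (with an illustrative sample, not the assignment data) is:
```
import numpy as np

sample = np.random.normal(33000, 200000, 3650)           # illustrative sample
half_width = 1.96 * sample.std() / np.sqrt(len(sample))  # 95% CI half-width
print(sample.mean() - half_width, sample.mean() + half_width)
```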
```
# Use the following data for this assignment:
%matplotlib notebook
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
import math
np.random.seed(12345)
df = pd.DataFrame([np.random.normal(32000,200000,3650),
np.random.normal(43000,100000,3650),
np.random.normal(43500,140000,3650),
np.random.normal(48000,70000,3650)],
index=[1992,1993,1994,1995])
df = df.T
means = df.mean() # Mean of each sample
stds = df.std() # Standard deviation of each sample
ns = df.count() # Number of samples
sqrt_ns = np.sqrt(ns) # Square root of the number of samples for calculation of st. error
sterr = stds / sqrt_ns # Standard deviations of the means of the sampling distributions
confintervals = 1.96 * sterr
color_scales = {0:'#273f84',1:'#5f71a4',2:'#bbc2d8',3:'#ffffff',4:'#ffffff',5:'#e9b2b2', 6:'#cd4a4a',7:'#bb0b0b'}
color_desc = {0:'<-3 st.d.',1:'-2 to -3 st.d.',2:'-1 to -2 st.d.',3:'-1 to 0 st.d.',4:'0 to +1 st.d.',5:'+1 to +2 st.d.', 6:'+2 to +3 st.d.',7:'> +3 st.d.'}
margin_above = 0.5
y_lim = int(math.ceil((means.max()*(1+margin_above)) / 10000.0)) * 10000
fig = plt.figure(figsize = (8,5))
positions = range(len(df.columns))
xlabels = df.columns
plt.bar(positions, means ,yerr=confintervals, capsize = 7, align='center', color = '#aaaaaa', edgecolor='#333333', linewidth=1, width=0.6)
# Set title
plt.title('Conf. intervals across different sample distributions', size=12, fontweight = 'bold', alpha=0.7)
# Set y-axis properties
plt.ylim((0, y_lim))
plt.xlim((-0.5,3.5))
# Format the ticks and labels
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='on', labelbottom='on')
plt.xticks(positions, xlabels)
# Add annotation
#caption_txt = '95% confidence intervals are indicated for each year.'
#fig.text(.125,0,caption_txt,alpha=0.8)
plt.annotate('Click on the y-axis (or chart) to select Y-value. The bar\n' + \
'colors will reflect the distance of the mean (bar height)\n' + \
'from the selected Y-value in number of st. deviations (z-value).\n' + \
'The ranges at the top of each bar already indicate 95% conf.\n' + \
'interval (i.e. z~2 st. deviations away from the mean).', [-0.425, y_lim*.97], alpha=0.8, va='top', fontsize=8)
# Add legend
patches = []
for item in color_scales:
patch = mpatches.Patch(edgecolor='#333333', linewidth=1, color=color_scales[item], label=color_desc[item])
patches.append(patch)
plt.legend(handles=patches, ncol=2, fontsize=8)
plt.show()
def onclick(event):
plt.cla()
# Calculate y
if isinstance(event.ydata, float):
y = event.ydata
else:
y = (event.y - 70) * (float(y_lim) / (550 - 70))
# Calculate distance from mu
distances = means - y
z_values = distances / sterr
bins=[-3,-2,-1,0,1,2,3]
color_pos = np.digitize(z_values, bins)
# Plot the bars
bars = plt.bar(positions, means ,yerr=confintervals, capsize = 7, align='center', linewidth=1, width=0.6)
for i in range(len(bars)):
color = color_scales[color_pos[i]]
bars[i].set_color(color)
bars[i].set_edgecolor('#333333')
# Plot the line
plt.plot([-0.5,3.5],[y, y],'--',color='#cccccc') # Plot the line
# Display the selected y-value
plt.annotate('Click on the y-axis (or chart) to select Y-value. The bar\n' + \
'colors will reflect the distance of the mean (bar height)\n' + \
'from the selected Y-value in number of st. deviations (z-value).\n' + \
'The ranges at the top of each bar already indicate 95% conf.\n' + \
'interval (i.e. z~2 st. deviations away from the mean).', [-0.425, y_lim*.97], alpha=0.8, va='top', fontsize=8)
plt.annotate('y = {:0.0f}'.format(y), [-0.425, y_lim*.75], alpha=0.8, fontsize = 8)
# Fix in place other elements cleared by CLA
plt.title('Conf. intervals across different sample distributions', size=12, fontweight = 'bold', alpha=0.7) # Title
plt.ylim((0, y_lim)) # y-axis
plt.xlim((-0.5,3.5)) # x-axis
# Format the ticks and labels
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='on', labelbottom='on')
plt.xticks(positions, xlabels)
plt.legend(handles=patches, ncol=2, fontsize=8)
plt.show()
plt.gcf().canvas.mpl_connect('button_press_event', onclick)
```
# Understanding my finances
## Purpose
The purpose of this notebook is to expose some insights around my finances and where my money goes (expenditures).
We will consider two sources of data for this analysis: my chequing account statement and my credit card statement.
## Context and Scope
To inform interpretation of the data, some context and scope:
- Salary is paid into the chequing account
- Chequing account transactions will include transfers to and from my other accounts. These transfers should be excluded from this analysis, as our purpose is to understand my expenditures
- Credit card transactions will include payments of the balance, recorded as credits. These will be excluded from this analysis as they do not provide additional insight into my finances
```
# Set up external imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
import calendar
from scipy.stats import shapiro
import statsmodels.api as sm
idx = pd.IndexSlice
print("External modules imported")
%matplotlib inline
# Define file path
fpath = "./Personal_Finances/"
```
### Functions used in this notebook
```
# Function to save plots to .png files
def generate_png(name):
pltfile = fpath + name
plt.savefig(pltfile, dpi=300, format="png")
# Function to add a transaction type field
def add_transaction_type(col):
if col < 0:
return "D"
else:
return "C"
print("Set up complete")
```
## 1. First, let us load the transaction data and do some basic formatting
__Raw data__: `cheque_data`, `credit_data`
__Formatted data__: `tran_cheque_data`, `tran_credit_data`
```
# Find files
import os
for root, dirs, files in os.walk(fpath):
for file in files:
print(os.path.join(root, file))
# Load chequing account transactions
cheque_data = pd.read_csv('.\Personal_Finances\chq.csv', skiprows=5)
cheque_data.size
# Get chequing data dtypes
cheque_data.dtypes
# Format chequing data types
tran_cheque_data = cheque_data.drop(["Cheque Number","Date"], axis=1)
tran_cheque_data["TransactionDate"] = pd.to_datetime(cheque_data["Date"], format="%Y/%m/%d")
tran_cheque_data["Type"] = cheque_data["Amount"].apply(add_transaction_type)
tran_cheque_data["Amount"] = cheque_data["Amount"].apply(lambda x: abs(x))
tran_cheque_data["Raw Amount"] = cheque_data["Amount"]
# tran_cheque_data["Year"] = tran_cheque_data["TransactionDate"].dt.year
tran_cheque_data["Details"] = cheque_data[["Payee","Memo"]].fillna(value='').astype(str).agg(" ".join, axis=1)
tran_cheque_data = tran_cheque_data.assign(Account="CHQ")
# Check formatted data
tran_cheque_data.head()
# Load credit card transactions
credit_data = pd.read_csv('.\Personal_Finances\crd.csv')
credit_data.dtypes
credit_data.head()
tran_credit_data = credit_data.drop(["Card"], axis=1)
tran_credit_data["TransactionDate"] = pd.to_datetime(credit_data["TransactionDate"], format="%d/%m/%Y")
tran_credit_data["ProcessedDate"] = pd.to_datetime(credit_data["ProcessedDate"], format="%d/%m/%Y")
tran_credit_data["ForeignTransaction"] = tran_credit_data["ForeignCurrencyAmount"].notnull()
# tran_credit_data["Year"] = tran_credit_data["TransactionDate"].dt.year
tran_credit_data = tran_credit_data.assign(Account="CRD")
# Check formatted data
tran_credit_data.head()
```
## 2. Next, we will separate the transactions into credits and debits for each group.
We can expect the following broad groups of transactions:
For the chequing transactions:
Credits:
- Salary
- General credits
Debits:
- Transfers to other bank account (where we have the credit card)
- General debits
For the credit card transactions:
Credits:
- Paying off card balance
- Refunds
Debits:
- General transactions
```
# Chequing data sets
chq_d = tran_cheque_data[tran_cheque_data["Type"] == "D"]
chq_c = tran_cheque_data[tran_cheque_data["Type"] == "C"]
print("Chequing data count:", len(tran_cheque_data.index))
print("Chequing debits count:", len(chq_d.index))
print("Chequing credits count:", len(chq_c.index))
# Credit card data sets
crd_d = tran_credit_data[tran_credit_data["Type"] == "D"]
crd_c = tran_credit_data[tran_credit_data["Type"] == "C"]
print("Credit Card data shape:", len(tran_credit_data.index))
print("Credit Card debits shape:", len(crd_d.index))
print("Credit Card credits shape:", len(crd_c.index))
```
## 3. We want a unique set of debits and credits
We will now transform the data according to the scope highlighted at the start of this notebook.
We want to join the chequing and credit card debit sets to get a view of where our money is going, without duplicates.
We want to also join the chequing and credit card credits, while separating credit card payments.
We will separate transactions out of scope and store them in separate sets.
Finally, we want to apply the same format to the credit and debit data sets and append where appropriate for these sets:
1. Unique set of debits showing cash flow out of our accounts
2. Unique set of credits showing cash flow into our accounts
3. Credit card payments
4. Out of scope transactions for our accounts
### Let's have a look at the chequing account transactions to see if we can find some way to separate out of scope transactions
```
## Identify what the Tran Type field corresponds to
# chq_d_cnt = chq_d.groupby(["Tran Type", "Year"]).size().sort_values(ascending=False)
chq_d_cnt = chq_d.groupby(["Tran Type"]).size().sort_values(ascending=False)
# Transformation rules to filter out of scope data
# Remove savings and investment transactions
chq_d_flt = # details omitted
len(chq_d_flt)
chq_d
## Get amount values by type
# chq_d_sum = chq_d.groupby(["Tran Type", "Year"])["Amount"].sum().sort_values(ascending=False)
chq_d_sum = chq_d.groupby(["Tran Type"])["Amount"].sum().sort_values(ascending=False)
## Get summarised values
chq_d_sv = pd.DataFrame(chq_d_cnt).join(pd.DataFrame(chq_d_sum))
chq_d_sv
# chq_c_cnt = chq_c.groupby(["Tran Type", "Year"]).size().sort_values(ascending=False)
chq_c_cnt = chq_c.groupby(["Tran Type"]).size().sort_values(ascending=False)
## Get amount values by type
# chq_c_sum = chq_c.groupby(["Tran Type", "Year"])["Amount"].sum().sort_values(ascending=False)
chq_c_sum = chq_c.groupby(["Tran Type"])["Amount"].sum().sort_values(ascending=False)
## Get summarised values
chq_c_sv = pd.DataFrame(chq_c_cnt).join(pd.DataFrame(chq_c_sum))
chq_c_sv
```
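The actual filtering rules are omitted above. Purely as a hedged illustration, an out-of-scope filter along the lines described in the scope section might look like the sketch below; the `Tran Type` codes and detail strings are placeholders, not the real values.
```
# Hypothetical example only: exclude transfers to savings/investments and
# credit card payments from the chequing debits. The codes below are made up.
EXCLUDED_TRAN_TYPES = ["Transfer"]
EXCLUDED_DETAILS = ["SAVINGS", "INVESTMENT", "CREDIT CARD"]

def is_in_scope(row):
    if row["Tran Type"] in EXCLUDED_TRAN_TYPES:
        return False
    return not any(term in str(row["Details"]).upper() for term in EXCLUDED_DETAILS)

# chq_d_flt = chq_d[chq_d.apply(is_in_scope, axis=1)]
```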
### All credit card credit transactions are essentially in scope. We need to check if debit transactions will need to be filtered
```
# Visual inspection of credit transactions
crd_c.head()
crd_d.dtypes
```
### There are a few refund transactions. Nothing is available to filter these on, so they may affect our stats slightly
### Let's get our common set of debits
```
# # Verify columns and get common set for debits
# crd_d.columns
# chq_d.columns
# cols = ['TransactionDate', 'Year', 'Account', 'Type', 'Details', 'Amount']
cols = ['TransactionDate', 'Account', 'Type', 'Details', 'Amount']
combined_d = crd_d[cols].append(chq_d_flt[cols])
# Let's look at the 10 top transactions by Amount
combined_d.sort_values(by="Amount", ascending=False).head(10)
```
## 4. Aggregate visualisations on combined debit set
### Weekly aggregate view
```
# tmp = combined_d.join(combined_d["TransactionDate"].dt.isocalendar().set_index(combined_d["TransactionDate"]).drop_duplicates(), on="TransactionDate").drop("day", axis=1).groupby(["Account", "year", "week"]).sum().unstack([1,0])
# tmp.columns = tmp.columns.droplevel(0)
# tmp = tmp.sort_index(axis=1)
# # tmp.columns = tmp.columns
# tmp == combined_d_week_by_acc
# 1. aggregate our combined transactions to get the sums per week
combined_d_week = combined_d.groupby(["Account"]).resample("W", on="TransactionDate").sum()
combined_d_week.reset_index(0, inplace=True)
# 2. get week number and group by account, week to get our summarised stats
combined_d_week_by_acc = combined_d_week.join(combined_d_week.index.isocalendar().drop_duplicates()).pivot(index="week", columns=["year","Account"], values="Amount")
combined_d_week_total = combined_d_week_by_acc.groupby("year", axis=1).sum()
week_2020 = combined_d_week_by_acc.loc[:,idx[2020]]
for i in range(week_2020.shape[1]):
plt.plot(week_2020.iloc[:,i], label = week_2020.columns[i])
plt.xlabel("Week of Year")
plt.ylabel("Amount $NZD")
plt.title("Spend vs Week of Year 2020")
plt.legend()
plt.tick_params(axis='y',
which='both',
left=False,
labelleft=False)
# Adjust plot spacing
size = plt.gcf().get_size_inches()
size[0] *= 1.4
plt.gcf().set_size_inches(size)
# plt.ylim(top=n)
generate_png("spend_vs_week_of_year_2020.png")
week_2019 = combined_d_week_by_acc.loc[:,idx[2019]]
for i in range(week_2019.shape[1]):
plt.plot(week_2019.iloc[:,i], label = week_2019.columns[i])
plt.xlabel("Week of Year")
plt.ylabel("Amount $NZD")
plt.title("Spend vs Week of Year 2019")
plt.legend()
plt.tick_params(axis='y',
which='both',
left=False,
labelleft=False)
# Adjust plot spacing
size = plt.gcf().get_size_inches()
size[0] *= 1.4
plt.gcf().set_size_inches(size)
# plt.ylim(top=n)
generate_png("spend_vs_week_of_year_2019.png")
for i in range(combined_d_week_total.shape[1]):
plt.plot(combined_d_week_total.iloc[:,i], label = combined_d_week_total.columns[i])
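# NOTE: n is not defined in this excerpt; it appears to be an expected-spend
# benchmark set elsewhere (the 'expected' series below and further down use it).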
expected = pd.Series(n, index=week_2019.index)
plt.plot(expected, label='expected')
combined_d_week_total.mean()
for i in range(len(combined_d_week_total.mean())):
plt.plot(pd.Series(combined_d_week_total.mean().iloc[i], index=combined_d_week_total.index), label = 'mean ' + str(combined_d_week_total.mean().index[i]))
# mean = pd.Series(combined_d_week_total.mean(), index=week_2019.index)
# plt.plot(mean, label='mean')
plt.xlabel("Week of Year")
plt.ylabel("Amount $NZD")
plt.title("Spend vs Week of Year")
plt.legend()
plt.tick_params(axis='y',
which='both',
left=False,
labelleft=False)
# Adjust plot spacing
size = plt.gcf().get_size_inches()
size[0] *= 1.4
plt.gcf().set_size_inches(size)
# plt.ylim(top=n)
generate_png("spend_vs_week_of_year.png")
```
### Day of week aggregate view
```
# Aggregate by day-of-week/account
# 1. aggregate our combined transactions to get the sums per date
combined_d_dow_by_acc = combined_d.groupby(["Account","TransactionDate"]).sum()
# 2. get our day of week
combined_d_dow_by_acc = combined_d_dow_by_acc.reset_index(0).join(pd.date_range('2019-01-01','2020-12-31').isocalendar(), on="TransactionDate").drop(["week","year"], axis=1)
# 3. group by account, day-of-week and get our summarised stats
combined_d_dow_by_acc = combined_d_dow_by_acc.groupby(["Account","day"]).agg([np.sum, np.mean, np.median, np.size]).unstack(0)
# 4. get day-of-week names
combined_d_dow_by_acc.set_index(combined_d_dow_by_acc.reset_index()["day"].apply(lambda x: calendar.day_name[x-1]), inplace=True)
# 5. remove column multi-index level
combined_d_dow_by_acc.columns = combined_d_dow_by_acc.columns.droplevel(0)
# Aggregate by day-of-week
# 1. aggregate our combined transactions to get the sums per date
combined_d_dow_total = combined_d.groupby(["TransactionDate"]).sum()
# 2. get our day of week
combined_d_dow_total = combined_d_dow_total.reset_index(0).join(pd.date_range('2019-01-01','2020-12-31').isocalendar(), on="TransactionDate").drop(["week","year"], axis=1)
# 3. group by day-of-week and get our summarised stats
combined_d_dow_total = combined_d_dow_total.groupby(["day"]).agg([np.sum, np.mean, np.median, np.size])
# 4. get day-of-week names
combined_d_dow_total.set_index(combined_d_dow_total.reset_index()["day"].apply(lambda x: calendar.day_name[x-1]), inplace=True)
# 5. remove column multi-index level
combined_d_dow_total.columns = combined_d_dow_total.columns.droplevel(0)
combined_d_dow_by_acc
combined_d_dow_total
mean_data = combined_d_dow_by_acc.loc[:,idx['mean']]
for i in range(mean_data.shape[1]):
plt.plot(mean_data.iloc[:,i], label = mean_data.columns[i])
expected = pd.Series(n, index=mean_data.index)
plt.plot(expected, label='expected')
plt.plot('mean', data=combined_d_dow_total, label='combined')
plt.xlabel("Day of Week")
plt.ylabel("Amount $NZD")
plt.title("Mean Spend vs Day of Week")
plt.legend()
plt.tick_params(axis='y',
which='both',
left=False,
labelleft=False)
# Adjust plot spacing
size = plt.gcf().get_size_inches()
size[0] *= 1.4
plt.gcf().set_size_inches(size)
# print(combined_d_dow_by_acc.loc[:,idx[['mean','size'], :]])
# print(combined_d_dow_total.loc[:,['mean','size']])
generate_png("mean_spend_vs_day_of_week.png")
median_data = combined_d_dow_by_acc.loc[:,idx['median']]
for i in range(median_data.shape[1]):
plt.plot(median_data.iloc[:,i], label = median_data.columns[i])
expected = pd.Series(n, index=median_data.index)
plt.plot(expected, label='expected')
# plt.plot('mean', data=combined_d_dow_by_acc.loc[:,idx[:, 'CHQ']], label='CHQ')
# plt.plot('mean', data=combined_d_dow_by_acc.loc[:,idx[:, 'CRD']], label='CRD')
plt.plot('median', data=combined_d_dow_total, label='Combined')
plt.xlabel("Day of Week")
plt.ylabel("Amount $NZD")
plt.title("Median Spend vs Day of Week")
plt.legend()
plt.tick_params(axis='y',
which='both',
left=False,
labelleft=False)
# Adjust plot spacing
size = plt.gcf().get_size_inches()
size[0] *= 1.4
plt.gcf().set_size_inches(size)
# print(combined_d_dow_by_acc.loc[:,idx[['mean','size'], :]])
# print(combined_d_dow_total.loc[:,['mean','size']])
generate_png("median_spend_vs_day_of_week.png")
# Transaction count by Account, day-of-week
# 1. get our day of week
combined_d_dow_count_by_acc = combined_d.join(pd.date_range('2019-01-01','2020-12-31').isocalendar(), on="TransactionDate").drop(["week","year"], axis=1)
# 2. group by account, day-of-week and get our count
combined_d_dow_count_by_acc = combined_d_dow_count_by_acc.groupby(["Account","day"]).size().unstack(0)
# 3. get day-of-week names
combined_d_dow_count_by_acc.set_index(combined_d_dow_count_by_acc.reset_index()["day"].apply(lambda x: calendar.day_name[x-1]), inplace=True)
# Transaction count by day-of-week
# 1. get our day of week
combined_d_dow_count_total = combined_d.join(pd.date_range('2019-01-01','2020-12-31').isocalendar(), on="TransactionDate").drop(["week","year"], axis=1)
# 2. group by day-of-week and get our count
combined_d_dow_count_total = pd.DataFrame(combined_d_dow_count_total.groupby(["day"]).size().rename("count"))
# 3. get day-of-week names
combined_d_dow_count_total.set_index(combined_d_dow_count_total.reset_index()["day"].apply(lambda x: calendar.day_name[x-1]), inplace=True)
size_data = combined_d_dow_count_by_acc
for i in range(size_data.shape[1]):
plt.plot(size_data.iloc[:,i], label = size_data.columns[i])
plt.plot('count', data=combined_d_dow_count_total, label='Combined')
plt.xlabel("Day of Week")
plt.ylabel("Transaction Count")
plt.title("Number of Transactions vs Day of Week")
plt.legend()
plt.tick_params(axis='y',
which='both',
left=False,
labelleft=False)
# Adjust plot spacing
size = plt.gcf().get_size_inches()
size[0] *= 1.4
plt.gcf().set_size_inches(size)
# print(combined_d_dow_by_acc.loc[:,idx[['mean','size'], :]])
# print(combined_d_dow_total.loc[:,['mean','size']])
generate_png("transactions_vs_day_of_week.png")
```
## 5. Weekly spend distribution
```
combined_d_week_total.unstack().max()
combined_d_week_total_all = combined_d_week_total.unstack()
n, bins, patches = plt.hist(x=combined_d_week_total_all,
bins='auto',
alpha=0.7,
rwidth=0.85)
plt.xlabel('Amount ($NZD)')
plt.ylabel('Frequency')
plt.title('Amount of Weekly Spend (n=106)')
plt.tick_params(axis='x',
which='both',
bottom=False,
labelbottom=False)
generate_png("amount_of_weekly_spend.png")
shapiro(combined_d_week_total_all)
sm.qqplot(combined_d_week_total_all, line ='45')
plt.tick_params(axis='both',
which='both',
bottom=False,
labelbottom=False,
left=False,
labelleft=False)
generate_png("amount_of_weekly_spend_qqplot.png")
n, bins, patches = plt.hist(x=combined_d_week_total,
bins='auto',
label=combined_d_week_total.columns,
alpha=0.7,
rwidth=0.85)
plt.xlabel('Amount ($NZD)')
plt.ylabel('Frequency')
plt.title('Amount of Weekly Spend by year (n=106)')
plt.legend()
plt.tick_params(axis='x',
which='both',
bottom=False,
labelbottom=False)
generate_png("amount_of_weekly_spend_by_year.png")
shapiro(combined_d_week_total[2020])
shapiro(combined_d_week_total[2019])
sm.qqplot(combined_d_week_total[2020], line ='45')
sm.qqplot(combined_d_week_total[2019], line ='45')
```
## 6. Daily spend distribution
```
combined_d_daily = combined_d.resample('D', on="TransactionDate").sum()
n, bins, patches = plt.hist(x=combined_d_daily,
bins=bins,
label=combined_d_daily.columns,
alpha=0.7,
rwidth=0.85)
plt.xlabel('Amount ($NZD)')
plt.ylabel('Frequency')
plt.title('Amount of Daily Spend (n=731)')
plt.tick_params(axis='x',
which='both',
bottom=False,
labelbottom=False)
generate_png("amount_of_daily_spend.png")
n, bins, patches = plt.hist(x=combined_d_daily,
bins='auto',
label=combined_d_daily.columns,
alpha=0.7,
rwidth=0.85)
plt.xlabel('Amount ($NZD)')
plt.ylabel('Frequency')
plt.title('Amount of Daily Spend (n=731)')
plt.tick_params(axis='x',
which='both',
bottom=False,
labelbottom=False)
generate_png("amount_of_daily_spend_full.png")
sm.qqplot(combined_d_daily, line ='45')
plt.tick_params(axis='both',
which='both',
bottom=False,
labelbottom=False,
left=False,
labelleft=False)
generate_png("amount_of_daily_spend_qqplot.png")
combined_d_daily.plot(kind='box')
combined_d.groupby('Account')["Amount"].sum()
combined_d[combined_d["Account"]=="CHQ"].sort_values(by="Amount", ascending=False).head(15)
```
# Plotting magnitude spectra in dB and the effect of windowing
### George Tzanetakis, University of Victoria
In this notebook we will look at how magnitude spectra look in dB and use the plots to explore what happens when an input sinusoid falls halfway between DFT bins.
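Throughout, the magnitude spectra are shown on a decibel scale, computed from the DFT magnitude as

$$X_{\mathrm{dB}}[k] = 20\log_{10}\lvert X[k]\rvert$$

which is the `20*np.log10(np.abs(np.fft.fft(x)))` expression used in the code below.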
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import IPython.display as ipd
import scipy.io.wavfile as wav
```
We have seen that when an input sinusoid matches the frequency of one of the DFT bins, we get a corresponding peak in the magnitude spectrum. Let's look at what happens to the magnitude spectrum when the input falls halfway between two DFT bins. As you can see, we get two peaks and the energy is spread to neighboring bins. Notice that this spreading is reduced when using a window function.
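One way to see why the energy spreads (a short aside, not in the original notebook): for a length-$N$ complex exponential at bin frequency $f$, the DFT magnitude is

$$\lvert X[k]\rvert = \left|\frac{\sin\big(\pi (f-k)\big)}{\sin\big(\pi (f-k)/N\big)}\right|$$

When $f$ is an integer this is zero everywhere except $k=f$, where it equals $N$. When $f$ falls halfway between bins the numerator equals 1 for every $k$, so the magnitude decays only slowly away from the peak; this is the leakage visible below. A real sinusoid adds a mirrored copy of the same pattern at $N-f$.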
```
N = 1024
t = np.arange(0, N)
f1 = 256 # frequency expressed in DFT bins
f2 = 256.5
x1 = np.sin(2 * np.pi * f1 * t / N)
x2 = np.sin(2 * np.pi * f2 * t / N)
X1 = np.abs(np.fft.fft(x1))
X2 = np.abs(np.fft.fft(x2))
X3 = np.abs(np.fft.fft(x2 * np.hanning(N)))
start_bin = 225
end_bin = 275
zoom_t = t[start_bin:end_bin]
zoom_X1 = X1[start_bin:end_bin]
zoom_X2 = X2[start_bin:end_bin]
zoom_X3 = X3[start_bin:end_bin]
plt.figure(figsize=(20,10))
plt.stem(zoom_t, zoom_X1, linefmt='r', markerfmt='ro-', basefmt='r')
plt.stem(zoom_t, zoom_X2, linefmt='g', markerfmt='go-', basefmt='g')
plt.stem(zoom_t, zoom_X3, linefmt='b', markerfmt='bo-', basefmt='b')
plt.legend(['Sine at DFT bin 256','Sine at DFT bin 256.5', 'Windowed sine at DFT bin 256.5'],
fontsize='xx-large')
X1 = 20*np.log10(np.abs(np.fft.fft(x1)))
X2 = 20*np.log10(np.abs(np.fft.fft(x2)))
X3 = 20*np.log10(np.abs(np.fft.fft(x2 * np.hanning(N))))
start_bin = 225
end_bin = 275
zoom_t = t[start_bin:end_bin]
zoom_X1 = X1[start_bin:end_bin]
zoom_X2 = X2[start_bin:end_bin]
zoom_X3 = X3[start_bin:end_bin]
plt.figure(figsize=(20,10))
plt.plot(zoom_t, zoom_X1,color='r')
plt.plot(zoom_t, zoom_X2,color='g')
plt.plot(zoom_t, zoom_X3,color='b')
plt.legend(['Sine at DFT bin 256','Sine at DFT bin 256.5', 'Windowed sine at DFT bin 256.5'],
fontsize='xx-large', loc='upper right')
start_bin = 0
end_bin = 512
zoom_t = t[start_bin:end_bin]
zoom_X1 = X1[start_bin:end_bin]
zoom_X2 = X2[start_bin:end_bin]
zoom_X3 = X3[start_bin:end_bin]
plt.figure(figsize=(20,10))
plt.plot(zoom_t, zoom_X1,color='r')
plt.plot(zoom_t, zoom_X2,color='g')
plt.plot(zoom_t, zoom_X3,color='b')
plt.legend(['Sine at DFT bin 256','Sine at DFT bin 256.5', 'Windowed sine at DFT bin 256.5'],
fontsize='xx-large', loc='upper right')
```
```
# HIDDEN
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
pd.set_option('display.max_rows', 100)
sns.set(rc={'figure.figsize':(12,6)})
import warnings
warnings.filterwarnings("ignore")
url = 'https://raw.githubusercontent.com/propublica/compas-analysis/master/compas-scores-two-years.csv'
recidivism = (
pd.read_csv(url)
.drop(['name', 'first', 'last'], axis=1)
)
```
# COMPAS Analysis
### Individual Fairness
* Define individual similarity measure on COMPAS
* Investigate COMPAS w/r/t individual fairness
### Define a Metric
For comparing similarity between individuals, use 'task relevant' variables:
* Criminal History
* Severity of current charge
* Age of defendant (associated w/recidivism)
'task relevant' variables reflect *effort*.
To start (the metric is formalized just below):
* Don't use Race or Gender
* Use absolute (L1) differences for numeric variables (a Euclidean version is left commented out in the class below)
* Use "Hard Match" (exact equality) for categorical variables
```
metric_vars = [
'age', 'juv_fel_count', 'juv_misd_count',
'priors_count', 'c_charge_degree', 'c_charge_desc']
other_vars = ['decile_score', 'is_recid', 'race', 'sex']
```
### Metric Base Class
* On instantiation, gather column types (to know how to process them)
* Define a `dist` method that takes in two rows, outputs a non-negative number (metric)
```
class CompasMetricBase(object):
def __init__(self, df, metric_vars):
self.vars = metric_vars
df = df[metric_vars]
self.nums = df.dtypes.loc[lambda x:(x == 'float') | (x == 'int')].index
self.cats = df.dtypes.loc[lambda x:x == 'object'].index
return
def dist(self, ser1, ser2):
# sum of squares
# dnum = np.sqrt(((ser1.loc[self.nums] - ser2.loc[self.nums])**2).sum())
# absolute difference
dnum = np.abs((ser1.loc[self.nums] - ser2.loc[self.nums])).sum()
# exact-match
dcat = (1 - (ser1.loc[self.cats] == ser2.loc[self.cats]).astype(int)).sum()
return (dnum + dcat) / len(self.vars)
# Ignore this
def min_max_normalize(df):
return (df - df.min()) / (df.max() - df.min())
df = recidivism[metric_vars + other_vars]
d = CompasMetricBase(df, metric_vars)
# d = MaxSimilarityMetric(df, metric_vars)
# d = CompasMetricBow(df, metric_vars)
# d = CompasMetricLuckEgal(df, metric_vars)
```
### Compute pairwise similarities
* Too many to compute exhaustively
  - COMPAS is ~7k individuals
  - Pairwise distances: on the order of n^2/2, i.e. tens of millions of pairs
* Procedure:
- Randomly sample 10k pairs
- Compute pairwise similarity of those pairs (`d`)
- Calculate pairwise similarity of outcomes (`D`)
```
samp = 10000
np.random.seed(42)
a = df.copy()
a[d.nums] = min_max_normalize(a[d.nums]) # comment out (for luck egalitarianism)
dm = []
for i,j in np.random.choice(a.index, size=(samp,2)):
  v1, v2 = a.loc[i], a.loc[j]  # label-based lookup for both rows of the sampled pair
dist = d.dist(v1, v2)
D = np.abs(v1['decile_score'] - v2['decile_score']) / 10
dm.append((i, j, dist, D))
dm = pd.DataFrame(dm, columns=['idx_1', 'idx_2', 'd', 'D'])
```
### Results: pairs + (d, D)
```
dm
```
### Select 'unfair' decisions based on (d,D)-Fairness condition
The Lipschitz condition for fairness is:
$$D(S(x), S(y)) \leq d(x,y)$$
* We calculate the proportion of pairs that don't satisfy the fair decision condition:
```
unfairs = dm[dm['d'] < dm['D']]
len(unfairs) / len(dm)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,4))
dm['d'].plot(kind='hist', bins=25, ax=ax1, title='distribution of similarity of individuals (d)')
dm['D'].plot(kind='hist', ax=ax2, title='distribution of similarity of outcomes (D)')
plt.suptitle('Distributions of (d, D) in the sample population')
plt.tight_layout();
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,4))
unfairs['d'].plot(kind='hist', bins=25, ax=ax1, title='distribution of similarity of individuals (d)')
unfairs['D'].plot(kind='hist', bins=np.linspace(0,1,10), ax=ax2, title='distribution of similarity of outcomes (D)')
plt.suptitle('Distributions of (d, D) among unfair decisions')
plt.tight_layout();
```
### Question: thinking about what goes into our metrics
* How are variables individually weighted in the similarity calculation?
- which variable do you think has the most influence?
    - you can train a regression model to learn `d` and look at the importance of each variable (see the sketch after this list)
* Are `d` and `D` appropriately comparable?
- can you think of a choice of metric `d` or `D` so that fairness always holds?
- can you think of a choice of metric `D` so that fairness almost never holds?
* Return to top and make metric tweaks!
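One way to act on that first question, as a rough sketch that is not part of the original analysis: refit the sampled distances on the per-variable differences and rank the variables by learned importance. The `a`, `dm`, and `d` objects are the ones defined in the sampling cell above; the choice of regressor is arbitrary.
```
# Hypothetical sketch: approximate d from per-variable differences and rank the
# variables by their learned importance (uses the normalized frame `a`,
# the sampled pairs in `dm`, and the metric object `d` from above).
from sklearn.ensemble import RandomForestRegressor

i1, i2 = dm['idx_1'].values, dm['idx_2'].values
num_diffs = np.abs(a.loc[i1, d.nums].values - a.loc[i2, d.nums].values)
cat_diffs = (a.loc[i1, d.cats].values != a.loc[i2, d.cats].values).astype(int)
X_diff = pd.DataFrame(np.hstack([num_diffs, cat_diffs]),
                      columns=list(d.nums) + list(d.cats))

reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_diff, dm['d'])
pd.Series(reg.feature_importances_, index=X_diff.columns).sort_values(ascending=False)
```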
```
jittered = dm.assign(
D=(dm['D'] + np.random.normal(0,0.01, size=dm.shape[0]))
)
jittered.plot(kind='scatter', x='d', y='D', c='b', title='plot of individual similarity vs similarity in outcomes');
```
### Rejoin the similarity pairs to the original data
* Analyze the (lack of) fairness in terms of original data
```
def rejoin(dm, df):
# join similarities with original data
# prefix=1,2 denote data attached to each individual in pair
m = (
dm
.merge(df, left_on='idx_1', right_index=True)
.merge(df, left_on='idx_2', right_index=True, suffixes=['_1', '_2'])
.sort_index(axis=1)
)
# extract the data associated to the higher COMPAS score
# when an unfair decision is made, this is the individual impacted most by decision
cols1 = m['decile_score_1'] < m['decile_score_2']
cols2 = m['decile_score_1'] > m['decile_score_2']
repl_cols = ['race', 'decile_score', 'is_recid', 'c_charge_degree', 'c_charge_desc', 'sex', 'age']
for c in repl_cols:
m.loc[cols1, c] = m.loc[cols1, '%s_2' % c]
m.loc[cols2, c] = m.loc[cols2, '%s_1' % c]
# Calculate if a FP or FN occurred within the pair
FP = ((m.decile_score >= 5) & (m.is_recid == 0)).astype(int)
FN = (
((m['decile_score_1'] < m['decile_score_2']) & (m['decile_score_1'] < 5) & (m['is_recid_1'] == 1)) |
((m['decile_score_2'] < m['decile_score_1']) & (m['decile_score_2'] < 5) & (m['is_recid_2'] == 1))
).astype(int)
m = m.assign(FP=FP, FN=FN)
return m
r = rejoin(dm, df)
```
### Individuals distance ~0.31 apart
* Note that not everyone is similar in the same way!
* Larger difference in age could be made up by a smaller distance in prior_count, for example
* Which variables have largest influence?
```
(
r.loc[
(r.d < 0.32) & (r.d > 0.30),
[c for c in r.columns if (('_1' in c) or ('_2' in c))]
].loc[:,[c for c in r.columns if c[:-2] in metric_vars]]
)
```
### Very unfair pairs:
* individuals very similar (d < 0.11)
* outcomes very different (D > 0.5)
What are the (differing) characteristics of these pairs?
```
rejoin(
unfairs[(unfairs['d'] < 0.11) & (unfairs['D'] > 0.5)],
df
).T
```
### Focus on unfair decisions
```
a = rejoin(unfairs, df)
pd.concat([
a.race.value_counts(normalize=True).rename('unfairly treated'),
recidivism.race.value_counts(normalize=True).rename('population')
], axis=1).plot(kind='bar', title='distribution of impacts from unfair decisions by race');
```
### Average difference in outcomes by race (among unfair decisions)
* Scores of Black defendants among unfair decisions are higher than those of White defendants
```
sns.barplot(data=a, x='race', y='D');
```
### Recidivism rate by race among the higher of the two COMPAS scores in unfair decisions
* The difference between Black and White defendants may be due to the high FN rate in the White population
```
sns.barplot(data=a, x='race', y='is_recid');
cdict = {'Other': 'b', 'Caucasian': 'orange', 'African-American': 'green', 'Hispanic': 'red', 'Asian': 'purple', 'Native American': 'brown'}
jittered = a.assign(
D=(a['D'] + np.random.normal(0,0.02, size=a.shape[0]))
)
ax = jittered.plot(kind='scatter', x='d', y='D', c=a['race'].replace(cdict), alpha=0.4, title='(d,D) by Race');
pd.concat([
recidivism.c_charge_desc.value_counts(normalize=True).iloc[:10].rename('population'),
a.c_charge_desc.value_counts(normalize=True).iloc[:10].rename('unfairly treated')
], axis=1).plot(kind='bar', title='Distribution of charge description');
pd.concat([
recidivism.c_charge_degree.value_counts(normalize=True).iloc[:10].rename('population'),
a.c_charge_degree.value_counts(normalize=True).iloc[:10].rename('unfairly treated')
], axis=1).plot(kind='bar', title='Charge Degree (Felony vs Misd) distribution');
```
### Average age by race/sex in unfair decisions
* Pretty even
```
sns.barplot(data=a, x='race', y='age', hue='sex');
```
### False Positive Rate by race in unfair decisions
```
sns.barplot(data=a, x='race', y='FP');
```
### False Negative Rate by race in unfair decisions
```
sns.barplot(data=a, x='race', y='FN');
```
### Other metric definitions
* What tweaks can you think of?
* Make your own and try them out! (one more hypothetical variant is sketched after the examples below)
```
class MaxSimilarityMetric(CompasMetricBase):
def dist(self, ser1, ser2):
# absolute difference
dnum = np.abs((ser1.loc[self.nums] - ser2.loc[self.nums])).max()
# exact-match
dcat = (1 - (ser1.loc[self.cats] == ser2.loc[self.cats]).astype(int)).max()
return max(dnum, dcat)
recidivism.c_charge_desc.value_counts()
class CompasMetricBow(object):
def __init__(self, df, metric_vars):
self.vars = metric_vars
df = df[metric_vars]
self.nums = df.dtypes.loc[lambda x:(x == 'float') | (x == 'int')].index
self.cats = df.dtypes.loc[lambda x:x == 'object'].index
from sklearn.feature_extraction.text import CountVectorizer
dv = CountVectorizer()
bow = dv.fit_transform(df['c_charge_desc'].replace(np.nan, '')).todense()
        self.bow = pd.DataFrame(bow, columns=sorted(dv.vocabulary_, key=dv.vocabulary_.get))  # order names by column index
return
def dist(self, ser1, ser2):
# absolute difference
dnum = np.abs((ser1.loc[self.nums] - ser2.loc[self.nums])).sum()
# exact-match for charge degree (M vs F)
dcat1 = (1 - (ser1.loc[['c_charge_degree']] == ser2.loc[['c_charge_degree']]).astype(int)).sum()
        # cosine distance (1 - cosine similarity) between bag-of-words charge descriptions
        bow = self.bow
        i1, i2 = ser1.name, ser2.name
        dot = (bow.loc[i1] * bow.loc[i2]).sum()
        N1 = np.sqrt((bow.loc[i1]**2).sum())
        N2 = np.sqrt((bow.loc[i2]**2).sum())
        dcat2 = 1 - dot / (N1 * N2) if (N1 > 0 and N2 > 0) else 1  # empty descriptions count as maximally different
        return (dnum + dcat1 + dcat2) / len(self.vars)
class CompasMetricLuckEgal(object):
def __init__(self, df, metric_vars):
self.vars = metric_vars
df = df[metric_vars + ['race']]
self.nums = df.dtypes.loc[lambda x:(x == 'float') | (x == 'int')].index
        self.cats = df[metric_vars].dtypes.loc[lambda x:x == 'object'].index  # exclude 'race': it only selects which group CDF to use
cdfs = {}
        for c in self.nums:
cdfs[c] = (
df
.groupby('race')[c]
.apply(self._calculate_cdf)
.reset_index()
.pivot(index='level_1', columns='race', values=c)
.fillna(1.0)
)
if 'Asian' not in cdfs[c].columns:
cdfs[c]['Asian'] = 1.0
self.cdfs = cdfs
return
def dist(self, ser1, ser2):
dnum = 0
for c in self.nums:
cdf = self.cdfs[c]
p1 = cdf.loc[ser1[c], ser1['race']]
p2 = cdf.loc[ser2[c], ser2['race']]
dnum = dnum + np.abs(p1 - p2)
        # exact-match for the categorical charge variables
        dcat = (1 - (ser1.loc[self.cats] == ser2.loc[self.cats]).astype(int)).sum()
return (dnum + dcat) / len(self.vars)
def _calculate_cdf(self, col):
cdf = col.value_counts(normalize=True).sort_index().cumsum()
reindexed_cdf = cdf.reindex(
pd.RangeIndex(
start=cdf.index.min(),
stop=(cdf.index.max() + 1)
), method='nearest')
return reindexed_cdf
```
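As one more illustration of the kind of tweak suggested above, here is a hypothetical weighted variant of the base metric. The weights are invented for illustration and are not part of the original analysis.
```
# Hypothetical example: weight each numeric variable before summing, e.g. to
# emphasise criminal history over age. The weights are illustrative only.
class WeightedMetric(CompasMetricBase):
    weights = pd.Series({'age': 0.5, 'juv_fel_count': 1.5,
                         'juv_misd_count': 1.0, 'priors_count': 2.0})

    def dist(self, ser1, ser2):
        w = self.weights.reindex(self.nums).fillna(1.0)
        dnum = (np.abs(ser1.loc[self.nums] - ser2.loc[self.nums]) * w).sum()
        dcat = (1 - (ser1.loc[self.cats] == ser2.loc[self.cats]).astype(int)).sum()
        # normalise so the result stays roughly in [0, 1] for min-max scaled inputs
        return (dnum + dcat) / (w.sum() + len(self.cats))
```
To try it, swap `d = CompasMetricBase(df, metric_vars)` for `d = WeightedMetric(df, metric_vars)` in the cell near the top and re-run the sampling cell.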
```
import numpy as np
import pandas as pd
import csv
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix,accuracy_score
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.ensemble import RandomForestClassifier
flight_data = pd.read_csv('/home/kkanagalananthapadm1/flights.csv')
airport_data = pd.read_csv('/home/kkanagalananthapadm1/airports.csv')
holiday_data = pd.read_csv('/home/kkanagalananthapadm1/Holiday_List.csv')
flight_data['FLIGHT_DATE'] = pd.to_datetime(flight_data[['YEAR','MONTH','DAY']])
holiday_data['FLIGHT_DATE'] = pd.to_datetime(holiday_data[['YEAR','MONTH','DAY']])
# List All the redundant columns
col_list = [ 'TAIL_NUMBER','TAXI_OUT', 'WHEELS_OFF', 'WHEELS_ON', 'TAXI_IN','YEAR','MONTH','DAY','AIR_SYSTEM_DELAY',
'SECURITY_DELAY','AIRLINE_DELAY','LATE_AIRCRAFT_DELAY','WEATHER_DELAY']
# Drop All the redundant columns
flight_data.drop(col_list,inplace=True,axis=1)
holiday_data.drop(['YEAR','MONTH','DAY'],inplace=True,axis=1)
# Merge Holiday Data to Flight Data
flight_data = pd.merge(flight_data,holiday_data,on = 'FLIGHT_DATE',how='left')
# Create a bin for Target Variable Departure Delay
flight_data['DEP_DELAY_BIN'] = np.where(flight_data['DEPARTURE_DELAY'] < 15, 1, np.where(flight_data['DEPARTURE_DELAY'] < 60, 2,np.where(flight_data['DEPARTURE_DELAY'] < 180, 3,4)))
# Add Non-Holiday days rows with zero
flight_data['DAY_TYPE'].fillna(value=0,inplace=True)
flight_data['ORIGIN_AIRPORT'].unique()
flight_data.dropna(subset=['ELAPSED_TIME','AIR_TIME'],inplace=True)
flight_data.shape
FLIGHT_NUMBER_MEAN_DELAY_METRIC = flight_data.loc[flight_data['DEPARTURE_DELAY']>0,['AIRLINE','DEPARTURE_DELAY']].groupby(['AIRLINE']).agg(['count','mean','median','max','min'])
FLIGHT_NUMBER_MEAN_DELAY_METRIC
Percentage_flights_delayed = flight_data.loc[flight_data['DEPARTURE_DELAY']>0,['AIRLINE','DEPARTURE_DELAY']].groupby(['AIRLINE']).apply(lambda x: x.count())
Percentage_flights_delayed
# For Decision Tree Implementation
le = preprocessing.LabelEncoder()
flight_data['AIRLINE_ENCODE'] = le.fit_transform(flight_data['AIRLINE'])
flight_data['ORIGIN_AIRPORT_ENCODE'] = le.fit_transform(flight_data['ORIGIN_AIRPORT'])
flight_data['DESTINATION_AIRPORT_ENCODE'] = le.fit_transform(flight_data['DESTINATION_AIRPORT'])
train, test = train_test_split(flight_data, test_size = 0.25)
C = DecisionTreeClassifier(criterion='gini',splitter='best', max_depth=None, min_samples_split=1000)
Features = ['DAY_OF_WEEK','AIRLINE_ENCODE', 'ORIGIN_AIRPORT_ENCODE', 'DESTINATION_AIRPORT_ENCODE', 'DEPARTURE_TIME',
'ELAPSED_TIME', 'AIR_TIME', 'DISTANCE', 'DAY_TYPE']
X_train = train[Features]
y_train = train["DEP_DELAY_BIN"]
X_test = test[Features]
y_test = test["DEP_DELAY_BIN"]
dt = C.fit(X_train,y_train)
y_pred = C.predict(X_test)
confusion_matrix(y_test, y_pred)
accuracy_score(y_test, y_pred)
precision, recall, fscore, support = score(y_test, y_pred)
print(precision)
print(recall)
print(fscore)
print(support)
type(y_pred)
# Export the fitted decision tree to a Graphviz .dot file
dotfile = open('/home/kkanagalananthapadm1/dtree.dat','w')
tree.export_graphviz(dt, out_file=dotfile)
dotfile.close()
clf = RandomForestClassifier(n_jobs=2, random_state=0)
y_pred_rf = clf.fit(X_train,y_train)
y_pred_rf
y_pred_final = clf.predict(X_test)
accuracy_score(y_test, y_pred_final)
rf_precision, rf_recall, rf_fscore, rf_support = score(y_test, y_pred_final)
print(rf_precision)
print(rf_recall)
print(rf_fscore)
print(rf_support)
```
```
import numpy as np
import pandas as pd
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
#https://drive.google.com/file/d/1LxF2TdIqrZ71l8IQIbnVUejcpyoVVgKH/view?usp=sharing
downloaded = drive.CreateFile({'id':'1LxF2TdIqrZ71l8IQIbnVUejcpyoVVgKH'})
downloaded.GetContentFile('training.1600000.processed.noemoticon')
data = pd.read_csv('training.1600000.processed.noemoticon',encoding='latin-1',header=None)
data.head()
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
import seaborn as sns
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
stemmer = PorterStemmer()
nltk.download('stopwords')
text1=[]
for i in range(750000,850000):
sentence = re.sub('[^a-zA-Z123456789]', ' ', data[5][i])
sentence = sentence.lower()
sentence = sentence.split()
sentence = [stemmer.stem(word) for word in sentence if not word in stopwords.words('english')]
sentence = ' '.join(sentence)
text1.append(sentence)
len(text1)
y = data[0][750000:850000]
import tensorflow as tf
tf.__version__
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dense
voc_size = 10000
onehot_repr=[one_hot(words,voc_size)for words in text1]
onehot_repr
sent_length=50
embedded_docs=pad_sequences(onehot_repr,padding='pre',maxlen=sent_length)
#print(embedded_docs)
from tensorflow.keras.layers import Dropout
## Creating model
embedding_vector_features=40
model=Sequential()
model.add(Embedding(voc_size,embedding_vector_features,input_length=sent_length))
model.add(Dropout(0.7))
model.add(LSTM(100))
model.add(Dropout(0.7))
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
print(model.summary())
len(embedded_docs),y.shape
X_final=np.array(embedded_docs)
y_final=np.array(y)
# Sentiment140 labels positive tweets as 4; map them to 1 for binary classification
for i in range(100000):
if y_final[i]==4:
y_final[i]=1
sns.countplot(y_final)
X_train, X_test, y_train, y_test = train_test_split(X_final, y_final, test_size=0.33, random_state=42)
# Finally Training
model.fit(X_train,y_train,validation_data=(X_test,y_test),epochs=5,batch_size=32)
y_pred=model.predict_classes(X_test)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test,y_pred))
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred)
sns.countplot(y)
#https://drive.google.com/file/d/1_VWF8om2pO4Bn77WygW-ggXKHFffY8k6/view?usp=sharing
downloaded = drive.CreateFile({'id':'1_VWF8om2pO4Bn77WygW-ggXKHFffY8k6'})
downloaded.GetContentFile('ocr_final_data.csv')
data1 = pd.read_csv('ocr_final_data.csv')
data1.head()
predtext=[]
for i in range(239):
sentence = re.sub('[^a-zA-Z123456789\n]', ' ', str(data1['Text'][i]))
sentence = sentence.lower()
sentence = sentence.split()
sentence = [stemmer.stem(word) for word in sentence if not word in stopwords.words('english')]
sentence = ' '.join(sentence)
predtext.append(sentence)
len(predtext)
onehot_repr1=[one_hot(words,voc_size)for words in predtext]
onehot_repr1
sent_length=50
embedded_docs1=pad_sequences(onehot_repr1,padding='pre',maxlen=sent_length)
ypred = model.predict(embedded_docs1)
ypred[0]
# quick sanity check of the first prediction against an arbitrary threshold
if(ypred[0]>0.69):
  print("positive")
else:
  print("negative")
finalprediction = []
for i in range(239):
if(ypred[i]>0.5):
finalprediction.append('Positive')
else:
finalprediction.append('Negative')
finalprediction[0]
data1['finalprediction'] = finalprediction
type(data1['Text'][1])
# rows with missing OCR text (NaN is parsed as float) get a placeholder label
j=0
for i in range(239):
if type(data1['Text'][i])==float:
data1['finalprediction'][i]="Random"
j=j+1
print(j)
sns.countplot(data1['finalprediction'])
mydata = data1['Filename']
finalcsv = pd.DataFrame(mydata)
finalcsv['Category'] = data1['finalprediction']
finalcsv.head()
from google.colab import files
finalcsv.to_csv('submission2.csv')
files.download('submission2.csv')
```
|
github_jupyter
|
import numpy as np
import pandas as pd
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
#https://drive.google.com/file/d/1LxF2TdIqrZ71l8IQIbnVUejcpyoVVgKH/view?usp=sharing
downloaded = drive.CreateFile({'id':'1LxF2TdIqrZ71l8IQIbnVUejcpyoVVgKH'})
downloaded.GetContentFile('training.1600000.processed.noemoticon')
data = pd.read_csv('training.1600000.processed.noemoticon',encoding='latin-1',header=None)
data.head()
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
import seaborn as sns
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
stemmer = PorterStemmer()
nltk.download('stopwords')
text1=[]
for i in range(750000,850000):
sentence = re.sub('[^a-zA-Z123456789]', ' ', data[5][i])
sentence = sentence.lower()
sentence = sentence.split()
sentence = [stemmer.stem(word) for word in sentence if not word in stopwords.words('english')]
sentence = ' '.join(sentence)
text1.append(sentence)
len(text1)
y = data[0][750000:850000]
import tensorflow as tf
tf.__version__
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dense
voc_size = 10000
onehot_repr=[one_hot(words,voc_size)for words in text1]
onehot_repr
sent_length=50
embedded_docs=pad_sequences(onehot_repr,padding='pre',maxlen=sent_length)
#print(embedded_docs)
from tensorflow.keras.layers import Dropout
## Creating model
embedding_vector_features=40
model=Sequential()
model.add(Embedding(voc_size,embedding_vector_features,input_length=sent_length))
model.add(Dropout(0.7))
model.add(LSTM(100))
model.add(Dropout(0.7))
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
print(model.summary())
len(embedded_docs),y.shape
X_final=np.array(embedded_docs)
y_final=np.array(y)
for i in range(100000):
if y_final[i]==4:
y_final[i]=1
sns.countplot(y_final)
X_train, X_test, y_train, y_test = train_test_split(X_final, y_final, test_size=0.33, random_state=42)
# Finally Training
model.fit(X_train,y_train,validation_data=(X_test,y_test),epochs=5,batch_size=32)
y_pred = (model.predict(X_test) > 0.5).astype("int32")  # threshold the sigmoid output to get 0/1 class labels
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test,y_pred))
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred)
sns.countplot(y)
#https://drive.google.com/file/d/1_VWF8om2pO4Bn77WygW-ggXKHFffY8k6/view?usp=sharing
#https://drive.google.com/file/d/1_VWF8om2pO4Bn77WygW-ggXKHFffY8k6/view?usp=sharing
downloaded = drive.CreateFile({'id':'1_VWF8om2pO4Bn77WygW-ggXKHFffY8k6'})
downloaded.GetContentFile('ocr_final_data.csv')
data1 = pd.read_csv('ocr_final_data.csv')
data1.head()
predtext=[]
for i in range(239):
sentence = re.sub('[^a-zA-Z123456789\n]', ' ', str(data1['Text'][i]))
sentence = sentence.lower()
sentence = sentence.split()
sentence = [stemmer.stem(word) for word in sentence if not word in stopwords.words('english')]
sentence = ' '.join(sentence)
predtext.append(sentence)
len(predtext)
onehot_repr1=[one_hot(words,voc_size)for words in predtext]
onehot_repr1
sent_length=50
embedded_docs1=pad_sequences(onehot_repr1,padding='pre',maxlen=sent_length)
ypred = model.predict(embedded_docs1)
ypred[0]
if(ypred[0]>0.69):
print("sj")
else:
print("frf")
finalprediction = []
for i in range(239):
if(ypred[i]>0.5):
finalprediction.append('Positive')
else:
finalprediction.append('Negative')
finalprediction[0]
data1['finalprediction'] = finalprediction
type(data1['Text'][1])
j=0
for i in range(239):
if type(data1['Text'][i])==float:
data1['finalprediction'][i]="Random"
j=j+1
print(j)
sns.countplot(data1['finalprediction'])
mydata = data1['Filename']
finalcsv = pd.DataFrame(mydata)
finalcsv['Category'] = data1['finalprediction']
finalcsv.head()
from google.colab import files
finalcsv.to_csv('submission2.csv')
files.download('submission2.csv')
| 0.340047 | 0.186502 |
<a href="https://colab.research.google.com/github/BrunoGomesCoelho/mosquito-networking/blob/master/notebooks/1.3-BrunoGomesCoelho_Colab3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Colab console code
"""
function ClickConnect(){
console.log("Working");
document.querySelector("colab-toolbar-button#connect").click()
}setInterval(ClickConnect,60000)
"""
import time
start_time = time.time()
COLAB_IDX = 3
TESTING = False
COLAB = True
if COLAB:
BASE_DIR = "/content/drive/My Drive/IC/mosquito-networking/"
else:
BASE_DIR = "../"
from google.colab import drive
drive.mount('/content/drive')
import sys
sys.path.append("/content/drive/My Drive/IC/mosquito-networking/")
!python3 -m pip install -qr "/content/drive/My Drive/IC/mosquito-networking/drive_requirements.txt"
```
- - -
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
# Trying out a full PyTorch experiment, with TensorBoard, parallel processing, etc.
```
# OPTIONAL: Load the "autoreload" extension so that code can change
#%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
#%autoreload 2
import numpy as np
import pandas as pd
from src.data import make_dataset
from src.data import read_dataset
from src.data import util
from src.data.colab_dataset import MosquitoDatasetColab
import joblib
from torchsummary import summary
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
```
# Experiment params
```
# Parameters
params = {'batch_size': 64,
'shuffle': True,
'num_workers': 0}
max_epochs = 1
if TESTING:
params["num_workers"] = 0
version = !python3 --version
version = version[0].split(".")[1]
if int(version) < 7 and params["num_workers"]:
print("WARNING\n"*10)
print("Parallel execution only works for python3.7 or above!")
print("Running in parallel with other versions is not guaranted to work")
print("See https://discuss.pytorch.org/t/valueerror-signal-number-32-out-of-range-when-loading-data-with-num-worker-0/39615/2")
## Load gpu or cpu
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Using device {device}")
```
# load data
```
# Load scaler
#scaler = joblib.load("../data/interim/scaler.pkl")
scaler = joblib.load(BASE_DIR + "data/interim/scaler.pkl")
data = np.load(BASE_DIR + "data/interim/all_wavs.npy", allow_pickle=True)
data = data[data[:, -1].argsort()]
df = pd.read_csv(BASE_DIR + "data/interim/file_names.csv")
df.sort_values("original_name", inplace=True)
errors = (df["original_name"].values != data[:, -1]).sum()
if errors:
print(f"We have {errors} errors!")
raise ValueError("Error in WAV/CSV")
x = data[:, 0]
y = df["label"]
train_idx = df["training"] == 1
# Generators
training_set = MosquitoDatasetColab(x[train_idx], y[train_idx].values,
device=device, scaler=scaler)
training_generator = torch.utils.data.DataLoader(training_set, **params,
pin_memory=True)
test_set = MosquitoDatasetColab(x[~train_idx], y[~train_idx].values,
device=device, scaler=scaler)
test_generator = torch.utils.data.DataLoader(test_set, **params,
pin_memory=True)
# Generate some example data
temp_generator = torch.utils.data.DataLoader(training_set, **params)
for (local_batch, local_labels) in temp_generator:
example_x = local_batch
example_y = local_labels
break
```
# Load model
```
from src.models.BasicMosquitoNet2 import BasicMosquitoNet
# create your optimizer
net = BasicMosquitoNet()
net.load_state_dict(torch.load(BASE_DIR + f"runs/colab/{COLAB_IDX-1}/model_epoch_90.pt"))
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
if device.type == "cuda":
net.cuda()
summary(net, input_size=example_x.shape[1:])
```
# Start tensorboard
```
from torch.utils.tensorboard import SummaryWriter
save_path = BASE_DIR + f"runs/colab/{COLAB_IDX}/"
# default `log_dir` is "runs" - we'll be more specific here
writer = SummaryWriter(save_path, max_queue=3)
```
# train function
```
# Simple train function
def train(net, optimizer, max_epochs, testing=False, testing_idx=0,
save_idx=1, save_path=""):
# Loop over epochs
last_test_loss = 0
for epoch in range(max_epochs):
# Training
cumulative_train_loss = 0
cumulative_train_acc = 0
amount_train_samples = 0
for idx, (local_batch, local_labels) in enumerate(training_generator):
amount_train_samples += len(local_batch)
local_batch, local_labels = util.convert_cuda(local_batch,
local_labels,
device)
optimizer.zero_grad() # zero the gradient buffers
output = net(local_batch)
loss = criterion(output, local_labels)
cumulative_train_loss += loss.data.item()
loss.backward()
optimizer.step() # Does the update
# Stores loss
pred = output >= 0.5
cumulative_train_acc += pred.float().eq(local_labels).sum().data.item()
if testing and idx == testing_idx:
break
cumulative_train_loss /= (idx+1)
cumulative_train_acc /= amount_train_samples
writer.add_scalar("Train Loss", cumulative_train_loss, epoch)
writer.add_scalar("Train Acc", cumulative_train_acc, epoch)
# Validation
with torch.set_grad_enabled(False):
cumulative_test_loss = 0
cumulative_test_acc = 0
amount_test_samples = 0
for idx, (local_batch, local_labels) in enumerate(test_generator):  # validate on the held-out test split
amount_test_samples += len(local_batch)
local_batch, local_labels = util.convert_cuda(local_batch,
local_labels,
device)
output = net(local_batch)
loss = criterion(output, local_labels)
cumulative_test_loss += loss.data.item()
# Stores loss
pred = output >= 0.5
cumulative_test_acc += pred.float().eq(local_labels).sum().data.item()
if testing:
break
cumulative_test_loss /= (idx+1)
cumulative_test_acc /= amount_test_samples
writer.add_scalar("Test Loss", cumulative_test_loss, epoch)
writer.add_scalar("Test Acc", cumulative_test_acc, epoch)
torch.save(net.state_dict(), save_path + f"model_epoch_{epoch}.pt")
writer.close()
return cumulative_test_loss
%%time
train(net, optimizer, 150, testing=TESTING, save_path=save_path)
print(start_time)
print(time.time() - start_time)
```
|
github_jupyter
|
# Colab console code
"""
function ClickConnect(){
console.log("Working");
document.querySelector("colab-toolbar-button#connect").click()
}setInterval(ClickConnect,60000)
"""
import time
start_time = time.time()
COLAB_IDX = 3
TESTING = False
COLAB = True
if COLAB:
BASE_DIR = "/content/drive/My Drive/IC/mosquito-networking/"
else:
BASE_DIR = "../"
from google.colab import drive
drive.mount('/content/drive')
import sys
sys.path.append("/content/drive/My Drive/IC/mosquito-networking/")
!python3 -m pip install -qr "/content/drive/My Drive/IC/mosquito-networking/drive_requirements.txt"
# OPTIONAL: Load the "autoreload" extension so that code can change
#%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
#%autoreload 2
import numpy as np
import pandas as pd
from src.data import make_dataset
from src.data import read_dataset
from src.data import util
from src.data.colab_dataset import MosquitoDatasetColab
import joblib
from torchsummary import summary
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# Parameters
params = {'batch_size': 64,
'shuffle': True,
'num_workers': 0}
max_epochs = 1
if TESTING:
params["num_workers"] = 0
version = !python3 --version
version = version[0].split(".")[1]
if int(version) < 7 and params["num_workers"]:
print("WARNING\n"*10)
print("Parallel execution only works for python3.7 or above!")
print("Running in parallel with other versions is not guaranted to work")
print("See https://discuss.pytorch.org/t/valueerror-signal-number-32-out-of-range-when-loading-data-with-num-worker-0/39615/2")
## Load gpu or cpu
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Using device {device}")
# Load scaler
#scaler = joblib.load("../data/interim/scaler.pkl")
scaler = joblib.load(BASE_DIR + "data/interim/scaler.pkl")
data = np.load(BASE_DIR + "data/interim/all_wavs.npy", allow_pickle=True)
data = data[data[:, -1].argsort()]
df = pd.read_csv(BASE_DIR + "data/interim/file_names.csv")
df.sort_values("original_name", inplace=True)
errors = (df["original_name"].values != data[:, -1]).sum()
if errors:
print(f"We have {errors} errors!")
raise ValueError("Error in WAV/CSV")
x = data[:, 0]
y = df["label"]
train_idx = df["training"] == 1
# Generators
training_set = MosquitoDatasetColab(x[train_idx], y[train_idx].values,
device=device, scaler=scaler)
training_generator = torch.utils.data.DataLoader(training_set, **params,
pin_memory=True)
test_set = MosquitoDatasetColab(x[~train_idx], y[~train_idx].values,
device=device, scaler=scaler)
test_generator = torch.utils.data.DataLoader(test_set, **params,
pin_memory=True)
# Generate some example data
temp_generator = torch.utils.data.DataLoader(training_set, **params)
for (local_batch, local_labels) in temp_generator:
example_x = local_batch
example_y = local_labels
break
from src.models.BasicMosquitoNet2 import BasicMosquitoNet
# create your optimizer
net = BasicMosquitoNet()
net.load_state_dict(torch.load(BASE_DIR + f"runs/colab/{COLAB_IDX-1}/model_epoch_90.pt"))
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
if device.type == "cuda":
net.cuda()
summary(net, input_size=example_x.shape[1:])
from torch.utils.tensorboard import SummaryWriter
save_path = BASE_DIR + f"runs/colab/{COLAB_IDX}/"
# default `log_dir` is "runs" - we'll be more specific here
writer = SummaryWriter(save_path, max_queue=3)
# Simple train function
def train(net, optimizer, max_epochs, testing=False, testing_idx=0,
save_idx=1, save_path=""):
# Loop over epochs
last_test_loss = 0
for epoch in range(max_epochs):
# Training
cumulative_train_loss = 0
cumulative_train_acc = 0
amount_train_samples = 0
for idx, (local_batch, local_labels) in enumerate(training_generator):
amount_train_samples += len(local_batch)
local_batch, local_labels = util.convert_cuda(local_batch,
local_labels,
device)
optimizer.zero_grad() # zero the gradient buffers
output = net(local_batch)
loss = criterion(output, local_labels)
cumulative_train_loss += loss.data.item()
loss.backward()
optimizer.step() # Does the update
# Stores loss
pred = output >= 0.5
cumulative_train_acc += pred.float().eq(local_labels).sum().data.item()
if testing and idx == testing_idx:
break
cumulative_train_loss /= (idx+1)
cumulative_train_acc /= amount_train_samples
writer.add_scalar("Train Loss", cumulative_train_loss, epoch)
writer.add_scalar("Train Acc", cumulative_train_acc, epoch)
# Validation
with torch.set_grad_enabled(False):
cumulative_test_loss = 0
cumulative_test_acc = 0
amount_test_samples = 0
for idx, (local_batch, local_labels) in enumerate(test_generator):  # validate on the held-out test split
amount_test_samples += len(local_batch)
local_batch, local_labels = util.convert_cuda(local_batch,
local_labels,
device)
output = net(local_batch)
loss = criterion(output, local_labels)
cumulative_test_loss += loss.data.item()
# Stores loss
pred = output >= 0.5
cumulative_test_acc += pred.float().eq(local_labels).sum().data.item()
if testing:
break
cumulative_test_loss /= (idx+1)
cumulative_test_acc /= amount_test_samples
writer.add_scalar("Test Loss", cumulative_test_loss, epoch)
writer.add_scalar("Test Acc", cumulative_test_acc, epoch)
torch.save(net.state_dict(), save_path + f"model_epoch_{epoch}.pt")
writer.close()
return cumulative_test_loss
%%time
train(net, optimizer, 150, testing=TESTING, save_path=save_path)
print(start_time)
print(time.time() - start_time)
| 0.504394 | 0.798069 |
```
import matplotlib.pyplot as plt
import seaborn as sns
import keras
from keras.models import Sequential, load_model
from keras.layers import Dense, Conv2D , MaxPool2D , Flatten , Dropout
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from sklearn.metrics import classification_report,confusion_matrix
import tensorflow as tf
import cv2
import os
import numpy as np
import pickle
import librosa          # needed by the waveletGen/spectrogramGen helpers defined below
import librosa.display
labels = ['blues', 'classical', 'country', 'disco', 'hiphop', 'jazz', 'metal', 'pop', 'reggae', 'rock']
img_size = 256
def get_data(data_dir):
data = []
for label in labels:
path = os.path.join(data_dir, label)
class_num = labels.index(label)
#class_num = label # edit1
for img in os.listdir(path):
try:
img_arr = cv2.imread(os.path.join(path, img))[...,::-1] #convert BGR to RGB format
resized_arr = cv2.resize(img_arr, (img_size, img_size)) # Reshaping images to preferred size
data.append([resized_arr, class_num]) # edit1
except Exception as e:
print(e)
return np.array(data)
train = get_data('spectrogram/train')
val = get_data('spectrogram/test')
x_train = []
y_train = []
x_val = []
y_val = []
for feature, label in train:
x_train.append(feature)
y_train.append(label)
for feature, label in val:
x_val.append(feature)
y_val.append(label)
# Normalize the data
x_train = np.array(x_train) / 255
x_val = np.array(x_val) / 255
x_train.reshape(-1, img_size, img_size, 1)
y_train = np.array(y_train)
x_val.reshape(-1, img_size, img_size, 1)
y_val = np.array(y_val)
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
#rotation_range = 30, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.2, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
#horizontal_flip = True, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(x_train)
model = Sequential()
model.add(Conv2D(32,3,padding="same", activation="relu", input_shape=(256,256,3)))
model.add(MaxPool2D())
model.add(Conv2D(32, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(64, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128,activation="relu"))
model.add(Dense(10, activation="softmax"))
print(model.summary())
opt = Adam(learning_rate=0.0001)
model.compile(optimizer=opt, loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])  # the final Dense layer applies softmax, so the loss receives probabilities rather than logits
# The EXTREMELY time consuming step
history = model.fit(x_train,y_train,epochs = 500, validation_data = (x_val, y_val))
model.save("fullModelSave")
# definition of wavelet & spectrogram generating functions respectively (for one audio tape at a time)
def waveletGen(cls): # all testing in deployTestDirectory, test audio wavelet saved as 1.png, cls is the filename with .mp3
x, sr = librosa.load('deployTest/testAudioClip' + '/' + cls)
# plt.figure(figsize=(14, 5))
librosa.display.waveplot(x)
plt.savefig('deployTest/testWavelet/' + '1' + '.png')
plt.close()
'''
img_names = os.listdir('genres/' + cls)
os.makedirs('wavelets/train/' + cls)
os.makedirs('wavelets/test/' + cls)
print(cls)
train_names = img_names[:60]
test_names = img_names[60:]
cnt = 0
for nm in train_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
# plt.figure(figsize=(14, 5))
librosa.display.waveplot(x)
plt.savefig('wavelets/train/' + cls + '/' + str(cnt) + '.png')
plt.close()
cnt = 0
for nm in test_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
# plt.figure(figsize=(14, 5))
librosa.display.waveplot(x)
plt.savefig('wavelets/test/' + cls + '/' + str(cnt) + '.png')
plt.close()
'''
def spectrogramGen(cls): # all testing in deployTestDirectory, test audio spectrogram saved as 1.png, cls is the filename with .mp3
x, sr = librosa.load('deployTest/testAudioClip' + '/' + cls)
X = librosa.stft(x)
Xdb = librosa.amplitude_to_db(abs(X))
librosa.display.specshow(Xdb)
plt.savefig('deployTest/testSpectrogram/' + '1' + '.png')
plt.close()
'''
img_names = os.listdir('genres/' + cls)
os.makedirs('spectrogram/train/' + cls)
os.makedirs('spectrogram/test/' + cls)
print(cls)
train_names = img_names[:60]
test_names = img_names[60:]
cnt = 0
for nm in train_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
X = librosa.stft(x)
Xdb = librosa.amplitude_to_db(abs(X))
librosa.display.specshow(Xdb)
plt.savefig('spectrogram/train/' + cls + '/' + str(cnt) + '.png')
plt.close()
cnt = 0
for nm in test_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
X = librosa.stft(x)
Xdb = librosa.amplitude_to_db(abs(X))
librosa.display.specshow(Xdb)
plt.savefig('spectrogram/test/' + cls + '/' + str(cnt) + '.png')
plt.close()
'''
# spectrogram & wavelet dataset creation [left off here]
data = []
path = 'deployTest/testSpectrogram'
class_num = labels.index(label) # replace labels with genre name
#class_num = label # edit1
for img in os.listdir(path):
try:
img_arr = cv2.imread(os.path.join(path, img))[...,::-1] #convert BGR to RGB format
resized_arr = cv2.resize(img_arr, (img_size, img_size)) # Reshaping images to preferred size
data.append([resized_arr, class_num]) # edit1
except Exception as e:
print(e)
valSpectrogram = np.array(data)
data = []
path = 'deployTest/testWavelet'
class_num = labels.index(label) # replace labels with genre name
#class_num = label # edit1
for img in os.listdir(path):
try:
img_arr = cv2.imread(os.path.join(path, img))[...,::-1] #convert BGR to RGB format
resized_arr = cv2.resize(img_arr, (img_size, img_size)) # Reshaping images to preferred size
data.append([resized_arr, class_num]) # edit1
except Exception as e:
print(e)
valWavelet = np.array(data)
reconstructed_model = load_model("fullModelSave")
predictions = reconstructed_model.predict(np.array(None)) # replace None with value array
print(predictions)
```
|
github_jupyter
|
import matplotlib.pyplot as plt
import seaborn as sns
import keras
from keras.models import Sequential, load_model
from keras.layers import Dense, Conv2D , MaxPool2D , Flatten , Dropout
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from sklearn.metrics import classification_report,confusion_matrix
import tensorflow as tf
import cv2
import os
import numpy as np
import pickle
import librosa          # needed by the waveletGen/spectrogramGen helpers defined below
import librosa.display
labels = ['blues', 'classical', 'country', 'disco', 'hiphop', 'jazz', 'metal', 'pop', 'reggae', 'rock']
img_size = 256
def get_data(data_dir):
data = []
for label in labels:
path = os.path.join(data_dir, label)
class_num = labels.index(label)
#class_num = label # edit1
for img in os.listdir(path):
try:
img_arr = cv2.imread(os.path.join(path, img))[...,::-1] #convert BGR to RGB format
resized_arr = cv2.resize(img_arr, (img_size, img_size)) # Reshaping images to preferred size
data.append([resized_arr, class_num]) # edit1
except Exception as e:
print(e)
return np.array(data)
train = get_data('spectrogram/train')
val = get_data('spectrogram/test')
x_train = []
y_train = []
x_val = []
y_val = []
for feature, label in train:
x_train.append(feature)
y_train.append(label)
for feature, label in val:
x_val.append(feature)
y_val.append(label)
# Normalize the data
x_train = np.array(x_train) / 255
x_val = np.array(x_val) / 255
x_train.reshape(-1, img_size, img_size, 1)
y_train = np.array(y_train)
x_val.reshape(-1, img_size, img_size, 1)
y_val = np.array(y_val)
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
#rotation_range = 30, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.2, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
#horizontal_flip = True, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(x_train)
model = Sequential()
model.add(Conv2D(32,3,padding="same", activation="relu", input_shape=(256,256,3)))
model.add(MaxPool2D())
model.add(Conv2D(32, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(64, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128,activation="relu"))
model.add(Dense(10, activation="softmax"))
print(model.summary())
opt = Adam(learning_rate=0.0001)
model.compile(optimizer=opt, loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])  # the final Dense layer applies softmax, so the loss receives probabilities rather than logits
# The EXTREMELY time consuming step
history = model.fit(x_train,y_train,epochs = 500, validation_data = (x_val, y_val))
model.save("fullModelSave")
# definition of wavelet & spectrogram generating functions respectively (for one audio tape at a time)
def waveletGen(cls): # all testing in deployTestDirectory, test audio wavelet saved as 1.png, cls is the filename with .mp3
x, sr = librosa.load('deployTest/testAudioClip' + '/' + cls)
# plt.figure(figsize=(14, 5))
librosa.display.waveplot(x)
plt.savefig('deployTest/testWavelet/' + '1' + '.png')
plt.close()
'''
img_names = os.listdir('genres/' + cls)
os.makedirs('wavelets/train/' + cls)
os.makedirs('wavelets/test/' + cls)
print(cls)
train_names = img_names[:60]
test_names = img_names[60:]
cnt = 0
for nm in train_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
# plt.figure(figsize=(14, 5))
librosa.display.waveplot(x)
plt.savefig('wavelets/train/' + cls + '/' + str(cnt) + '.png')
plt.close()
cnt = 0
for nm in test_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
# plt.figure(figsize=(14, 5))
librosa.display.waveplot(x)
plt.savefig('wavelets/test/' + cls + '/' + str(cnt) + '.png')
plt.close()
'''
def spectrogramGen(cls): # all testing in deployTestDirectory, test audio spectrogram saved as 1.png, cls is the filename with .mp3
x, sr = librosa.load('deployTest/testAudioClip' + '/' + cls)
X = librosa.stft(x)
Xdb = librosa.amplitude_to_db(abs(X))
librosa.display.specshow(Xdb)
plt.savefig('deployTest/testSpectrogram/' + '1' + '.png')
plt.close()
'''
img_names = os.listdir('genres/' + cls)
os.makedirs('spectrogram/train/' + cls)
os.makedirs('spectrogram/test/' + cls)
print(cls)
train_names = img_names[:60]
test_names = img_names[60:]
cnt = 0
for nm in train_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
X = librosa.stft(x)
Xdb = librosa.amplitude_to_db(abs(X))
librosa.display.specshow(Xdb)
plt.savefig('spectrogram/train/' + cls + '/' + str(cnt) + '.png')
plt.close()
cnt = 0
for nm in test_names:
cnt += 1
x, sr = librosa.load('genres/' + cls + '/' + nm)
X = librosa.stft(x)
Xdb = librosa.amplitude_to_db(abs(X))
librosa.display.specshow(Xdb)
plt.savefig('spectrogram/test/' + cls + '/' + str(cnt) + '.png')
plt.close()
'''
# spectrogram & wavelet dataset creation [left off here]
data = []
path = 'deployTest/testSpectrogram'
class_num = labels.index(label) # replace labels with genre name
#class_num = label # edit1
for img in os.listdir(path):
try:
img_arr = cv2.imread(os.path.join(path, img))[...,::-1] #convert BGR to RGB format
resized_arr = cv2.resize(img_arr, (img_size, img_size)) # Reshaping images to preferred size
data.append([resized_arr, class_num]) # edit1
except Exception as e:
print(e)
valSpectrogram = np.array(data)
data = []
path = 'deployTest/testWavelet'
class_num = labels.index(label) # replace labels with genre name
#class_num = label # edit1
for img in os.listdir(path):
try:
img_arr = cv2.imread(os.path.join(path, img))[...,::-1] #convert BGR to RGB format
resized_arr = cv2.resize(img_arr, (img_size, img_size)) # Reshaping images to preferred size
data.append([resized_arr, class_num]) # edit1
except Exception as e:
print(e)
valWavelet = np.array(data)
reconstructed_model = load_model("fullModelSave")
predictions = reconstructed_model.predict(np.array(None)) # replace None with value array
print(predictions)
| 0.528777 | 0.432782 |
# Image Data in DICOM
In the previous notebook, we explored the DICOM dictionary and the metadata that defines the image content and context. In this notebook we will explore the nature of the pixel/voxel data.
The basic piece of an image is referred to as a **pixel** for "picture element." In medical imaging, we are often working with 3D images of a patient so instead of a pixel we will say **voxel** for "volume element."
Each pixel or voxel consists of one or more numeric values. In a typical color photograph you would have three numeric values representing the amount of **R**ed, **G**reen, and **B**lue (RGB) at the pixel. A standard MRI or CT image, like those shown above, has a single value at each voxel, as do typical X-ray and ultrasound (US) images. However, you can have radiological images with more than one value at each pixel. This is the case with diffusion tensor imaging (DTI), like the image below.
<a title="Mim.cis / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)" href="https://commons.wikimedia.org/wiki/File:Dti-MRI-brain-section.png"><img width="256" alt="Dti-MRI-brain-section" src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/2c/Dti-MRI-brain-section.png/256px-Dti-MRI-brain-section.png"></a>
## What do the numeric values mean?
The meaning of the numeric values in a pixel or voxel depends on the imaging modality used. In general the voxel values don't have an intrinsic (absolute) meaning. Rather, they are relative to each other: a bright voxel in a particular image has a higher value than a darker voxel in the same image. But in general we can't compare the numeric values between two images. CT is an exception to this that we will address later.
### Physical processes generating images
X-ray images depict how much the X-rays are attenuated by your body. Dense materials like your bones attenuate X-rays more than, say, the air in your lungs. So in an X-ray image, bones appear bright and lungs appear dark. (This is the inverse of the image detection process.)
Ultrasound depicts echoes of sound waves within your body. The brighter the voxel, the stronger the echo. Strong echoes occur at boundaries between different tissue types, for example between water and muscle.
The voxel values in magnetic resonance imaging (MRI) are determined by complex physical processes, but you can think of them as depicting various aspects of the local chemical environment of the water (usually) in your body.
Nuclear medicine like positron emission tomography (PET) or single-photon emission computed tomography (SPECT) measure the amount of radioactivity from tracers that are injected into your body.
Computed tomography uses X-rays and so is also measuring X-ray attenuation, but it emits and detects the X-rays at multiple angles around your body and then uses mathematics to reconstruct the plane/slice (Greek *tomo*) that all the X-rays passed through.
In general you can assume that pixel/voxel numeric values in a medical image have meaning only relative to other pixel/voxel values within the same image. CT is an exception and we will discuss this next.
### Meaning of CT voxel values
The numeric value at each voxel in a CT image is expressed in terms of [Hounsfield Units (HU)](), a calibrated scale on which water is 0 HU and air is approximately -1000 HU; this calibration is what makes CT values comparable between images and scanners.
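As a rough sketch of what that looks like in practice (not part of the widget code used later in this notebook), the stored pixel values in a CT file are usually converted to HU with the standard `RescaleSlope` and `RescaleIntercept` attributes. Treating `I1.dcm` as a CT image is an assumption here, and these attributes may be absent in non-CT files.
```
import os
import pydicom

# A minimal sketch: convert stored CT pixel values to Hounsfield Units (HU).
# Assumes ../DATA/I1.dcm is a CT image that carries the rescale attributes.
ds = pydicom.dcmread(os.path.join("..", "DATA", "I1.dcm"))
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
# Water should sit near 0 HU, air near -1000 HU, dense bone well above +400 HU.
print(hu.min(), hu.max())
```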
## [DICOM Pixel/Voxel Data](http://dicom.nema.org/medical/dicom/current/output/chtml/part03/sect_C.7.6.3.html#sect_C.7.6.3.1.4)
The Orthanc (a popular open-source package for working with DICOM images) developers provide [this](https://book.orthanc-server.com/dicom-guide.html#pixel-data) description of how the image pixels/voxels are represented in DICOM. Here is an excerpt:
<blockquote>
<div class="section" id="pixel-data">
<span id="dicom-pixel-data"></span><h3><a class="toc-backref" href="#id6">Pixel data</a><a class="headerlink" href="#pixel-data" title="Permalink to this headline">¶</a></h3>
<p>The image itself is associated with the DICOM tag <code class="docutils literal"><span class="pre">PixelData</span> <span class="pre">(0x7fe0,</span>
<span class="pre">0x0010)</span></code>. The content of image can be compressed using many image
formats, such as JPEG, <a class="reference external" href="https://en.wikipedia.org/wiki/Lossless_JPEG">Lossless JPEG</a> or <a class="reference external" href="https://en.wikipedia.org/wiki/JPEG_2000">JPEG 2000</a>. Obviously,
non-destructive (lossless) compression should always be favored to
avoid any loss of medical information. Note that a DICOM file can also
act as a wrapper around a video encoded using <a class="reference external" href="https://en.wikipedia.org/wiki/MPEG-2">MPEG-2</a> or <a class="reference external" href="https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC">H.264</a>.</p>
<p>The image compression algorithm can be identified by inspecting the
<strong>transfer syntax</strong> that is associated with the DICOM file in its
header.</p>
<p>In practice, few imaging devices in hospitals (besides the <a class="reference external" href="https://en.wikipedia.org/wiki/Picture_archiving_and_communication_system">PACS</a>
itself) support image compression. As a consequence, to ensure best
portability, the pixel data of most DICOM files circulating in
hospitals is <strong>uncompressed</strong>. In other words, the image is encoded as
a raw buffer, with a given width, height, pixel type (integer or
float), <a class="reference external" href="https://en.wikipedia.org/wiki/Color_depth">color depth</a>
(most often 8, 10, 12 bpp - <em>bits per pixel</em>) and photometric
interpretation (most often grayscale or RGB). The transfer syntax that
is associated with such uncompressed images can either be <a class="reference external" href="https://fr.wikipedia.org/wiki/Endianness">little
endian</a> (the most common
case) or big endian.</p>
<p>A DICOM image can be <strong>multi-frame</strong>, meaning that it encodes an array
of different image frames. This is for instance used to encode
uncompressed video sequences, that are often referred to as <strong>cine</strong>
or <strong>2D+t</strong> images (e.g. for <a class="reference external" href="https://en.wikipedia.org/wiki/Medical_ultrasound">ultrasound imaging</a>).</p>
</blockquote>
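As a small illustration of the fields described above, the sketch below uses `pydicom` to inspect how one file encodes its pixel data. The attribute names are standard DICOM keywords, but the file path is an assumption and not every attribute is guaranteed to be present in every file.
```
import os
import pydicom

# A minimal sketch: inspect how the pixel data of one DICOM file is encoded.
ds = pydicom.dcmread(os.path.join("..", "DATA", "I1.dcm"))
print("Transfer syntax:", ds.file_meta.TransferSyntaxUID)    # compression / endianness
print("Rows x Columns :", ds.Rows, "x", ds.Columns)
print("Bits allocated :", ds.BitsAllocated)                  # color depth of the stored values
print("Photometric    :", ds.PhotometricInterpretation)      # e.g. MONOCHROME2 or RGB
print("Decoded array  :", ds.pixel_array.shape, ds.pixel_array.dtype)
```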
## Exploring DICOM Pixel Data
In the cells below you can do some basic exploration of medical images as stored in DICOM images.
### Window and Level Settings
Computer displays typically have 8 bits of display resolution (per color channel, for 24 bits per pixel). This means that there are only 256 distinct gray values that can be displayed. However, as you will see, medical images have value ranges (from minimum to maximum) that typically far exceed 256. This requires that the image values be mapped to 0-255. The process of doing this is window-level scaling.

**Mathworks.com**
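A minimal NumPy sketch of that mapping (the interactive widget used later in this notebook does this for you): clip the image to a window centered on the chosen level, then rescale linearly to the 0-255 display range.
```
import numpy as np

def window_level(img, window, level):
    """Map raw image values to the 0-255 display range with a window/level transform."""
    lo = level - window / 2.0
    hi = level + window / 2.0
    clipped = np.clip(img.astype(float), lo, hi)   # values outside the window saturate
    return np.uint8(np.round((clipped - lo) / (hi - lo) * 255))

# Example: a typical soft-tissue CT display setting (window ~400 HU, level ~40 HU)
# display = window_level(hu_image, window=400, level=40)
```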
The window and level values that are selected to display an image have huge effects on what information is displayed. If you are interested, here is a video about window-level scaling in radiology.
```
# uncomment if you need to install dminteract
#!python -m pip install -U git+https://github.com/chapmanbe/dminteract#egg=dminteract
from IPython.display import YouTubeVideo
YouTubeVideo("KZld-5W99cI")
```
### A Few More Points
* Image orientation in DICOM is complex and this naive visualization doesn't always correctly orient the images.
* See if you can figure out what pixel values are used in CT images for regions where an actual image value is not computed.
* In the histogram you can filter what pixel values are used in the computation by adjusting the upper and lower values on the "win" slider. To get more precise control, you can type in the numbers rather than moving the slider.
* On the image viewer you can change the window and level values with the "w" and "l" sliders. The range for these sliders is computed over all the images, so it is less than ideal for any given image.
```
from dminteract.modules.m4c import *
from ipywidgets import interact, fixed
import ipywidgets as widgets
import warnings
warnings.filterwarnings('ignore')
#alt.renderers.enable('notebook')
alt.renderers.enable('default')
from IPython.display import display, clear_output
import os
DATADIR = os.path.join("..", "DATA")
```
### Read in DICOM images and compute basic statistics
```
imgs = {i:pydicom.dcmread(os.path.join(DATADIR,i)) for i in ["I1.dcm","I2.dcm","I3.dcm","I4.dcm"]}
img_stats = {i:(lambda x:(np.min(x),np.max(x)))(imgs[i].pixel_array) for i in imgs}
allmin = min([v[0] for v in img_stats.values()])
allmax = max([v[1] for v in img_stats.values()])
print(allmin, allmax)
b = get_examine_dicom_widget(imgs, allmin, allmax)
clear_output()
display(b)
```
|
github_jupyter
|
# uncomment if you need to install dminteract
#!python -m pip install -U git+https://github.com/chapmanbe/dminteract#egg=dminteract
from IPython.display import YouTubeVideo
YouTubeVideo("KZld-5W99cI")
from dminteract.modules.m4c import *
from ipywidgets import interact, fixed
import ipywidgets as widgets
import warnings
warnings.filterwarnings('ignore')
#alt.renderers.enable('notebook')
alt.renderers.enable('default')
from IPython.display import display, clear_output
import os
DATADIR = os.path.join("..", "DATA")
imgs = {i:pydicom.dcmread(os.path.join(DATADIR,i)) for i in ["I1.dcm","I2.dcm","I3.dcm","I4.dcm"]}
img_stats = {i:(lambda x:(np.min(x),np.max(x)))(imgs[i].pixel_array) for i in imgs}
allmin = min([v[0] for v in img_stats.values()])
allmax = max([v[1] for v in img_stats.values()])
print(allmin, allmax)
b = get_examine_dicom_widget(imgs, allmin, allmax)
clear_output()
display(b)
| 0.176069 | 0.98724 |
```
# Dependencies
# !pip3 install mpld3
# !pip3 install casadi
# !pip3 install scipy
import numpy as np
from casadi import *
```
# Casadi Examples
https://web.casadi.org/docs/#document-ocp
```
x = MX.sym('x', 2, 2)
y = MX.sym('y')
z = 3 * x + y
print(z)
print(jacobian(sin(z), x))
x = MX.sym('x')
y = MX.sym('y')
f = Function('x', [x, y], [y + sin(x)])
f(1., 1.)
jf = jacobian(f(x, y), x)
print(jf)
opti = Opti()
x = opti.variable()
y = opti.variable()
z = (x - 1.) ** 2 + y ** 2
opti.minimize(z)
opti.subject_to(x**2 + y**2 == 1)
opti.subject_to(x + y >= 1)
opti.set_initial(x, 0)
opti.set_initial(y, 0)
opti.solver('ipopt')
sol = opti.solve()
print('x: {}, y: {}, z: {}'.format(sol.value(x), sol.value(y), sol.value(z)))
```
# Let's create a race car!!!
Inspired by CasADi usage examples
https://web.casadi.org/blog/ocp/
```
from casadi import *
from pylab import plot, step, figure, legend, show, spy
import matplotlib.pyplot as plt
import math
# choose a car
friction_acc = 6.
a_min = -5.
a_max = 3.
# TODO: add constraint on friction instead of constraint on normal acceleration
a_n_max = (friction_acc**2 - a_min**2) ** 0.5
print(a_n_max)
a_n_min = -a_n_max
c_max = 0.2
c_min = -0.2
vehicle_width = 2.0
vehicle_length = 4.0
vehicle_back_to_axis = 1.0
# create a race track
Length = 250.
road_width = 10.
road_period = 35.
road_amplitude = 24.
# TODO: remove magic margin and process borders accurately
margin = vehicle_length - vehicle_back_to_axis
def road_center(x):
return (cos(x / road_period) - 1) * road_amplitude
def road_curvature(x):
# curvature of A * cos(x / T)
A = road_amplitude
T = road_period
d = sqrt((A**2 * sin(x/T) ** 2)/T**2 + 1)
l = ((A * cos(x/T))/((T**2 + A**2*sin(x/T)**2) * sqrt((A**2 * sin(x/T)**2)/T**2 + 1)))**2
r = 1/4 * ((A**2 * sin((2 * x)/T))/(T**3 * ((A**2 * sin(x/T)**2)/T**2 + 1)**(3/2)))**2
return sqrt(l + r) / d
def road_yaw(x):
return - sin(x / road_period) * road_amplitude / road_period
def top_border(x, shift=0.):
return road_center(x) + road_width / 2. + shift
def bottom_border(x, shift=0.):
return road_center(x) - road_width / 2. + shift
# number of points in trajectory
N = 100
# Baseline solution
x_uniform = np.arange(N + 1) * Length / N
max_curvature = max(abs(np.array(road_curvature(x_uniform))))
v_max = math.sqrt(a_n_max / max_curvature)
# baseline solution - accelerate till max allowed speed limit and
# keep driving with it
t_acc = v_max / a_max
T_baseline = t_acc + (Length - t_acc ** 2 / 2) / v_max
print('''Baseline time is {0:.3f} sec.'''.format(T_baseline))
# create a race car planner!
opti = Opti()
# X = {x, y, yaw, v}
X = opti.variable(4, N + 1)
x_id, y_id, yaw_id, v_id = 0, 1, 2, 3
# U = {a, c}
a_id, c_id = 0, 1
U = opti.variable(2, N)
T = opti.variable()
opti.minimize(T)
# define system equation constraints
def system_equation(X, U):
speed = X[v_id]
yaw = X[yaw_id]
derivatives = vertcat(
speed * cos(yaw),
speed * sin(yaw),
speed * U[c_id],
U[a_id]
)
return derivatives
dt = T / N
for i in range(N):
# just Euler method
x_next = X[:, i] + system_equation(X[:, i], U[:, i]) * dt
opti.subject_to(X[:, i + 1] == x_next)
opti.subject_to(X[v_id, :] >= 0)
# start and finish constraints
opti.subject_to(X[:, 0] == [0., 0., 0., 0.])
opti.subject_to(X[x_id, -1] == Length)
# improve stability
opti.subject_to(X[y_id, -1] == road_center(Length))
opti.subject_to(X[yaw_id, -1] == road_yaw(Length))
# control constraints
opti.subject_to(U[a_id, :] <= a_max)
opti.subject_to(U[a_id, :] >= a_min)
opti.subject_to(U[c_id, :] <= c_max)
opti.subject_to(U[c_id, :] >= c_min)
# normal acc. constraint
opti.subject_to(U[c_id, :] * X[v_id, :-1] ** 2 <= a_n_max)
opti.subject_to(U[c_id, :] * X[v_id, :-1] ** 2 >= a_n_min)
# road border constraints
opti.subject_to(X[y_id, :] <= top_border(X[x_id, :]) - margin)
opti.subject_to(X[y_id, :] >= bottom_border(X[x_id, :]) + margin)
# time constraint
opti.subject_to(T >= 0.)
# prepare initial trivial solution - race slowly in the middle in the track
v_init = v_max
T_init = Length / v_init
x_init = v_init * np.arange(N + 1) * T_init / N
opti.set_initial(T, T_init)
opti.set_initial(X[v_id, :], v_init)
opti.set_initial(X[x_id, :], x_init)
opti.set_initial(X[y_id, :], road_center(x_init))
# Solve
opti.solver('ipopt')
solution = opti.solve()
def rotation_matrix(yaw):
return np.array([[cos(yaw), -sin(yaw)], [sin(yaw), cos(yaw)]])
# plot solution
import mpld3
%matplotlib inline
mpld3.enable_notebook()
plt.rcParams["figure.figsize"] = [10, 5]
x_s = solution.value(X[x_id, :])
y_s = solution.value(X[y_id, :])
v_s = solution.value(X[v_id, :])
yaw_s = solution.value(X[yaw_id, :])
c_s = solution.value(U[c_id, :])
a_s = solution.value(U[a_id, :])
T_s = solution.value(T)
t_u = np.arange(N + 1) * T_s / N
print('''Congratulations, you completed the race!!!
Your track time is {0:.3f} sec., baseline time is {1:.3f} sec.'''.format(T_s, T_baseline))
plot(x_s, y_s, 'r.', label='y')
plot(x_s, top_border(x_s, -margin), 'k-', label='top_border with margin')
plot(x_s, bottom_border(x_s, +margin), 'k-', label='bottom_border with margin')
plot(x_s, top_border(x_s), 'k-', label='top_border')
plot(x_s, bottom_border(x_s), 'k-', label='bottom_border')
plot(x_s, road_center(x_s), 'g--', label='road center')
plt.xlabel('x')
plt.ylabel('y')
for x, y, yaw in zip(x_s, y_s, yaw_s):
center = np.array([[x], [y]]) + rotation_matrix(yaw).dot(np.array([[-vehicle_back_to_axis], [-vehicle_width/2]]))
rect = plt.Rectangle(center, vehicle_length, vehicle_width, angle=math.degrees(yaw), facecolor='none', linewidth=1, edgecolor='k')
plt.gca().add_patch(rect)
legend(loc='lower right')
figure()
plot(t_u, y_s, 'r.', label='y')
plot(t_u, top_border(x_s), 'k-', label='top_border')
plot(t_u, bottom_border(x_s), 'k-', label='bottom_border')
plot(t_u, road_center(x_s), 'g--', label='road center')
plt.xlabel('time')
legend(loc='lower right')
figure()
# plot(x_s, label='x')
plot(t_u, v_s, label='v')
plot(t_u[:-1], a_s, label='a')
plot(t_u[:-1], c_s * v_s[:-1] ** 2, label='a_n')
plt.xlabel('time')
legend(loc='lower right')
figure()
plot(t_u, yaw_s, label='yaw')
plot(t_u[:-1], c_s, label='c')
plt.xlabel('time')
legend(loc='lower right')
show()
# jacobian and hessian are very sparse in this kind of tasks.
spy(solution.value(jacobian(opti.g, opti.x)), aspect='auto', markersize=1.)
plt.xlabel('variables')
plt.ylabel('constraints')
```
**Home assignment**
* maximum score possible is 10 points
Several of the assumptions made above can be improved to obtain a faster trajectory.
* Improve the limit on normal and tangential acceleration (2 points)
The vector sum of the normal and tangential accelerations should not exceed the friction limit:
`a ** 2 = a_n ** 2 + a_t ** 2`
This means the limits a_min, a_n_max, a_n_min should be replaced by a limit based on friction_acc.
a_max is limited by engine torque, which is smaller than the friction limit, so the a_max limit is still required.
**Report:** add the total acceleration to the plot
* Improve the road boundary check (4 points)
The workshop uses a coarse check of the road boundary.
Please make the road check more precise so that the planner can use the whole road surface.
Hint: check that the car corners lie inside the road surface.
**Report:** add the trajectory with the car polygon and the road boundary to the plot
* Obstacle avoidance (4 points)
Pick a circle with a radius of 1 meter as the obstacle. Place the obstacle on the vehicle trajectory.
Implement obstacle-avoidance restrictions.
**Report:** attach a plot with the initial trajectory going through the obstacle.
Attach a plot with the trajectory avoiding the obstacle.
If a circle approximation of the car is used, then attach a plot showing how the vehicle polygon is covered by the circles.
matplotlib.Circle is useful to visualize circles.
```
obstacle = plt.Circle(xy=(obstacle_x, obstacle_y), radius=obstacle_r)
plt.gca().add_patch(obstacle)
```
Hint: the vehicle polygon can be approximated by a set of circles. Then collision avoidance reduces to checking that the distance from the obstacle center to the center of each approximation circle is larger than the sum of the obstacle radius and the circle radius.
Approximation should cover the whole vehicle and the maximum approximation error should be less than 0.3 meters.
<img src="files/car_approximation.png">
**Home assignment (Russian version)**
* the maximum score for the home assignment is 10 points
Several assumptions were made in the workshop example; improving them lets you build a faster trajectory.
* Refine the constraint on normal and tangential acceleration. (2 points)
The normal and tangential accelerations must be chosen so that the resulting acceleration vector does not exceed the friction limit.
The resulting acceleration is computed as the vector sum of the normal and tangential accelerations.
`a ** 2 = a_n ** 2 + a_t ** 2`
That is, the limits a_min, a_n_max, a_n_min should be replaced with a limit based on friction_acc.
The a_max limit remains, since it is a property of the engine.
**Report:** add a plot of the magnitude of the resulting acceleration.
* Refine the road boundary check (4 points)
In the workshop we kept a 3-meter margin from the road boundary.
In the home assignment you will write a more precise check,
which will let the car use all of the available space.
Hint: one possible implementation is to check that the corners of the rectangle lie inside the road surface.
**Report:** include a plot showing the road boundaries and the vehicle footprint. The plot from the workshop is sufficient.
* Obstacle avoidance (4 points)
Take a circle with a radius of 1 m as the obstacle. Choose X, Y so that the obstacle lies on the vehicle's trajectory.
Implement a constraint that prevents collisions with the obstacle.
**Report:** attach a plot in which the car initially drives through the obstacle.
After the constraints are implemented, it avoids the obstacle.
If you use a circle approximation for the collision check (see below), attach a plot visualizing how the circles cover the vehicle footprint.
You can use matplotlib.Circle to visualize the obstacle
```
obstacle = plt.Circle(xy=(obstacle_x, obstacle_y), radius=obstacle_r)
plt.gca().add_patch(obstacle)
```
Hint: one possible implementation is to approximate the car with a set of circles.
Then the no-collision constraint reduces to requiring that the distance from the obstacle center to the center of each vehicle circle is larger than the sum of the radii.
Approximation accuracy requirements: the whole car must be covered by the circles, and the circles must not extend beyond the sides of the car by more than 0.3 m.
<img src="files/car_approximation.png">
* **Bonus track** (10 points)
Optionally, you may do a bonus research task instead of the previous three items and also receive 10 points.
The research task is to re-implement the workshop example yourself in any other nonlinear programming framework and compare the trajectory computation time between the workshop example and the home assignment.
|
github_jupyter
|
# Dependencies
# !pip3 install mpld3
# !pip3 install casadi
# !pip3 install scipy
import numpy as np
from casadi import *
x = MX.sym('x', 2, 2)
y = MX.sym('y')
z = 3 * x + y
print(z)
print(jacobian(sin(z), x))
x = MX.sym('x')
y = MX.sym('y')
f = Function('x', [x, y], [y + sin(x)])
f(1., 1.)
jf = jacobian(f(x, y), x)
print(jf)
opti = Opti()
x = opti.variable()
y = opti.variable()
z = (x - 1.) ** 2 + y ** 2
opti.minimize(z)
opti.subject_to(x**2 + y**2 == 1)
opti.subject_to(x + y >= 1)
opti.set_initial(x, 0)
opti.set_initial(y, 0)
opti.solver('ipopt')
sol = opti.solve()
print('x: {}, y: {}, z: {}'.format(sol.value(x), sol.value(y), sol.value(z)))
from casadi import *
from pylab import plot, step, figure, legend, show, spy
import matplotlib.pyplot as plt
import math
# choose a car
friction_acc = 6.
a_min = -5.
a_max = 3.
# TODO: add constraint on friction instead of constraint on normal acceleration
a_n_max = (friction_acc**2 - a_min**2) ** 0.5
print(a_n_max)
a_n_min = -a_n_max
c_max = 0.2
c_min = -0.2
vehicle_width = 2.0
vehicle_length = 4.0
vehicle_back_to_axis = 1.0
# create a race track
Length = 250.
road_width = 10.
road_period = 35.
road_amplitude = 24.
# TODO: remove magic margin and process borders accurately
margin = vehicle_length - vehicle_back_to_axis
def road_center(x):
return (cos(x / road_period) - 1) * road_amplitude
def road_curvature(x):
# curvature of A * cos(x / T)
A = road_amplitude
T = road_period
d = sqrt((A**2 * sin(x/T) ** 2)/T**2 + 1)
l = ((A * cos(x/T))/((T**2 + A**2*sin(x/T)**2) * sqrt((A**2 * sin(x/T)**2)/T**2 + 1)))**2
r = 1/4 * ((A**2 * sin((2 * x)/T))/(T**3 * ((A**2 * sin(x/T)**2)/T**2 + 1)**(3/2)))**2
return sqrt(l + r) / d
def road_yaw(x):
return - sin(x / road_period) * road_amplitude / road_period
def top_border(x, shift=0.):
return road_center(x) + road_width / 2. + shift
def bottom_border(x, shift=0.):
return road_center(x) - road_width / 2. + shift
# number of points in trajectory
N = 100
# Baseline solution
x_uniform = np.arange(N + 1) * Length / N
max_curvature = max(abs(np.array(road_curvature(x_uniform))))
v_max = math.sqrt(a_n_max / max_curvature)
# baseline solution - accelerate till max allowed speed limit and
# keep driving with it
t_acc = v_max / a_max
T_baseline = t_acc + (Length - t_acc ** 2 / 2) / v_max
print('''Baseline time is {0:.3f} sec.'''.format(T_baseline))
# create a race car planner!
opti = Opti()
# X = {x, y, yaw, v}
X = opti.variable(4, N + 1)
x_id, y_id, yaw_id, v_id = 0, 1, 2, 3
# U = {a, c}
a_id, c_id = 0, 1
U = opti.variable(2, N)
T = opti.variable()
opti.minimize(T)
# define system equation constraints
def system_equation(X, U):
speed = X[v_id]
yaw = X[yaw_id]
derivatives = vertcat(
speed * cos(yaw),
speed * sin(yaw),
speed * U[c_id],
U[a_id]
)
return derivatives
dt = T / N
for i in range(N):
# just Euler method
x_next = X[:, i] + system_equation(X[:, i], U[:, i]) * dt
opti.subject_to(X[:, i + 1] == x_next)
opti.subject_to(X[v_id, :] >= 0)
# start and finish constraints
opti.subject_to(X[:, 0] == [0., 0., 0., 0.])
opti.subject_to(X[x_id, -1] == Length)
# improve stability
opti.subject_to(X[y_id, -1] == road_center(Length))
opti.subject_to(X[yaw_id, -1] == road_yaw(Length))
# control constraints
opti.subject_to(U[a_id, :] <= a_max)
opti.subject_to(U[a_id, :] >= a_min)
opti.subject_to(U[c_id, :] <= c_max)
opti.subject_to(U[c_id, :] >= c_min)
# normal acc. constraint
opti.subject_to(U[c_id, :] * X[v_id, :-1] ** 2 <= a_n_max)
opti.subject_to(U[c_id, :] * X[v_id, :-1] ** 2 >= a_n_min)
# road border constraints
opti.subject_to(X[y_id, :] <= top_border(X[x_id, :]) - margin)
opti.subject_to(X[y_id, :] >= bottom_border(X[x_id, :]) + margin)
# time constraint
opti.subject_to(T >= 0.)
# prepare initial trivial solution - race slowly in the middle in the track
v_init = v_max
T_init = Length / v_init
x_init = v_init * np.arange(N + 1) * T_init / N
opti.set_initial(T, T_init)
opti.set_initial(X[v_id, :], v_init)
opti.set_initial(X[x_id, :], x_init)
opti.set_initial(X[y_id, :], road_center(x_init))
# Solve
opti.solver('ipopt')
solution = opti.solve()
def rotation_matrix(yaw):
return np.array([[cos(yaw), -sin(yaw)], [sin(yaw), cos(yaw)]])
# plot solution
import mpld3
%matplotlib inline
mpld3.enable_notebook()
plt.rcParams["figure.figsize"] = [10, 5]
x_s = solution.value(X[x_id, :])
y_s = solution.value(X[y_id, :])
v_s = solution.value(X[v_id, :])
yaw_s = solution.value(X[yaw_id, :])
c_s = solution.value(U[c_id, :])
a_s = solution.value(U[a_id, :])
T_s = solution.value(T)
t_u = np.arange(N + 1) * T_s / N
print('''Congratulations, you completed the race!!!
Your track time is {0:.3f} sec., baseline time is {1:.3f} sec.'''.format(T_s, T_baseline))
plot(x_s, y_s, 'r.', label='y')
plot(x_s, top_border(x_s, -margin), 'k-', label='top_border with margin')
plot(x_s, bottom_border(x_s, +margin), 'k-', label='bottom_border with margin')
plot(x_s, top_border(x_s), 'k-', label='top_border')
plot(x_s, bottom_border(x_s), 'k-', label='bottom_border')
plot(x_s, road_center(x_s), 'g--', label='road center')
plt.xlabel('x')
plt.ylabel('y')
for x, y, yaw in zip(x_s, y_s, yaw_s):
center = np.array([[x], [y]]) + rotation_matrix(yaw).dot(np.array([[-vehicle_back_to_axis], [-vehicle_width/2]]))
rect = plt.Rectangle(center, vehicle_length, vehicle_width, angle=math.degrees(yaw), facecolor='none', linewidth=1, edgecolor='k')
plt.gca().add_patch(rect)
legend(loc='lower right')
figure()
plot(t_u, y_s, 'r.', label='y')
plot(t_u, top_border(x_s), 'k-', label='top_border')
plot(t_u, bottom_border(x_s), 'k-', label='bottom_border')
plot(t_u, road_center(x_s), 'g--', label='road center')
plt.xlabel('time')
legend(loc='lower right')
figure()
# plot(x_s, label='x')
plot(t_u, v_s, label='v')
plot(t_u[:-1], a_s, label='a')
plot(t_u[:-1], c_s * v_s[:-1] ** 2, label='a_n')
plt.xlabel('time')
legend(loc='lower right')
figure()
plot(t_u, yaw_s, label='yaw')
plot(t_u[:-1], c_s, label='c')
plt.xlabel('time')
legend(loc='lower right')
show()
# jacobian and hessian are very sparse in this kind of tasks.
spy(solution.value(jacobian(opti.g, opti.x)), aspect='auto', markersize=1.)
plt.xlabel('variables')
plt.ylabel('constraints')
obstacle = plt.Circle(xy=(obstacle_x, obstacle_y), radius=obstacle_r)
plt.gca().add_patch(obstacle)
obstacle = plt.Circle(xy=(obstacle_x, obstacle_y), radius=obstacle_r)
plt.gca().add_patch(obstacle)
| 0.304455 | 0.782081 |
# Energy groups
To capture the variation of neutron energies in the reactor, we can **discretize the flux into energy groups.**
\begin{align}
\phi &= \sum_{g=1}^{g=G}\phi_g\\
\phi_g &= \int_{E_g}^{E_{g-1}}\phi(E)dE\rvert_{g = 1,G}
\end{align}
The various cross sections and coefficients also need to be individually evaluated for each energy group, such that we have a $\Sigma_{a,g}$ for each group $g\in[1,G] \rightarrow E_g\in[E_1,E_G]$. Also, it is important to consider the possible fates of potential fission neutrons.
\begin{align}
\chi_g &= \int_{E_g}^{E_{g-1}}\chi(E)dE\\
\sum_{g=1}^{g=G}\chi_g &= \int_0^\infty\chi(E) dE\\
\end{align}
And the source of fission neutrons is:
\begin{align}
S &= \sum_{g'=1}^{g'=G}(\nu\Sigma_f)_{g'}\phi_{g'}
\end{align}
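As a small numerical illustration (the group constants below are made up for illustration, not taken from any benchmark), evaluating this source is just a dot product of the group-wise $\nu\Sigma_f$ values with the group fluxes:
```
import numpy as np

# Hypothetical two-group data, for illustration only
nu_sigma_f = np.array([0.005, 0.10])   # (nu*Sigma_f)_g [1/cm] for g = 1, 2
phi = np.array([3.0e14, 1.0e14])       # group fluxes [n/cm^2-s]

# S = sum over g' of (nu*Sigma_f)_{g'} * phi_{g'}
S = nu_sigma_f @ phi
print(S)  # fission neutrons produced per cm^3 per second
```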
### Group-wise Scattering Cross Sections
This energy group notation is used to indicate scattering from one group into another as well.
Most of the scattering is from other groups into our energy group of interest, $g$, but some of the scattering is from this group into itself.
Scattering from group $g$ to group $g'$ is denoted as $\Sigma_{s,g\rightarrow g'}$.
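As a sketch with made-up numbers (within-group scattering is left at zero for simplicity), these group-to-group cross sections can be collected in a matrix indexed by the source group $g$ and the destination group $g'$:
```
import numpy as np

# Hypothetical two-group scattering data, for illustration only [1/cm]
# sigma_s[g-1, gp-1] holds Sigma_{s, g -> g'}
sigma_s = np.array([
    [0.000, 0.025],   # from group 1: scattering into group 2
    [0.002, 0.000],   # from group 2: a small amount scatters back into group 1
])

sigma_1_to_2 = sigma_s[0, 1]   # Sigma_{s, 1 -> 2}
sigma_2_to_1 = sigma_s[1, 0]   # Sigma_{s, 2 -> 1}
print(sigma_1_to_2, sigma_2_to_1)
```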
### Group ordering
Let's define just two groups, $E_1$ and $E_{2}$. One of these groups will represent the fast neutrons and the other will represent the thermal neutrons.
### Think-pair share:
Which group should be **$g=1$** and which should be **$g=2$**?

For simple problems, one can typically also assume that prompt neutrons are born fast.
Since they are born fast, energy group 1 is always the fastest group.
### Upscattering and Downscattering
Downscattering (with cross section $\Sigma_{s,g\rightarrow g'}$) is when energy decreases because of the scatter ( so $g < g'$ ).
Upscattering (with cross section $\Sigma_{s,g\rightarrow g'}$) is when energy increases because of the scatter ( so $g > g'$ ).
### Think Pair Share
Given what you know about reactors and elastic scattering, which is likely more common in a reactor, upscattering or downscattering?

# $\infty$ Medium, Two-Group Neutron Balance
Balance equation for group g:
\begin{align*}
\mbox{Loss} &= \mbox{Gain}\\
\mbox{Absorption} + \mbox{Outscattering} &= \mbox{Fission} + \mbox{Inscattering}\\
\Sigma_{ag} \phi_g + \Sigma_{g\rightarrow g'}\phi_g &= \frac{\chi_g}{k}\left(\nu \Sigma_{fg} \phi_g + \nu\Sigma_{fg'}\phi_{g'}\right) + \Sigma_{g'\rightarrow g}\phi_{g'}
\end{align*}
Group 1 (fast):
\begin{align*}
\Sigma_{a1} \phi_1 + \Sigma_{1\rightarrow 2}\phi_1 &= \frac{\chi_1}{k}\left(\nu \Sigma_{f1} \phi_1 + \nu\Sigma_{f2}\phi_2\right) + \Sigma_{2\rightarrow 1}\phi_2
\end{align*}
Group 2 (thermal):
\begin{align*}
\Sigma_{a2} \phi_2 + \Sigma_{2\rightarrow 1}\phi_2 &= \frac{\chi_2}{k}\left(\nu \Sigma_{f2} \phi_2 + \nu\Sigma_{f1}\phi_1\right) + \Sigma_{1\rightarrow 2}\phi_1
\end{align*}
We can rewrite these equations in matrix form:
\begin{align*}
\left[ {\begin{array}{cc}
\Sigma_{a1} & 0 \\
0 & \Sigma_{a2} \\
\end{array} } \right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
+
\left[ {\begin{array}{cc}
\Sigma_{1\rightarrow 2} & 0 \\
0 & \Sigma_{2\rightarrow 1} \\
\end{array} } \right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
&=
\frac{1}{k}
\left[ {\begin{array}{c}
\chi_1 \\
\chi_2 \\
\end{array} } \right]
\left[ {\begin{array}{cc}
\nu \Sigma_{f1} & \nu \Sigma_{f2}
\end{array}}\right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
+
\left[ {\begin{array}{cc}
0 & \Sigma_{2\rightarrow 1} \\
\Sigma_{1\rightarrow 2} & 0 \\
\end{array} } \right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
\end{align*}
Moving the inscattering to the left side of the equation, the equation becomes:
\begin{align*}
\left[ {\begin{array}{cc}
\Sigma_{a1} & 0 \\
0 & \Sigma_{a2} \\
\end{array} } \right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
+
\left[ {\begin{array}{cc}
\Sigma_{1\rightarrow 2} & 0 \\
0 & \Sigma_{2\rightarrow 1} \\
\end{array} } \right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
-
\left[ {\begin{array}{cc}
0 & \Sigma_{2\rightarrow 1} \\
\Sigma_{1\rightarrow 2} & 0 \\
\end{array} } \right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
&=
\frac{1}{k}
\left[ {\begin{array}{cc}
\chi_1\nu \Sigma_{f1} & \chi_1\nu \Sigma_{f2}\\
\chi_2\nu \Sigma_{f1} & \chi_2\nu \Sigma_{f2}\\
\end{array}}\right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
\end{align*}
Here, we can define the macroscopic absorption, inscattering, outscattering,
and fission cross-section matrices:
\begin{align*}
\mbox{Absorption Matrix } (\mathbf{A}): &=
\left[ {\begin{array}{cc}
\Sigma_{a1} & 0\\
0 & \Sigma_{a2}\\
\end{array}}\right]\\
\mbox{Inscattering Matrix } (\mathbf{S_{in}}): &=
\left[ {\begin{array}{cc}
0 & \Sigma_{2\rightarrow 1}\\
\Sigma_{1\rightarrow 2} & 0\\
\end{array}}\right]\\
\mbox{Outscattering Matrix } (\mathbf{S_{out}}): &=
\left[ {\begin{array}{cc}
\Sigma_{1\rightarrow 2} & 0\\
0 & \Sigma_{2\rightarrow 1}\\
\end{array}}\right]\\
\mbox{Fission Matrix } (\mathbf{F}): &=
\left[ {\begin{array}{cc}
\chi_1\nu \Sigma_{f1} & \chi_1\nu \Sigma_{f2}\\
\chi_2\nu \Sigma_{f1} & \chi_2\nu \Sigma_{f2}\\
\end{array}}\right]\\
\end{align*}
And now our equation is reduced to:
\begin{align*}
\left[A + S_{out} - S_{in}\right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
&=
\frac{1}{k}
\left[F\right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
\end{align*}
So, the final version is:
\begin{align*}
k \left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
&=
\left[A + S_{out} - S_{in}\right]^{-1}
\left[F\right]
\left[ {\begin{array}{c}
\phi_1 \\
\phi_2 \\
\end{array} } \right]
\end{align*}
This gives us the final definition to note, the migration matrix:
\begin{align*}
\mbox{Migration Matrix:} &= \left[A + S_{out} - S_{in}\right]\\
\end{align*}
For the two group problem, let the eigenvector be
$\left(\begin{array}{c}\phi_1\\\phi_2\end{array}\right)$,
let the eigenvalue be $k$, and
calculate the eigenvalue and eigenvector of
$\left[A + S_{out} - S_{in}\right]^{-1}[F]$.
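One way this could be done with numpy is sketched below; all the cross-section values are invented for illustration only and do not correspond to any real material.
```
import numpy as np

# Made-up two-group constants (illustrative only, not real nuclear data)
A = np.diag([0.010, 0.080])              # absorption matrix
S_out = np.diag([0.020, 0.001])          # out-scattering matrix
S_in = np.array([[0.000, 0.001],
                 [0.020, 0.000]])        # in-scattering matrix
chi = np.array([1.0, 0.0])               # fission neutrons born fast
nu_Sf = np.array([0.005, 0.100])         # nu * Sigma_f per group
F = np.outer(chi, nu_Sf)                 # fission matrix

M = A + S_out - S_in                     # migration matrix
k_all, vec_all = np.linalg.eig(np.linalg.inv(M) @ F)

idx = np.argmax(np.abs(k_all))           # dominant eigenvalue plays the role of k
k = k_all[idx].real
phi = np.abs(vec_all[:, idx].real)
phi /= phi.sum()                         # normalize the flux eigenvector

print('k =', k, ', phi =', phi)
```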
# Power Iteration
<a title="By Konrad Jacobs, Erlangen (https://opc.mfo.de/detail?photo_id=2896) [CC BY-SA 2.0 de
(https://creativecommons.org/licenses/by-sa/2.0/de/deed.en
)], via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Richard_von_Mises.jpeg"><img width="256" alt="Richard von Mises" src="https://upload.wikimedia.org/wikipedia/commons/5/56/Richard_von_Mises.jpeg"></a>
[1] Richard von Mises and H. Pollaczek-Geiringer, Praktische Verfahren der Gleichungsauflösung, ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik 9, 152-164 (1929)
> Given a diagonalizable matrix $A$, the algorithm will produce a number $\lambda$, which is the greatest (in absolute value) eigenvalue of $A$, and a nonzero vector $v$, the corresponding eigenvector of $\lambda$, such that $Av=\lambda v$. The algorithm is also known as the Von Mises iteration.[1]
Facts about Power Iteration:
- It is simple to implement
- It can sometimes be quite slow to converge (reach the answer)
- It doesn't compute a matrix decomposition, so it can be used on very large sparse matrices
We'd like to solve for $\phi$. If we assume:
- B has an eigenvalue strictly greater in magnitude than its others
- the starting vector ($\phi_0$) has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue,

then the sequence $\phi_k$ converges to an eigenvector associated with the dominant eigenvalue.
Iterate with the following definitions:
\begin{align*}
\phi_{i+1} &= \frac{B\phi_i}{\lVert B\phi_i \rVert_2}\\
\end{align*}
Note the definition of the Euclidean norm (a.k.a. the $L^2$ norm) for a vector
$\vec{x} = (x_1, x_2, \cdots, x_n)$ is :
\begin{align*}
\lVert \vec{x}\rVert_2 \equiv \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.
\end{align*}
```
import numpy as np
def power_iteration(B, num_simulations):
"""
Returns the normalized
"""
# By chosing a random solution vector we decrease the chance
# that it is orthogonal to the eigenvector
phi_k = np.random.rand(B.shape[1])
for _ in range(num_simulations):
# calculate the matrix-by-vector product Bphi
phi_k1 = np.dot(B, phi_k)
# calculate the norm
phi_k1_norm = np.linalg.norm(phi_k1)
# re normalize the vector
phi_k = phi_k1 / phi_k1_norm
return phi_k
B = np.array([[0.5, 0.5], [0.2, 0.8]])
power_iteration(B, 100)
```
We can do the same for $k$ using the Rayleigh quotient and our solution for $\phi$; that is left for the reader. To find $k$, iterate with the following definition:
\begin{align*}
k_{i+1} &= \frac{\left(B\phi_{i+1}\right)^T \phi_{i+1}}{\left(\phi_{i+1}^T\phi_{i+1}\right)}\\
\end{align*}
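As a sketch (not part of the original notebook), the loop above can be extended to also return $k$ via the Rayleigh quotient:
```
import numpy as np

def power_iteration_with_k(B, num_simulations=100):
    # Random start reduces the chance of being orthogonal to the dominant eigenvector
    phi = np.random.rand(B.shape[1])
    k = 0.0
    for _ in range(num_simulations):
        B_phi = np.dot(B, phi)
        phi = B_phi / np.linalg.norm(B_phi)
        # Rayleigh quotient: k = (B phi)^T phi / (phi^T phi)
        k = np.dot(np.dot(B, phi), phi) / np.dot(phi, phi)
    return k, phi

B = np.array([[0.5, 0.5], [0.2, 0.8]])
print(power_iteration_with_k(B))
```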
<h1 style='color: green; font-size: 36px; font-weight: bold;'>Data Science - Linear Regression</h1>
# <font color='red' style='font-size: 30px;'>Getting to Know the Dataset</font>
<hr style='border: 2px solid red;'>
## Importing libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings as wr
wr.filterwarnings('ignore')
sns.set()
```
## The Dataset and the Project
<hr>
### Source: https://www.kaggle.com/greenwing1985/housepricing
### Description:
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>Our goal in this exercise is to build a machine learning model, using the Linear Regression technique, that predicts house prices from a set of known property characteristics.</p>
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>We will use a dataset available on Kaggle that was computer-generated for machine learning practice by beginners. This dataset was modified to suit our goal, which is to consolidate the knowledge acquired in the Linear Regression training.</p>
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>Follow the steps proposed in the comments above each cell, and happy studying.</p>
### Data:
<ul style='font-size: 18px; line-height: 2; text-align: justify;'>
<li><b>precos</b> - Property prices</li>
<li><b>area</b> - Property area</li>
<li><b>garagem</b> - Number of garage spaces</li>
<li><b>banheiros</b> - Number of bathrooms</li>
<li><b>lareira</b> - Number of fireplaces</li>
<li><b>marmore</b> - Whether the property has a white marble finish (1) or not (0)</li>
<li><b>andares</b> - Whether the property has more than one floor (1) or not (0)</li>
</ul>
## Reading the data
The dataset is in the "Dados" folder under the name "HousePrices_HalfMil.csv" and uses ";" as the separator.
```
!curl https://raw.githubusercontent.com/silvioedu/MachineLearning-Practice/master/HousePrices_HalfMil.csv -o house.csv
dados = pd.read_csv('house.csv', sep=';')
```
## Viewing the data
```
dados.head()
```
## Checking the dataset size
```
dados.shape
```
# <font color='red' style='font-size: 30px;'>Preliminary Analysis</font>
<hr style='border: 2px solid red;'>
## Descriptive statistics
```
dados.describe()
```
## Correlation matrix
<p style='font-size: 18px; line-height: 2; margin: 10px 50px; text-align: justify;'>The <b>correlation coefficient</b> is a measure of linear association between two variables and lies between <b>-1</b> and <b>+1</b>, where <b>-1</b> indicates perfect negative association and <b>+1</b> indicates perfect positive association.</p>
### Look at the correlations between the variables:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Which are most correlated with the dependent variable (Price)?</li>
<li>What is the relationship between them (positive or negative)?</li>
<li>Is there strong correlation between the explanatory variables?</li>
</ul>
```
dados.corr()
```
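As an optional complement (not part of the original exercise), the same correlation matrix can be visualized as a seaborn heatmap, which makes the strongest associations easier to spot:
```
# Optional: heatmap view of the correlation matrix (uses the dados DataFrame loaded above)
ax = sns.heatmap(dados.corr(), annot=True, fmt='.2f', cmap='coolwarm', center=0)
ax.set_title('Correlation matrix', fontsize=16)
plt.ioff()
```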
# <font color='red' style='font-size: 30px;'>Behavior of the Dependent Variable (Y)</font>
<hr style='border: 2px solid red;'>
# Graphical analysis
## Box plot of the *dependent* variable (y)
### Assess the behavior of the distribution of the dependent variable:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Do there appear to be outliers?</li>
<li>Does the box plot show any trend?</li>
</ul>
https://seaborn.pydata.org/generated/seaborn.boxplot.html?highlight=boxplot#seaborn.boxplot
```
ax = sns.boxplot(dados.precos)
ax.set_title('Preço dos imóveis', fontsize=16)
ax.set_xlabel('Preços')
plt.ioff()
```
## Investigating the *dependent* variable (y) together with other characteristics
Make a box plot of the dependent variable together with each explanatory variable (categorical ones only).
### Assess the behavior of the distribution of the dependent variable against each categorical explanatory variable:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Do the statistics change significantly across categories?</li>
<li>Does the box plot show any well-defined trend?</li>
</ul>
### Box plot (Price × Garage spaces)
```
ax = sns.boxplot(data = dados, x='garagem', y='precos')
ax.set_title('Preço dos imóveis', fontsize=16)
ax.set_ylabel('Preços')
ax.set_xlabel('Número de vagas de garagem')
plt.ioff()
```
### Box plot (Price × Bathrooms)
```
ax = sns.boxplot(data = dados, x='banheiros', y='precos')
ax.set_title('Preço dos imóveis', fontsize=16)
ax.set_ylabel('Preços')
ax.set_xlabel('Número banheiros')
plt.ioff()
```
### Box plot (Price × Fireplaces)
```
ax = sns.boxplot(data = dados, x='lareira', y='precos')
ax.set_title('Preço dos imóveis', fontsize=16)
ax.set_ylabel('Preços')
ax.set_xlabel('Número de lareiras')
plt.ioff()
```
### Box plot (Price × Marble finish)
```
ax = sns.boxplot(data = dados, x='marmore', y='precos')
ax.set_title('Preço dos imóveis', fontsize=16)
ax.set_ylabel('Preços')
ax.set_xlabel('Acabamento em mármore')
plt.ioff()
```
### Box plot (Price × Floors)
```
ax = sns.boxplot(data = dados, x='andares', y='precos')
ax.set_title('Preço dos imóveis', fontsize=16)
ax.set_ylabel('Preços')
ax.set_xlabel('Mais de 1 andar')
plt.ioff()
```
## Frequency distribution of the *dependent* variable (y)
Build a histogram of the dependent variable (Price).
### Assess:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Does the frequency distribution of the dependent variable look skewed?</li>
<li>Is it reasonable to assume the dependent variable follows a normal distribution?</li>
</ul>
https://seaborn.pydata.org/generated/seaborn.distplot.html?highlight=distplot#seaborn.distplot
```
ax = sns.distplot(dados.precos)
ax.set_title('Distribuição de frequência', fontsize=16)
ax.set_xlabel('$')
ax.set_ylabel('Frequência')
plt.ioff()
```
## Scatter plots between the dataset variables
## Plotting the pairplot with a single variable fixed on the y-axis
https://seaborn.pydata.org/generated/seaborn.pairplot.html?highlight=pairplot#seaborn.pairplot
Plot scatter plots of the dependent variable against each explanatory variable. Use seaborn's pairplot for this.
Plot the same chart using the parameter kind='reg'.
### Assess:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Can you identify any linear relationship between the variables?</li>
<li>Is the relationship positive or negative?</li>
<li>Compare with the results obtained in the correlation matrix.</li>
</ul>
```
ax = sns.pairplot(dados, y_vars='precos', kind='reg')
ax.fig.suptitle('Preço dos imóveis', fontsize=16)
plt.ioff()
```
# <font color='red' style='font-size: 30px;'>Estimating a Linear Regression Model</font>
<hr style='border: 2px solid red;'>
## Importing *train_test_split* from the *scikit-learn* library
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
```
from sklearn.model_selection import train_test_split as tts
```
## Creating a pandas Series to store the dependent variable (y)
```
y = dados['precos']
```
## Creating a pandas DataFrame to store the explanatory variables (X)
```
x = dados.drop(['precos'], axis=1)
```
## Creating the training and test datasets
```
x_train, x_test, y_train, y_test = tts(x, y, test_size=.3, random_state=2811)
```
## Importing *LinearRegression* and *metrics* from the *scikit-learn* library
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
https://scikit-learn.org/stable/modules/classes.html#regression-metrics
```
from sklearn.linear_model import LinearRegression
from sklearn import metrics
```
## Instantiating the *LinearRegression()* class
```
modelo = LinearRegression()
```
## Using the *fit()* method to estimate the linear model on the TRAINING data (y_train and X_train)
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.fit
```
modelo.fit(x_train, y_train)
```
## Obtaining the coefficient of determination (R²) of the model estimated on the TRAINING data
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score
### Assess:
<ul style='font-size: 16px; line-height: 2; text-align: justify;'>
<li>Does the model fit the data well?</li>
<li>Do you remember what R² represents?</li>
<li>What could we do to improve this statistic?</li>
</ul>
```
print(f'R² = {modelo.score(x_train, y_train).round(2)}')
```
## Generating predictions for the TEST data (X_test) using the *predict()* method
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.predict
```
y_previsto = modelo.predict(x_test)
```
## Obtaining the coefficient of determination (R²) for our model's predictions
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score
```
print(f'R² = {metrics.r2_score(y_test, y_previsto).round(2)}')
```
# <font color='red' style='font-size: 30px;'>Obtaining Point Predictions</font>
<hr style='border: 2px solid red;'>
## Building a simple simulator
Build a simulator that produces price estimates from a set of property characteristics.
```
area=38
garagem=2
banheiros=4
lareira=4
marmore=0
andares=1
entrada=[[area, garagem, banheiros, lareira, marmore, andares]]
print('$ {0:.2f}'.format(modelo.predict(entrada)[0]))
```
# <font color='red' style='font-size: 30px;'>Regression Metrics</font>
<hr style='border: 2px solid red;'>
## Regression metrics
<hr>
source: https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics
Some statistics obtained from the regression model are very useful as criteria for comparing estimated models and selecting the best one. The main regression metrics that scikit-learn provides for linear models are the following:
### Mean Squared Error (EQM)
The mean of the squared errors. Better fits have a lower $EQM$.
$$EQM(y, \hat{y}) = \frac 1n\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2$$
### Root Mean Squared Error (REQM)
The square root of the mean of the squared errors. Better fits have a lower $\sqrt{EQM}$.
$$\sqrt{EQM(y, \hat{y})} = \sqrt{\frac 1n\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2}$$
### Coefficient of Determination - R²
The coefficient of determination (R²) is a summary measure of how well the regression line fits the data. It is a value between 0 and 1.
$$R^2(y, \hat{y}) = 1 - \frac {\sum_{i=0}^{n-1}(y_i-\hat{y}_i)^2}{\sum_{i=0}^{n-1}(y_i-\bar{y}_i)^2}$$
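To connect these formulas to code, here is a small illustrative check on made-up numbers using numpy (the cells below use scikit-learn's implementations on the real predictions):
```
# Illustrative only: computing the metrics by hand on dummy values
y_true = np.array([10., 20., 30.])
y_hat = np.array([12., 18., 33.])

eqm = np.mean((y_true - y_hat) ** 2)      # mean squared error
reqm = np.sqrt(eqm)                       # root mean squared error
r2 = 1 - np.sum((y_true - y_hat) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(eqm, reqm, r2)
```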
## Obtaining the metrics for the model
```
R2 = metrics.r2_score(y_true=y_test, y_pred=y_previsto).round(2)
EQM = metrics.mean_squared_error(y_true=y_test, y_pred=y_previsto).round(2)
REQM = np.sqrt(metrics.mean_squared_error(y_true=y_test, y_pred=y_previsto)).round(2)
pd.DataFrame([R2, EQM, REQM], ['R²', 'EQM', 'REQM'], columns=['Métricas'])
```
# <font color='red' style='font-size: 30px;'>Saving and Loading the Estimated Model</font>
<hr style='border: 2px solid red;'>
## Importing the pickle library
```
import pickle
```
## Saving the estimated model
```
output = open('modelo_preco', 'wb')
pickle.dump(modelo, output)
output.close()
```
### In a new Python notebook/project
<h4 style='color: blue; font-weight: normal'>In [1]:</h4>
```
import pickle
modelo = open('modelo_preco','rb')
lm_new = pickle.load(modelo)
modelo.close()
area = 38
garagem = 2
banheiros = 4
lareira = 4
marmore = 0
andares = 1
entrada = [[area, garagem, banheiros, lareira, marmore, andares]]
print('$ {0:.2f}'.format(lm_new.predict(entrada)[0]))
```
<h4 style='color: red; font-weight: normal'>Out [1]:</h4>
```
$ 46389.80
```
```
import os
import numpy as np
import astropy.io.fits as fits
import pylab as pl
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import YouTubeVideo
from scipy import interpolate
from scipy import optimize
from tools.wave2rgb import wavelength_to_rgb
from tools.resample_flux import trapz_rebin
```
# A white dwarf as white as snow
When you look up at the sky, who knows what you will find? We are all familiar with our own [Sun](https://solarsystem.nasa.gov/solar-system/sun/overview/),
<img src="images/sun.jpg" alt="Drawing" style="width: 800px;"/>
seemingly ever-present, which we see day after day. Would it surprise you to learn that in 5.5 billion years the Sun will change beyond recognition as the hydrogen fusion that powers it runs out?
<img src="images/RedGiant.jpg" alt="Drawing" style="width: 800px;"/>
During this apparent mid-life crisis, the Sun will begin to fuse helium to create carbon, fundamental to life on Earth, and oxygen, needed to sustain it. Swelling to ten or a hundred times its present size, it will soon engulf Mercury and Venus, and perhaps [even the Earth itself](https://phys.org/news/2016-05-earth-survive-sun-red-giant.html#:~:text=Red%20Giant%20Phase%3A,collapses%20under%20its%20own%20weight.), and will eventually blow off its outer layers as a spectacular [planetary nebula](https://es.wikipedia.org/wiki/Nebulosa_planetaria):
<img src="images/PlanetaryNebulae.jpg" alt="Drawing" style="width: 800px;"/>
The ashen carbon-oxygen core at the centre will survive as a fossilized relic, radiating away its energy slowly enough to keep going for another 13.8 billion years, the current age of our Universe, and for many more millennia beyond that.
By studying the white dwarfs in the Milky Way's neighbourhood we can learn about this eventual fate of the Sun and its impact on the Earth. Let's look at one of these objects that DESI has recently observed!
```
# Load the DESI spectrum
zbest = fits.open('student_andes/zbest-mws-66003-20200315-wd.fits')[1]
coadd = fits.open('student_andes/coadd-mws-66003-20200315-wd.fits')
# Get its position on the sky:
ra, dec = float(zbest.data['TARGET_RA']), float(zbest.data['TARGET_DEC'])
```
Its position in the night sky lies just above the constellation [Ursa Major](https://es.wikipedia.org/wiki/Osa_Mayor), the Great Bear,
<img src="images/UrsaMajor.jpg" alt="Drawing" style="width: 800px;"/>
familiar in the night sky:
<img src="images/UrsaMajor2.png" alt="Drawing" style="width: 800px;"/>
If you watched for long enough, you would see an almost imperceptible change in its apparent position as our vantage point shifts while the Earth orbits the Sun. Remember, the dinosaurs were roaming planet Earth when it was on the other side of the Galaxy!
The Earth's motion around the Sun is enough, given a sufficiently precise instrument, to compute the distance to our white dwarf using simple trigonometry you have probably already seen:
<img src="images/PDistance.jpg" alt="Drawing" style="width: 800px;"/>
The [GAIA](https://www.esa.int/Space_in_Member_States/Spain/Gaia_crea_el_mapa_estelar_mas_completo_de_nuestra_Galaxia_y_mas_alla) space satellite was designed to do exactly this, and will eventually map a billion stars in the Milky Way, roughly one in every hundred of those there, in this way.
<img src="images/Gaia.jpg" alt="Drawing" style="width: 800px;"/>
With this parallax, GAIA tells us the distance to our white dwarf:
```
# Distancia calculada con paralaje de GAIA (Bailer-Jones et al. 2018).
# Datos de fotometría y de la [distancia calculda](https://ui.adsabs.harvard.edu/abs/2018AJ....156...58B/)
# pueden ser enconrados en los [archivos de GAIA](https://gea.esac.esa.int/archive/)
dist_para = 784.665266 # parcsecs, 1 parsec = 3.0857 x 10^16 m.
parsec = 3.085677581e16 # m
# AU: Unidad astronómica - distancia entre el Sol y la Tierra.
au = 1.495978707e11 # m
print(' El paralaje GAIA nos indica que la distancia a nuestra Enana Blanca es {:.0f} millones de veces la distancia de la Tierra al Sol'.format(dist_para * parsec / au / 1.e6))
```
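For reference, the distance in parsecs is just the reciprocal of the parallax angle in arcseconds; here is a minimal sketch (the parallax value below is hypothetical, not the GAIA measurement for this star):
```
# Illustrative only: converting a hypothetical parallax to a distance
parallax_mas = 1.27                      # assumed parallax in milliarcseconds
parallax_arcsec = parallax_mas / 1000.
dist_pc = 1. / parallax_arcsec           # distance in parsecs
print('{:.0f} parsecs'.format(dist_pc))
```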
The GAIA camera is designed to measure the brightness of the white dwarf in three different parts of the visible spectrum, corresponding to the colours shown below. You will recognise this as the same style of diagram we explored for the hydrogen lines in the Introduction.
```
#( Pivote) Longitud de onda de los filtros de GAIA DR2
GAIA = {'G_WAVE': 6230.6, 'BP_WAVE': 5051.5, 'RP_WAVE': 7726.2}
for wave in GAIA.values():
# color = [r, g, b]
color = wavelength_to_rgb(wave / 10.)
pl.axvline(x=wave / 10., c=color)
pl.title('Longitudes de onda (y colores) a los que GAIA mide el brillo de cada estrella', pad=10.5, fontsize=10)
pl.xlabel('Longitud de onda en el vacío [nanometros]')
pl.xlim(380., 780.)
for band in ['G', 'BP', 'RP']:
GAIA[band + '_MAG'] = zbest.data['GAIA_PHOT_{}_MEAN_MAG'.format(band)][0]
GAIA[band + '_FLUX'] = 10.**(-(GAIA[band + '_MAG'] + (25.7934 - 25.6884)) / 2.5) * 3631. / 3.34e4 / GAIA[band + '_WAVE']**2.
# Añade los errores en la magnitud que los catálogos de DESI no contienen.
GAIA['G_MAGERR'] = 0.0044
GAIA['BP_MAGERR'] = 0.0281
GAIA['RP_MAGERR'] = 0.0780
for key, value in GAIA.items():
print('{:10s} \t {:05.4f}'.format(key, value))
```
This combination of a distance measurement (from the parallax) and an apparent brightness (in several colours) is incredibly powerful: together they tell us the luminosity, or intrinsic brightness, of the dwarf rather than how we perceive it, from which we can work out what physics might be setting how bright the white dwarf is.
# DESI
By resolving the subtle variations in the amount of light with wavelength, DESI gives us a much better idea of the white dwarf's composition and history from its full spectrum, rather than from a few measurements in different colours:
```
# Obten la longitud de onda y el flujo.
wave = coadd[1].data['WAVELENGTH']
count = coadd[1].data['TARGET35191335094848528']
# Grafica el espectro de DESI
pl.figure(figsize=(15, 10))
pl.plot(wave, count)
pl.grid()
pl.xlabel('Wavelength $[\AA]$')
pl.ylim(ymin=0.)
pl.title('TARGET35191335094848528')
```
Astronomers have spent a long time studying stars and classifying them into different types - not least [Annie Jump Cannon](https://www.mujeresenlahistoria.com/2014/08/besando-las-estrellas-annie-jump-cannon.html) ([or in English](https://www.womenshistory.org/education-resources/biographies/annie-jump-cannon)),
<img src="images/AnnieCannon.jpg" alt="Drawing" style="width: 800px;"/>
which has left us with a new ability to predict the spectrum of a star for a given temperature, $g$ (the acceleration due to gravity at its surface), and mass. Given 'standard' stars, those with external distance constraints, we can also determine how intrinsically bright a star with a given spectrum is. Let's take these:
```
# Modelos de espectros de una enanas blancas
# [Levenhagen 2017](https://ui.adsabs.harvard.edu/abs/2017ApJS..231....1L)
spec_da_list = os.listdir('dat/WDspec/')
model_flux_spec_da = []
model_wave_spec_da = []
T_spec_da = []
logg_spec_da = []
# Haz un ciclo sobre todo los archivos en el directorio y únelos en una lista
# Loop over files in the directory and collect into a list.
for filename in spec_da_list:
if filename[-4:] != '.npz':
continue
model = np.load('dat/WDspec/' + filename)['arr_0']
model_flux_spec_da.append(model[:,1])
model_wave_spec_da.append(model[:,0])
T, logg = filename.split('.')[0].split('t0')[-1].split('g')
T_spec_da.append(float(T) * 1000.)
logg_spec_da.append(float(logg[:-1]) / 10.)
print(' {:d} Modelos de espectros colectados.'.format(len(spec_da_list)))
#Seleccionaremos uno de cada 10 modelos de enanas blancas para graficarlos
nth = 10
for model_wave, model_flux, model_temp in zip(model_wave_spec_da[::nth], model_flux_spec_da[::nth], T_spec_da[::nth]):
pl.plot(model_wave, model_flux / model_flux[-1], label=r'$T = {:.1e}$'.format(model_temp))
# Otros comandos para la gráfica
pl.xlim(3000., 10000.)
# pl.ylim(ymin=1., ymax=3.6)
pl.legend(frameon=False, ncol=2)
pl.xlabel('Longitud de Onda [Angstroms]')
pl.ylabel('Flujo Normalizado')
```
First of all, these white dwarfs are hot! At 240,000 Kelvin, you should not touch one. We can see that the hottest white dwarf is brightest at short wavelengths and will therefore appear blue. In exactly the same way that the bluest part of a flame is the hottest:
<img src="images/bunsen.jpg" alt="Drawing" style="width: 280px;"/>
So now we have everything we need to find the temperature of the white dwarf that DESI observed. As we did in the Introduction, we simply look for the model that most closely resembles the data.
```
# rango de longitud de onda que será ajustado
wave_min = 3750.
wave_max = 5200.
sq_diff = []
# Haciendo una máscara en el rango que será ajustado
fitted_range = (wave > wave_min) & (wave < wave_max)
fitted_wave = wave[fitted_range]
for model_wave, model_flux in zip(model_wave_spec_da, model_flux_spec_da):
# Remuestreo de la resolución del modelo para ajustar al espectro observado
model_flux_resampled = trapz_rebin(model_wave, model_flux, fitted_wave)
# Calcula la suma cuadrática de la diferencia de los modelos individuales, normalizados, y el espectro observado.
sq_diff.append(np.sum((model_flux_resampled / np.median(model_flux_resampled) - count[fitted_range] / np.median(count[fitted_range]))**2.))
# Mejor ajuste por mínimos cuadrados ponderados, de la gravedad superficial y la temperatura a partir del espectro de DESI
arg_min = np.argmin(sq_diff)
T_desi = T_spec_da[arg_min]
logg_desi = logg_spec_da[arg_min]
# Grafica solo el mejor ajuste
fitted_range = (model_wave_spec_da[arg_min] > wave_min) & (model_wave_spec_da[arg_min] < wave_max)
fitted_range_data = (wave > wave_min) & (wave < wave_max)
pl.figure(figsize=(15, 10))
pl.plot(wave[fitted_range_data], count[fitted_range_data] / np.median(count[fitted_range_data]), label='DESI spectrum')
pl.plot(model_wave_spec_da[arg_min][fitted_range], model_flux_spec_da[arg_min][fitted_range] / np.median(model_flux_spec_da[arg_min][fitted_range]), label='Best-fit model')
pl.grid()
pl.xlim(wave_min, wave_max)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Normalised Flux')
pl.legend(frameon=False)
pl.title('DESI White Dwarf: Temperature = ' + str(T_desi) + ' K; $\log_{10}$(g) = ' + str(logg_desi))
```
So our white dwarf is at about 26,000 Kelvin, while the surface gravity would be unbearable. If you recall, the gravitational acceleration follows from the mass and radius of a body as $g = \frac{G \cdot M}{r^2}$ and is roughly a measure of how dense an object is. Let's see what this looks like for some familiar bodies
```
logg = pd.read_csv('dat/logg.txt', sep='\s+', comment='#', names=['Cuerpo', 'Gravedad en superficie [g]'])
logg = logg.sort_values('Gravedad en superficie [g]')
logg
fig, ax = plt.subplots()
pl.plot(np.arange(0, len(logg), 1), logg['Gravedad en superficie [g]'], marker='.', c='k')
plt.xticks(np.arange(len(logg)))
ax.set_xticklabels(logg['Cuerpo'], rotation='vertical')
ax.set_ylabel('Gravedad en Superficie [g]')
```
So the acceleration on Jupiter is a few times larger than on Earth, while on the Sun it would be 30 times larger. The force you feel during takeoff on a flight is roughly 30% larger than the acceleration due to gravity on Earth. For our DESI white dwarf, the acceleration due to gravity at the surface is:
```
logg = 7.6
g = 10.**7.6 # cm / s^2 (log g is quoted in CGS units)
g /= 100. # m / s^2
g /= 9.81 # Relative to that on Earth, i.e. [g].
g
```
times larger than on Earth! In fact, were it not for the strange restrictions on what electrons can and cannot do (as dictated by quantum mechanics), the white dwarf would be so dense that it would collapse entirely. Imagine that!
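To get a feel for what that gravity implies, here is a rough sketch: assuming (purely for illustration) a typical white-dwarf mass of about 0.6 solar masses, $g = GM/r^2$ gives a radius of only a couple of Earth radii.
```
# Rough size estimate from g = G*M/r^2; the 0.6 solar-mass value is an assumption
G_const = 6.674e-11              # m^3 kg^-1 s^-2
M_assumed = 0.6 * 1.989e30       # kg
g_si = 10.**7.6 / 100.           # log g in CGS converted to m/s^2
r = (G_const * M_assumed / g_si) ** 0.5
print('Radius ~ {:.0f} km (about {:.1f} Earth radii)'.format(r / 1000., r / 6.371e6))
```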
Now it's your turn. Can you find a class of object even denser than a white dwarf? What is the acceleration due to gravity at its surface?
Harder! You may be one of the first people to see this white dwarf 'up close'! What else can you find out about it? Here is something to get you started...
```
model_colors = pd.read_csv('dat/WDphot/Table_DA.txt', sep='\s+', comment='#')
model_colors = model_colors[['Teff', 'log g', 'Edad', 'G', 'G_BP', 'G_RP']]
model_colors
GAIA['G_MAG'], GAIA['BP_MAG'], GAIA['RP_MAG']
GAIA['G_MAGERR'], GAIA['BP_MAGERR'], GAIA['RP_MAGERR']
```
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import feather
from evaluator import Evaluator
from sklearn.metrics.pairwise import cosine_similarity
from sklearn import preprocessing
from scipy.sparse import csr_matrix
from pandas.api.types import CategoricalDtype
from tqdm import tqdm_notebook as tqdm
from scipy import sparse
```
# Load data
```
training_ratings = feather.read_dataframe('./feather/training_ratings')
testing_ratings = feather.read_dataframe('./feather/testing_ratings')
book_profiles = feather.read_dataframe('./feather/book_profiles').set_index('book_id').to_sparse(fill_value=0)
novelty_scores = feather.read_dataframe('./feather/novelty_scores').set_index('book_id')
books = feather.read_dataframe('./feather/books').set_index('book_id')
book_sim = pd.DataFrame(
data = cosine_similarity(book_profiles, book_profiles),
index = book_profiles.index,
columns = book_profiles.index
)
book_sim.head()
evl = Evaluator(
k = 10,
training_ratings = training_ratings,
testing_ratings = testing_ratings,
book_sim = book_sim,
novelty_scores = novelty_scores
)
```
# Preprocess for the Content-based RS and Item-Item CF RS
```
users_mean_rating = training_ratings.groupby('user_id').mean()[['rating']]
training_ratings['adjusted_rating'] = training_ratings[['rating']] - users_mean_rating.loc[training_ratings.user_id].values
training_ratings.head()
user_c = CategoricalDtype(sorted(training_ratings.user_id.unique()), ordered=True)
book_c = CategoricalDtype(sorted(training_ratings.book_id.unique()), ordered=True)
row = training_ratings.user_id.astype(user_c).cat.codes
col = training_ratings.book_id.astype(book_c).cat.codes
sparse_matrix = csr_matrix((training_ratings["adjusted_rating"], (row, col)), \
shape=(user_c.categories.size, book_c.categories.size))
sparse_matrix
cf_sim = pd.DataFrame(
data = cosine_similarity(sparse_matrix.T, sparse_matrix.T),
index = book_c.categories,
columns = book_c.categories)
cf_sim.shape
cf_sim.head()
cf_top_sim_books = {}
book_ids = cf_sim.index
for book_id in tqdm(book_ids):
cf_top_sim_books[book_id] = cf_sim.loc[book_id].sort_values(ascending=False)[1:51]
cf_top_sim_books[1].head()
cb_top_sim_books = {}
book_ids = book_sim.index
for book_id in tqdm(book_ids):
cb_top_sim_books[book_id] = book_sim.loc[book_id].sort_values(ascending=False)[1:51]
cb_top_sim_books[1].head()
list_of_5_ratings = training_ratings[training_ratings.rating==5].groupby('user_id')['book_id'].apply(list)
```
# Hybrid Recommender System
We'll make the hybrid recommender flexible by letting it take the blending rate as an argument to its constructor.
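Before the full class, here is a toy sketch (the book IDs and similarity values are made up) of the rate-weighted blend that `fit()` performs on the content-based and CF similarity Series:
```
# Toy illustration of the blend inside fit(); the IDs and scores are invented
cb = pd.Series({101: 0.9, 102: 0.7, 103: 0.5})   # content-based similarities
cf = pd.Series({102: 0.8, 104: 0.6})             # item-item CF similarities
rate = 2

blended = cb.append(cf * rate)
blended = blended.groupby(blended.index).sum().sort_values(ascending=False)
print(blended)
```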
```
class HybridRecommender:
name = "Hybrid CF RS"
preds = {}
def __init__(self, rate=1):
self.rate = rate
self.name = "Hybrid CF RS (rate=" + str(rate) + ")"
def fit(self, training_ratings):
user_ids = training_ratings.user_id.unique().tolist()
self.preds = {}
for user_id in tqdm(user_ids):
excluded_books = training_ratings[training_ratings.user_id == user_id].book_id.unique(
).tolist()
most_similar_books = pd.Series([])
for book_id in list_of_5_ratings[user_id]:
most_similar_books = most_similar_books.append(
cb_top_sim_books[book_id])
most_similar_books = most_similar_books.append(
cf_top_sim_books[book_id] * self.rate)
most_similar_books = np.array(most_similar_books.groupby(
most_similar_books.index).sum().sort_values(ascending=False).index)
recommendable = most_similar_books[~np.in1d(
most_similar_books, excluded_books)]
self.preds[user_id] = recommendable[:10].tolist()
def recommendation_for_user(self, user_id):
if user_id not in self.preds:
return []
return self.preds[user_id]
def all_recommendation(self):
return self.preds
```
## rate = 1
```
hb_rec = HybridRecommender(rate=1)
hb_rec.name
evl.evaluate(hb_rec)
evl.print_result()
```
## rate = 2
```
hb_rec2 = HybridRecommender(rate=2)
evl.evaluate(hb_rec2)
evl.print_result()
```
## rate = 4
```
hb_rec4 = HybridRecommender(rate=4)
evl.evaluate(hb_rec4)
evl.print_result()
```
# Alternate version
```
class AltHybridRecommender:
name = "Alt Hybrid RS"
preds = {}
def __init__(self, rate=1):
self.rate = rate
self.name = "Alt Hybrid RS (rate=" + str(rate) + ")"
def fit(self, training_ratings):
user_ids = training_ratings.user_id.unique().tolist()
self.preds = {}
for user_id in tqdm(user_ids):
excluded_books = training_ratings[training_ratings.user_id == user_id].book_id.unique(
).tolist()
most_similar_books = pd.Series([])
for book_id in list_of_5_ratings[user_id]:
most_similar_books = most_similar_books.append(
cb_top_sim_books[book_id])
most_similar_books = most_similar_books.append(
cf_top_sim_books[book_id] + self.rate)
most_similar_books = np.array(most_similar_books.groupby(
most_similar_books.index).sum().sort_values(ascending=False).index)
recommendable = most_similar_books[~np.in1d(
most_similar_books, excluded_books)]
self.preds[user_id] = recommendable[:10].tolist()
def recommendation_for_user(self, user_id):
if user_id not in self.preds:
return []
return self.preds[user_id]
def all_recommendation(self):
return self.preds
ahb_rec = AltHybridRecommender()
evl.evaluate(ahb_rec)
evl.print_result()
```
# Experiment
```
hb_rec7 = HybridRecommender(rate=7)
evl.evaluate(hb_rec7)
evl.print_result()
hb_rec10 = HybridRecommender(rate=10)
evl.evaluate(hb_rec10)
evl.print_result()
hb_rec15 = HybridRecommender(rate=15)
evl.evaluate(hb_rec15)
evl.print_result()
hb_rec20 = HybridRecommender(rate=20)
evl.evaluate(hb_rec20)
evl.print_result()
hb_rec30 = HybridRecommender(rate=30)
evl.evaluate(hb_rec30)
evl.print_result()
hb_rec50 = HybridRecommender(rate=50)
evl.evaluate(hb_rec50)
evl.print_result()
```
## HW-6 (100 points)
## Weekly HW on Matplotlib for Plotting in Python
### 1) Pandas and Matplotlib libraries
##### **Exercise-1: Import all the libraries - numpy, pandas, and matplotlib so that we do not have to worry about importing the libraries later on in this assignment**
**(POINTS: 6)**
```
#GIVE YOUR ANSWER FOR EXERCISE-1 IN THIS CELL
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
**Exercise-2: The file named 'Electric_Vehicle_Population_Data.csv' contains the database of electric vehicles registered and operated in different cities and states (primarily Washington State) of the United States. It also contains the vehicle details like make, model, model year, vehicle type, electric range, base retail price and fields that are mostly self explanatory.**
**(POINTS: 54 - each task in this exercise carries 9 points)**
**Task-1:** Read the **Electric_Vehicle_Population_Data.csv** file and store in a variable name **ev_pop**. After reading, display the first 10 rows of the dataframe **ev_pop** as the output.
```
#GIVE YOUR ANSWER FOR TASK-1 IN THIS CELL
ev_pop = pd.read_csv('Electric_Vehicle_Population_Data.csv')
ev_pop.head(10)
```
**Task-2:** Drop the columns 'Clean Alternative Fuel Vehicle (CAFV) Eligibility', 'Legislative District', 'DOL Vehicle ID', 'Vehicle Location' from the dataframe **ev_pop**. After dropping them, display the first 5 rows of the dataframe **ev_pop** as the output. [Hint: The drop() function with the 'columns' and 'inplace' arguments may be used]
```
#GIVE YOUR ANSWER FOR TASK-2 IN THIS CELL
cols_2b_dropped = ['Clean Alternative Fuel Vehicle (CAFV) Eligibility', 'Legislative District', 'DOL Vehicle ID',
'Vehicle Location']
ev_pop.drop(columns = cols_2b_dropped, inplace = True)
ev_pop.head()
```
Let's say we first want to see whether EV purchases have generally shown a growth trend over the years. The strategy to visualize this is to put the 'Model Year' (in an ordered manner) on the x-axis and the value counts of that 'Model Year' on the y-axis as a scatter plot.
**Task-3:** Extract the value counts for each 'Model Year' and save the pandas Series in the variable name 'model_year'
```
#GIVE YOUR ANSWER FOR TASK-3 IN THIS CELL
model_year = ev_pop['Model Year'].value_counts()
```
At this point model_year is a pandas Series with the 'Model Year' values as its index and the corresponding counts as its values.
If we explore the dataset we shall see the 'Base MSRP' column inexplicably contains a good number of values equal to zero and, in some cases, unusually high values (greater than 100,000) that can be disregarded as outliers.
**Task-4:** Plot a scatter plot with the Model Year values on the x-axis and their value counts on the y-axis. Make sure the plot has proper labels and a title. [Hint: You can use the Series.index attribute to get the model years as a data sequence]. Comment on your observations in a following markdown cell.
```
#GIVE YOUR ANSWER FOR TASK-4 IN THIS CELL
plt.scatter(model_year.index, model_year)
plt.xlabel('Model Year')
plt.ylabel('No. of Vehicles')
plt.title('EV Population by Model Year')
```
Student Answer (Expected): The observation is that EV usage and purchases have shown a general trend of exponential growth, which may have slowed down in the last couple of years due to COVID and other challenging circumstances.
Let's work towards plotting a histogram of the 'Base MSRP' values for the entire EV population database with the following two tasks.
**Task-5:** Clean up the ev_pop dataframe by selecting only the rows of the dataframe where the two conditions:
a) the Base MSRP is greater than 0
b) the base MSRP is less than or equal to 100000
are simultaneously met (**and** operation).Save the modified dataframe as **ev_pop_cleaned**
```
#GIVE YOUR ANSWER FOR TASK-5 IN THIS CELL
ev_pop_cleaned = ev_pop[(ev_pop['Base MSRP']>0) & (ev_pop['Base MSRP']<100000)]
```
**Task-6:** Plot the Histogram of the 'Base MSRP' series of the **ev_pop_cleaned** dataframe
```
#GIVE YOUR ANSWER FOR TASK-6 IN THIS CELL
plt.hist(ev_pop_cleaned['Base MSRP'])
plt.xlabel('Base MSRP')
plt.title('Distribution of Base MSRP')
```
**Exercise-3: In this exercise, we shall visualize the data from the manufacturer's perspective through the following four tasks.**
**(POINTS: 40 - each task in this exercise carries 10 points)**
**Task-1:** Using the **ev_pop** dataframe, extract the top ten makers of electric vehicles in the dataset. Print the names of the makers and their corresponding value counts (no. of vehicles by the maker). [Hint: Use the **value_counts()** and **nlargest()** functions in tandem to extract the series of top ten makers]
```
#GIVE YOUR ANSWER FOR TASK-1 IN THIS CELL
maker = ev_pop['Make'].value_counts().nlargest(10)
print(maker)
```
**Task-2:** Make a bar plot for the top ten makers and their vehicle counts in the **ev_pop** database. Use all the proper plotting practices for labels and title. For better readability, make sure the x labels are rotated 90 degrees.
**Hint**: Use plt.xticks() with the rotation argument for the label rotation. Use pyplot.show() to avoid unwanted text above the bar graph.
```
#GIVE YOUR ANSWER FOR TASK-2 IN THIS CELL
plt.bar(maker.index, maker, color = 'teal')
plt.xlabel('Make')
plt.ylabel('No. of Vehicles')
plt.title('Top Ten EV Makers by Vehicle Count')
plt.xticks(rotation = 90)
plt.show()
```
Now we want to observe the sales (revenue) trend of the top two manufacturers through the following tasks.
**Task-3:** Extract two separate dataframes for the top two manufacturers, where the 'Make' column value of **ev_pop** matches the manufacturer. Sort the two dataframes by ascending 'Model Year' and save the two sorted dataframes as **sorted_Tesla** and **sorted_Nissan**. Apply the **groupby()** and **sum()** functions together to group the sorted dataframes by 'Model Year' and get the sums of the numerical fields. This will produce two dataframes (name them **tesla** and **nissan** respectively) providing the yearly sum of numerical columns like 'Electric Range' and 'Base MSRP'.
```
#GIVE YOUR ANSWER FOR TASK-3 IN THIS CELL
ev_Tesla = ev_pop[ev_pop.Make == 'TESLA']
sorted_Tesla = ev_Tesla.sort_values('Model Year')
ev_Nissan = ev_pop[ev_pop.Make == 'NISSAN']
sorted_Nissan = ev_Nissan.sort_values('Model Year')
tesla = sorted_Tesla.groupby('Model Year').sum()
nissan = sorted_Nissan.groupby('Model Year').sum()
```
**Task-4:** Plot an overlaid line plot for 'Model Year' (vs.) 'Revenue (=summed up Base MSRP)' for each of the top two manufacturers. The plot must display an x-label, a y-label, a title, and a legend.
**Note-1:** To increase the line width of the line plot, use the argument **`lw`**. For example: lw = 3.
**Note-2:** To give your own label for each color that corresponds to one of the two manufacturers, use the argument **`label`**. For example: label = 'Tesla'.
```
#GIVE YOUR ANSWER FOR TASK-4 IN THIS CELL
plt.plot(tesla.index,tesla['Base MSRP'], color = 'black', label = 'Tesla')
plt.plot(nissan.index,nissan['Base MSRP'], color = 'orange', label = 'Nissan')
plt.xlabel('Year')
plt.ylabel('Maker\'s Revenue')
plt.title('Market trend of top two makers')
plt.legend()
```
```
from keras import models
from keras import layers
model = models.Sequential()
#Convolutional layers; the arguments are, in order:
#the number of feature maps this layer outputs (one kernel produces one feature map): the first layer has 32 kernels; the second layer's 64 means further kernels extract features on top of the first layer's feature maps
#the kernel size
#the activation function
#the input shape (only needed for the very first layer)
model.add(layers.Conv2D(32,(3,3),activation='relu',input_shape=(200,400,3)))
model.add(layers.MaxPool2D(2,2))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
#Configure the model's loss function, optimizer, and metric names
from keras import optimizers
model.compile(loss='binary_crossentropy', #loss function
              optimizer=optimizers.RMSprop(lr=1e-4), #optimizer
              metrics=['acc']) #metric names
#Training and validation image directories
train_dir = r'H:\test\normal_x\typeII\train'
validation_dir = r'H:\test\normal_x\typeII\val'
#Generate the images and labels needed for training
from keras.preprocessing.image import ImageDataGenerator
#Rescale pixel values into [0, 1]; the original pixels are uint8, so divide by 255
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
#Generate the labels from the directory names
#train_dir contains the type II and type III images
#Each call yields batch_size images resized to target_size
train_generator = train_datagen.flow_from_directory(
train_dir,
        target_size=(200, 400), #size of the generated images
        batch_size=20, #number of images generated per batch
        class_mode='binary') #type of the image labels
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
        target_size=(200, 400), #size of the generated images
        batch_size=10, #number of images generated per batch
        class_mode='binary') #type of the image labels
#Start training
history = model.fit_generator(
train_generator,
steps_per_epoch=20,
epochs=100,
validation_data=validation_generator,
validation_steps=13)
#Plot training and validation accuracy
#Plot training and validation loss
#matplotlib is Python's plotting library, similar to MATLAB's plot
import matplotlib.pyplot as plt
acc = history.history['acc'] #training accuracy per epoch
val_acc = history.history['val_acc'] #validation accuracy per epoch
loss = history.history['loss'] #training loss per epoch
val_loss = history.history['val_loss'] #validation loss per epoch
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend() #draw the legend before saving so it appears in the saved file
plt.savefig('accuracy.png')
plt.figure() #start a new figure
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.savefig('loss.png')
plt.show() #render the figures at the end
# Save the accuracy and loss of every epoch
file = open('acc_loss_100.txt','a')
file.write('Training accuracy: ')
for i in acc :
file.write(str(i))
file.write(" ")
file.write("\n")
file.write('Validation accuracy: ')
for i in val_acc :
file.write(str(i))
file.write(" ")
file.write("\n")
file.write('Training loss: ')
for i in loss :
file.write(str(i))
file.write(" ")
file.write("\n")
file.write('Validation loss: ')
for i in val_loss :
file.write(str(i))
file.write(" ")
file.close()
import os
import cv2 as cv
import numpy as np
III_dir = r'H:\test\normal_x\typeII\val\II'
O_dir = r'H:\test\normal_x\typeII\val\O'
def my_image(path):
out = []
filenames = os.listdir(path)
for filename in filenames:
image = cv.imread(os.path.join(path, filename))
image = cv.resize(image, (400, 200))
image = image/255.0
out.append(image)
return np.array(out)
imgs_III = my_image(III_dir)
imgs_O = my_image(O_dir)
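# Evaluation sketch (assumption): predict_classes returns 0/1 labels, and counting label 0 below
# treats class 0 as the positive (type II) class; which folder maps to class 0 depends on how
# flow_from_directory ordered the class subdirectories during training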
ret_III = model.predict_classes(imgs_III)
ret_O = model.predict_classes(imgs_O)
ret_III = ret_III.tolist()
ret_O = ret_O.tolist()
true = ret_III.count([0])
false = ret_O.count([0])
TPR = true/len(ret_III)
FPR = false/len(ret_O)
print("TPR is :{:f} ".format(TPR))
print("FPR is :{:f} ".format(FPR))
model.save('typeII_binary_normalization_100.h5')
```
# Model with Feedback
Example testing a model with both predict and feedback endpoints in Python. This could be used as a basis for a reinforcement learning model deployment; a minimal sketch of such a model class is shown below.
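As a rough, non-authoritative sketch (the exact method signatures expected by the Python wrapper are an assumption here, so check them against the seldon-core version you deploy), the wrapped model class looks roughly like this:
```
# Hypothetical sketch of a Seldon Python model exposing predict and send_feedback
class ModelWithFeedback(object):
    def __init__(self):
        self.rewards = []

    def predict(self, X, features_names):
        # Return predictions for a batch of feature rows; here we simply echo the input
        return X

    def send_feedback(self, features, feature_names, reward, truth):
        # Record the reward/truth signal sent back by the client,
        # e.g. to drive a reinforcement-learning style update
        self.rewards.append(reward)
        return []
```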
# REST
```
!s2i build -E environment_rest . seldonio/seldon-core-s2i-python3:0.12 model-with-feedback-rest:0.1
!docker run --name "model-with-feedback" -d --rm -p 5000:5000 model-with-feedback-rest:0.1
```
## Test predict
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p
```
## Test feedback
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p --endpoint send-feedback
!docker rm model-with-feedback --force
```
# gRPC
```
!s2i build -E environment_grpc . seldonio/seldon-core-s2i-python3:0.12 model-with-feedback-grpc:0.1
!docker run --name "model-with-feedback" -d --rm -p 5000:5000 model-with-feedback-grpc:0.1
```
## Test predict
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p --grpc
```
## Test feedback
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p --endpoint send-feedback --grpc
!docker rm model-with-feedback --force
```
# Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --vm-driver kvm2 --memory 4096
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
!helm init
!kubectl rollout status deploy/tiller-deploy -n kube-system
!helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system
!kubectl rollout status deploy/seldon-controller-manager -n seldon-system
```
## Setup Ingress
Please note: There are reported gRPC issues with ambassador (see https://github.com/SeldonIO/seldon-core/issues/473).
```
!helm install stable/ambassador --name ambassador --set crds.keep=false
!kubectl rollout status deployment.apps/ambassador
```
# REST
```
!eval $(minikube docker-env) && s2i build -E environment_rest . seldonio/seldon-core-s2i-python3:0.12 model-with-feedback-rest:0.1 --loglevel 5
!kubectl create -f deployment-rest.json
!kubectl rollout status deploy/mymodel-mymodel-09f8ccd
```
## Test predict
```
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
mymodel --namespace default -p
```
## Test feedback
```
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
mymodel --namespace default -p --endpoint send-feedback
!kubectl delete -f deployment-rest.json
```
# gRPC
```
!eval $(minikube docker-env) && s2i build -E environment_grpc . seldonio/seldon-core-s2i-python3:0.12 model-with-feedback-grpc:0.1
!kubectl create -f deployment-grpc.json
!kubectl rollout status deploy/mymodel-mymodel-89dbe9b
```
## Test predict
```
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
mymodel --namespace default -p --grpc
```
## Test feedback
```
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
mymodel --namespace default -p --endpoint send-feedback --grpc
!kubectl delete -f deployment-grpc.json
!minikube delete
```
```
#Import the required libraries
import numpy as np
np.random.seed(1338)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.global_variables_initializer())
y = tf.matmul(x,W) + b
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
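# --- Convolutional network: helper functions for weight/bias creation, 2D convolution and 2x2 max pooling ---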
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(20000):
batch = mnist.train.next_batch(50)
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x: batch[0], y_: batch[1], keep_prob: 1.0})
print('step %d, training accuracy %g' % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
```
```
import pandas as pd
import math
import numpy as np
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.widgets import TextBox
import plotly.express as px
```
# Stock Price EDA
```
df_stockprice = pd.read_csv("df_stockprice.csv")
df_stockprice
# Add Pct Change Column
pct_change = []
n = 0
for ticker in df_stockprice['ticker'].unique():
df_temp = df_stockprice[df_stockprice['ticker']==ticker]
df_temp = df_temp.reset_index()
pct_change.append(np.nan)
for i in range(df_temp.shape[0]-1):
pct_change.append((df_temp['close'][i+1]/df_temp['close'][i]) - 1)
df_stockprice['fluctuation'] = pct_change
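# Note: this loop assumes the rows are grouped by ticker in the same order as unique();
# df_stockprice.groupby('ticker')['close'].pct_change() would be a more robust way to build the same column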
df_temp['close']
df_stockprice['ticker'].astype(str)
fig = px.line(df_stockprice, x="date", y="close", color="ticker")
fig.update_layout(title = ('Stock Price Overview'),
xaxis_title = "Date",
yaxis_title ="Closing Price")
fig.show()
fig = px.line(df_stockprice, x="date", y="volume", color="ticker")
fig.update_layout(title = ('Volume Overview'),
xaxis_title = "Date",
                  yaxis_title ="Volume")
fig.show()
fig = px.line(df_stockprice, x="date", y="fluctuation", color="ticker")
fig.update_layout(title = ('Fluctuation Overview'),
xaxis_title = "Date",
yaxis_title ="% Change")
fig.show()
# df_stockprice['month'] = [d.split('-')[1] for d in df_stockprice['date']]
df_stockprice1 = df_stockprice.copy()
df_stockprice1.set_index('date', inplace = True, drop = False)
df_stockprice1
tickers = ['PFE', 'MRNA','AZN','JNJ','NVAX','BNTX']
for t in tickers:
ax = plt.subplots(1,1, figsize=(6,3))
plt.title('{} monthly moving average'.format(t))
df_stockprice1[df_stockprice1['ticker']==t]['close'].plot(legend=True, label='closing price')
ax2 = df_stockprice1[df_stockprice1['ticker']==t]['close'].rolling(30).mean().plot(legend=True, label='Moving average')
```
# Calls EDA
```
df_calls = pd.read_csv("df_calls.csv")
df_calls
df_calls.columns
df_calls['Implied Volatility'].replace('%','')
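# The replace() above is not assigned back (and only matches values that are exactly '%'),
# so the parsing below does the actual conversion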
df_calls['Implied Volatility (PCT)'] = [float(s.split('%')[0]) for s in df_calls['Implied Volatility']]
df_calls['Implied Volatility (PCT)'].pct_change().plot()
```
# Tweets EDA
```
df_tweets = pd.read_csv("df_tweets.csv")
df_tweets = df_tweets[df_tweets['language'] == 'en']
df_tweets
import re
import string
def clean_text(text):
'''Make text lowercase, remove text in square brackets,remove links,remove punctuation
and remove words containing numbers.'''
text = str(text).lower()
text = re.sub('\[.*?\]', '', text)
text = re.sub('https?://\S+|www\.\S+', '', text)
text = re.sub('<.*?>+', '', text)
text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
text = re.sub('\n', '', text)
text = re.sub('\w*\d\w*', '', text)
return text
from spacy.lang.en import STOP_WORDS
def remove_stopword(x):
return " ".join([y for y in x.split() if y not in STOP_WORDS])
df_tweets['tweet'] = df_tweets['tweet'].apply(lambda x:clean_text(x))
pip install textblob
from textblob import TextBlob
df_tweets['sentiment'] = df_tweets['tweet'].apply(lambda x: TextBlob(x).sentiment.polarity)
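# TextBlob polarity is a float in [-1.0, 1.0]: negative sentiment, roughly 0 for neutral, positive sentiment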
df_tweets['str_tweet'] = ["".join(t) for t in df_tweets['tweet']]
df_tweets['str_tweet'] = df_tweets['str_tweet'].apply(lambda x:clean_text(x))
df_tweets['str_tweet'] = df_tweets['str_tweet'].apply(lambda x:remove_stopword(x))
df_tweets['str_tweet']
df_tweets.name.unique()
import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Histogram(x=df_tweets[df_tweets['name'] == 'AstraZeneca']['sentiment'],name = 'AZN'))
fig.add_trace(go.Histogram(x=df_tweets[df_tweets['name'] == 'BioNTech SE']['sentiment'],name = 'BNTX'))
fig.add_trace(go.Histogram(x=df_tweets[df_tweets['name'] == 'Johnson & Johnson']['sentiment'],name = 'JNJ'))
fig.add_trace(go.Histogram(x=df_tweets[df_tweets['name'] == 'Moderna']['sentiment'],name = 'MRNA'))
fig.add_trace(go.Histogram(x=df_tweets[df_tweets['name'] == 'Novavax']['sentiment'],name = 'NVAX'))
fig.add_trace(go.Histogram(x=df_tweets[df_tweets['name'] == 'Pfizer Inc.']['sentiment'],name = 'PFE'))
# Overlay both histograms
fig.update_layout(barmode='stack')
# Reduce opacity to see both histograms
fig.update_traces(opacity=1)
fig.update_layout(title = ('Sentiment Score Breakdown'),
xaxis_title = "Sentiment Score",
yaxis_title ="Count")
fig.show()
correlation = df_tweets[['sentiment', 'replies_count','retweets_count','likes_count']].corr()
mask = np.zeros_like(correlation, dtype=bool)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(8,6))
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
sns.heatmap(correlation, cmap='coolwarm', annot=True, annot_kws={"size": 10}, linewidths=10, vmin=-1.5, mask=mask)
```
### Wordcloud
```
pip install wordcloud
from wordcloud import WordCloud
import matplotlib.pyplot as plt
from PIL import Image
text = '\n'.join(df_tweets['str_tweet'])
plt.subplots(1,1, figsize=(12,12))
mask = np.array(Image.open('mask.jpeg'))
wc = WordCloud(background_color='white', random_state=8,
mask=mask)
wc.generate(text)
plt.imshow(wc, interpolation="bilinear")
plt.axis('off')
plt.show()
```
### Tweets: Related Novavax
```
df_tweets_nocvavax = pd.read_json('novavax_related_merged.json', orient = 'records', lines = True)
# df_tweets_nocvavax = df_tweets_nocvavax[df_tweets_nocvavax['language'] == 'en']
df_tweets_nocvavax
df_tweets_nocvavax['str_tweet'] = ["".join(t) for t in df_tweets_nocvavax['tweet']]
df_tweets_nocvavax['str_tweet'] = df_tweets_nocvavax['str_tweet'].apply(lambda x:clean_text(x))
df_tweets_nocvavax['str_tweet'] = df_tweets_nocvavax['str_tweet'].apply(lambda x:remove_stopword(x))
df_tweets_nocvavax['str_tweet']
from textblob import TextBlob
df_tweets_nocvavax['tweet'] = df_tweets_nocvavax['tweet'].apply(lambda x:clean_text(x))
df_tweets_nocvavax['sentiment'] = df_tweets_nocvavax['tweet'].apply(lambda x: TextBlob(x).sentiment.polarity)
import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Histogram(x=df_tweets_nocvavax['sentiment'], name = 'NVAX'))
# Overlay both histograms
# fig.update_layout(barmode='stack')
# Reduce opacity to see both histograms
fig.update_traces(opacity=1)
fig.update_layout(title = ('Novavax Related Sentiment Score'),
xaxis_title = "Sentiment Score",
yaxis_title ="Count")
fig.show()
correlation = df_tweets_nocvavax[['sentiment', 'replies_count','retweets_count','likes_count']].corr()
mask = np.zeros_like(correlation, dtype=bool)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(8,6))
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
sns.heatmap(correlation, cmap='coolwarm', annot=True, annot_kws={"size": 10}, linewidths=10, vmin=-1.5, mask=mask)
from wordcloud import WordCloud
import matplotlib.pyplot as plt
from PIL import Image
text = '\n'.join(df_tweets_nocvavax['str_tweet'])
text
plt.subplots(1,1, figsize=(12,12))
mask = np.array(Image.open('mask.jpeg'))
wc = WordCloud(background_color='white', random_state=8,
mask=mask)
wc.generate(text)
plt.imshow(wc, interpolation="bilinear")
plt.axis('off')
plt.show()
```
# 20 Newsgroups
- Section One: Importing & Processing Data
- Section Two: Logistic Regression Model with Scikit-Learn
- Section Three: Neural Network using Keras
```
import pandas as pd
import numpy as np
import pickle
from sklearn.preprocessing import LabelBinarizer
import sklearn.datasets as skds
from pathlib import Path
import itertools
```
## Section One: Importing Data
Importing training and test data from plain-text files.
```
#Establishing Dataframes
df_train = pd.DataFrame(columns=['Category', 'Contents'])
df_test = pd.DataFrame(columns=['Category', 'Contents'])
#Declaring the path to the text files
path_train = '20news-bydate/20news-bydate-train/'
path_test = '20news-bydate/20news-bydate-test/'
#Loading the files
files_train = skds.load_files(path_train, load_content=False)
files_test = skds.load_files(path_test, load_content=False)
#Declaring the test and training lists
data_tags = ["filename", "category", "content"]
data_list_train = []
data_list_test = []
#Adding the training data to the list
label_index = files_train.target
label_names = files_train.target_names
labelled_files = files_train.filenames
i=0
for f in labelled_files:
data_list_train.append((f,label_names[label_index[i]],Path(f).read_text(encoding='ISO-8859-1')))
i += 1
train_data = pd.DataFrame.from_records(data_list_train, columns=data_tags)
#Previewing the training data
train_data.head()
#Adding the test data to the list
label_index2 = files_test.target
label_names2 = files_test.target_names
labelled_files2 = files_test.filenames
i=0
for f in labelled_files2:
data_list_test.append((f,label_names2[label_index2[i]],Path(f).read_text(encoding='ISO-8859-1')))
i += 1
test_data = pd.DataFrame.from_records(data_list_test, columns=data_tags)
#Previewing the test data
test_data.head()
```
## Section Two: Text Classification with Scikit-Learn
Using a Bag-Of-Words (With sklearn's CountVectorizer) and a Logistic Regression algorithm.
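As a side note (a sketch, not part of the original assignment), scikit-learn's TfidfVectorizer combines CountVectorizer and TfidfTransformer into a single step, so the pipeline built below could equivalently be written as:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Equivalent pipeline: TfidfVectorizer = CountVectorizer followed by TfidfTransformer
text_clf_alt = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('clf', LogisticRegression(random_state=42, solver='liblinear', multi_class='ovr')),
])
# text_clf_alt.fit(train_data.content, train_data.category)  # same usage as text_clf below
```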
```
#Tokenizing with scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(train_data.content)
X_train_counts.shape
#Dictionary Size
count_vect.vocabulary_.get(u'algorithm')
#tf-idf (Term Frequency times Inverse Document Frequency)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
#Defining a pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression(random_state=42, solver='liblinear', multi_class='ovr')),
])
text_clf.fit(train_data.content, train_data.category)
#Testing the classifier
test_docs = test_data.content
predictions = text_clf.predict(test_docs)
#Evaluating predictive accuracy of the classifier
np.mean(predictions == test_data.category)
#Printing the model metrics
from sklearn import metrics
print(metrics.classification_report(test_data.category, predictions))
#Confusion Matrix
cm = metrics.confusion_matrix(test_data.category, predictions)
cm
```
## Section Three: Text Classification with Keras
Using a Bag of Words (Using Keras' Tokenizer) and a feed forward neural network architecture.
```
import keras
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Activation, Dense, Dropout
from sklearn.preprocessing import LabelBinarizer
# 20 news groups
num_labels = 20
vocab_size = 15000
batch_size = 100
# define Tokenizer with Vocab Size
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(train_data.content)
x_train = tokenizer.texts_to_matrix(train_data.content, mode='tfidf')
x_test = tokenizer.texts_to_matrix(test_data.content, mode='tfidf')
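# texts_to_matrix returns a dense (num_documents, vocab_size) matrix; mode='tfidf' fills it with tf-idf weights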
encoder = LabelBinarizer()
encoder.fit(train_data.category)
y_train = encoder.transform(train_data.category)
y_test = encoder.transform(test_data.category)
#The Neural Network
callbacks_list = [
keras.callbacks.EarlyStopping(
monitor='val_loss',
patience=1,
verbose=1,
),
]
model = Sequential()
model.add(Dense(512, input_shape=(vocab_size,)))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=30,
verbose=1,
callbacks=callbacks_list,
validation_data=(x_train, y_train))
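# Note: validation_data reuses the training set here, so val_loss largely mirrors the training loss;
# a held-out split (e.g. validation_split=0.1 or (x_test, y_test)) would make early stopping more meaningful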
```
With early stopping ending training after 4 epochs, the model achieves roughly 81% accuracy (see below).
```
#Evaluating the model
score = model.evaluate(x_test, y_test,
batch_size=batch_size, verbose=1)
print('Test accuracy:', score[1])
text_labels = encoder.classes_
for i in range(10):
prediction = model.predict(np.array([x_test[i]]))
predicted_label = text_labels[np.argmax(prediction[0])]
print(test_data.filename.iloc[i])
print('Actual label:' + test_data.category.iloc[i])
print("Predicted label: " + predicted_label)
#Plotting the Training and Validation loss
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label="Validation loss")
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
#Confusion Matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
# print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = model.predict(x_test);
cnf_matrix = metrics.confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))
# Plot normalized confusion matrix
fig = plt.figure()
fig.set_size_inches(14, 12, forward=True)
#fig.align_labels()
# fig.subplots_adjust(left=0.0, right=1.0, bottom=0.0, top=1.0)
plot_confusion_matrix(cnf_matrix, classes=np.asarray(label_names), normalize=True,
title='Normalized confusion matrix')
fig.savefig("txt_classification-smote" + ".png", pad_inches=5.0)
```
Our feed-forward ANN was able to achieve an accuracy of about 82% after three epochs.
Based on the confusion matrix, we observe that there are a few categories that our model confuses for each other.
These include rec.autos and sci.electronics; beyond those, the model's misclassifications mostly stay within semantically similar categories, for example talk.politics.misc vs. talk.politics.guns and soc.religion.christian vs. talk.religion.misc.
# Linear Algebra for CpE
## Laboratory 13 : Eigen-things
We are now going to use the cumulative concepts of matrix algebra, systems of linear equations, and linear transformations to understand the concept and applications of solving for eigenvalues and eigenvectors.
### Objectives
At the end of this activity you will be able to:
1. Be familiar with the concept of eigen-stuffs.
2. Solve for eigen values and eigen vectors.
3. Write eigen-solutions in Python code.
## Discussion
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Eigen
In our lesson, <b>eigen</b> does not refer to famous actors; it comes from a German word meaning "characteristic". So we can say, by analogy, that solving for the <b>eigen</b> of anything is finding its characteristics.
### Eigenvectors
Referring to our definition of eigen earlier, we can deduce that eigenvectors are characteristic vectors or representative vectors of a matrix. In the more technical sense, these are vectors that can be considered constant/unchanging even when a linear transformation is applied. So under such a geometric transformation, that vector in the span of the matrix will not move onto a different line but will only scale, meaning the result stays linearly dependent on the original vector.
So for example we'll have a matrix $x$ wherein we apply a matrix transformation $F$ it gives us a resulting vector $A$.
$$F\cdot x = A$$
So in matrix $x$ there would exist a vector $v$ that, in the resultant matrix $A$, is just a scalar multiple of itself (an eigenvector). We can denote the scaling factor as $\lambda$. We can then define the eigenvector by:
$$A\cdot v = \lambda * v$$
```
def plot_quiv(x,y=None,eig=None):
size= (5,5)
plt.figure(figsize=(4,4))
plt.xlim(-size[0],size[0])
plt.ylim(-size[1],size[1])
plt.xticks(np.arange((-size[0]), size[0]+1, 1.0))
plt.yticks(np.arange((-size[1]), size[1]+1, 1.0))
plt.quiver([0,0],[0,0], x[0,:], x[1,:],
angles='xy', scale_units='xy',scale=1,
color=['red','red'], label='Original Vector')## use column spaces
if y is not None:
plt.quiver([0,0],[0,0], y[0,:], y[1,:],
angles='xy', scale_units='xy',scale=1,
color=['blue','blue'], label='Transformed Vector')## use column spaces
if eig is not None:
c = np.arange(-10,10,0.25)
# plt.plot(c*eig[0,0],c*eig[1,0], color='orange')
plt.plot(c*eig[0,1], c*eig[1,1], color='orange', label='Eigenspace')
plt.plot(c*eig[0,0],c*eig[1,0], color='orange')
plt.plot(c*eig[0,0], c*eig[0,1], color='orange', label='Eigenspace')
plt.plot(c*eig[1,0],c*eig[1,1], color='orange')
plt.grid()
plt.legend()
plt.show()
## Let's try to determine that manually
x = np.array([
[1,-1],
[0,2]
])
plot_quiv(x)
F = np.array([
[2, 1.5],
[0, 1]
])
A = F@x
print(A)
plot_quiv(x,A)
```
In the linear transformation above, we can see that the first vector (red) did not shift or rotate to any other coordinate in the 2D space. We can say that the first vector is an eigenvector since it remains on its span even when a linear transformation is applied. But do note there could be more than one eigenvector for a matrix, and most of the time these vectors cannot be identified through visual inspection. We can try to solve this using the formula we set above.
$$(A \cdot v) - (\lambda * v) = 0 $$
$$(A-\lambda)\cdot v = 0$$
Assuming that $v$ is non-zero, we'll try to solve for $A-\lambda$ such that it equates to 0. Take note that $A$ is a matrix and $\lambda$ is a scalar.
Initially we cannot perform a matrix-and-scalar subtraction (except by considering broadcasting), so we must turn $\lambda$ from a scalar into a scalar matrix by multiplying it with $I$. So if we have $A = \begin{bmatrix} a & b \\ c & d\end{bmatrix}$ and a scalar matrix $\lambda$ as $\begin{bmatrix}\lambda & 0 \\ 0 & \lambda\end{bmatrix}$ we would have:
$$A - \lambda = \begin{bmatrix}a-\lambda & b \\ c & d-\lambda\end{bmatrix}$$
Now, what's left is solving for the $\lambda$s for which $(A - \lambda) \cdot v = 0$. Since the requirement is that the transformed vector stays on the same span, it is a <b>linearly dependent vector</b>. Recall that one of the ways to identify linearly dependent vectors is that their determinant is equal to zero. So we can say in our case:
$$det(A-\lambda I) = 0 \\ \mbox{or} \\ det \left( \begin{bmatrix}a-\lambda & b \\ c & d-\lambda\end{bmatrix} \right) = 0$$
By solving for the determinant we will have a polynomial equation:
$$(a-\lambda)(d-\lambda) - cb = 0 \\ ad - d\lambda - a\lambda + \lambda^2 - cb = 0 \\ \mbox{or} \\ \lambda^2 - (a+d)\lambda + ad - cb = 0$$
We can then solve for $\lambda$ by getting the root of the polynomial. The roots that would be find are the <b>eigenvalues</b>.
So given $A$ earlier as $\begin{bmatrix}2 & 1 \\ 0 & 2\end{bmatrix}$ we can try to solve for the eigenvalues as
$$det \left( \begin{bmatrix}2-\lambda & 1 \\ 0 & 2-\lambda\end{bmatrix} \right) = 0$$
The eigenvalues could then be solved as:
$$ \lambda^2 - (a+d)\lambda + det(A) = 0 $$
$$ \lambda^2 - 4\lambda + 4 = 0 $$
You can use the quadratic formula to solve for the roots of the polynomial. But for ease of discussion we'll use the `np.roots()` function.
```
coeff = [1,-4,4]
eigvals = np.around(np.roots(coeff),3)
eigvals
```
The next step would be re-substituting each $\lambda$ into the $A-\lambda I$ expression, which yields the matrices below.
Lastly, to solve for the <b>eigenvectors</b> we need to solve these matrices as systems of linear equations equating to 0.
```
eigm1 = A-(eigvals[0]*np.eye((2))).astype("int32")
print(eigm1)
# eigm2 = A-(eigvals[1]*np.eye((2))).astype("int32")
# print(eigm2)
```
Beyond this point, computing the eigenvectors programmatically requires more sophisticated algorithms such as power iteration; a rough sketch is given below. It could be assumed that solving for the eigenvectors was already demonstrated during the lecture. <br>
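The sketch below is only an illustration and assumes the matrix has a dominant eigenvalue; for the defective matrix $A$ above it still converges, just slowly. It is not the exact algorithm NumPy uses.
```
# Power iteration (illustrative sketch): repeatedly apply the matrix and re-normalize
def power_iteration(M, iters=500):
    v = np.ones(M.shape[1])        # arbitrary non-zero starting vector
    for _ in range(iters):
        v = M @ v
        v = v / np.linalg.norm(v)  # keep the vector from blowing up
    lam = v @ M @ v                # Rayleigh quotient (v has unit length)
    return lam, v

lam, vec = power_iteration(A)
print(lam, vec)
```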
But then again, we can implement this much more simply using built-in NumPy functions.
```
eigval, eigvect = np.linalg.eig(A)
print(eigval)
print(eigvect)
plot_quiv(x,A,eig=eigvect)
def plot_tr_eig(inp, trans, eig, q1=False):
c1 = np.arange(-5, 5, 0.5)
c2 = np.arange(-5, 5, 0.5)
if q1:
c1 = np.arange(0, 5, 0.5)
c2 = np.arange(0, 5, 0.5)
X,Y= np.meshgrid(c1, c2)
v = np.array([X.flatten(),Y.flatten()])
    A = trans@inp@v  # apply the transformation passed in as `trans` to the grid of vectors
fig, ax = plt.subplots()
size= (5,5)
fig.set_size_inches(10,10)
ax.set_xlim(-size[0],size[0])
ax.set_ylim(-size[1],size[1])
ax.set_xticks(np.arange((-size[0]), size[0]+1, 1.0))
ax.set_yticks(np.arange((-size[1]), size[1]+1, 1.0))
if q1:
ax.set_xlim(0,size[0])
ax.set_ylim(0,size[1])
ax.set_xticks(np.arange(0, size[0]+1, 1.0))
ax.set_yticks(np.arange(0, size[1]+1, 1.0))
q = ax.quiver(X, Y,
A[0,:].reshape(int(np.sqrt(A[0,:].size)), int(np.sqrt(A[0,:].size))),
A[1,:].reshape(int(np.sqrt(A[1,:].size)), int(np.sqrt(A[1,:].size))),
color='royalblue')
ax.quiverkey(q, X=0.3, Y=1.1, U=10,
label='Quiver key, length = 10', labelpos='E')
if eig is not None:
c = np.arange(-20,20,0.25)
        plt.plot(c*eig[0,0], c*eig[1,0], color='orange', label='Eigenvector')
        plt.plot(c*eig[0,1], c*eig[1,1], color='orange')
plt.show()
x = np.array([
[1,0],
[0,1]
])
F = np.array([
[-1,2],
[0,2]
])
eigval, eigvect = np.linalg.eig(F@x)
plot_tr_eig(x,F,eigvect)
```
## Practice
A direct way of applying eigenvalues and eigenvectors is through differential equations, or more generally when determining rates of change between variables. Let's say we want to determine the probability of surviving a pandemic, or how long until the number of victims and healthy people reach equilibrium. Suppose we have a pandemic named the "Pink Plague". We can characterize the infection rate and recovery rate as follows:
$\mbox{Let} : \\
\mbox{Healthy} = 1-\frac{dI}{dt} + \frac{dR}{dt} \\
\mbox{Infected} = \frac{dI}{dt} + 1-\frac{dR}{dt} \\
\frac{dI}{dt} = 30\% \\
\frac{dR}{dt} = 80\% \\
\mbox{Healthy}_{population} = 70\%H + 80\%I \\
\mbox{Infected}_{population} = 30\%H + 20\%I$
This will give us a system of linear equations with a linear transformation as a Markov matrix characterizing the state of the human population:
$$
\begin{bmatrix}0.70 & 0.80 \\ 0.30 & 0.20\end{bmatrix} \begin{bmatrix}H_0 \\ I_0\end{bmatrix} = \begin{bmatrix}H_f \\ I_f\end{bmatrix}$$
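Before solving this with linear algebra, a small sketch (not part of the original exercise; the starting split is arbitrary) shows that repeatedly applying the Markov matrix drives any starting population toward the same equilibrium described by the eigenvector with eigenvalue 1:
```
import numpy as np

rate = np.array([[0.7, 0.8],
                 [0.3, 0.2]])
pop = np.array([0.5, 0.5])          # arbitrary starting split of healthy/infected
for _ in range(50):
    pop = rate @ pop                # advance the population one step at a time
print(pop)                          # approaches ~[0.727, 0.273]

eigvals, eigvecs = np.linalg.eig(rate)
steady = eigvecs[:, np.argmax(eigvals)]
print(steady / steady.sum())        # same equilibrium, read off the eigenvector
```
The cell below reaches the same equilibrium directly from the eigen-decomposition.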
```
init = np.eye(2) #represents initial population by 1:100
# print(init)
rate = np.array([
[0.7, 0.8],
[0.3, 0.2]
])
eigvals, equilibrium= np.linalg.eig(rate@init)
# print(equilibrium)
plot_tr_eig(init,rate,equilibrium, q1=True)
```
## Supplementary Activity
Try to implement your own function for solving eigenvalues.
### Conclusion
As a conclusion briefly explain the essence of eigenvectors and eigenvalues. Additionally, cite an example of using eigenvalues and eigenvectors in social sciences.
## MovieLens 1M Dataset
GroupLens Research provides a number of collections of movie ratings data collected from users of MovieLens in the late 1990s and early 2000s. The data provide movie ratings, movie metadata (genres and year), and demographic data about the users (age, zip code, gender identification, and occupation). Such data is often of interest in the development of recommendation systems based on machine learning algorithms.
The MovieLens 1M dataset contains 1 million ratings collected from 6,000 users on 4,000 movies. It’s spread across three tables: ratings, user information, and movie information. After extracting the data from the ZIP file, we can load each table into a pandas DataFrame object using pandas.read_table.
```
from numpy.random import randn
import numpy as np
import os
import matplotlib.pyplot as plt
import pandas as pd
np.set_printoptions(precision=4)
pd.options.display.max_rows = 20
import seaborn as sns
# Make display smaller
pd.options.display.max_rows = 10
unames = ['user_id', 'gender', 'age', 'occupation', 'zip']
users = pd.read_table('datasets/movielens/users.dat', sep='::',
header=None, names=unames, engine='python')
rnames = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table('datasets/movielens/ratings.dat', sep='::',
header=None, names=rnames, engine='python')
mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('datasets/movielens/movies.dat', sep='::',
header=None, names=mnames, engine='python')
users[:5]
ratings[:5]
movies[:5]
ratings
data = pd.merge(pd.merge(ratings, users), movies)
data
data.iloc[0]
mean_ratings = data.pivot_table('rating', index='title',
columns='gender', aggfunc='mean')
mean_ratings[:5]
ratings_by_title = data.groupby('title').size()
ratings_by_title[:10]
active_titles = ratings_by_title.index[ratings_by_title >= 250]
active_titles
# Select rows on the index
mean_ratings = mean_ratings.loc[active_titles]
mean_ratings
mean_ratings = mean_ratings.rename(index={'Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954)':
'Seven Samurai (Shichinin no samurai) (1954)'})
top_female_ratings = mean_ratings.sort_values(by='F', ascending=False)
top_female_ratings[:10]
```
### Measuring Rating Disagreement
```
mean_ratings['diff'] = mean_ratings['M'] - mean_ratings['F']
sorted_by_diff = mean_ratings.sort_values(by='diff')
sorted_by_diff[:10]
# Reverse order of rows, take first 10 rows
sorted_by_diff[::-1][:10]
# Standard deviation of rating grouped by title
rating_std_by_title = data.groupby('title')['rating'].std()
# Filter down to active_titles
rating_std_by_title = rating_std_by_title.loc[active_titles]
# Order Series by value in descending order
rating_std_by_title.sort_values(ascending=False)[:10]
```
```
import pandas as pd
import numpy as np
from datetime import datetime
%matplotlib inline
pd.set_option('display.max_rows', 500)
```

# Groupby apply on large (relational) data set
## Attention: all written functions assume a data frame where the date is sorted!
```
pd_JH_data=pd.read_csv('../data/processed/COVID_relational_confirmed.csv',sep=';',parse_dates=[0])
pd_JH_data=pd_JH_data.sort_values('date',ascending=True).reset_index(drop=True).copy()
pd_JH_data.head()
```
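Since every helper below relies on that sorting assumption, a quick sanity check can make it explicit (a sketch; it only uses the `date`, `state`, and `country` columns that appear later in this notebook):
```
# Verify that dates are monotonically increasing within each state/country group
sorted_ok = (pd_JH_data
             .groupby(['state', 'country'])['date']
             .apply(lambda s: s.is_monotonic_increasing)
             .all())
print('dates sorted within every group:', sorted_ok)
```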
# Test data
```
test_data=pd_JH_data[((pd_JH_data['country']=='US')|
(pd_JH_data['country']=='Germany'))&
(pd_JH_data['date']>'2020-03-20')]
test_data.head()
test_data.groupby(['country']).agg(np.max)
import numpy as np
from sklearn import linear_model
reg = linear_model.LinearRegression(fit_intercept=True)
def get_doubling_time_via_regression(in_array):
y = np.array(in_array)
X = np.arange(-1,2).reshape(-1, 1)
assert len(in_array)==3
reg.fit(X,y)
intercept=reg.intercept_
slope=reg.coef_
return intercept/slope
test_data.groupby(['state','country']).agg(np.max)
def rolling_reg(df_input,col='confirmed'):
days_back=3
result=df_input[col].rolling(
window=days_back,
min_periods=days_back).apply(get_doubling_time_via_regression,raw=False)
return result
test_data[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg,'confirmed')
pd_DR_result=pd_JH_data[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg,'confirmed').reset_index()
pd_DR_result=pd_DR_result.rename(columns={'confirmed':'confirmed_DR',
'level_2':'index'})
pd_DR_result.head()
pd_JH_data=pd_JH_data.reset_index()
pd_JH_data.head()
pd_result_larg=pd.merge(pd_JH_data,pd_DR_result[['index','confirmed_DR']],on=['index'],how='left')
pd_result_larg.head()
```
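For reference, the `intercept/slope` value returned by `get_doubling_time_via_regression` is a linear estimate of the doubling time $N/(dN/dt)$ over the 3-day window; a tiny check using the function defined above:
```
# If cases grow roughly linearly over the window, N / (dN/dt) = intercept / slope
# approximates how many days it would take the current count to double.
print(get_doubling_time_via_regression([100, 110, 121]))   # ~10.5 days
```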
# Filtering the data with groupby apply
```
from scipy import signal
def savgol_filter(df_input, column='confirmed', window=5, degree=1):
    # Savitzky-Golay smoothing; returns the input frame with an extra <column>_filtered column
    df_result = df_input
    filter_in = df_input[column].fillna(0)
    result = signal.savgol_filter(np.array(filter_in),
                                  window,   # window length of the smoothing filter
                                  degree)   # degree of the fitted polynomial
    df_result[column + '_filtered'] = result
    return df_result
pd_filtered_result=pd_JH_data[['state','country','confirmed']].groupby(['state','country']).apply(savgol_filter).reset_index()
pd_result_larg=pd.merge(pd_result_larg,pd_filtered_result[['index','confirmed_filtered']],on=['index'],how='left')
pd_result_larg.head()
```
# Filtered doubling rate
```
pd_filtered_doubling=pd_result_larg[['state','country','confirmed_filtered']].groupby(['state','country']).apply(rolling_reg,'confirmed_filtered').reset_index()
pd_filtered_doubling=pd_filtered_doubling.rename(columns={'confirmed_filtered':'confirmed_filtered_DR',
'level_2':'index'})
pd_filtered_doubling.tail()
pd_result_larg=pd.merge(pd_result_larg,pd_filtered_doubling[['index','confirmed_filtered_DR']],on=['index'],how='left')
pd_result_larg.tail()
mask=pd_result_larg['confirmed']>100
pd_result_larg['confirmed_filtered_DR']=pd_result_larg['confirmed_filtered_DR'].where(mask, other=np.NaN)
pd_result_larg[pd_result_larg['country']=='Germany'].tail()
pd_result_larg.to_csv('../data/processed/COVID_final_set.csv',sep=';',index=False)
```
___
# Ecommerce Purchases Exercise
In this Exercise you will be given some Fake Data about some purchases done through Amazon! Just go ahead and follow the directions and try your best to answer the questions and complete the tasks. Feel free to reference the solutions. Most of the tasks can be solved in different ways. For the most part, the questions get progressively harder.
Please excuse anything that doesn't make "Real-World" sense in the dataframe, all the data is fake and made-up.
Also note that all of these questions can be answered with one line of code.
____
** Import pandas and read in the Ecommerce Purchases csv file and set it to a DataFrame called ecom. **
```
import pandas as pd
ecom = pd.read_csv('Ecommerce Purchases')
```
**Check the head of the DataFrame.**
```
ecom.head()
```
** How many rows and columns are there? **
```
ecom.info()
```
** What is the average Purchase Price? **
```
ecom['Purchase Price'].mean()
```
** What were the highest and lowest purchase prices? **
```
ecom['Purchase Price'].max()
ecom['Purchase Price'].min()
```
** How many people have English 'en' as their Language of choice on the website? **
```
ecom[ecom['Language']=='en'].count()
```
** How many people have the job title of "Lawyer" ? **
```
ecom[ecom['Job'] == 'Lawyer'].info()
```
** How many people made the purchase during the AM and how many people made the purchase during PM ? **
**(Hint: Check out [value_counts()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) ) **
```
ecom['AM or PM'].value_counts()
```
** What are the 5 most common Job Titles? **
```
ecom['Job'].value_counts().head(5)
```
** Someone made a purchase that came from Lot: "90 WT" , what was the Purchase Price for this transaction? **
```
ecom[ecom['Lot']=='90 WT']['Purchase Price']
```
** What is the email of the person with the following Credit Card Number: 4926535242672853 **
```
ecom[ecom["Credit Card"] == 4926535242672853]['Email']
```
** How many people have American Express as their Credit Card Provider *and* made a purchase above $95 ?**
```
ecom[(ecom['CC Provider']=='American Express') & (ecom['Purchase Price']>95)].count()
```
** Hard: How many people have a credit card that expires in 2025? **
```
sum(ecom['CC Exp Date'].apply(lambda x: x[3:]) == '25')
```
** Hard: What are the top 5 most popular email providers/hosts (e.g. gmail.com, yahoo.com, etc...) **
```
ecom['Email'].apply(lambda x: x.split('@')[1]).value_counts().head(5)
```
# Great Job!
# Read data from weather mast at Haukeliseter site
## create temperature file for apriori guess
## calculate daily snowfall
```
import sys
sys.path.append('/Volumes/SANDISK128/Documents/Thesis/Python/')
import matplotlib.pyplot as plt
import matplotlib.dates as dates
import numpy as np
import csv
import pandas as pd
import datetime
from datetime import date
import calendar
import get_Haukeli_obs_data as obsDat
import createFolder as cF
import netCDF4
import calc_station_properties as cs
%matplotlib inline
txt_save_dir = '../../Data/sfc_temp/txt'
cF.createFolder(txt_save_dir)
nc_save_dir = '../../Data/sfc_temp/nc'
cF.createFolder(nc_save_dir)
Haukeli = pd.read_csv('../../Data/MetNo_obs/201612_Haukeliseter.txt',\
sep = ',',header=22, usecols = [0,1,2,3,4,6,7,8, 14], \
names = ['Date','TimeStamp', 'RA1', 'RA2', 'RA3', 'X2RA1', 'X2RA2', 'X2RA3', 'TA'])
dd = Haukeli['Date']
TimeStamp = Haukeli['TimeStamp'] # Time Stamp
RA1 = Haukeli['RA1'].astype(float) # total accumulation from Geonor inside DOUBLE FENCE [mm] RA1
RA2 = Haukeli['RA2'].astype(float)
RA3 = Haukeli['RA3'].astype(float)
X2RA1 = Haukeli['X2RA1'].astype(float) # Total accumulation from Geonor South, shielded with SINGLE FENCE [mm] X2RA1
X2RA2 = Haukeli['X2RA2'].astype(float)
X2RA3 = Haukeli['X2RA3'].astype(float)
TA = Haukeli['TA'].astype(float) # Air temperature, PT100 [deg C] TA
#wind45 = Hauke['3DFFL4'] # wind speed 4.5m @ mast 2 [m/s] 3DFFL4
### exclude missing values
RA1 = RA1.where(RA1 != -999.00)
RA2 = RA2.where(RA2 != -999.00)
RA3 = RA3.where(RA3 != -999.00)
X2RA1 = X2RA1.where(X2RA1 != -999.00)
X2RA2 = X2RA2.where(X2RA2 != -999.00)
X2RA3 = X2RA3.where(X2RA3 != -999.00)
TA = TA.where(TA != -999.00)
#wind45 = wind45.where(wind45 != -999.00)
# --------- connect values from double fence and calculate mean -------------------------------------------------------------------------
#
RA = pd.concat([RA1, RA2, RA3], axis = 1)
RA = np.nanmean(RA, axis = 1)
yr = []
mm = []
dy = []
tt = []
df1 = []
df2 = []
df3 = []
df = []
temp = []
for i in range(0,31):
idx = datetime.datetime.strptime(str(dd[i*1440]), '%Y%m%d')
yr.append(int(idx.year))
mm.append(int(idx.month))
dy.append(int(idx.day))
tt.append(TimeStamp[ i*1440 : (i+1)*1440 ])
df1.append(RA1[i*1440: (i+1)*1440])
df2.append(RA2[i*1440: (i+1)*1440])
df3.append(RA3[i*1440: (i+1)*1440])
df.append(RA[i*1440: (i+1)*1440])
temp.append(TA[i*1440: (i+1)*1440])
df1 = np.transpose(df1)
df2 = np.transpose(df2)
df3 = np.transpose(df3)
df = np.transpose(df)
temp = np.transpose(temp)
idx = []
t_minute = []
for k in range(0,60):
idx.append(k*100)
t_minute = idx
for h in range(1,24):
t_minute = np.concatenate([t_minute, np.asarray(idx)+h*10000])
fmt = '%1.f %2.5f'
#fmt = '%2.5f'
#fmt = '%1.1f'
for i in range(0,31):
if dy[i] < 10:
day = '0%s' %dy[i]
else:
day = str(dy[i])
filename = '%s/Haukeli_sfc_temp_%s%s%s.txt' %(txt_save_dir,yr[i],mm[i],day)
np.savetxt(filename, np.transpose([t_minute,temp[:,i]]), delimiter = ',', fmt = fmt, header = 'time, sfc_temp')
for i in range(0,31):
### write netCDF file including t_minute, temp
var = temp
mask = np.ma.getmaskarray(var[:,i])
var = np.ma.array(var[:,i], mask = mask, fill_value = -9999.)
var = var.filled()
if dy[i] < 10:
day = '0%s' %dy[i]
else:
day = str(dy[i])
f = netCDF4.Dataset('%s/Haukeli_sfc_temp_%s%s%s.nc' %(nc_save_dir,yr[i],mm[i],day), 'w')
### create dimensions
f.createDimension('time', t_minute.shape[0])
tid = f.createVariable('time', t_minute.dtype,'time',zlib = True)
tid[:] = t_minute[:]
dim = ('time', )
sfc_T = f.createVariable(varname='sfc_temp', datatype=var.dtype, dimensions=dim,
fill_value = -9999., zlib=True)
sfc_T[:] = var
f.close()
```
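The daily snowfall mentioned in the header is not written out in this cell; a minimal sketch of how it could be derived from the arrays above (assuming `df` holds the per-minute mean double-fence accumulation, one column per day):
```
# Rough daily snowfall estimate: accumulated precipitation at the end of each day
# minus the accumulation at the start of that day, skipping missing minutes.
daily_snowfall = []
for col in df.T:
    valid = col[~np.isnan(col)]
    daily_snowfall.append(valid[-1] - valid[0] if valid.size else np.nan)
print(np.round(daily_snowfall, 2))
```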
# VacationPy
```
#Import gmaps, pandas, os, requests, and json and config gkey
import requests
import json
import os
import pandas as pd
import gmaps
from configure import gkey
gmaps.configure(api_key=gkey)
#Import and display list of cities saved in Part I
cities_list=pd.read_csv("weather_data.csv")
cities_list
# Create latitude and longitude in locations
locations = cities_list[["Latitude", "Longitude"]]
humidity = cities_list["Humidity %"].astype(float)
max_humidity=humidity.max()
# Plot heatmap
figure_layout = {
'width': '800px',
'height': '600px',
'border': '1px solid black',
'padding': '1px',
'margin': '0 auto 0 auto'}
fig = gmaps.figure(layout=figure_layout, zoom_level=1,center=(15,25))
# Plot heat layer
heat_layer = gmaps.heatmap_layer(
locations,
weights=humidity,
dissipating=False,
max_intensity=max_humidity,
point_radius=3)
# Add heat layer
fig.add_layer(heat_layer)
# Display figure
fig
```
[heat_map_humidity.png]
```
#Select data with perfect weather
perfect_weather_df=cities_list.loc[(cities_list["Temperature (F)"] > 70) & (cities_list["Temperature (F)"] < 80) & (cities_list["Cloudiness %"] == 0) & (cities_list["Wind Speed (mph)"] <15),:]
perfect_weather_df
#Create column shell to add hotel names
hotel_df=pd.DataFrame()
#Create column shell to add hotel names
hotel_df["Hotel Name"] = ""
#Set parameters
params = {
"radius": 5000,
"types": "lodging",
"keyword": "Hotel",
"key": gkey
}
# Loop through data to pull latitude and longitude
for index, row in perfect_weather_df.iterrows():
lat = row["Latitude"]
lng = row["Longitude"]
# Change location each iteration while leaving original params in place
params["location"] = f"{lat},{lng}"
# Search for hotels based on latitude and longitude
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
# Request data
name_address = requests.get(base_url, params=params)
# Convert to json
name_address = name_address.json()
# Add try and except to incorporate missing data
try:
perfect_weather_df.loc[index, "Hotel Name"] = name_address["results"][0]["name"]
except (KeyError, IndexError):
print("Missing field/result... skipping.")
perfect_weather_df
# Add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City Name}</dd>
<dt>Country</dt><dd>{Country Code}</dd>
</dl>
"""
# Store the DataFrame Row
hotel_location = [info_box_template.format(**row) for index, row in perfect_weather_df.iterrows()]
locations = perfect_weather_df[["Latitude", "Longitude"]]
figure_layout = {
'width': '900px',
'height': '600px',
'border': '1px solid black',
'padding': '1px',
'margin': '0 auto 0 auto'
}
fig = gmaps.figure(layout=figure_layout,zoom_level=1,center=(15,25))
# Create hotel symbol to add to map
hotel_with_layer = gmaps.marker_layer(
locations,info_box_content=[info_box_template.format(**row) for index, row in perfect_weather_df.iterrows()]
)
# Add layers to map
fig.add_layer(heat_layer)
fig.add_layer(hotel_with_layer)
fig
```
[perfect_weater_hotels.png]
**[Machine Learning Course Home Page](https://www.kaggle.com/learn/machine-learning)**
---
This exercise will test your ability to read a data file and understand statistics about the data.
In later exercises, you will apply techniques to filter the data, build a machine learning model, and iteratively improve your model.
The course examples use data from Melbourne. To ensure you can apply these techniques on your own, you will have to apply them to a new dataset (with house prices from Iowa).
The exercises use a "notebook" coding environment. In case you are unfamiliar with notebooks, we have a [90-second intro video](https://www.youtube.com/watch?v=4C2qMnaIKL4).
# Exercises
Run the following cell to set up code-checking, which will verify your work as you go.
```
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex2 import *
print("Setup Complete")
```
## Step 1: Loading Data
Read the Iowa data file into a Pandas DataFrame called `home_data`.
```
import pandas as pd
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
# Fill in the line below to read the file into a variable home_data
home_data = pd.read_csv(iowa_file_path)
# Call line below with no argument to check that you've loaded the data correctly
step_1.check()
# Lines below will give you a hint or solution code
#step_1.hint()
#step_1.solution()
```
## Step 2: Review The Data
Use the command you learned to view summary statistics of the data. Then fill in variables to answer the following questions
```
# Print summary statistics in next line
home_data.describe()
# What is the average lot size (rounded to nearest integer)?
avg_lot_size = 10517
# As of today, how old is the newest home (current year - the date in which it was built)
newest_home_age = 11
# Checks your answers
step_2.check()
#step_2.hint()
#step_2.solution()
```
## Think About Your Data
The newest house in your data isn't that new. A few potential explanations for this:
1. They haven't built new houses where this data was collected.
1. The data was collected a long time ago. Houses built after the data publication wouldn't show up.
If the reason is explanation #1 above, does that affect your trust in the model you build with this data? What about if it is reason #2?
How could you dig into the data to see which explanation is more plausible?
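One way to start digging (a sketch; it assumes the `YearBuilt` and `YrSold` columns present in this course's Ames data file):
```
# Compare when houses were built with when sales were recorded. If no house was
# built after the last recorded sale year, explanation #2 looks more plausible.
print(home_data['YearBuilt'].max())
print(home_data['YrSold'].value_counts().sort_index())
```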
Check out this **[discussion thread](https://www.kaggle.com/learn-forum/60581)** to see what others think or to add your ideas.
# Keep Going
You are ready for **[Your First Machine Learning Model](https://www.kaggle.com/dansbecker/your-first-machine-learning-model).**
---
**[Machine Learning Course Home Page](https://www.kaggle.com/learn/machine-learning)**
```
# Author: Naveen Lalwani
# Script to distill knowledge from LeNet-300-100 trained on CIFAR-10 to student model
import tensorflow as tf
import numpy as np
import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers, models
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.constraints import max_norm
from tensorflow.keras.models import Model
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import RMSprop, SGD, Adam
from keras.utils import np_utils
import time
num_classes = 10
n_input = 3072
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# Enabling One Hot Encoding
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)
# Changing input image datatype to float
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Normalizing data
x_train /= 255
x_test /= 255
x_train = x_train.reshape([50000, 3072])
x_test = x_test.reshape([10000, 3072])
# Teacher Model: LeNet-300-100
def lenet_300_100_model():
inputs = layers.Input(shape = (3072,))
x = layers.Dense(300, activation='relu', name='FC1')(inputs)
x = layers.Dense(100, activation='relu', name='FC2')(x)
x = layers.Dense(10, name='logits')(x)
preds = layers.Activation('softmax', name='Softmax')(x)
model = Model(inputs=inputs, outputs=preds)
model.summary()
return model
```
# **Build Model LeNet-300-100**
```
model = lenet_300_100_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size = 512)
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test Loss:", test_loss)
print("Test Accuracy:", test_acc)
getSoftmaxKnowledge = Model(inputs=model.input, outputs=model.get_layer("logits").output)
model_logits = getSoftmaxKnowledge.predict(x_train)
# Defining function described by Geoffrey Hinton in his paper of Knowledge Distillation
def softmax_with_temperature(logits, temperature):
    logits = logits / temperature
    # normalize along the class axis so each example gets its own softened distribution
    return np.exp(logits) / np.sum(np.exp(logits), axis=-1, keepdims=True)
# Temperature is a hyperparameter
temperature = 2
softened_train_prob = softmax_with_temperature(model_logits, temperature)
# Model Definition for the Student Model
def build_small_model():
inputs = layers.Input(shape = (3072,))
x = layers.Dense(50, activation='relu', name='FC1')(inputs)
x = layers.Dense(10, name='logits')(x)
preds = layers.Activation('softmax', name='Softmax')(x)
model = Model(inputs=inputs, outputs=preds)
model.summary()
return model
small_model = build_small_model()
```
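As a quick illustration (not part of the training flow), raising the temperature spreads probability mass onto the non-target classes, which is exactly the "dark knowledge" the student learns from. A small sketch with made-up logits:
```
# Higher T -> softer distribution -> more information about the relative
# similarity of the wrong classes.
example_logits = np.array([[8.0, 2.0, 0.5]])
for T in (1, 2, 5):
    print(T, np.round(softmax_with_temperature(example_logits, T), 3))
```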
# **Distilling Knowledge in the student model**
```
# Optimization = Adam
# Loss = Cross Entropy loss
# Epochs = 50
# Trained with dark knowledge
small_model.compile(optimizer='adam', loss= 'categorical_crossentropy', metrics=['categorical_accuracy'])
small_model.fit(x_train, softened_train_prob, epochs=50, batch_size=128)
test_loss, test_acc = small_model.evaluate(x_test, y_test)
print("Test Loss:", test_loss)
print("Test Accuracy:", test_acc)
small_model.save('model_50_LeNet-300-100_Distilled_CIFAR-10.h5')
model.save('model_LeNet-300-100_CIFAR10.h5')
```

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/CONTEXTUAL_SPELL_CHECKER.ipynb)
# **Spell checking for clinical documents**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
## 1. Colab Setup
```
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
```
Install dependencies
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
```
Import dependencies into Python and start the Spark session
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
```
## 2. Select the spell checker model and construct the pipeline
```
document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
tokenizer = RecursiveTokenizer() \
.setInputCols(['document']) \
.setOutputCol('token') \
.setPrefixes(["\"", "(", "[", "\n"]) \
.setSuffixes([".", ",", "?", ")","!", "‘s"])
spell_model = ContextSpellCheckerModel.pretrained('spellcheck_clinical', 'en', 'clinical/models') \
.setInputCols('token') \
.setOutputCol('corrected')
finisher = Finisher().setInputCols('corrected')
light_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
spell_model,
finisher
])
full_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
spell_model
])
empty_df = spark.createDataFrame([[""]]).toDF('text')
pipeline_model = full_pipeline.fit(empty_df)
light_pipeline_model = LightPipeline(light_pipeline.fit(empty_df))
```
## 3. Create example inputs
```
# Enter examples as strings in this array
input_list = [
"The pateint is a 5-mont-old infnt who presented initially on Monday with a cold, cugh, and runny nse for 2 days. Mom states she had no fevr. Her appetite was good but she was spitting up a lot. She had no difficulty breathin and her cough was described as dry and hacky. At that time, pysicl exam showed a right TM, which was red. Left TM was okay. She was fairly congsted but looked happy and playful. She was started on Amxil and Aldx and we told to recheck in 2 weaks to recheck her ear. Mom returned to clinic again today because she got much worse ovrnght. She was having dificlty breathing. She was much more congested and her apetit had decrsed significantly today. She also spked a tempratre yesterday of 102.6 and always hvng trouble sleping scondry to congestion."
]
```
## 4. Use the pipeline to create outputs
Full Pipeline
```
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = pipeline_model.transform(df)
```
Light Pipeline
```
# Light pipelines use plain string inputs instead of data frame inputs
light_result = light_pipeline_model.annotate(input_list[0])
```
## 5. Visualize results
Visualize comparison as dataframe
```
exploded = F.explode(F.arrays_zip('token.result', 'corrected.result'))
select_expression_0 = F.expr("cols['0']").alias("original")
select_expression_1 = F.expr("cols['1']").alias("corrected")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
```
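To focus only on the tokens the model actually changed, the same zipped columns can be filtered (a sketch reusing the `result` DataFrame and the aliased columns from the cell above):
```
# Keep only rows where the corrected token differs from the original token
result.select(exploded.alias("cols")) \
    .select(select_expression_0, select_expression_1) \
    .where("original != corrected") \
    .show(truncate=False)
```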
Visualize light pipeline and finished result
```
# This finished result does not need parsing and can directly be used in any
# other task
light_result['corrected']
```
# Using PULP
```
from pulp import *
#pulp.pulpTestAll()
# Create the model for the problem
prob = LpProblem("MaxProfit",LpMaximize)
# The 2 variables x1, x2 have lower limits of 150 and 100 respectively
x1=LpVariable("x1",150, None)
x2=LpVariable("x2",100, None)
# The objective function
prob += 3500.0 *x1 + 5000.0 *x2, "Objective"
# The three constraints are
prob += 20.0*x1 + 40.0*x2 <= 12000.0, "Constraint 1"
prob += 25.0*x1 + 10.0*x2 <= 10000.0, "Constraint 2"
prob += 45.0*x1 + 50.0*x2 >= 14000.0, "Constraint 3"
prob += x1 <= 500.0, "Constraint 4"
prob += x2 <= 200.0, "Constraint 5"
# Write the problem data to an .lp file
prob.writeLP("MaxProfit.lp")
```
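Restating the model above in mathematical form (nothing new, just the code written as an LP):
$$
\begin{aligned}
\max\quad & 3500\,x_1 + 5000\,x_2 \\
\text{s.t.}\quad & 20x_1 + 40x_2 \le 12000 \\
& 25x_1 + 10x_2 \le 10000 \\
& 45x_1 + 50x_2 \ge 14000 \\
& 150 \le x_1 \le 500,\quad 100 \le x_2 \le 200
\end{aligned}
$$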
## Simple Solve
```
# Solve the optimization problem using the specified Solver
# solve the problem
#status = prob.solve(GLPK(msg=0))
#print(LpStatus[status])
# print the resulting values of x1 and x2
#print(value(x1))
#print(value(x2))
```
## Solve with Sensitivity Report (GLPK Solver)
```
status=prob.solve(GLPK(options=["--ranges","MaxProfit.sen"]))
#glpsol --cpxlp MaxProfit.lp --ranges Maxprofit.sen
print(LpStatus[status])
print(value(x1))
print(value(x2))
# Print the status of the solution
print("Status:", LpStatus[prob.status])
# Print each of the variables with its resolved optimum value
for v in prob.variables():
print(v.name, "=", v.varValue)
# Print the optimised value of the objective function
print("Objective", value(prob.objective))
```
## Display sensitivity Report
```
%load Maxprofit.sen
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 1
Problem:
Objective: Objective = 1850000 (MAXimum)
No. Row name St Activity Slack Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 Constraint_1 NU 12000.00000 . -Inf 11200.00000 -112.50000 1.76e+06 x2
112.50000 12000.00000 28000.00000 +Inf 3.65e+06 x1
2 Constraint_2 NU 10000.00000 . -Inf 6000.00000 -50.00000 1.65e+06 x1
50.00000 10000.00000 11000.00000 +Inf 1.9e+06 x2
3 Constraint_3 BS 22000.00000 -8000.00000 14000.00000 18000.00000 -50.00000 750000.00000 Constraint_2
. +Inf 22000.00000 +Inf +Inf
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 2
Problem:
Objective: Objective = 1850000 (MAXimum)
No. Column name St Activity Obj coef Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 x1 BS 350.00000 3500.00000 150.00000 -50.00000 2500.00000 1.5e+06 Constraint_2
. +Inf 360.00000 12500.00000 5e+06 Constraint_1
2 x2 BS 125.00000 5000.00000 100.00000 -125.00000 1400.00000 1.4e+06 Constraint_1
. +Inf 225.00000 7000.00000 2.1e+06 Constraint_2
End of report
```
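The shadow prices and reduced costs in the report can also be read straight from the PuLP objects (a minimal sketch; the dual values are only populated when the solver reports them, as GLPK does for LPs):
```
# Constraint duals (shadow prices) and slacks
for name, constraint in prob.constraints.items():
    print(name, 'shadow price =', constraint.pi, 'slack =', constraint.slack)
# Variable reduced costs
for v in prob.variables():
    print(v.name, 'reduced cost =', v.dj)
```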
# Solve as equations
```
# initialize the model
manviMaxProfit = LpProblem("ManviMaxProfit", LpMaximize)
#List of decision variables
vehicles = ['car', 'truck']
# create a dictionary of pulp variables with keys from ingredients
# the default lower bound is -inf
x = pulp.LpVariable.dict('x_%s', vehicles, lowBound = 0)
# Objective function
profit = [3500.0, 5000.0]
cost = dict(zip(vehicles, profit))
manviMaxProfit += sum([cost[i] * x[i] for i in vehicles]), "Objective"
# Constraints
const1 = dict(zip(vehicles, [20, 40]))
const2 = dict(zip(vehicles, [25, 10]))
const3 = dict(zip(vehicles, [45, 50]))
manviMaxProfit += sum([const1[i] * x[i] for i in vehicles]) <= 12000, "Constraint 1"
manviMaxProfit += sum([const2[i] * x[i] for i in vehicles]) <= 10000, "Constraint 2"
manviMaxProfit += sum([const3[i] * x[i] for i in vehicles]) >= 14000, "Constraint 3"
mincnt = dict(zip(vehicles, [150, 100]))
for i in vehicles:
manviMaxProfit += x[i] >= mincnt[i]
maxcnt = dict(zip(vehicles, [500, 200]))
for i in vehicles:
manviMaxProfit += x[i] <= maxcnt[i]
manviMaxProfit.writeLP("manviMaxProfit.lp")
status=manviMaxProfit.solve(GLPK(options=["--ranges","manviMaxProfit.sen"]))
print(status)
#print the result
for vehicle in vehicles:
print(' {} :: {} ::'.format(vehicle,
x[vehicle].value()))
print("Objective", value(manviMaxProfit.objective))
# %load manviMaxProfit.sen
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 1
Problem:
Objective: Objective = 1850000 (MAXimum)
No. Row name St Activity Slack Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 Constraint_1 NU 12000.00000 . -Inf 11200.00000 -112.50000 1.76e+06 _C2
112.50000 12000.00000 14400.00000 +Inf 2.12e+06 _C4
2 Constraint_2 NU 10000.00000 . -Inf 7000.00000 -50.00000 1.7e+06 _C4
50.00000 10000.00000 11000.00000 +Inf 1.9e+06 _C2
3 Constraint_3 BS 22000.00000 -8000.00000 14000.00000 19000.00000 -50.00000 750000.00000 Constraint_2
. +Inf 22000.00000 +Inf +Inf
4 _C1 BS 350.00000 -200.00000 150.00000 200.00000 -1000.00000 1.5e+06 Constraint_2
. +Inf 360.00000 9000.00000 5e+06 Constraint_1
5 _C2 BS 125.00000 -25.00000 100.00000 . -3600.00000 1.4e+06 Constraint_1
. +Inf 200.00000 2000.00000 2.1e+06 Constraint_2
6 _C3 BS 350.00000 150.00000 -Inf 200.00000 -1000.00000 1.5e+06 Constraint_2
. 500.00000 360.00000 9000.00000 5e+06 Constraint_1
7 _C4 BS 125.00000 75.00000 -Inf 100.00000 -3600.00000 1.4e+06 Constraint_1
. 200.00000 225.00000 2000.00000 2.1e+06 Constraint_2
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 2
Problem:
Objective: Objective = 1850000 (MAXimum)
No. Column name St Activity Obj coef Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 x_car BS 350.00000 3500.00000 . 200.00000 2500.00000 1.5e+06 Constraint_2
. +Inf 360.00000 12500.00000 5e+06 Constraint_1
2 x_truck BS 125.00000 5000.00000 . 100.00000 1400.00000 1.4e+06 Constraint_1
. +Inf 200.00000 7000.00000 2.1e+06 Constraint_2
End of report
```
#### Solve with Gurobi (Pending)
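This back-end is still pending. As a placeholder, here is a minimal sketch of handing the same `manviMaxProfit` model to Gurobi through PuLP's bundled `GUROBI_CMD` wrapper; it assumes the Gurobi command-line tools are installed and licensed on this machine, which this notebook does not verify.
```
# Minimal sketch only: assumes the Gurobi executable is installed and licensed.
# GUROBI_CMD ships with PuLP; solve() raises if the solver cannot be found.
status_grb = manviMaxProfit.solve(GUROBI_CMD(msg=True))
print(LpStatus[status_grb], "objective =", value(manviMaxProfit.objective))
```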
#### Solve with CPLEX (Pending)
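Also pending. The sketch below is the CPLEX equivalent via PuLP's `CPLEX_CMD` wrapper, again assuming the CPLEX interactive optimizer is installed and on the PATH.
```
# Minimal sketch only: assumes the CPLEX executable is installed and licensed.
status_cpx = manviMaxProfit.solve(CPLEX_CMD(msg=True))
print(LpStatus[status_cpx], "objective =", value(manviMaxProfit.objective))
```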
## New Product (Van) introduced to the mix
```
# initialize the model
manviMaxProfit = LpProblem("ManviMaxProfit", LpMaximize)
#List of decision variables
vehicles = ['car', 'truck', 'van']
# create a dictionary of pulp variables keyed by the vehicle names
# (the default lower bound is -inf, so we set lowBound = 0 explicitly)
x = pulp.LpVariable.dict('x_%s', vehicles, lowBound = 0)
# Objective function
profit = [3500.0, 5000.0, 4000.0]
cost = dict(zip(vehicles, profit))
manviMaxProfit += sum([cost[i] * x[i] for i in vehicles]), "Objective"
# Constraints
const1 = dict(zip(vehicles, [20, 40, 30]))
const2 = dict(zip(vehicles, [25, 10, 20]))
const3 = dict(zip(vehicles, [45, 50, 50]))
manviMaxProfit += sum([const1[i] * x[i] for i in vehicles]) <= 12000, "Constraint 1"
manviMaxProfit += sum([const2[i] * x[i] for i in vehicles]) <= 10000, "Constraint 2"
manviMaxProfit += sum([const3[i] * x[i] for i in vehicles]) >= 14000, "Constraint 3"
mincnt = dict(zip(vehicles, [150, 100, 0]))
for i in vehicles:
    manviMaxProfit += x[i] >= mincnt[i]
maxcnt = dict(zip(vehicles, [500, 200, 1000]))
for i in vehicles:
    manviMaxProfit += x[i] <= maxcnt[i]
manviMaxProfit.writeLP("manviMaxProfit2.lp")
status = manviMaxProfit.solve(GLPK(options=["--ranges", "manviMaxProfit2.sen"]))
print(LpStatus[status])
#print the result
for vehicle in vehicles:
    print(' {} :: {} ::'.format(vehicle, x[vehicle].value()))
print("Objective", value(manviMaxProfit.objective))
# %load manviMaxProfit2.sen
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 1
Problem:
Objective: Objective = 1850000 (MAXimum)
No. Row name St Activity Slack Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 Constraint_1 NU 12000.00000 . -Inf 11200.00000 -112.50000 1.76e+06 _C2
112.50000 12000.00000 14400.00000 +Inf 2.12e+06 _C5
2 Constraint_2 NU 10000.00000 . -Inf 7000.00000 -50.00000 1.7e+06 _C5
50.00000 10000.00000 11000.00000 +Inf 1.9e+06 _C2
3 Constraint_3 BS 22000.00000 -8000.00000 14000.00000 19000.00000 -50.00000 750000.00000 Constraint_2
. +Inf 22000.00000 +Inf +Inf
4 _C1 BS 350.00000 -200.00000 150.00000 314.28571 -600.00000 1.64e+06 x_van
. +Inf 360.00000 9000.00000 5e+06 Constraint_1
5 _C2 BS 125.00000 -25.00000 100.00000 . -857.14286 1.74286e+06 x_van
. +Inf 200.00000 2000.00000 2.1e+06 Constraint_2
6 _C3 BS . . . . -Inf 1.85e+06
. +Inf 57.14286 375.00000 1.85e+06 x_van
7 _C4 BS 350.00000 150.00000 -Inf 314.28571 -600.00000 1.64e+06 x_van
. 500.00000 360.00000 9000.00000 5e+06 Constraint_1
8 _C5 BS 125.00000 75.00000 -Inf 100.00000 -857.14286 1.74286e+06 x_van
. 200.00000 225.00000 2000.00000 2.1e+06 Constraint_2
9 _C6 BS . 1000.00000 -Inf . -Inf 1.85e+06
. 1000.00000 57.14286 375.00000 1.85e+06 x_van
GLPK 4.65 - SENSITIVITY ANALYSIS REPORT Page 2
Problem:
Objective: Objective = 1850000 (MAXimum)
No. Column name St Activity Obj coef Lower bound Activity Obj coef Obj value at Limiting
Marginal Upper bound range range break point variable
------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------
1 x_car BS 350.00000 3500.00000 . 314.28571 2900.00000 1.64e+06 x_van
. +Inf 360.00000 12500.00000 5e+06 Constraint_1
2 x_truck BS 125.00000 5000.00000 . 100.00000 4142.85714 1.74286e+06 x_van
. +Inf 200.00000 7000.00000 2.1e+06 Constraint_2
3 x_van NL . 4000.00000 . . -Inf 1.85e+06 _C3
-375.00000 +Inf 57.14286 4375.00000 1.82857e+06 _C2
End of report
```
# Meraki Python SDK Demo: Offline Switch Finder
*This notebook demonstrates using the Meraki Python SDK to create a list of all the switches in a network that are offline.*
---
>NB: Throughout this notebook, we will print values for demonstration purposes. In a production Python script, the coder would likely remove these print statements to clean up the console output.
In this first cell, we install the required packages, import the modules we need (meraki, os, tablib, and json), and open the Dashboard API connection using the SDK. Make sure you have [set up your environment variables first](https://github.com/meraki/dashboard-api-python/blob/master/notebooks/README.md#setting-up-your-environment-variables).
```
# Install the relevant modules. If you are using a local editor (e.g. VS Code, rather than Colab) you can run these commands, without the preceding %, via a terminal. NB: Run `pip install meraki==` to find the latest version of the Meraki SDK.
%pip install meraki
%pip install tablib
# If you are using Google Colab, please ensure you have set up your environment variables as linked above, then delete the two lines of ''' to activate the following code:
'''
%pip install colab-env -qU
import colab_env
'''
# Rely on meraki SDK, os, and tablib -- more on tablib later
import meraki
import os
import tablib
# We're also going to import Python's built-in JSON module, but only to make the console output pretty. In production, you wouldn't need any of the printing calls at all, nor this import!
import json
# Setting API key this way, and storing it in the env variables, lets us keep the sensitive API key out of the script itself
# The meraki.DashboardAPI() method does not require explicitly passing this value; it will check the environment for a variable
# called 'MERAKI_DASHBOARD_API_KEY' on its own. In this case, API_KEY is shown simply as a reference to where that information is
# stored.
API_KEY = os.getenv('MERAKI_DASHBOARD_API_KEY')
# Initialize the Dashboard connection.
dashboard = meraki.DashboardAPI()
```
Let's make a basic pretty print formatter, `printj()`. It will make reading the JSON later a lot easier, but won't be necessary in production scripts.
```
def printj(ugly_json_object):
    # The json.dumps() method converts a JSON object into human-friendly formatted text
    pretty_json_string = json.dumps(ugly_json_object, indent = 2, sort_keys = False)
    return print(pretty_json_string)
```
Most API calls require passing values for the organization ID and/or the network ID. In this second cell, we fetch a list of the organizations the API key can access, then pick the first org in the list, and the first network in that organization, to use for later operations. You can re-use this code as-is provided your API key only has access to a single organization, and that organization only contains a single network. Otherwise, you would want to inspect the organizations object declared and printed here and choose the appropriate entry.
```
# Let's make it easier to call this data later
# getOrganizations will return all orgs to which the supplied API key has access
organizations = dashboard.organizations.getOrganizations()
print('Organizations:')
printj(organizations)
# This example presumes we want to use the first organization as the scope for later operations.
firstOrganizationId = organizations[0]['id']
firstOrganizationName = organizations[0]['name']
# Print a blank line for legibility before showing the firstOrganizationId
print('')
print(f'The firstOrganizationId is {firstOrganizationId}, and its name is {firstOrganizationName}.')
```
This example presumes we want to use the first network of the chosen organization as the scope for later operations. It is fine to re-use as-is provided your organization only contains a single network. Otherwise, you would want to inspect the networks object declared and printed below and choose the appropriate entry.
```
networks = dashboard.organizations.getOrganizationNetworks(organizationId=firstOrganizationId)
print('Networks:')
printj(networks)
firstNetworkId = networks[0]['id']
firstNetworkName = networks[0]['name']
# Print a blank line for legibility before showing the firstNetworkId
print('')
print(f'The firstNetworkId is {firstNetworkId}, and its name is {firstNetworkName}.')
```
Now that we've got the organization and network values figured out, we can get to the ask at hand:
> Show me a list of all the offline switches in a network.
The `getOrganizationDevicesStatuses` endpoint will return the devices (switches and otherwise) and their statuses, but it will not return their model numbers. To get that info, we use `getOrganizationDevices`.
So first, we create the `devices` list. Then we create a list of those devices' statuses in `device_statuses`. Then, we use a [list comprehension](https://www.datacamp.com/community/tutorials/python-list-comprehension) to find all instances of switches in the `devices` list and put them in a new list, `devices_switches`.
```
devices = dashboard.organizations.getOrganizationDevices(organizationId=firstOrganizationId)
devices_statuses = dashboard.organizations.getOrganizationDevicesStatuses(organizationId=firstOrganizationId)
# Let's get the switch devices list
devices_switches = [i for i in devices if 'MS' in i['model']]
print('These are the switches:')
printj(devices_switches)
# Make a new list of all the serials from devices_switches
devices_switches_serials = [i['serial'] for i in devices_switches]
```
Great! Now we have a list of all the switches (`devices_switches`), a separate list of just their serials (`devices_switches_serials`), and the status of every device in `devices_statuses`, but we only want the switches that are offline. So here, we'll use more list comprehensions to narrow things down and create a new list with only the information we need.
```
# Now we can list out the statuses (and whatever metadata we need) for these switches
print('We can list out the statuses (and whatever metadata we need) for these switches:')
devices_statuses_switches = [{'serial': i['serial'], 'status':i['status']} for i in devices_statuses if i['serial'] in devices_switches_serials]
print('Switch statuses are:')
printj(devices_statuses_switches)
# Print a blank line for legibility
print('')
# We can narrow it down to the ones that are offline
print('We can narrow it down to the ones that are offline:')
devices_statuses_switches_offline = [i for i in devices_statuses_switches if i['status'] != 'online']
print('Offline switches are:')
printj(devices_statuses_switches_offline)
```
Starting from the devices list and the device statuses list, we created a list of offline switches. Now let's look at when each switch last reported to the Dashboard.
```
# Another list comprehension!
devices_last_reported_times = [{'serial': i['serial'], 'lastReportedAt': i['lastReportedAt']} for i in devices_statuses if i['serial'] in devices_switches_serials]
printj(devices_last_reported_times)
```
# Final steps
Excellent, now we have a list of offline switches, and it's pretty easy to read. But what if we want this data in a format we can open in Excel? Well, there's a Python module for that, too. In this case, we're using [tablib](https://pypi.org/project/tablib/).
```
# Let's convert that JSON-formatted data to a tabular dataset. You can copy/paste this into Excel, or write additional Python to create a new Excel file entirely!
excel_formatted = tablib.import_set(json.dumps(devices_last_reported_times), format = 'json')
# Let's see how it looks!
print(excel_formatted)
```
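If you would rather produce an actual workbook than copy/paste, tablib can also export the dataset directly to Excel. This is a minimal sketch, assuming the xlsx extra is installed (e.g. `%pip install "tablib[xlsx]"`); the output filename `offline_switches.xlsx` is just an illustrative choice.
```
# Minimal sketch: write the tablib dataset out as a real Excel workbook.
# Assumes the xlsx dependency is installed, e.g. %pip install "tablib[xlsx]"
with open('offline_switches.xlsx', 'wb') as output_file:
    output_file.write(excel_formatted.export('xlsx'))
print('Wrote offline_switches.xlsx')
```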
# Final thoughts
And we're done! Hopefully you found this a useful demonstration of just a few things that are possible with Meraki's Python SDK. These additional resources may prove useful along the way.
[Meraki Interactive API Docs](https://developer.cisco.com/meraki/api-v1/#!overview): The official (and interactive!) Meraki API and SDK documentation repository on DevNet.
[VS Code](https://code.visualstudio.com/): An excellent code editor with full support for Python and Python notebooks.
[Automate the Boring Stuff with Python](https://automatetheboringstuff.com/): An excellent learning resource that puts the real-world problem first, then teaches you the Pythonic solution along the way.
```
from google.colab import drive
drive.mount('/content/drive')
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'plane', 'car', 'bird'}
background_classes = {'cat', 'deer', 'dog', 'frog', 'horse','ship', 'truck'}
fg1,fg2,fg3 = 0,1,2
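# Split CIFAR-10 into 3 foreground classes (plane, car, bird) and 7 background classes.
# The loop below sorts the 50,000 training images into these two pools; they are later
# combined into 3x3 "mosaic" images that contain exactly one foreground patch.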
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
    images, labels = next(dataiter)  # built-in next(); DataLoader iterators no longer provide .next()
    for j in range(batch_size):
        if classes[labels[j]] in background_classes:
            img = images[j].tolist()
            background_data.append(img)
            background_label.append(labels[j])
        else:
            img = images[j].tolist()
            foreground_data.append(img)
            foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx, fg_idx, fg):
    """
    bg_idx : list of indexes into background_data[] to be used as background images in the mosaic
    fg_idx : index of the image in foreground_data to be used as the foreground image
    fg : position/index (0-8) at which the foreground image is placed in the mosaic
    """
    image_list = []
    j = 0
    for i in range(9):
        if i != fg:
            image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
            j += 1
        else:
            image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
    label = foreground_label[fg_idx] - fg1  # subtract fg1 so the foreground classes map to labels 0, 1, 2
    image_list = torch.stack(image_list)
    return image_list, label
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx = []  # position (0 to 8) at which the foreground image is placed in each mosaic
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
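# The pools above hold roughly 35,000 background and 15,000 foreground images
# (7 vs 3 of the 10 CIFAR-10 classes), which is why bg_idx is sampled from
# [0, 35000) and fg_idx from [0, 15000) below.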
for i in range(desired_num):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
class MosaicDataset(Dataset):
    """Dataset of mosaic images.
    Args:
        mosaic_list_of_images: list of mosaics, each stored as a stack of 9 images.
        mosaic_label: label of each mosaic (the foreground class it contains).
        fore_idx: position (0-8) of the foreground image within each mosaic.
    """
    def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
        self.mosaic = mosaic_list_of_images
        self.label = mosaic_label
        self.fore_idx = fore_idx
    def __len__(self):
        return len(self.label)
    def __getitem__(self, idx):
        return self.mosaic[idx], self.label[idx], self.fore_idx[idx]
batch = 125
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
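# Focus ("where") network: a shared CNN (helper) scores each of the 9 patches, the scores
# are softmaxed into attention weights (alphas), and the forward pass returns the weights
# together with the attention-weighted average image that is fed to the classifier.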
class Focus(nn.Module):
def __init__(self):
super(Focus, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=0)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=0)
self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=0)
self.conv4 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv5 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv6 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.batch_norm1 = nn.BatchNorm2d(32)
self.batch_norm2 = nn.BatchNorm2d(128)
self.dropout1 = nn.Dropout2d(p=0.05)
self.dropout2 = nn.Dropout2d(p=0.1)
self.fc1 = nn.Linear(128,64)
self.fc2 = nn.Linear(64, 32)
self.fc3 = nn.Linear(32, 10)
self.fc4 = nn.Linear(10, 1)
def forward(self,z): #y is avg image #z batch of list of 9 images
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
y = y.to("cuda")
x = x.to("cuda")
for i in range(9):
x[:,i] = self.helper(z[:,i])[:,0]
x = F.softmax(x,dim=1)
x1 = x[:,0]
torch.mul(x1[:,None,None,None],z[:,0])
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
return x, y
def helper(self, x):
x = self.conv1(x)
x = F.relu(self.batch_norm1(x))
x = (F.relu(self.conv2(x)))
x = self.pool(x)
x = self.conv3(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv4(x)))
x = self.pool(x)
x = self.dropout1(x)
x = self.conv5(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv6(x)))
x = self.pool(x)
x = x.view(x.size(0), -1)
x = self.dropout2(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.dropout2(x)
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
focus_net = Focus().double()
focus_net = focus_net.to("cuda")
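# Classification ("what") network: a 6-layer CNN that classifies the weighted-average
# image into one of the 3 foreground classes. Its weights are loaded from a pretrained
# checkpoint below and are fine-tuned jointly with the focus network during training.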
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=0)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=0)
self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=0)
self.conv4 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv5 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv6 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.batch_norm1 = nn.BatchNorm2d(32)
self.batch_norm2 = nn.BatchNorm2d(128)
self.dropout1 = nn.Dropout2d(p=0.05)
self.dropout2 = nn.Dropout2d(p=0.1)
self.fc1 = nn.Linear(128,64)
self.fc2 = nn.Linear(64, 32)
self.fc3 = nn.Linear(32, 10)
self.fc4 = nn.Linear(10, 3)
def forward(self,x):
x = self.conv1(x)
x = F.relu(self.batch_norm1(x))
x = (F.relu(self.conv2(x)))
x = self.pool(x)
x = self.conv3(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv4(x)))
x = self.pool(x)
x = self.dropout1(x)
x = self.conv5(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv6(x)))
x = self.pool(x)
x = x.view(x.size(0), -1)
x = self.dropout2(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.dropout2(x)
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
classify = Classification().double()
classify = classify.to("cuda")
classify.load_state_dict( torch.load("/content/drive/My Drive/Research/Cheating_data/Classify_net_weights/classify_net_6layer_cnn.pt"))
for params in classify.parameters():
print(params.requires_grad)
break;
for params in classify.parameters():
print(params)
break;
test_images = []  # list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(10000):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.SGD(classify.parameters(), lr=0.01, momentum=0.9)
optimizer_focus = optim.SGD(focus_net.parameters(), lr=0.01, momentum=0.9)
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
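# Bookkeeping lists: col1 stores the logged epoch number, col2-col7 the train-set counts
# and col8-col13 the test-set counts of argmax(alpha) >= 0.5 / < 0.5 and of the four
# focus/prediction outcomes (FTPT, FFPT, FTPF, FFPF).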
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
col1.append(0)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
nos_epochs = 300
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
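# Joint training loop: focus_net and classify are both updated by SGD on the classifier's
# cross-entropy loss. FTPT/FFPT/FTPF/FFPF statistics are accumulated and logged every 5th
# epoch, and training stops early once the mean logged loss of an epoch drops to 0.03 or below.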
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
epoch_loss = []
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
# zero the parameter gradients
optimizer_focus.zero_grad()
optimizer_classify.zero_grad()
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_focus.step()
optimizer_classify.step()
running_loss += loss.item()
mini = 60
if cnt % mini == mini-1: # print every 40 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
if epoch % 5 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
if(np.mean(epoch_loss) <= 0.03):
break;
if epoch % 5 == 0:
col1.append(epoch+1)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
#************************************************************************
#testing data set
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
print('Finished Training')
for params in classify.parameters():
print(params.requires_grad)
break
for params in classify.parameters():
print(params)
break;
name = "10_focus_random_classify_pretrained_train_both"
torch.save(classify.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn/"+name+".pt")
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
# plt.figure(12,12)
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.show()
df_test
# plt.figure(12,12)
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.show()
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
```
|
github_jupyter
|
from google.colab import drive
drive.mount('/content/drive')
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'plane', 'car', 'bird'}
background_classes = {'cat', 'deer', 'dog', 'frog', 'horse','ship', 'truck'}
fg1,fg2,fg3 = 0,1,2
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
images, labels = dataiter.next()
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
label = foreground_label[fg_idx]- fg1 # minus 7 because our fore ground classes are 7,8,9 but we have to store it as 0,1,2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 125
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Focus(nn.Module):
def __init__(self):
super(Focus, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=0)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=0)
self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=0)
self.conv4 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv5 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv6 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.batch_norm1 = nn.BatchNorm2d(32)
self.batch_norm2 = nn.BatchNorm2d(128)
self.dropout1 = nn.Dropout2d(p=0.05)
self.dropout2 = nn.Dropout2d(p=0.1)
self.fc1 = nn.Linear(128,64)
self.fc2 = nn.Linear(64, 32)
self.fc3 = nn.Linear(32, 10)
self.fc4 = nn.Linear(10, 1)
def forward(self,z): #y is avg image #z batch of list of 9 images
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
y = y.to("cuda")
x = x.to("cuda")
for i in range(9):
x[:,i] = self.helper(z[:,i])[:,0]
x = F.softmax(x,dim=1)
x1 = x[:,0]
torch.mul(x1[:,None,None,None],z[:,0])
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
return x, y
def helper(self, x):
x = self.conv1(x)
x = F.relu(self.batch_norm1(x))
x = (F.relu(self.conv2(x)))
x = self.pool(x)
x = self.conv3(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv4(x)))
x = self.pool(x)
x = self.dropout1(x)
x = self.conv5(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv6(x)))
x = self.pool(x)
x = x.view(x.size(0), -1)
x = self.dropout2(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.dropout2(x)
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
focus_net = Focus().double()
focus_net = focus_net.to("cuda")
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=0)
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=0)
self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=0)
self.conv4 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv5 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=0)
self.conv6 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.batch_norm1 = nn.BatchNorm2d(32)
self.batch_norm2 = nn.BatchNorm2d(128)
self.dropout1 = nn.Dropout2d(p=0.05)
self.dropout2 = nn.Dropout2d(p=0.1)
self.fc1 = nn.Linear(128,64)
self.fc2 = nn.Linear(64, 32)
self.fc3 = nn.Linear(32, 10)
self.fc4 = nn.Linear(10, 3)
def forward(self,x):
x = self.conv1(x)
x = F.relu(self.batch_norm1(x))
x = (F.relu(self.conv2(x)))
x = self.pool(x)
x = self.conv3(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv4(x)))
x = self.pool(x)
x = self.dropout1(x)
x = self.conv5(x)
x = F.relu(self.batch_norm2(x))
x = (F.relu(self.conv6(x)))
x = self.pool(x)
x = x.view(x.size(0), -1)
x = self.dropout2(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.dropout2(x)
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
classify = Classification().double()
classify = classify.to("cuda")
classify.load_state_dict( torch.load("/content/drive/My Drive/Research/Cheating_data/Classify_net_weights/classify_net_6layer_cnn.pt"))
for params in classify.parameters():
print(params.requires_grad)
break;
for params in classify.parameters():
print(params)
break;
test_images =[] #list of mosaic images, each mosaic image is saved as laist of 9 images
fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(10000):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.SGD(classify.parameters(), lr=0.01, momentum=0.9)
optimizer_focus = optim.SGD(focus_net.parameters(), lr=0.01, momentum=0.9)
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
col1.append(0)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
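# Same baseline evaluation on the 10000 test mosaics before any joint training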
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
nos_epochs = 300
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
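# Jointly train focus_net and classify with SGD; FTPT/FFPT/FTPF/FFPF statistics are collected and logged every 5 epochs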
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
epoch_loss = []
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
# zero the parameter gradients
optimizer_focus.zero_grad()
optimizer_classify.zero_grad()
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_focus.step()
optimizer_classify.step()
running_loss += loss.item()
mini = 60
if cnt % mini == mini-1:          # print the running loss every `mini` (= 60) mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
if epoch % 5 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
if(np.mean(epoch_loss) <= 0.03):
break;
if epoch % 5 == 0:
col1.append(epoch+1)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
#************************************************************************
#testing data set
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
print('Finished Training')
for params in classify.parameters():
print(params.requires_grad)
break
for params in classify.parameters():
print(params)
break;
name = "10_focus_random_classify_pretrained_train_both"
torch.save(classify.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn/"+name+".pt")
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
# plt.figure(12,12)
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.show()
df_test
# plt.figure(12,12)
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.show()
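# Post-training evaluation of the focus/prediction statistics on the full training set (the test set follows)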
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
# Decision Tree classification
We will use the Decision Tree classification algorithm to build a model from historical data of patients and their responses to different medications.
Then we will use the trained decision tree to predict the class of an unknown patient, i.e. to find a proper drug for a new patient.
We have data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of 5 medications: Drug A, Drug B, Drug C, Drug X, and Drug Y.
We want to build a model to find out which drug might be appropriate for a future patient with the same illness.
<h4>Table of contents</h4>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li><a href="#ref1">Decision Tree with Python</a></li>
<li><a href="#ref2">Decision Tree with Pyspark</a></li>
</ol>
</div>
<br>
<a id="ref1"></a>
### Decision Tree classification with Python
```
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
my_data = pd.read_csv("drug200.csv", delimiter=",")
my_data[0:5]
```
Using <b>my_data</b> as the drug200.csv data read by pandas, declare the following variables: <br>
<ul>
<li> <b> X </b> as the <b> Feature Matrix </b> (data of my_data) </li>
<li> <b> y </b> as the <b> response vector (target) </b> </li>
</ul>
```
X = my_data[['Age', 'Sex', 'BP', 'Cholesterol', 'Na_to_K']].values
X[0:5]
from sklearn import preprocessing
le_sex = preprocessing.LabelEncoder()
le_sex.fit(['F','M'])
X[:,1] = le_sex.transform(X[:,1])
le_BP = preprocessing.LabelEncoder()
le_BP.fit([ 'LOW', 'NORMAL', 'HIGH'])
X[:,2] = le_BP.transform(X[:,2])
le_Chol = preprocessing.LabelEncoder()
le_Chol.fit([ 'NORMAL', 'HIGH'])
X[:,3] = le_Chol.transform(X[:,3])
X[0:5]
y = my_data["Drug"]
y[0:5]
```
We will be using a train/test split on our decision tree. Let's import train_test_split from sklearn.model_selection.
```
from sklearn.model_selection import train_test_split
X_trainset, X_testset, y_trainset, y_testset = train_test_split(X, y, test_size=0.3, random_state=3)
```
We will first create an instance of the DecisionTreeClassifier called drugTree.
Inside the classifier, we specify criterion="entropy" so that we can see the information gain at each node.
```
drugTree = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
drugTree # it shows the default parameters
drugTree.fit(X_trainset,y_trainset)
```
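To actually look at the splits the fitted tree learned, scikit-learn can render the tree as plain text. A minimal sketch, assuming scikit-learn ≥ 0.21 (where the `export_text` helper was introduced):
``` python
from sklearn.tree import export_text

# Column order matches the feature matrix X built above
feature_names = ['Age', 'Sex', 'BP', 'Cholesterol', 'Na_to_K']
print(export_text(drugTree, feature_names=feature_names))
```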
<div id="prediction">
<h2>Prediction</h2>
Let's make some <b>predictions</b> on the testing dataset and store it into a variable called <b>predTree</b>.
</div>
```
predTree = drugTree.predict(X_testset)
print (predTree [0:5])
print (y_testset [0:5])
```
<hr>
<div id="evaluation">
<h2>Evaluation</h2>
Next, let's import <b>metrics</b> from sklearn and check the accuracy of our model.
</div>
```
from sklearn import metrics
import matplotlib.pyplot as plt
print("DecisionTrees's Accuracy: ", metrics.accuracy_score(y_testset, predTree))
```
__Accuracy classification score__ computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
In multilabel classification, the function returns the subset accuracy: if the entire set of predicted labels for a sample strictly matches the true set of labels, the score for that sample is 1.0; otherwise it is 0.0.
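As a quick toy illustration of this score (these labels are not from the drug dataset):
``` python
from sklearn import metrics

# 2 of the 4 predicted labels match the true labels exactly -> accuracy 0.5
print(metrics.accuracy_score(['drugA', 'drugB', 'drugX', 'drugY'],
                             ['drugA', 'drugC', 'drugX', 'drugC']))
```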
<a id="ref2"></a>
### Decision Tree classification with Pyspark
```
import findspark
findspark.init()
#Tree methods Example
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('treecode').getOrCreate()
```
<h3 id="understanding_data">Understanding the Data</h2>
```
data = spark.read.csv('drug200.csv',inferSchema=True,header=True)
data.printSchema()
```
The features of this dataset are the Age, Sex, Blood Pressure, Cholesterol, and Na_to_K ratio of the patients, and the target is the drug that each patient responded to.
It is a multi-class classification problem: we will use the training part of the dataset to build a decision tree, and then use it to predict the class of an unknown patient, i.e. to prescribe a drug to a new patient.
```
data.head()
```
### Spark Formatting of Data
```
# A few things we need to do before Spark can accept the data!
# It needs to be in the form of two columns
# ("label","features")
# Import VectorAssembler and Vectors
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
data.columns
data.show()
from pyspark.ml import Pipeline
from pyspark.ml.feature import IndexToString, StringIndexer
data.show()
```
As you may have noticed, some features in this dataset are categorical, such as __Sex__ or __BP__.
Spark's Decision Trees cannot consume string-valued categorical variables directly, but we can convert these features to numerical values.
```
data.columns
```
We can apply StringIndexer to several columns in a PySpark Dataframe
```
indexers = [StringIndexer(inputCol=column, outputCol=column+"_index").fit(data) for column in list(set(data.columns)-set(['Drug','Na_to_K','Age'])) ]
pipeline = Pipeline(stages=indexers)
df_r = pipeline.fit(data).transform(data)
df_r.show()
assembler = VectorAssembler(
inputCols=['Age',
'Sex_index',
'BP_index',
'Cholesterol_index',
'Na_to_K'],
outputCol="features")
output = assembler.transform(df_r)
```
Now we index the target variable, Drug, in the same way to deal with its string type.
```
from pyspark.ml.feature import StringIndexer
indexer = StringIndexer(inputCol="Drug", outputCol="DrugIndex")
output_fixed = indexer.fit(output).transform(output)
final_data = output_fixed.select("features",'DrugIndex')
train_data,test_data = final_data.randomSplit([0.7,0.3])
train_data.show()
```
### The Classifiers
```
from pyspark.ml.classification import DecisionTreeClassifier,GBTClassifier,RandomForestClassifier
from pyspark.ml import Pipeline
```
Create two models:
* A single decision tree
* A random forest
We will compare how well each model predicts the drug class from the features assembled above.
```
# Use mostly defaults to make this comparison "fair"
dtc = DecisionTreeClassifier(labelCol='DrugIndex',featuresCol='features')
rfc = RandomForestClassifier(labelCol='DrugIndex',featuresCol='features')
#A gradient boosted tree classifier
#gbt = GBTClassifier(labelCol='DrugIndex',featuresCol='features')
```
Train models:
```
# Train the models (two models here, since the GBT classifier is commented out; it might take some time)
dtc_model = dtc.fit(train_data)
rfc_model = rfc.fit(train_data)
#gbt_model = gbt.fit(train_data)
```
## Model Comparison
Let's compare each of these models!
```
dtc_predictions = dtc_model.transform(test_data)
rfc_predictions = rfc_model.transform(test_data)
#gbt_predictions = gbt_model.transform(test_data)
```
**Evaluation Metrics:**
```
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# Select (prediction, true label) and compute test error
acc_evaluator = MulticlassClassificationEvaluator(labelCol="DrugIndex", predictionCol="prediction", metricName="accuracy")
dtc_acc = acc_evaluator.evaluate(dtc_predictions)
rfc_acc = acc_evaluator.evaluate(rfc_predictions)
#gbt_acc = acc_evaluator.evaluate(gbt_predictions)
print("Here are the results!")
print('-'*80)
print('A single decision tree had an accuracy of: {0:2.2f}%'.format(dtc_acc*100))
print('-'*80)
print('A random forest ensemble had an accuracy of: {0:2.2f}%'.format(rfc_acc*100))
#print('-'*80)
#print('A ensemble using GBT had an accuracy of: {0:2.2f}%'.format(gbt_acc*100))
```
# How to access the data
[](https://mybinder.org/v2/gh/remifan/LabPtPTm2/HEAD?filepath=examples%2Fbasics.ipynb)
[](https://colab.research.google.com/github/remifan/LabPtPTm2/blob/master/examples/basics.ipynb)
(Binder is better for the interactive content of this notebook)
```
import labptptm2
```
## Convenience API
### Data selection
The typical data selection looks like this
```
dat_grp, sup_grp = labptptm2.select(1, 0, 4, 2)
```
the 4 input arguments of `select` identify each collected data file:
- arg#1: int, random source sequence identifier, which can be either 1 or 2
- arg#2: int, launched power in dBm unit, which must be a member of [-5, -4, -3, -2, -1, 0, 1, 2, 3]
- arg#3: int, channel index, which is a member of [1, 2, 3, 4, 5, 6, 7]
- arg#4: int, index of scope captures under the same link configuration, a member of [1, 2, 3]
`select` returns 2 objects, a `list` of data groups and a `list` of supplementary data groups.
Each data group contains received samples (resampled to 2 samples/symbol), synchronized sent symbols and attributes; each supplementary data group contains auxiliary information such as the estimated frequency offset and chromatic dispersion.
Multi-selection is supported by passing a list of specifications:
``` python
dat_grp, sup_grp = labptptm2.select(1, [0, 1], [4, 7], 2)
```
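A hedged sketch of iterating over such a multi-selection (assuming one group is returned per parameter combination):
``` python
dat_grps, sup_grps = labptptm2.select(1, [0, 1], [4, 7], 2)
for dat, sup in zip(dat_grps, sup_grps):
    print(dict(dat.attrs))        # experiment metadata for this combination
    recv = dat['recv'][:10000]    # slicing triggers a (partial) download
```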
Back to the single-specification call above: since we only passed one specification, the returned list has only 1 group:
```
dat_grp[0].info
```
The data group contains 2 arrays named 'recv' and 'sent',
and we can inspect their shapes and number of bytes
```
dat_grp[0]['recv'].info # you don't have to understand all of those information.
```
The attributes object is a `dict` that contains information about the experiment
```
dict(dat_grp[0].attrs)
```
Similarly, you can call `info` on the supplementary data group
```
sup_grp[0].info
dict(sup_grp[0].attrs)
```
### Data downloading
So far, no actual data has been downloaded. We can trigger the download by slicing the arrays
```
%time dat_grp[0]['recv'][:100000] # download the first 100K samples
%time dat_grp[0]['recv'][:] # download the all 4.5M samples
```
Downloading the full waveform takes longer because the amount of data to download is determined by the slicing window. This feature is handy when running a small demo in an ad-hoc environment.
### Local cache
We often re-use the same data many times, for example when re-running the code above after restarting the Notebook's kernel
```
%time dat_grp[0]['recv'][:] # download the all 4.5M samples
```
The same code that took seconds on the first call now finishes almost instantly. This is because the data was cached locally on the first call, so further calls only load data from the local cache.
```
labptptm2.config.cache_storage # this is the cache location for this Notebook environment
import os
os.listdir(labptptm2.config.cache_storage) # these cached files are not human-readable:)
```
The default cache location is the OS's temporary folder, which gets cleared each time the OS restarts. You can set `labptptm2.config.cache_storage` to another path to enable a permanent data cache:
``` python
labptptm2.config.cache_storage = os.getcwd() # current working directory
labptptm2.config.cache_storage = 'D:/data' # Windows
labptptm2.config.cache_storage = '/home/spongebob/data' # *nix
labptptm2.config.cache_storage = '/Users/spongebob/data' # MacOS
```
## Configuration
there are a few options to customize:
```
labptptm2.config
```
- `store`: target store from which data is queried if it is not locally cached; `None` means use the remote store
- `remote`: address of the remote store, which is not supposed to be changed
- `cache_storage`: the location of the local cache; change this to another path to enable a permanent cache.
To persist the cache location from the Local cache example above without hardcoding `labptptm2.config.cache_storage = xxx`, you may use a configuration file in YAML format. You can dump the default settings to create one first.
```
labptptm2.config.dump() # dump to current working directory by default
with open('labptptm2.yaml', 'r') as f:
print(f.read())
```
Now you can edit this config file; `labptptm2.config` will automatically load its content if the file is found on the initial import.
## Remote storage and direct access
Now let's take a look at the remote storage
```
root = labptptm2.open_group()
root.info
```
There are 3 groups; let's see the info of the '1125km_SSMF' group
```
root['1125km_SSMF'].info
```
By repeatedly entering deeper groups, you end up at the data's metadata that we have seen above
```
root['1125km_SSMF/src1/-1dBm_ch1_1/recv'].info
```
Such `root[{path string}]` access shows that the data is organized in a directory-like (so-called 'Group') structure. (Don't use the Windows-style `\` separator.)
You can interactively navigate it! (needs Kernel, [not working in Colab](https://github.com/QuantStack/ipytree/issues/32#issuecomment-647098636))
```
root.tree() # click + to expand that directory and - to fold it, no heavy data download
```
you could get this interactive view

```
root['1125km_SSMF'].tree() # sub-groups also have tree function
```
similarly,

Still, data is only downloaded when it gets sliced
```
data = root['1125km_SSMF/src1/-1dBm_ch1_1/recv'][:10000] # slicer [:] would download the whole file
```
The convenience data API introduced at the beginning is an extra layer built on top of such direct access.
## Clone the entire remote store
Though we suggest using the remote store with the local cache layer for on-demand data queries, we also provide a single function to conveniently clone the entire data store.
The entire data store is around 27 GB, so clone it only if you have good bandwidth.
``` python
import sys
labptptm2.clone_store('./labptptm2', log=sys.stdout) # copy the entire store to ./labptptm2 !
# once the local store is ready, you need to update labptptm2.config.store
labptptm2.config.store = './labptptm2'
# it is suggested to use configuration file
```
Finally, with a local store there is no more data downloading and caching; data will be loaded directly from the local store from now on.
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///hawaii.sqlite")
# reflect an existing database into a new model
# declare Base using automap
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# Display the row's columns and data in dictionary format-Measurement
first_row = session.query(Measurement).first()
first_row.__dict__
# Display the row's columns and data in dictionary format-Station
first_row = session.query(Station).first()
first_row.__dict__
```
# Exploratory Precipitation Analysis
```
# Find the most recent date in the data set.
measurement_date = session.query(Measurement).order_by(Measurement.date.desc()).first()
measurement_date.date
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# 1 Calculate the date one year from the last date in data set.
# 2 Perform a query to retrieve the data and precipitation scores
# 3 Save the query results as a Pandas DataFrame and set the index to the date column
# 4 Sort the dataframe by date
# 5 Use Pandas Plotting with Matplotlib to plot the data
# 1 Calculate the date one year from the last date in data set.
one_year_ago = dt.date(2017,8,23) - dt.timedelta(days=365)
one_year_ago
# 2 Perform a query to retrieve the data and precipitation scores
#SELECT prcp FROM Measurement(day 2 stu plotting)
precip_data = session.query(Measurement.date, Measurement.prcp).all()
# 3 Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(precip_data[:], columns=['date', 'prcp'])
df.set_index('date', inplace=True, )
df.head()
# 4 Sort the dataframe by date
df.sort_values(by='date',inplace=True)
df.head()
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
#total daily precip all stations
daily_precip = df.groupby(['date']).sum()
daily_precip.head()
# 5 Use Pandas Plotting with Matplotlib to plot the data
df.plot(title = 'Precipitation by Date ',figsize=(20,10), rot=90)
plt.ylabel('Precipitation (inches)')
plt.show()
```
# Exploratory Station Analysis
```
# Design a query to calculate the total number stations in the dataset
stations = session.query(Station.station).count()
stations
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
# chinook day 3 example
session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
session.query(Measurement.station,
func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.round(func.avg(Measurement.tobs),2)).\
filter(Measurement.station=='USC00519281').all()
temps = session.query(Measurement.date, Measurement.tobs).\
filter(Measurement.station=='USC00519281').all()
temps
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temperature = session.query(Measurement.tobs).filter(Measurement.station=='USC00519281').\
filter(Measurement.date>=one_year_ago).all()
#check for temp var
#temperature
temperature = np.ravel(temperature)
plt.hist(temperature, color = 'green', bins =12)
plt.title('Station USC00519281 Temp Readings')
plt.ylabel('Frequency')
plt.xlabel('Temp (F)')
plt.show()
```
# Close session
```
# Bonus query: min/avg/max temperature for all dates on or after a start date
start = one_year_ago  # reuse the date computed above as an example start date
data = session.query(func.min(Measurement.tobs),func.avg(Measurement.tobs),func.max(Measurement.tobs)).\
filter(Measurement.date >= start).all()
data
# Close Session
session.close()
```
# Long-Form Question Answering
[](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial12_LFQA.ipynb)
### Prepare environment
#### Colab: Enable the GPU runtime
Make sure you enable the GPU runtime to experience decent speed in this tutorial.
**Runtime -> Change Runtime type -> Hardware accelerator -> GPU**
<img src="https://raw.githubusercontent.com/deepset-ai/haystack/master/docs/_src/img/colab_gpu_runtime.jpg">
```
# Make sure you have a GPU running
!nvidia-smi
# Install the latest release of Haystack in your own environment
#! pip install farm-haystack
# Install the latest master of Haystack
!pip install --upgrade pip
!pip install -q git+https://github.com/deepset-ai/haystack.git#egg=farm-haystack[colab,faiss]
from haystack.utils import convert_files_to_dicts, fetch_archive_from_http, clean_wiki_text
from haystack.nodes import Seq2SeqGenerator
```
### Document Store
FAISS is a library for efficient similarity search and clustering of dense vectors.
The `FAISSDocumentStore` uses a SQL database (SQLite in-memory by default) under the hood
to store the document text and other metadata. The vector embeddings of the text are
indexed on a FAISS index that is later queried to search for answers.
The default flavour of the FAISSDocumentStore is "Flat", but it can also be set to "HNSW" for
faster search at the expense of some accuracy. Just set the faiss_index_factory_str argument in the constructor.
For more info on which index suits your use case, see: https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
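For instance, an HNSW-flavoured store could be created like this (shown only as a sketch; the rest of this tutorial keeps the default "Flat" index):
``` python
from haystack.document_stores import FAISSDocumentStore

document_store = FAISSDocumentStore(embedding_dim=128, faiss_index_factory_str="HNSW")
```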
```
from haystack.document_stores import FAISSDocumentStore
document_store = FAISSDocumentStore(embedding_dim=128, faiss_index_factory_str="Flat")
```
### Cleaning & indexing documents
Similarly to the previous tutorials, we download, convert and index some Game of Thrones articles to our DocumentStore
```
# Let's first get some files that we want to use
doc_dir = "data/article_txt_got"
s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/wiki_gameofthrones_txt.zip"
fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
# Convert files to dicts
dicts = convert_files_to_dicts(dir_path=doc_dir, clean_func=clean_wiki_text, split_paragraphs=True)
# Now, let's write the dicts containing documents to our DB.
document_store.write_documents(dicts)
```
### Initialize Retriever and Reader/Generator
#### Retriever
We use a `DensePassageRetriever` and we invoke `update_embeddings` to index the embeddings of documents in the `FAISSDocumentStore`
```
from haystack.nodes import DensePassageRetriever
retriever = DensePassageRetriever(
document_store=document_store,
query_embedding_model="vblagoje/dpr-question_encoder-single-lfqa-wiki",
passage_embedding_model="vblagoje/dpr-ctx_encoder-single-lfqa-wiki",
)
document_store.update_embeddings(retriever)
```
Before we blindly use the `DensePassageRetriever`, let's empirically test it to make sure a simple search indeed finds the relevant documents.
```
from haystack.utils import print_documents
from haystack.pipelines import DocumentSearchPipeline
p_retrieval = DocumentSearchPipeline(retriever)
res = p_retrieval.run(query="Tell me something about Arya Stark?", params={"Retriever": {"top_k": 10}})
print_documents(res, max_text_len=512)
```
#### Reader/Generator
Similar to the previous tutorials, we now initialize our reader/generator.
Here we use a `Seq2SeqGenerator` with the *vblagoje/bart_lfqa* model (see: https://huggingface.co/vblagoje/bart_lfqa)
```
generator = Seq2SeqGenerator(model_name_or_path="vblagoje/bart_lfqa")
```
### Pipeline
With a Haystack `Pipeline` you can stick together your building blocks to a search pipeline.
Under the hood, `Pipelines` are Directed Acyclic Graphs (DAGs) that you can easily customize for your own use cases.
To speed things up, Haystack also comes with a few predefined Pipelines. One of them is the `GenerativeQAPipeline` that combines a retriever and a reader/generator to answer our questions.
You can learn more about `Pipelines` in the [docs](https://haystack.deepset.ai/docs/latest/pipelinesmd).
```
from haystack.pipelines import GenerativeQAPipeline
pipe = GenerativeQAPipeline(generator, retriever)
```
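For reference, `GenerativeQAPipeline` is roughly equivalent to wiring the DAG yourself; a sketch, assuming Haystack's generic `Pipeline`/`add_node` API:
``` python
from haystack.pipelines import Pipeline

custom_pipe = Pipeline()
custom_pipe.add_node(component=retriever, name="Retriever", inputs=["Query"])
custom_pipe.add_node(component=generator, name="Generator", inputs=["Retriever"])
# custom_pipe.run(query="...", params={"Retriever": {"top_k": 3}})
```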
## Voilà! Ask a question!
```
pipe.run(
query="How did Arya Stark's character get portrayed in a television adaptation?", params={"Retriever": {"top_k": 3}}
)
pipe.run(query="Why is Arya Stark an unusual character?", params={"Retriever": {"top_k": 3}})
```
## About us
This [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our other work:
- [German BERT](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](https://www.deepset.ai/jobs)
```
import re
from collections import Counter
import string
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import spacy
from spacy.tokenizer import Tokenizer
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA
import squarify
import spacy
nlp = spacy.load("en_core_web_lg")
from bs4 import BeautifulSoup
import requests
df = pd.read_csv('get_follower_data.csv')
print(df.shape)
df.head()
df['text'] = df['text'].apply(lambda x: x[2:-1].replace('\\n', ' ').replace('\n\n', ' ').replace('\n', ' '))
# df['text'] = [BeautifulSoup(text).get_text() for text in df['text'] ]
df.head()
tokenizer = Tokenizer(nlp.vocab)
STOP_WORDS = nlp.Defaults.stop_words.union([' '])
spanish_stop_words = ['de', 'e', 'la', 'le', 'que', 'y', 'el', 'un', 'r', 'en', 'w', "it's", "it’s", 'u', "i'm"]
spanish_stop_words
tokens = []
"""create tokens without stop,, punct, pronouns or extended stop words"""
for doc in tokenizer.pipe(df['text'], batch_size=500):
doc_tokens = []
for token in doc:
if ((token.is_stop == False) and (token.is_punct == False) and (token.pos_!= 'PRON') and token.text.lower() not in STOP_WORDS and token.text.lower() not in spanish_stop_words):
doc_tokens.append(token.text.lower())
tokens.append(doc_tokens)
df['tokens'] = tokens
def count(docs):
word_counts = Counter()
appears_in = Counter()
total_docs = len(docs)
for doc in docs:
word_counts.update(doc)
appears_in.update(set(doc))
temp = zip(word_counts.keys(), word_counts.values())
wc = pd.DataFrame(temp, columns = ['word', 'count'])
wc['rank'] = wc['count'].rank(method='first', ascending=False)
total = wc['count'].sum()
wc['pct_total'] = wc['count'].apply(lambda x: x / total)
wc = wc.sort_values(by='rank')
wc['cul_pct_total'] = wc['pct_total'].cumsum()
t2 = zip(appears_in.keys(), appears_in.values())
ac = pd.DataFrame(t2, columns=['word', 'appears_in'])
wc = ac.merge(wc, on='word')
wc['appears_in_pct'] = wc['appears_in'].apply(lambda x: x / total_docs)
return wc.sort_values(by='rank')
wc = count(df['tokens'])
wc.head(20)
wc_top20 = wc[wc['rank'] <= 20]
squarify.plot(sizes=wc_top20['pct_total'], label=wc_top20['word'], alpha=.7 )
plt.axis('off')
plt.show()
```
## Word count per tweet
```
def tokenize(document):
doc = nlp(document)
return [token.lemma_.strip() for token in doc if (token.is_stop != True) and (token.is_punct != True)]
# Tuning parameters
# Instantiate vectorizer object
tfidf = TfidfVectorizer(stop_words='english',
ngram_range=(1,2),
max_df=.97,
min_df=3,
tokenizer=tokenize)
# Create a vocabulary and get word counts per document
dtm = tfidf.fit_transform(df['text']) # Similiar to fit_predict
# Print word counts
# Get feature names to use as dataframe column headers
dtm = pd.DataFrame(dtm.todense(), columns=tfidf.get_feature_names())
# View Feature Matrix as DataFrame
dtm.head()
doc_len = [len(doc) for doc in df['text']]
sns.distplot(doc_len); # Visualizing the character length of each tweet.
```
## Sentiment Analysis
```
from sklearn.model_selection import train_test_split
df = pd.read_csv('get_follower_data.csv')
train, test = train_test_split(df.text, train_size = 0.5)
train.shape, test.shape
import re
REPLACE_NO_SPACE = re.compile("[.;:!\'?,\"()\[\]]")
REPLACE_WITH_SPACE = re.compile("(<br\s*/><br\s*/>)|(\-)|(\/)")
def preprocess_reviews(reviews):
reviews = [REPLACE_NO_SPACE.sub("", line.lower()) for line in reviews]
reviews = [REPLACE_WITH_SPACE.sub(" ", line) for line in reviews]
return reviews
reviews_train_clean = preprocess_reviews(train)
reviews_test_clean = preprocess_reviews(test)
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(binary=True)
cv.fit(reviews_train_clean)
X = cv.transform(reviews_train_clean)
X_test = cv.transform(reviews_test_clean)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
target = [1 if i < 4080 else 0 for i in range(8161)]
X_train, X_val, y_train, y_val = train_test_split(
X, target, train_size = 0.75
)
for c in [0.01, 0.05, 0.25, 0.5, 1]:
lr = LogisticRegression(C=c)
lr.fit(X_train, y_train)
print ("Accuracy for C=%s: %s"
% (c, accuracy_score(y_val, lr.predict(X_val))))
final_model = LogisticRegression(C=0.05)
final_model.fit(X, target)
final_model = LogisticRegression(C=0.05)
final_model.fit(X, target)
print ("Final Accuracy: %s"
% accuracy_score(target, final_model.predict(X_test)))
feature_to_coef = {
word: coef for word, coef in zip(
cv.get_feature_names(), final_model.coef_[0]
)
}
print('Positive')
for best_positive in sorted(
feature_to_coef.items(),
key=lambda x: x[1],
reverse=True)[:5]:
print (best_positive)
print('Negative')
for best_negative in sorted(
feature_to_coef.items(),
key=lambda x: x[1])[:5]:
print (best_negative)
```
| 0.480479 | 0.42316 |
# VencoPy Tutorial 1
This tutorial showcases the general structure and workflow of VencoPy, as well as some basic features of its 4 main classes:
- DataParser
- TripDiaryBuilder
- GridModeler
- FlexEstimator
All tutorials run on a very small subset of data from the 2017 German national travel survey (Mobilität in Deutschland, MiD17), which might result in profiles having uncommon shapes. As such, the calculations and examples proposed throughout the tutorials merely aim to exemplify the modelling steps and guide the user through the structure of VencoPy; they do not aim at providing an accurate quantification of demand-side flexibility from EVs.
For a more detailed description of VencoPy, you can refer to https://www.mdpi.com/1996-1073/14/14/4349/htm
## Setting up the working space
This section imports all required Python packages for data input and manipulation. Appending the repository root to sys.path points Python towards the top-most directory, which contains all VencoPy functions that are going to be used in the tutorials.
Additionally, we set and read in the input dataframe (here the MiD17) and load the necessary YAML config files, which contain the configuration settings.
```
import sys
import os
from os import path
import pandas as pd
from pathlib import Path
sys.path.append(path.dirname(path.dirname(path.dirname(path.dirname(__file__)))))
from vencopy.classes.dataParsers import DataParser
from vencopy.classes.tripDiaryBuilders import TripDiaryBuilder
from vencopy.classes.gridModelers import GridModeler
from vencopy.classes.flexEstimators import FlexEstimator
from vencopy.classes.evaluators import Evaluator
from vencopy.scripts.globalFunctions import loadConfigDict
print("Current working directory: {0}".format(os.getcwd()))
```
We will look in more detail at each config file and at what you can specify within it for each class throughout the tutorials. For the time being, it is enough to know that the config files specify configurations, variable namings and settings for the different classes. There is one config file for each class, a global config and a local path config to specify file paths on your machine.
```
configNames = ('globalConfig', 'localPathConfig', 'parseConfig', 'tripConfig', 'gridConfig', 'flexConfig', 'evaluatorConfig')
configDict = loadConfigDict(configNames)
```
## _DataParser_ class
To be able to estimate EV electric consumption and flexibility, the first step in the VencoPy framework is accessing a travel survey data set, such as the MiD. This is carried out through a parsing interface to the original database. In this parsing interface, three main operations are carried out: the read-in of the travel survey trip data, stored in .dta or .csv files; filtering and cleaning of the original raw data set; and a set of variable replacement operations to allow the composition of travel diaries in a second step (in the TripDiaryBuilder class).
In order to have consistent entry data for all variables and for different data sets, all database entries are harmonised, which includes generating unified data types and consistent variable naming. The naming convention for the variables and their respective input type can be specified in the VencoPy-config files that have been loaded previously.
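To illustrate the idea behind this harmonisation (the column names and the mapping shown here are simplified assumptions for illustration only, not the actual MiD variables or the shipped parseConfig structure), a renaming step could look roughly like this:
```
import pandas as pd

# Hypothetical mapping from harmonised VencoPy variable names to raw survey column names
DATA_VARIABLES = {
    'tripID': 'W_ID',            # assumed raw column names, for illustration only
    'tripStartClock': 'W_SZ',
    'tripEndClock': 'W_AZ',
    'tripDistance': 'wegkm',
}

def harmonise(raw_trips: pd.DataFrame) -> pd.DataFrame:
    """Rename raw survey columns to unified names and enforce consistent dtypes."""
    trips = raw_trips.rename(columns={raw: unified for unified, raw in DATA_VARIABLES.items()})
    trips['tripDistance'] = trips['tripDistance'].astype(float)
    return trips[list(DATA_VARIABLES)]
```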
First off, we modify the localPathConfig and globalConfig files so that they point to the current working directory and to the database subset we will use to explain the different classes.
```
# Set reference dataset
datasetID = 'MiD17'
# Modify the localPathConfig file to point to the .csv file in the sampling folder in the tutorials directory where the dataset for the tutorials lies.
configDict['localPathConfig']['pathAbsolute'][datasetID] = Path(__file__).parent.parent / 'data_sampling'
# Assign to vencoPyRoot the folder in which you cloned your repository
#localPathConfig['pathAbsolute']['vencoPyRoot'] = Path.cwd().parent.parent
# Similarly we modify the datasetID in the global config file
configDict['globalConfig']['files'][datasetID]['tripsDataRaw'] = datasetID + '.csv'
# Adapt relative paths in config for tutorials
configDict['globalConfig']['pathRelative']['plots'] = Path(__file__).parent.parent / configDict['globalConfig']['pathRelative']['plots']
configDict['globalConfig']['pathRelative']['parseOutput'] = Path(__file__).parent.parent / configDict['globalConfig']['pathRelative']['parseOutput']
configDict['globalConfig']['pathRelative']['diaryOutput'] = Path(__file__).parent.parent / configDict['globalConfig']['pathRelative']['diaryOutput']
configDict['globalConfig']['pathRelative']['gridOutput'] = Path(__file__).parent.parent / configDict['globalConfig']['pathRelative']['gridOutput']
configDict['globalConfig']['pathRelative']['flexOutput'] = Path(__file__).parent.parent / configDict['globalConfig']['pathRelative']['flexOutput']
configDict['globalConfig']['pathRelative']['evalOutput'] = Path(__file__).parent.parent / configDict['globalConfig']['pathRelative']['evalOutput']
# We also modify the parseConfig by removing some of the columns that are normally parsed from the MiD, but are not available in our simplified test dataframe
del configDict['parseConfig']['dataVariables']['hhID']
del configDict['parseConfig']['dataVariables']['personID']
```
We can now run the first class and parse the dataset with the collection of mobility patterns into a more useful form for our scope.
```
vpData = DataParser(datasetID=datasetID, configDict=configDict, loadEncrypted=False)
```
We can see from the print statements in the class that after reading in the initial dataset, which contained 2124 rows, and applying 8 filters, we end up with a database containing 950 suitable entries, which corresponds to about 45% of the initial sample.
## _TripDiaryBuilder_ class
In the second VencoPy component, the TripDiaryBuilder, individual trips on the survey day are consolidated into person-specific travel diaries comprising multiple trips.
The daily travel diary composition consists of three main steps: reformatting the database, allocating trip purposes and merging the obtained dataframe with other relevant variables from the original database.
In the first step, reformatting, the time dimension is transferred from the raw data (usually in minutes) to the necessary output format (e.g., hours). Each trip is split into shares, which are then assigned to the respective hour in which they took place, generating an hourly dataframe with a timestamp instead of a dataframe containing single trip entries.
Similarly, the distance driven and the trip purpose are allocated to their respective hour and merged into daily travel diaries. Trips are assumed to determine the respective person’s stay in the consecutive hours up to the next trip and are therefore related to the charging availability between two trips. Trip purposes included in surveys may comprise trips carried out for work or education reasons, trips returning home, trips to shopping facilities and other leisure activities. Currently, trips whose purpose is not specified are treated as trips returning to the person's own household.
At the end of the second VencoPy component TripDiaryBuilder, two intermediary data sets are available either directly from the class within Python or from the hard-drive as .csv files.
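To make the reformatting step more tangible, here is a minimal sketch of how a single trip can be split into hourly shares; this is a simplified illustration under assumed fractional-hour timestamps, not the actual TripDiaryBuilder code:
```
import math

def hourly_shares(start_hour, end_hour):
    """Split a trip given in fractional hours (e.g. 7.5 = 07:30) into per-hour shares of its duration."""
    duration = end_hour - start_hour
    shares = {}
    hour = math.floor(start_hour)
    while hour < end_hour:
        # overlap of the trip with the interval [hour, hour + 1)
        overlap = min(end_hour, hour + 1) - max(start_hour, hour)
        shares[hour] = overlap / duration
        hour += 1
    return shares

# A trip from 07:30 to 09:15: 2/7 of it falls into hour 7, 4/7 into hour 8 and 1/7 into hour 9
print(hourly_shares(7.5, 9.25))
```
Multiplying such shares by the trip distance would yield an hourly drive profile, while assigning the trip purpose to the hours between trips yields the purpose diary.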
```
# Trip distance and purpose diary compositions
vpTripDiary = TripDiaryBuilder(datasetID=datasetID, configDict=configDict, ParseData=vpData, debug=True)
```
After the calculation of the hourly shares and the composition of the 950 database rows from the DataParser class, our dataset now contains 267 trip diaries.
You can also see that the two available datasets, the drive data and the trip purposes, are written to inputProfiles_Drive_masterBranch_MiD17.csv and inputProfiles_Purpose_masterBranch_MiD17.csv, respectively.
## _GridModeler_ class
The charging infrastructure allocation makes use of a basic charging infrastructure model, which assumes the availability of charging stations when vehicles are parked. Since the analytical focus of the framework lies on a regional level (NUTS1-NUTS0), the infrastructure model is kept simple in the current version.
Charging availability is allocated based on a binary True–False mapping to the respective trip purpose in the VencoPy-config. Thus, different charging availability scenarios, e.g., charging at home only or at home and at work, can be distinguished, but neither a regional differentiation nor a charging availability probability or distribution is assumed.
At the end of the execution of the GridModeler class, a given parking purpose diary parkingType(v,t) is transferred into a binary grid connection diary connectgrid(v,t) with the same format but consisting only of True–False values.
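A minimal sketch of this True–False mapping is shown below; the purpose labels and the "charging at home only" scenario chosen here are assumptions for illustration, not the shipped gridConfig:
```
# Assumed charging availability per parking purpose for a "charging at home only" scenario
CHARGING_AVAILABLE = {
    'HOME': True,
    'WORK': False,
    'SHOPPING': False,
    'LEISURE': False,
    'DRIVING': False,   # no grid connection while the vehicle is moving
}

def to_grid_connection(purpose_diary):
    """Map an hourly parking purpose diary to a boolean grid connection diary of the same length."""
    return [CHARGING_AVAILABLE.get(purpose, False) for purpose in purpose_diary]

print(to_grid_connection(['HOME', 'DRIVING', 'WORK', 'WORK', 'DRIVING', 'HOME']))
```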
```
vpGrid = GridModeler(datasetID=datasetID, configDict=configDict)
vpGrid.assignSimpleGridViaPurposes()
vpGrid.writeOutGridAvailability()
```
## _Evaluator_ class
The Evaluator class contains a collection of functions to analyse and visualise the results. With the 'hourlyAggregates' and 'plotAggregates' functions we can see the average hourly trips in km in our dataset as well as the sum of total km per weekday.
```
vpEval = Evaluator(configDict=configDict, parseData=pd.Series(data=vpData, index=[datasetID]))
vpEval.hourlyAggregates = vpEval.calcVariableSpecAggregates(by=['tripStartWeekday'])
vpEval.plotAggregates()
```
## _FlexEstimator_ class
The FlexEstimator class is the final class; it estimates the charging flexibility based on driving profiles and charge connection shares.
There are three integral inputs to the flexibility estimation:
- A profile describing hourly distances for each vehicle d(v,t)
- A boolean set of profiles describing if a vehicle is connected to the grid at a given hour connectgrid(v,t)
- Techno–economic input assumptions
After reading in the input scalars, the drive profiles and the boolean plug profiles, the FlexEstimator class outputs 6 profiles.
The first four profiles can be used as constraints for other models to determine optimal charging strategies; the fifth profile simulates a case where charging is not controlled and EVs charge as soon as a charging possibility becomes available. Lastly, the sixth profile quantifies the demand for additional fuel for trips that cannot be fully carried out with an EV.
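As a rough sketch of the uncontrolled-charging idea only: the battery capacity, charging power and consumption below are made-up numbers, and the real FlexEstimator works on whole sets of profiles rather than a single vehicle:
```
def uncontrolled_charging(distance_km, connected, capacity_kwh=50.0, power_kw=11.0,
                          consumption_kwh_per_km=0.18, soc_start_kwh=25.0):
    """Hour-by-hour charging power and state of charge when the EV charges whenever it is plugged in."""
    soc = soc_start_kwh
    charge_profile, soc_profile = [], []
    for km, plugged in zip(distance_km, connected):
        soc -= km * consumption_kwh_per_km                     # driving drains the battery
        charge = min(power_kw, capacity_kwh - soc) if plugged else 0.0
        soc += charge                                          # charge immediately, capped at capacity
        charge_profile.append(charge)
        soc_profile.append(round(soc, 2))
    return charge_profile, soc_profile

charge, soc = uncontrolled_charging(distance_km=[0, 20, 0, 0, 30, 0],
                                    connected=[True, False, False, True, False, True])
print(charge, soc)
```
A profile of this kind corresponds to the uncontrolled-charging output, while the additional-fuel profile would be derived from hours in which the state of charge would otherwise drop below zero.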
```
# Estimate charging flexibility based on driving profiles and charge connection
vpFlex = FlexEstimator(datasetID=datasetID, configDict=configDict, ParseData=vpData)
vpFlex.baseProfileCalculation()
vpFlex.filter()
vpFlex.aggregate()
vpFlex.correct()
vpFlex.normalize()
vpFlex.writeOut()
```
As we can see, there are 262 considered profiles and 259 DSM eligible profiles.
## _Evaluator_ class
Again using the Evaluator class, we can look in more detail at the grid connection share of the fleet, at the average power flow in kW in the uncontrolled charging situation and at the power used for driving. Similarly, we can see a view of the average minimum and maximum state of charge of the battery.
```
vpEval.plotProfiles(flexEstimator=vpFlex)
```
## Next Steps
In the next tutorials, you will learn in more detail how each class works internally and how to customise some settings.
```
%matplotlib inline
import numpy as np
from numpy.random import seed
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_samples, silhouette_score
import warnings
from time import sleep
warnings.filterwarnings('ignore')
seed(42)
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('font', size=MEDIUM_SIZE)
plt.rc('axes', titlesize=MEDIUM_SIZE)
plt.rc('axes', labelsize=SMALL_SIZE)
plt.rc('xtick', labelsize=SMALL_SIZE)
plt.rc('ytick', labelsize=SMALL_SIZE)
```
### 2D Cluster Demo
```
def sample_clusters(n_points=500, n_dimensions=2,
n_clusters=5, cluster_std=1):
data, labels = make_blobs(n_samples=n_points,
n_features=n_dimensions,
centers=n_clusters,
cluster_std=cluster_std,
random_state=42)
return data, labels
```
### Evaluate Number of Clusters
```
def inertia_plot_update(inertias, ax, delay=1):
inertias.plot(color='k', lw=1, title='Inertia', ax=ax, xlim=(inertias.index[0], inertias.index[-1]), ylim=(0, inertias.max()))
fig.canvas.draw()
sleep(delay)
def plot_kmeans_result(data, labels, centroids,
assignments, ncluster, Z, ax):
ax.scatter(*data.T, c=labels, s=15) # plot data
# plot cluster centers
ax.scatter(*centroids.T, marker='o', c='k',
s=200, edgecolor='w', zorder=9)
for i, c in enumerate(centroids):
ax.scatter(*c, marker='${}$'.format(i),
s=50, edgecolor='', zorder=10)
xy = pd.DataFrame(data[assignments == i],
columns=['x', 'y']).assign(cx=c[0], cy=c[1])
ax.plot(xy[['x', 'cx']].T, xy[['y', 'cy']].T,
ls='--', color='w', lw=0.5)
# plot voronoi
ax.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.viridis, aspect='auto', origin='lower', alpha=.2)
ax.set_title('Number of Clusters: {}'.format(ncluster))
plt.tight_layout()
```
### Run Elbow Experiment
```
n_clusters, max_clusters = 4, 7
cluster_list = list(range(1, max_clusters + 1))
inertias = pd.Series(index=cluster_list)
data, labels = sample_clusters(n_clusters=n_clusters)
x, y = data.T
xx, yy = np.meshgrid(np.arange(x.min() - 1, x.max() + 1, .01),
np.arange(y.min() - 1, y.max() + 1, .01))
fig, axes = plt.subplots(ncols=3, nrows=3, figsize=(12, 8))
axes = np.array(axes).flatten()
plt.tight_layout();
# Plot Sample Data
axes[0].scatter(x, y, c=labels, s=10)
axes[0].set_title('{} Sample Clusters'.format(n_clusters));
for c, n_clusters in enumerate(range(1, max_clusters + 1), 2):
kmeans = KMeans(n_clusters=n_clusters, random_state=42).fit(data)
centroids, assignments, inertia = kmeans.cluster_centers_, kmeans.labels_, kmeans.inertia_
inertias[n_clusters] = inertia
inertia_plot_update(inertias, axes[1])
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plot_kmeans_result(data, labels, centroids, assignments, n_clusters, Z, axes[c])
```
### Evaluating the Silhouette Score
```
def plot_silhouette(values, y_lower, i, n_cluster, ax):
cluster_size = values.shape[0]
y_upper = y_lower + cluster_size
color = plt.cm.viridis(i / n_cluster)
ax.fill_betweenx(np.arange(y_lower, y_upper), 0, values,
facecolor=color, edgecolor=color, alpha=0.7)
ax.text(-0.05, y_lower + 0.5 * cluster_size, str(i))
y_lower = y_upper + 10
return y_lower
def format_silhouette_plot(ax):
ax.set_title("Silhouette Plot")
ax.set_xlabel("Silhouette Coefficient")
ax.set_ylabel("Cluster Label")
ax.axvline(
x=silhouette_avg, color="red", linestyle="--", lw=1)
ax.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
def plot_final_assignments(x, y, centroids,
assignments, n_cluster, ax):
c = plt.cm.viridis(assignments / n_cluster)
ax.scatter(x, y, marker='.', s=30,
lw=0, alpha=0.7, c=c, edgecolor='k')
ax.scatter(*centroids.T, marker='o',
c='w', s=200, edgecolor='k')
for i, c in enumerate(centroids):
ax.scatter(*c, marker='${}$'.format(i),
s=50, edgecolor='k')
ax.set_title('{} Clusters'.format(n_cluster))
n_clusters = 4
max_clusters = 7
cluster_list = list(range(1, max_clusters + 1))
inertias = pd.Series(index=cluster_list)
data, labels = sample_clusters(n_clusters=n_clusters)
x, y = data.T
fig, axes = plt.subplots(ncols=2,
nrows=max_clusters, figsize=(12, 20))
fig.tight_layout()
axes[0][0].scatter(x, y, c=labels, s=10)
axes[0][0].set_title('Sample Clusters')
for row, n_cluster in enumerate(range(2, max_clusters + 1), 1):
kmeans = KMeans(n_clusters=n_cluster, random_state=42).fit(data)
centroids, assignments, inertia = \
kmeans.cluster_centers_, kmeans.labels_, kmeans.inertia_
inertias[n_cluster] = inertia
inertia_plot_update(inertias, axes[0][1])
silhouette_avg = silhouette_score(data, assignments)
silhouette_values = silhouette_samples(data, assignments)
silhouette_plot, cluster_plot = axes[row]
y_lower = 10
for i in range(n_cluster):
y_lower = plot_silhouette(np.sort(silhouette_values[assignments == i]), y_lower, i, n_cluster, silhouette_plot)
format_silhouette_plot(silhouette_plot)
plot_final_assignments(x, y, centroids, assignments, n_cluster, cluster_plot)
fig.tight_layout()
fig.suptitle('KMeans Silhouette Plot with {} Clusters'.format(n_clusters), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=.95)
```
# Finite Size Scaling
## Figure 3
Investigate the finite size scaling of $S_2(n=1)-\ln(N)$ vs $N^{-(4g+1)}$ for $N=2\to 14$ and $V/t = -1.3 \to 1.8$.
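Concretely, for each boundary condition and interaction strength the code below fits the leading finite-size form and then rescales the data so that all curves collapse onto the line $y=x$; here $a$ and $b$ are the intercept and slope of the linear fit, and the fourth column of the data files is taken to hold $S_2(n=1)-\ln N$:

$$S_2(n=1)-\ln N \;\approx\; a + b\,N^{-(1+4g)}, \qquad y \;=\; \frac{S_2(n=1)-\ln N - a}{b}.$$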
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
def g(V):
'''Interaction parameter g.'''
K = np.pi/np.arccos(-V/2)/2
return (K+1/K)/4-1/2
```
### Interaction strengths and boundary conditions
```
V = [1.8,1.4,1.0,0.6,0.2,-0.1,-0.5,-0.9,-1.3]
BC = ['APBC','PBC']
```
## Load the data and perform the linear fit for each BC and interaction strength
```
S2scaled = {}
for cBC in BC:
for cV in V:
# load raw data
data = np.loadtxt('N1%sn1u_%3.1f.dat'%(cBC[0],cV))
# Raises each N to the power of the leading finite size correction γ=(4g+1)
x = data[:,0]**(-(1.0+4*g(cV)))
# perform the linear fit
p = np.polyfit(x,data[:,3],1)
# rescale the data
y = (data[:,3]-p[1])/p[0]
# store
S2scaled['%s%3.1f'%(cBC[0],cV)] = np.stack((x,y),axis=1)
```
## Plot the scaled finite-size data collapse
```
colors = ['#4173b3','#e95c47','#7dcba4','#5e4ea2','#fdbe6e','#808080','#2e8b57','#b8860b','#87ceeb']
markers = ['o','s','h','D','H','>','^','<','v']
plt.style.reload_library()
with plt.style.context('../IOP_large.mplstyle'):
# Create the figure
fig1 = plt.figure()
ax2 = fig1.add_subplot(111)
ax3 = fig1.add_subplot(111)
ax2.set_xlabel(r'$N^{-(1+4g)}$')
ax2.set_ylabel(r'$(S_2(n=1)-{\rm{ln}}(N)-a)/b$')
ax2.set_xlim(0,0.52)
ax2.set_ylim(0,0.52)
ax3 = ax2.twinx()
ax3.set_xlim(0,0.52)
ax3.set_ylim(0,0.52)
ax1 = fig1.add_subplot(111)
ax1.set_xlim(0,0.52)
ax1.set_ylim(0,0.52)
# Plots (S2(n=1)-ln(N)-a)/b vs. N^-(4g+1)
# anti-periodic boundary conditions
for i,cV in enumerate(V):
data = S2scaled['A%3.1f'%cV]
if cV > 0:
ax = ax2
else:
ax = ax3
ax.plot(data[:,0],data[:,1], marker=markers[i], color=colors[i], mfc='None', mew='1.0',
linestyle='None', label=r'$%3.1f$'%cV)
# U/t > 0 legend
ax2.legend(loc=(0.01,0.29),frameon=False,numpoints=1,ncol=1,title=r'$V/t$')
# periodic boundary conditions
for i,cV in enumerate(V):
data = S2scaled['P%3.1f'%cV]
ax1.plot(data[:,0],data[:,1], marker=markers[i], color=colors[i], mfc='None', mew=1.0,
linestyle='None', label='None')
# U/t < 0 legend
ax3.legend(loc=(0.65,0.04),frameon=False,numpoints=1,ncol=1,title=r'$V/t$')
ax3.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
right='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
labelright='off') # labels along the bottom edge are off
ax2.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
right='on',
top='on') # ticks along the top edge are off
# Save the figure
plt.savefig('finiteSizeScaling.pdf')
plt.savefig('finiteSizeScaling.png')
```
```
# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated
# ATTENTION: Please do not add or remove any cells in the exercise. The grader will check specific cells based on the cell position.
# ATTENTION: Please use the provided epoch values when training.
# Import all the necessary files!
import os
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
from os import getcwd
path_inception = f"{getcwd()}/../tmp2/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5"
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
# Create an instance of the inception model from the local pre-trained weights
local_weights_file = path_inception
pre_trained_model = InceptionV3(input_shape=(150,150,3),
include_top=False,
weights=None)
pre_trained_model.load_weights(local_weights_file)
# Make all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
layer.trainable=False
# Print the model summary
pre_trained_model.summary()
# Expected Output is extremely large, but should end with:
#batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0]
#__________________________________________________________________________________________________
#activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0]
#__________________________________________________________________________________________________
#mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0]
# activation_276[0][0]
#__________________________________________________________________________________________________
#concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0]
# activation_280[0][0]
#__________________________________________________________________________________________________
#activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0]
#__________________________________________________________________________________________________
#mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0]
# mixed9_1[0][0]
# concatenate_5[0][0]
# activation_281[0][0]
#==================================================================================================
#Total params: 21,802,784
#Trainable params: 0
#Non-trainable params: 21,802,784
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
# Expected Output:
# ('last layer output shape: ', (None, 7, 7, 768))
# Define a Callback class that stops training once accuracy reaches 97.0%
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # Handle both the 'acc' (TF 1.x) and 'accuracy' (TF 2.x) log keys
        if logs.get('acc', logs.get('accuracy', 0)) > 0.97:
            print("\nReached 97.0% accuracy so cancelling training!")
            self.model.stop_training = True
from tensorflow.keras.optimizers import RMSprop
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)
model.compile(optimizer = RMSprop(lr=0.0001),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
model.summary()
# Expected output will be large. Last few lines should be:
# mixed7 (Concatenate) (None, 7, 7, 768) 0 activation_248[0][0]
# activation_251[0][0]
# activation_256[0][0]
# activation_257[0][0]
# __________________________________________________________________________________________________
# flatten_4 (Flatten) (None, 37632) 0 mixed7[0][0]
# __________________________________________________________________________________________________
# dense_8 (Dense) (None, 1024) 38536192 flatten_4[0][0]
# __________________________________________________________________________________________________
# dropout_4 (Dropout) (None, 1024) 0 dense_8[0][0]
# __________________________________________________________________________________________________
# dense_9 (Dense) (None, 1) 1025 dropout_4[0][0]
# ==================================================================================================
# Total params: 47,512,481
# Trainable params: 38,537,217
# Non-trainable params: 8,975,264
# Get the Horse or Human dataset
path_horse_or_human = f"{getcwd()}/../tmp2/horse-or-human.zip"
# Get the Horse or Human Validation dataset
path_validation_horse_or_human = f"{getcwd()}/../tmp2/validation-horse-or-human.zip"
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import zipfile
import shutil
shutil.rmtree('/tmp')
local_zip = path_horse_or_human
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/training')
zip_ref.close()
local_zip = path_validation_horse_or_human
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
# Define our example directories and files
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
train_horses_dir = os.path.join(train_dir, 'horses')
train_humans_dir = os.path.join(train_dir, 'humans')
validation_horses_dir = os.path.join(validation_dir, 'horses')
validation_humans_dir = os.path.join(validation_dir, 'humans')
train_horses_fnames = os.listdir(train_horses_dir)
train_humans_fnames = os.listdir(train_humans_dir)
validation_horses_fnames = os.listdir(validation_horses_dir)
validation_humans_fnames = os.listdir(validation_humans_dir)
print(len(train_horses_fnames))
print(len(train_humans_fnames))
print(len(validation_horses_fnames))
print(len(validation_humans_fnames))
# Expected Output:
# 500
# 527
# 128
# 128
# Add our data-augmentation parameters to ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1/255,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size=20,
class_mode='binary',
target_size=(150,150)
)
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(validation_dir,
batch_size=20,
class_mode='binary',
target_size=(150,150)
)
# Expected Output:
# Found 1027 images belonging to 2 classes.
# Found 256 images belonging to 2 classes.
# Run this and see how many epochs it should take before the callback
# fires, and stops training at 97% accuracy
callbacks = myCallback()
history = model.fit_generator(train_generator,
            epochs=3,
            validation_data=validation_generator,
            callbacks=[callbacks],
            verbose=2
)
%matplotlib inline
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
```
# Submission Instructions
```
# Now click the 'Submit Assignment' button above.
```
# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners.
```
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
```
# Import Libraries
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
%matplotlib inline
import matplotlib.pyplot as plt
```
## Data Transformations
We first start with defining our data transformations. We need to think about what our data looks like and how we can augment it to correctly represent images that the model might not otherwise see.
```
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
# transforms.RandomRotation((-7.0, 7.0), fill=(1,)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
# Dataset and Creating Train/Test Split
```
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
```
# Dataloader Arguments & Test/Train Dataloaders
```
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - something you'll fetch these from cmdprmt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
```
# The model
Let's start with the model we first saw
```
dropout_value=0.05
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 24
# TRANSITION BLOCK 1
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU(),
) # output_size = 24
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 12
# CONVOLUTION BLOCK 2
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 10
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 8
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(10),
nn.Dropout(dropout_value)
) # output_size = 6
# OUTPUT BLOCK
self.convblock7 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=1, bias=False),
nn.ReLU(),
nn.BatchNorm2d(10),
nn.Dropout(dropout_value)
) # output_size = 6
self.gap = nn.Sequential(
nn.AvgPool2d(kernel_size=6)
)
self.convblock8 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
# nn.BatchNorm2d(10), NEVER
# nn.ReLU() NEVER!
) # output_size = 1
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.convblock3(x)
x = self.pool1(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.convblock7(x)
x = self.gap(x)
x = self.convblock8(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1)
```
# Model Params
We can't emphasize enough how important viewing the model summary is.
Unfortunately, there is no built-in model visualizer, so we have to rely on an external package.
```
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
```
# Training and Testing
Looking at logs can be boring, so we'll introduce the **tqdm** progress bar to get cooler logs.
Let's write train and test functions
```
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
global train_max
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
# In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
train_losses.append(loss)
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
if (train_max < 100*correct/processed):
train_max = 100*correct/processed
def test(model, device, test_loader):
global test_max
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
if (test_max < 100. * correct / len(test_loader.dataset)):
test_max = 100. * correct / len(test_loader.dataset)
test_acc.append(100. * correct / len(test_loader.dataset))
```
# Let's Train and test our model
```
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 15
train_max=0
test_max=0
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
print(f"\nMaximum training accuracy: {train_max}\n")
print(f"\nMaximum test accuracy: {test_max}\n")
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
fig, ((axs1, axs2), (axs3, axs4)) = plt.subplots(2,2,figsize=(15,10))
# Train plot
axs1.plot(train_losses)
axs1.set_title("Training Loss")
axs3.plot(train_acc)
axs3.set_title("Training Accuracy")
# axs1.set_xlim([0, 5])
axs1.set_ylim([0, 5])
axs3.set_ylim([0, 100])
# Test plot
axs2.plot(test_losses)
axs2.set_title("Test Loss")
axs4.plot(test_acc)
axs4.set_title("Test Accuracy")
axs2.set_ylim([0, 5])
axs4.set_ylim([0, 100])
```
```
import numpy as np
import pickle
import matplotlib.pyplot as plt
import matplotlib
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.metrics import confusion_matrix
import random
import seaborn as sns
import pandas as pd
import glob
from sklearn.metrics import accuracy_score
from sklearn.metrics import balanced_accuracy_score
%matplotlib inline
pdbs = glob.glob('*_labels.npy')
pdbs = ['_'.join(p.split('_')[0:2]) for p in pdbs]
len(pdbs)
# Concatenate the true ligand labels of all structures
ytrue = []
for pdb in pdbs:
ytrue = ytrue + list(np.load('{}_labels.npy'.format(pdb)))
len(ytrue)
# Average each ligand's logits and take the argmax as the predicted class
ypred = []
all_logits = np.empty((0,7))
n_lig_tot = 0
all_pdb_ids = []
for i,pdb in enumerate(pdbs):
logits = np.load('{}_logits.npy'.format(pdb))
n_ligands = logits.shape[0]
n_lig_tot += n_ligands
if n_ligands == 0:
continue
all_logits = np.vstack([all_logits,np.squeeze(np.mean(logits,axis=1))])
all_pdb_ids = all_pdb_ids + n_ligands*[pdb]
if n_ligands ==1 :
ypred = ypred + [np.argmax(np.squeeze(np.mean(logits,axis=1)))]
else:
ypred = ypred + list(np.argmax(np.squeeze(np.mean(logits,axis=1)),axis=1))
len(ypred)
conf = confusion_matrix(ytrue, ypred)
conf
conf_frac = conf/np.sum(conf,axis=1)[:,None]
conf_frac = pd.DataFrame(conf_frac, index=['True ADP', 'True COA',
'True FAD', 'True HEM', 'True NAD', 'True NAP', 'True SAM'],
columns= ['Pred ADP', 'Pred COA', 'Pred FAD', 'Pred HEM', 'Pred NAD',
'Pred NAP', 'Pred SAM'])
ax = sns.heatmap(conf_frac,vmin=0.0, vmax=1.0, annot=True)
plt.tight_layout()
#plt.savefig('/Users/freyr/Desktop/paper_images/classification/sequenceSplit_allFeatures_numbers.pdf', type='pdf')
#plt.savefig('/Users/freyr/Desktop/thesis_images/pocket_clustering/heatmap_noStructClust.eps')
# Keep only predictions whose mean score for the predicted class exceeds a threshold
high_conf = []
thresh = 0.75
for log,pred,true in zip(all_logits,ypred,ytrue):
if log[pred]>thresh:
high_conf.append(True)
else:
high_conf.append(False)
np.sum(high_conf)
conf_highconf = confusion_matrix(np.array(ytrue)[high_conf], np.array(ypred)[high_conf])
conf_highconf
conf__highconf_frac = conf_highconf/np.sum(conf_highconf,axis=1)[:,None]
conf__highconf_frac = pd.DataFrame(conf__highconf_frac, index=['True ADP', 'True COA',
'True FAD', 'True HEM', 'True NAD', 'True NAP', 'True SAM'],
columns= ['Pred ADP', 'Pred COA', 'Pred FAD', 'Pred HEM', 'Pred NAD',
'Pred NAP', 'Pred SAM'])
ax = sns.heatmap(conf__highconf_frac,vmin=0.0, vmax=1.0, annot=True)
plt.tight_layout()
#plt.savefig('/Users/freyr/Desktop/paper_images/classification/sequenceSplit_allFeatures_numbers.pdf', type='pdf')
#plt.savefig('/Users/freyr/Desktop/thesis_images/pocket_clustering/heatmap_noStructClust.eps')
# Inspect high-confidence misclassifications: true SAM (6) predicted as NAD (4)
for ids,log,pred,true in zip(all_pdb_ids,all_logits,ypred,ytrue):
if (true==6) and (pred==4) and (log[pred]>thresh):
print(ids,log,pred,true)
# ... and true NAD (4) predicted as NAP (5)
for ids,log,pred,true in zip(all_pdb_ids,all_logits,ypred,ytrue):
if (true==4) and (pred==5) and (log[pred]>thresh):
print(ids,log,pred,true)
```
```
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from imblearn.over_sampling import SMOTE
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import classification_report
from sklearn.metrics import average_precision_score  # needed for the precision-recall summaries below
# The plot_confusion_matrix(conf_mat, cmap=..., colorbar=...) helper used below is assumed to come
# from mlxtend; adjust this import if you use a different plotting utility.
from mlxtend.plotting import plot_confusion_matrix
dataset = pd.read_csv(r"sample_clean.csv")
df=dataset.copy()
X = df.drop('action_taken',axis=1)
Y = df['action_taken']
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
dataset['action_taken'].value_counts()
dataset.action_taken.value_counts().plot(kind='bar', title='Count');
```
### SMOTE Analysis
```
smt = SMOTE()
x_train, y_train = smt.fit_sample(X_train, Y_train)  # older imblearn API; newer releases use fit_resample()
np.bincount(Y_train)
np.bincount(y_train)
dataset.action_taken.value_counts().plot(kind='bar', title='Count');
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
X_test = sc.transform(X_test)
```
### Random Forest
```
rand_class_smote = RandomForestClassifier(n_estimators = 8,random_state=2)
rand_class_smote.fit(x_train, y_train)
y_pred_smote = rand_class_smote.predict(X_test)
y_pred_train_smote = rand_class_smote.predict(x_train)
print("Test")
print('Accuracy:',accuracy_score(Y_test,y_pred_smote))
print('\n')
print('Precision of 0:',precision_score(Y_test,y_pred_smote,pos_label=0))
print('Precision of 1:',precision_score(Y_test,y_pred_smote,pos_label=1))
print('\n')
print('Recall of 0:',recall_score(Y_test,y_pred_smote,pos_label=0))
print('Recall of 1:',recall_score(Y_test,y_pred_smote,pos_label=1))
print("------------------------")
print("Train")
print('Accuracy:',accuracy_score(y_train,y_pred_train_smote))
conf_matrix = confusion_matrix(Y_test, y_pred_smote)
plot_confusion_matrix(conf_matrix,cmap='tab10',colorbar=True)
print(conf_matrix)
print(confusion_matrix(Y_test, y_pred_smote, labels=[0, 1]))  # same matrix with an explicit label order
# save the model to disk
#joblib_file_random_forest = "rand_class_smote.pkl"
#joblib.dump(rand_class_smote, joblib_file_random_forest)
print(classification_report(Y_test,y_pred_smote))
```
### Logistic Regression
```
log_reg = LogisticRegression()
log_reg.fit(x_train, y_train)
y_pred_log = log_reg.predict(X_test)
y_pred_train_log = log_reg.predict(x_train)
print('Test')
print('Accuracy',accuracy_score(y_pred_log,Y_test))
print('Precision of 0:',precision_score(Y_test,y_pred_log,pos_label=0))
print('Precision of 1:',precision_score(Y_test,y_pred_log,pos_label=1))
print('\n')
print('Recall of 0:',recall_score(Y_test,y_pred_log,pos_label=0))
print('Recall of 1:',recall_score(Y_test,y_pred_log,pos_label=1))
print('Recall',recall_score(y_pred_log,Y_test))
print('--------------------------')
print('Train')
print('Accuracy',accuracy_score(y_pred_train_log,y_train))
conf_matrix = confusion_matrix(Y_test, y_pred_log)
plot_confusion_matrix(conf_matrix,cmap='tab10',colorbar=True)
print(conf_matrix)
# save the model to disk
#joblib_file_logistic = "log_reg.pkl"
#joblib.dump(log_reg, joblib_file_logistic)
print(classification_report(Y_test,y_pred_log))
```
### Naive Bayes
```
naive_bayes = GaussianNB()
naive_bayes.fit(x_train,y_train)
y_pred_naive = naive_bayes.predict(X_test)
y_pred_train_naive = naive_bayes.predict(x_train)
print('Test')
print('Accuracy',accuracy_score(y_pred_naive,Y_test))
print('Precision of 0:',precision_score(Y_test,y_pred_naive,pos_label=0))
print('Precision of 1:',precision_score(Y_test,y_pred_naive,pos_label=1))
print('\n')
print('Recall of 0:',recall_score(Y_test,y_pred_naive,pos_label=0))
print('Recall of 1:',recall_score(Y_test,y_pred_naive,pos_label=1))
print('--------------------------')
print('Train')
print('Accuracy',accuracy_score(y_pred_train_naive,y_train))
conf_matrix = confusion_matrix(Y_test, y_pred_naive)
plot_confusion_matrix(conf_matrix,cmap='tab10',colorbar=True)
print(conf_matrix)
# save the model to disk
#joblib_file_naive_bayes = "naive_bayes.pkl"
#joblib.dump(naive_bayes, joblib_file_naive_bayes)
print(classification_report(Y_test,y_pred_naive))
```
### LDA
```
lda = LinearDiscriminantAnalysis()
lda.fit(x_train,y_train)
y_pred_lda = lda.predict(X_test)
y_pred_train_lda = lda.predict(x_train)
print('Test')
print('Accuracy',accuracy_score(y_pred_lda,Y_test))
print('Precision of 0:',precision_score(Y_test,y_pred_lda,pos_label=0))
print('Precision of 1:',precision_score(Y_test,y_pred_lda,pos_label=1))
print('\n')
print('Recall of 0:',recall_score(Y_test,y_pred_lda,pos_label=0))
print('Recall of 1:',recall_score(Y_test,y_pred_lda,pos_label=1))
print('--------------------------')
print('Train')
print('Accuracy',accuracy_score(y_pred_train_lda,y_train))
conf_matrix = confusion_matrix(Y_test, y_pred_lda)
plot_confusion_matrix(conf_matrix,cmap='tab10',colorbar=True)
print(conf_matrix)
print(classification_report(Y_test,y_pred_lda))
# save the model to disk
#joblib_file_lda = "lda.pkl"
#joblib.dump(lda, joblib_file_lda)
```
### XG Boost
```
from xgboost import XGBClassifier
xg_boost = XGBClassifier()
xg_boost.fit(x_train,y_train)
y_pred_xg = xg_boost.predict(X_test)
y_pred_xg_train = xg_boost.predict(x_train)
print('Accuracy test',accuracy_score(y_pred_xg,Y_test))
print('Precision of 0:',precision_score(Y_test,y_pred_xg,pos_label=0))
print('Precision of 1:',precision_score(Y_test,y_pred_xg,pos_label=1))
print('\n')
print('Recall of 0:',recall_score(Y_test,y_pred_xg,pos_label=0))
print('Recall of 1:',recall_score(Y_test,y_pred_xg,pos_label=1))
print('Recall',recall_score(y_pred_xg,Y_test))
print('Accuracy Train',accuracy_score(y_pred_xg_train,y_train))
conf_matrix = confusion_matrix(Y_test, y_pred_xg)
plot_confusion_matrix(conf_matrix,cmap='tab10',colorbar=True)
print(conf_matrix)
# save the model to disk
#joblib_file_xgboost = "xgboost.pkl"
#joblib.dump(xg_boost, joblib_file_xgboost)
print(classification_report(Y_test,y_pred_xg))
```
### Adaboost
```
ada_boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=10),n_estimators=12)
ada_boost.fit(x_train, y_train)
y_pred_ada = ada_boost.predict(X_test)
y_pred_ada_train = ada_boost.predict(x_train)
print('Test')
print('Accuracy test',accuracy_score(y_pred_ada,Y_test))
print('Precision of 0:',precision_score(Y_test,y_pred_ada,pos_label=0))
print('Precision of 1:',precision_score(Y_test,y_pred_ada,pos_label=1))
print('\n')
print('Recall of 0:',recall_score(Y_test,y_pred_ada,pos_label=0))
print('Recall of 1:',recall_score(Y_test,y_pred_ada,pos_label=1))
print('--------------------------')
print('Train')
print('Accuracy',accuracy_score(y_pred_ada_train,y_train))
conf_matrix = confusion_matrix(Y_test, y_pred_ada)
plot_confusion_matrix(conf_matrix,cmap='tab10',colorbar=True)
print(conf_matrix)
#joblib_file_adaboost = "adaboost.pkl"
#joblib.dump(ada_boost, joblib_file_adaboost)
print(classification_report(Y_test,y_pred_ada))
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
classifier_predictions = [y_pred_smote,y_pred_log,y_pred_naive,y_pred_lda,y_pred_xg,y_pred_ada]
classifier_names = ["Random Forest","Logistic Regression",'Naive Bayes','LDA','XGBoost','AdaBoost']
for i in range(len(classifier_predictions)):
fpr,tpr,thresholds = roc_curve(Y_test,classifier_predictions[i])
plt.plot(fpr,tpr,label= classifier_names[i])
plt.xlabel("False Positive Rate(FPR)")
plt.ylabel("True Positive Rate(TPR)")
plt.legend()
print("AUC for ",classifier_names[i]," = ",auc(fpr,tpr))
from sklearn.metrics import precision_recall_curve
classifier_predictions = [y_pred_smote,y_pred_log,y_pred_naive,y_pred_lda,y_pred_xg,y_pred_ada]
classifier_names = ["Random Forest","Logistic Regression",'Naive Bayes','LDA','XGBoost','AdaBoost']
for i in range(len(classifier_predictions)):
precision, recall, thresholds = precision_recall_curve(Y_test,classifier_predictions[i])
plt.plot(recall,precision,label= classifier_names[i])
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
print("Precision Recall score for ",classifier_names[i]," = ",(average_precision_score(Y_test,classifier_predictions[i])))
```
### Select the top features obtained from Lasso and Random Forest and re-run the models
```
features = ['aus_1','applicant_credit_score_type','loan_term','purchaser_type','property_value','loan_purpose','income','applicant_age','loan_amount','manufactured_home_land_property_interest']
X_new = df[features]
Y_new = df['action_taken']
X_train_new, X_test_new, Y_train_new, Y_test_new = train_test_split(X_new, Y_new, test_size=0.25)
smt = SMOTE()
x_train_new, y_train_new = smt.fit_sample(X_train_new, Y_train_new)
np.bincount(Y_train_new)
np.bincount(y_train_new)
sc = StandardScaler()
x_train_new = sc.fit_transform(x_train_new)
X_test_new = sc.transform(X_test_new)
log_reg_top = LogisticRegression()
log_reg_top.fit(x_train_new, y_train_new)
y_pred_log_top = log_reg_top.predict(X_test_new)
y_pred_train_log_top = log_reg_top.predict(x_train_new)
print('Accuracy Test',accuracy_score(y_pred_log_top,Y_test_new))
print('Accuracy Train',accuracy_score(y_pred_train_log_top,y_train_new))
print(classification_report(y_pred_log_top,Y_test_new))
rand_class_top = RandomForestClassifier(n_estimators = 8,random_state=2)
rand_class_top.fit(x_train_new, y_train_new)
y_pred_rand_top = rand_class_top.predict(X_test_new)
y_pred_rand_train_top = rand_class_top.predict(x_train_new)
print('Accuracy Test',accuracy_score(y_pred_rand_top,Y_test_new))
print('Accuracy Train',accuracy_score(y_pred_rand_train_top,y_train_new))
print(classification_report(y_pred_rand_top,Y_test_new))
ada_boost_top = AdaBoostClassifier(DecisionTreeClassifier(max_depth=10),n_estimators=12)
ada_boost_top.fit(x_train_new, y_train_new)
y_pred_ada_top = ada_boost_top.predict(X_test_new)
y_pred_ada_train_top = ada_boost_top.predict(x_train_new)
print('Accuracy Test',accuracy_score(y_pred_ada_top,Y_test_new))
print('Accuracy Train',accuracy_score(y_pred_ada_train_top,y_train_new))
print(classification_report(y_pred_ada_top,Y_test_new))
from sklearn.metrics import precision_recall_curve
classifier_predictions = [y_pred_rand_top,y_pred_log_top,y_pred_ada_top]
classifier_names = ["Random Forest","Logistic Regression",'AdaBoost']
for i in range(len(classifier_predictions)):
precision, recall, thresholds = precision_recall_curve(Y_test_new,classifier_predictions[i])
plt.plot(recall,precision,label= classifier_names[i])
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
print("Precision Recall score for ",classifier_names[i]," = ",(average_precision_score(Y_test_new,classifier_predictions[i])))
```
<img src="Drone2.png" width="400" height="400">
For the rest of the lesson you will be working with a drone that is able to move in two dimensions.
This drone has two propellers each located a distance $l$ from the center of mass. In this exercise, we will ignore the yaw-inducing reactive moment from each propeller.
The state can be described by the vector:
$$X = [z , y, \phi, \dot{z}, \dot{y},\dot{\phi}]$$
We will have to track the drone's position in 2 dimensions and its rotation about the $x$ axis, which is directed into the plane.
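Concretely, with both rotors off there is no thrust and no moment, so the only acceleration acting on the vehicle is gravity. With the convention used here that positive $z$ points downward, the state derivative reduces to

$$\dot{X} = [\dot{z},\ \dot{y},\ \dot{\phi},\ \ddot{z},\ \ddot{y},\ \ddot{\phi}] = [\dot{z},\ \dot{y},\ \dot{\phi},\ g,\ 0,\ 0]$$

which is exactly what the `advance_state_uncontrolled` method below integrates with a simple Euler step.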
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import jdc
from ExerciseAnswers import Answers
pylab.rcParams['figure.figsize'] = 10, 10
class Drone2D:
def __init__(self,
k_f = 0.1, # value of the thrust coefficient
i = 0.1, # moment of inertia around the x-axis
m = 1.0, # mass of the vehicle
l = 0.15, # distance between the center of
# mass and the propeller axis
):
self.k_f = k_f
self.i = i
self.l = l
self.m = m
self.omega_1 = 0.0
self.omega_2 = 0.0
self.g = 9.81
# z, y, phi, z_dot, y_dot, phi_dot
self.X = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
def advance_state_uncontrolled(self, dt):
"""Advances the state of the drone by dt seconds.
Note that this method assumes zero rotational speed
for both propellers."""
X_dot = np.array([self.X[3],
self.X[4],
self.X[5],
self.g,
0,
0])
self.X += X_dot * dt
return self.X
```
### Visual Code Check
If your code is working correctly, then running the cell below should produce a graph that looks like this:

```
drone = Drone2D()
Z_history = []
Y_history = []
dt = 0.1
# add a slight initial horizontal velocity
drone.X[4] = 1.0
for _ in range(100):
Z_history.append(drone.X[0])
Y_history.append(drone.X[1])
# call the uncontrolled (free fall) advance state function
drone.advance_state_uncontrolled(dt)
plt.plot(Y_history, Z_history )
# invert the vertical axis so down is positive
plt.gca().invert_yaxis()
plt.xlabel("Horizontal Position (y)")
plt.ylabel("Vertical Position (z)")
plt.show()
```
[Solution](/notebooks/3.%20Uncontrolled%202D%20Drone%20SOLUTION.ipynb)
This workshop runs in a terminal. The first thing to do is to open a new terminal: from the Jupyter home page click the New button on the top right and select Terminal. You might want to open the terminal in a new window (right click on Terminal and select open in new window). Don't close this notebook until you have finished the tutorial!
Lines in this notebook that begin with $ are the commands that you type at the terminal command line. Sample input files can be found in data/implicit and data/solvated. If you want to add a new cell (perhaps at the end), select "Insert" -> "Insert Cell Below" or "Insert Cell Above" from the menus above.
You will want to keep your workspace tidy, so we will organize files into directories. Start by making a directory for your current work
$ mkdir modelling
Enter that directory
$ cd modelling
You may want to make directories for gbsa (implicit) and solvated simulations (notebook 2) for later on.
$ mkdir gbsa
$ mkdir solvated
# Using tLeap
The tLeap module connects the protein structure with the AMBER forcefield.
AMBER contains “residue templates” for standard biological units (eg amino acids) that the program uses to assign forcefield parameters from a pdb file. It understands chemistry!
tLeap produces files called “parm” (or top) and “crd”.
“parm” contains the parameters and connectivities
“crd” contains the starting coordinates.
* Choose your forcefield
There are different forcefields even within AMBER!
1 - ff14SB (recommended for proteins): source leaprc.ff14SB
2 - parmbsc1 (recommended for DNA)
3 - GAFF (generalised AMBER force field): source leaprc.gaff (you can simulate almost anything with this, but caution is needed!)
* Change to the directory where you want your implicit solvent simulations
$ cd gbsa
* Start the tLeap module:
$ tleap
* Load the forcefield.
To use the 14SB forcefield:
$ source leaprc.protein.ff14SB
The name of the forcefield file can depend on the version of AmberTools. The command above will work for Amber18, but for older versions of Amber you might need leaprc.ff14SB.
The AMBER manual and additional tutorials can be found at http://ambermd.org
# Building peptides in tLeap
To see which molecules you have available, type:
$ list
Think of 5 amino acids, e.g. GLY, ALA, TYR, PRO and ASP.
Now type:
$ peptide = sequence {NGLY ALA TYR PRO CASP}
You need the NXXX and CXXX forms for the first and last residues so that the ends of the peptide are chemically capped correctly. You can use any 5 amino acids (the ones listed here are just an example).
# Save your molecule:
$ savepdb peptide YOUR_FILE_NAME.pdb
$ saveamberparm peptide YOUR_FILE_NAME.parm7 YOUR_FILE_NAME.crd
and exit tleap to return to the terminal shell:
$ quit
# Implicit solvent simulation of your peptide
To run Sander you need:
1. The parm (topology) and crd (coordinate) files
2. Sander input files (min1.in, min2.in, md1.in, md2.in, md3.in)
Take a look at the input files (implicit-\*.in) and try to get the gist of what each one is doing.
What is the difference between implicit-min1.in and implicit-min2.in? How much dynamics is involved in each step of the equilibration?
# Using sander
The molecular dynamics is run by the sander module. The sander executable is found in $AMBERHOME/bin. You can read the manual to find the details of all the options for sander. Some of the important ones are:
-O or -A : Overwrite or Append output files if they already exist
-i FILE_NAME.in (input) : control data for the run
-o FILE_NAME.out (output) : user readable information
-p FILE_NAME.parm7 (input) : molecular topology, force field, atom and residue names, periodic box type
-c FILE_NAME.crd (input) : initial coordinates and periodic box size (may include velocities)
-r FILE_NAME.rst (output) : final coordinates, velocities, and box dimensions (for restarting the run)
* Example (you will have to use your own filenames)
$ sander -O -i min1.in -o min1.out -inf min1.inf -c pep.crd -r pep.rst -p pep.parm7 -ref pep.crd -x pepmin1.nc
# Equilibration
To equilibrate the system, we first relax by running an energy minimisation (min1.in, min2.in). This helps to remove any bad contacts (slightly misplaced atoms) in the initial structure.
We then heat the system up in the presence of restraints on the solute (md1.in). Heating the system is followed by MD at the desired temperature (md2.in) and then removing the restraints (md3.in). This stepwise equilibration procedure allows the system to gradually relax without changing too much at any one time (which could cause simulations to become unstable and crash the MD program).
This equilibration needs to be particularly gentle for DNA simulations to ensure that the solvent and counterions screen the negatively charged backbone.
# Visualising the results
There are several programs available for visualising MD trajectories (VMD and Chimera are popular). In Jupyter notebooks, we can use NGLview (with MDtraj to import the trajectory).
```
import mdtraj as mdt
import nglview as nv
# select your files
top_file = 'YOUR_FILENAME.parm7'
traj_file = 'YOUR_FILENAME.nc'
# load trajectory with MDtraj
traj = mdt.load(traj_file, top=top_file)
#view with NGLview
view = nv.show_mdtraj(traj)
# Clear all representations to try new ones
view.clear_representations()
# Show licorice style representation
view.add_licorice()
view
```
# Repeat or replicate simulations
One way to perform an independent “repeat” of a simulation is to reassign the velocities at a chosen point in the trajectory.
To do this, the ntx and irest flags in the .in file change (ntx=1, irest=0) so that sander assigns a fresh set of velocities instead of reading them from the restart file.
Make a new directory.
Starting from your restart file from md2 (e.g. pep.md2), run an independent repeat of your previous simulation by asking md3.in to reassign a new set of velocities. Call this trajectory something different.
Compare the two trajectories, and convince yourself that they are different.
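One quick way to check this is to load both trajectories with MDtraj and compare their RMSD relative to the same starting structure. The sketch below uses placeholder file names for your original and repeated md3 trajectories; substitute your own.
```
import mdtraj as mdt
import matplotlib.pyplot as plt

top_file = 'YOUR_FILENAME.parm7'
run1 = mdt.load('YOUR_MD3_RUN1.nc', top=top_file)    # original md3 trajectory
run2 = mdt.load('YOUR_MD3_REPEAT.nc', top=top_file)  # repeat with reassigned velocities

# RMSD (in nm) of every frame relative to the first frame of the original run
rmsd1 = mdt.rmsd(run1, run1, 0)
rmsd2 = mdt.rmsd(run2, run1, 0)

plt.plot(rmsd1, label='run 1')
plt.plot(rmsd2, label='repeat')
plt.xlabel('frame')
plt.ylabel('RMSD (nm)')
plt.legend()
plt.show()
```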
```
# https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html?highlight=autoreload
%load_ext autoreload
%autoreload 2
import gc
import numpy as np
import os
import pandas as pd
import sqlalchemy
# Local imports
import imdb
import transform
```
# MySQL parameters
Increase general performance:
* --innodb-buffer-pool-size=2147483648 (2G)
* --innodb-buffer-pool-instances=8
Increase write performance:
* --innodb-log-file-size=536870912 (512MB)(Assuming 2 file groups)
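As a quick sanity check (just a sketch, assuming the same connection URL used further down and the SQLAlchemy 1.x raw-string `execute()` style used in this notebook), you can ask the server whether it actually picked up these settings:
```
import sqlalchemy

engine = sqlalchemy.create_engine('mysql+pymysql://imdb:imdb@localhost:3306/imdb')
with engine.connect() as conn:
    for var in ('innodb_buffer_pool_size',
                'innodb_buffer_pool_instances',
                'innodb_log_file_size'):
        # SHOW VARIABLES returns (variable_name, value) rows
        print(conn.execute("SHOW VARIABLES LIKE '{}'".format(var)).fetchone())
```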
# Useful resources
1. [How to Work with BIG Datasets on Kaggle Kernels (16G RAM)](https://www.kaggle.com/yuliagm/how-to-work-with-big-datasets-on-16g-ram-dask)
1. [Using pandas with large data](https://www.dataquest.io/blog/pandas-big-data/)
1. [14.8.3.1 Configuring InnoDB Buffer Pool Size](https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool-resize.html)
1. [innodb-log-file-size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size)
1. [13.2.7 LOAD DATA INFILE Syntax](https://dev.mysql.com/doc/refman/8.0/en/load-data.html#load-data-duplicate-key-handling)
```
data_folder = './imdb'
if not os.path.exists(data_folder):
os.mkdir(data_folder)
def save_csv(df, file):
filename = os.path.join(data_folder, file)
df.to_csv(filename, index=False)
mysql_url = 'mysql+pymysql://imdb:imdb@localhost:3306/imdb'
engine = sqlalchemy.create_engine(mysql_url)
def df_to_mysql(df, table_name, delete_before=True):
if delete_before:
# Delete table before adding new rows
connection = engine.connect()
trans = connection.begin()
connection.execute('SET FOREIGN_KEY_CHECKS = 0;')
stmt = 'TRUNCATE {};'.format(table_name)
print(stmt)
connection.execute(stmt)
connection.execute('SET FOREIGN_KEY_CHECKS = 1;')
trans.commit()
connection.close()
print('Table {} deleted'.format(table_name))
df.to_sql(table_name, con=engine, if_exists='append', index=False, chunksize=10**4)
```
# Name_basics
```
name_basics = imdb.name_basics_df()
name_basics_pre = name_basics.copy()
# nconst to int
name_basics_pre['nconst'] = transform.nconst_to_float(name_basics_pre['nconst'])
# Preserve nconst
name_basics_nconst = name_basics_pre['nconst'].copy()
name_basics_pre.info(memory_usage='deep')
name_basics_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(name_basics_pre, 'name_basics')
#save_csv(name_basics_pre, 'name_basics.csv')
del name_basics
del name_basics_pre
gc.collect()
```
# title_basics
```
title_basics = imdb.title_basics_df()
title_basics_pre = title_basics.copy()
title_basics_pre['tconst'] = transform.tconst_to_float(title_basics_pre['tconst'])
# Preserve tconst for future filterings
title_basics_tconst = title_basics_pre['tconst'].copy()
title_basics_pre.info()
title_basics_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_basics_pre, 'title_basics')
# save_csv(title_basics_pre, 'title_basics.csv')
del title_basics
del title_basics_pre
gc.collect()
```
# title_akas
```
title_akas = imdb.title_akas_df()
title_akas.head()
title_akas_pre = title_akas.copy()
title_akas_pre['titleId'] = transform.tconst_to_float(title_akas_pre['titleId'])
print('Shape', title_akas_pre.shape)
# Remove title_akas rows whose titleId has no matching title_basics entry
title_akas_pre = title_akas_pre[title_akas_pre['titleId'].isin(title_basics_pre['tconst'])]
print('Shape', title_akas_pre.shape)
title_akas_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_akas_pre, 'title_akas')
#save_csv(title_akas_pre, 'title_akas.csv')
```
# title_crew
```
title_crew = imdb.title_crew_df()
title_crew_director = title_crew[['tconst', 'directors']].copy()
title_crew_writer = title_crew[['tconst', 'writers']].copy()
# Drop rows with null values
title_crew_director.dropna(inplace=True)
title_crew_writer.dropna(inplace=True)
# Expand rows based on directors and writers list
title_crew_director['directors'] = title_crew_director['directors'].astype('str')
title_crew_director = transform.expand_rows_using_repeat(title_crew_director, 'directors', ',')
title_crew_director.rename(index=str, columns={"directors": "director"}, inplace=True)
title_crew_writer['writers'] = title_crew_writer['writers'].astype('str')
title_crew_writer = transform.expand_rows_using_repeat(title_crew_writer, 'writers', ',')
title_crew_writer.rename(index=str, columns={"writers": "writer"}, inplace=True)
# Transform identifiers
title_crew_director['tconst'] = transform.tconst_to_float(title_crew_director['tconst'])
title_crew_director['director'] = transform.nconst_to_float(title_crew_director['director'])
title_crew_writer['tconst'] = transform.tconst_to_float(title_crew_writer['tconst'])
title_crew_writer['writer'] = transform.nconst_to_float(title_crew_writer['writer'])
# Remove rows for non-existing titles or names
title_crew_director = title_crew_director[title_crew_director['director'].isin(name_basics_nconst)]
title_crew_writer = title_crew_writer[title_crew_writer['writer'].isin(name_basics_nconst)]
title_crew_director.info(memory_usage='deep')
title_crew_director.head()
title_crew_writer.info(memory_usage='deep')
title_crew_writer.head()
%%timeit -n 1 -r 1
df_to_mysql(title_crew_director, 'title_crew_director')
%%timeit -n 1 -r 1
df_to_mysql(title_crew_writer, 'title_crew_writer')
del title_crew_director
del title_crew_writer
gc.collect()
```
# title_episode
```
title_episode = imdb.title_episode_df()
title_episode_pre = title_episode.copy()
# Transform identifiers
title_episode_pre['tconst'] = transform.tconst_to_float(title_episode_pre['tconst'])
title_episode_pre['parentTconst'] = transform.tconst_to_float(title_episode_pre['parentTconst'])
# Remove rows for non-existing titles
title_episode_pre = title_episode_pre[(title_episode_pre['tconst'].isin(title_basics_tconst))]
title_episode_pre = title_episode_pre[title_episode_pre['parentTconst'].isin(title_basics_tconst)]
title_episode_pre.info(memory_usage='deep')
title_episode_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_episode_pre, 'title_episode')
#save_csv(title_episode_pre, 'title_episode.csv')
del title_episode
del title_episode_pre
gc.collect()
```
# title_principals
```
title_principals = imdb.title_principals_df()
title_principals_pre = title_principals.copy()
# Transform identifiers
title_principals_pre['tconst'] = transform.tconst_to_float(title_principals_pre['tconst'])
title_principals_pre['nconst'] = transform.nconst_to_float(title_principals_pre['nconst'])
title_principals_pre['tconst'] = pd.to_numeric(title_principals_pre['tconst'], downcast='unsigned')
# Remove rows for non-existing titles
title_principals_pre = title_principals_pre[title_principals_pre['tconst'].isin(title_basics_tconst)]
title_principals_pre = title_principals_pre[title_principals_pre['nconst'].isin(name_basics_nconst)]
title_principals_pre.info(memory_usage='deep')
title_principals_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_principals_pre, 'title_principals')
#save_csv(title_principals_pre, 'title_principals.csv')
del title_principals
del title_principals_pre
gc.collect()
```
# title_ratings
```
title_ratings = imdb.title_ratings_df()
title_ratings_pre = title_ratings.copy()
# Transform identifiers
title_ratings_pre['tconst'] = transform.tconst_to_float(title_ratings_pre['tconst'])
title_ratings_pre.info(memory_usage='deep')
title_ratings_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_ratings_pre, 'title_ratings')
#save_csv(title_ratings_pre, 'title_ratings.csv')
del title_ratings
del title_ratings_pre
gc.collect()
```
|
github_jupyter
|
# https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html?highlight=autoreload
%load_ext autoreload
%autoreload 2
import gc
import numpy as np
import os
import pandas as pd
import sqlalchemy
# Local imports
import imdb
import transform
data_folder = './imdb'
if not os.path.exists(data_folder):
os.mkdir(data_folder)
def save_csv(df, file):
filename = os.path.join(data_folder, file)
df.to_csv(filename, index=False)
mysql_url = 'mysql+pymysql://imdb:imdb@localhost:3306/imdb'
engine = sqlalchemy.create_engine(mysql_url)
def df_to_mysql(df, table_name, delete_before=True):
if delete_before:
# Delete table before adding new rows
connection = engine.connect()
trans = connection.begin()
connection.execute('SET FOREIGN_KEY_CHECKS = 0;')
stmt = 'TRUNCATE {};'.format(table_name)
print(stmt)
connection.execute(stmt)
connection.execute('SET FOREIGN_KEY_CHECKS = 1;')
trans.commit()
connection.close()
print('Table {} deleted'.format(table_name))
df.to_sql(table_name, con=engine, if_exists='append', index=False, chunksize=10**4)
name_basics = imdb.name_basics_df()
name_basics_pre = name_basics.copy()
# nconst to int
name_basics_pre['nconst'] = transform.nconst_to_float(name_basics_pre['nconst'])
# Preserve nconst
name_basics_nconst = name_basics_pre['nconst'].copy()
name_basics_pre.info(memory_usage='deep')
name_basics_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(name_basics_pre, 'name_basics')
#save_csv(name_basics_pre, 'name_basics.csv')
del name_basics
del name_basics_pre
gc.collect()
title_basics = imdb.title_basics_df()
title_basics_pre = title_basics.copy()
title_basics_pre['tconst'] = transform.tconst_to_float(title_basics_pre['tconst'])
# Preserve tconst for future filterings
title_basics_tconst = title_basics_pre['tconst'].copy()
title_basics_pre.info()
title_basics_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_basics_pre, 'title_basics')
# save_csv(title_basics_pre, 'title_basics.csv')
del title_basics
del title_basics_pre
gc.collect()
title_akas = imdb.title_akas_df()
title_akas.head()
title_akas_pre = title_akas.copy()
title_akas_pre['titleId'] = transform.tconst_to_float(title_akas_pre['titleId'])
print('Shape', title_akas_pre.shape)
# Remove title_akas for non-existing# title_basic
title_akas_pre = title_akas_pre[title_akas_pre['titleId'].isin(title_basics_pre['tconst'])]
print('Shape', title_akas_pre.shape)
title_akas_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_akas_pre, 'title_akas')
#save_csv(title_akas_pre, 'title_akas.csv')
title_crew = imdb.title_crew_df()
title_crew_director = title_crew[['tconst', 'directors']].copy()
title_crew_writer = title_crew[['tconst', 'writers']].copy()
# Drop rows with null values
title_crew_director.dropna(inplace=True)
title_crew_writer.dropna(inplace=True)
# Expand rows based on directors and writers list
title_crew_director['directors'] = title_crew_director['directors'].astype('str')
title_crew_director = transform.expand_rows_using_repeat(title_crew_director, 'directors', ',')
title_crew_director.rename(index=str, columns={"directors": "director"}, inplace=True)
title_crew_writer['writers'] = title_crew_writer['writers'].astype('str')
title_crew_writer = transform.expand_rows_using_repeat(title_crew_writer, 'writers', ',')
title_crew_writer.rename(index=str, columns={"writers": "writer"}, inplace=True)
# Transform identifiers
title_crew_director['tconst'] = transform.tconst_to_float(title_crew_director['tconst'])
title_crew_director['director'] = transform.nconst_to_float(title_crew_director['director'])
title_crew_writer['tconst'] = transform.tconst_to_float(title_crew_writer['tconst'])
title_crew_writer['writer'] = transform.nconst_to_float(title_crew_writer['writer'])
# Remove rows for non-existing titles or names
title_crew_director = title_crew_director[title_crew_director['director'].isin(name_basics_nconst)]
title_crew_writer = title_crew_writer[title_crew_writer['writer'].isin(name_basics_nconst)]
title_crew_director.info(memory_usage='deep')
title_crew_director.head()
title_crew_writer.info(memory_usage='deep')
title_crew_writer.head()
%%timeit -n 1 -r 1
df_to_mysql(title_crew_director, 'title_crew_director')
%%timeit -n 1 -r 1
df_to_mysql(title_crew_writer, 'title_crew_writer')
del title_crew_director
del title_crew_writer
gc.collect()
title_episode = imdb.title_episode_df()
title_episode_pre = title_episode.copy()
# Transform identifiers
title_episode_pre['tconst'] = transform.tconst_to_float(title_episode_pre['tconst'])
title_episode_pre['parentTconst'] = transform.tconst_to_float(title_episode_pre['parentTconst'])
# Remove rows for non-existing titles
title_episode_pre = title_episode_pre[(title_episode_pre['tconst'].isin(title_basics_tconst))]
title_episode_pre = title_episode_pre[title_episode_pre['parentTconst'].isin(title_basics_tconst)]
title_episode_pre.info(memory_usage='deep')
title_episode_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_episode_pre, 'title_episode')
#save_csv(title_episode_pre, 'title_episode.csv')
del title_episode
del title_episode_pre
gc.collect()
title_principals = imdb.title_principals_df()
title_principals_pre = title_principals.copy()
# Transform identifiers
title_principals_pre['tconst'] = transform.tconst_to_float(title_principals_pre['tconst'])
title_principals_pre['nconst'] = transform.nconst_to_float(title_principals_pre['nconst'])
title_principals_pre['tconst'] = pd.to_numeric(title_principals_pre['tconst'], downcast='unsigned')
# Remove rows for non-existing titles
title_principals_pre = title_principals_pre[title_principals_pre['tconst'].isin(title_basics_tconst)]
title_principals_pre = title_principals_pre[title_principals_pre['nconst'].isin(name_basics_nconst)]
title_principals_pre.info(memory_usage='deep')
title_principals_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_principals_pre, 'title_principals')
#save_csv(title_principals_pre, 'title_principals.csv')
del title_principals
del title_principals_pre
gc.collect()
title_ratings = imdb.title_ratings_df()
title_ratings_pre = title_ratings.copy()
# Transform identifiers
title_ratings_pre['tconst'] = transform.tconst_to_float(title_ratings_pre['tconst'])
title_ratings_pre.info(memory_usage='deep')
title_ratings_pre.head()
%%timeit -n 1 -r 1
df_to_mysql(title_ratings_pre, 'title_ratings')
#save_csv(title_ratings_pre, 'title_ratings.csv')
del title_ratings
del title_ratings_pre
gc.collect()

# Inheritance.
New classes can be created from one or more existing classes through inheritance.
The original class is called the "superclass", and the class that inherits the attributes and methods of the superclass is called the "subclass".
The subclass can define attributes and methods in addition to those of the superclass, and the inherited attributes and methods can even be overridden in the subclass.
The inheritance syntax is as follows:
```
class <subclass name>(<superclass name>):
...
...
```
**Example:**
The class _Estudiante_ will be created from the class _Persona_.
```
class Persona:
'''Clase base para creación de datos personales.'''
def __init__(self):
'''Genera una clave única a partir de una estampa de tiempo y la relaciona con el atributo __clave.'''
from time import time
self.__clave = str(int(time() / 0.017))[1:]
@property
def clave(self):
'''Regresa el valor del atributo "escondido" __clave.'''
return self.__clave
@property
def nombre(self):
'''Regresa una cadena de caracteres a partir de la lista contenida en lista_nombre.'''
return " ".join(self.lista_nombre)
@nombre.setter
def nombre(self, nombre):
'''Debe ingresarse una lista o tupla con entre 2 y 3 elementos.'''
if len(nombre) < 2 or len(nombre) > 3 or type(nombre) not in (list, tuple):
raise ValueError("Formato incorrecto.")
else:
self.lista_nombre = nombre
dir(Persona)
class Estudiante(Persona):
'''Clase que hereda a Persona.'''
tira_de_materias = []
def inscripcion(self, materia):
'''Añade elementos a tira_de_materias.'''
self.tira_de_materias.append(materia)
alumno_1 = Estudiante()
alumno_1
alumno_1.inscripcion("Álgebra I")
alumno_1.tira_de_materias
help(Estudiante)
dir(Estudiante)
alumno_1.clave
alumno_1._Persona__clave
alumno_1.nombre
alumno_1.nombre=['Juan', 'Pérez']
alumno_1.nombre
alumno_1.lista_nombre
```
## The _issubclass()_ function.
To find out whether a class is a subclass of another, the _issubclass()_ function is used.
Syntax:
```
issubclass(<class_1>, <class_2>)
```
**Example:**
```
issubclass(alumno_1.__class__, Persona)
issubclass(Estudiante, Persona)
issubclass(Estudiante, object)
```
## Overriding methods with _super()_.
In some cases it is convenient to reuse part of the code of a superclass method that has been overridden by the subclass method.
The _super()_ function makes it possible to run the code of the superclass method that has been overridden.
```
class <SubClass>(<SuperClass>):
...
...
    def <method>(<parameters>):
...
...
        super().<superclass method>(<arguments>)
...
...
```
**Example:**
```
class Estudiante(Persona):
tira_de_materias = []
def __init__(self, genero='otro'):
'''Añade el atributo genero al método __init__ de la superclase.'''
if genero.casefold() in ['masculino', 'femenino', 'otro']:
self.genero = genero
else:
raise ValueError
super().__init__()
def inscripcion(self, materia):
'''Añade elementos a tira_de_materias.'''
self.tira_de_materias.append(materia)
estudiante_2 = Estudiante('masculino')
estudiante_2.genero
estudiante_2.clave
```
## Multiple inheritance.
Python allows a subclass to inherit from several superclasses. You only need to list the names of the superclasses as arguments in the class definition.
Syntax:
```
class <SubClass>(<SuperClass 1>, <SuperClass 2>, ..., <SuperClass n>)
```
The first superclass listed takes precedence over the next one when attributes conflict, and so on.
**Example:**
* In this case, the class _Ornitorrinco_ is a subclass of _Reptil_ and _Mamifero_, which in turn are subclasses of _Animal_.
* The *\_\_init\_\_()* method of _Ornitorrinco_ overrides the *\_\_init\_\_()* method of _Animal_, but the latter is recovered by means of the _super()_ function.
* Because the superclass _Reptil_ was listed before _Mamifero_ in _Ornitorrinco_, the _reproduccion()_ method of _Reptil_ is the one that overrides the rest.
```
class Animal:
'''Clase base de todos los animales.'''
def __init__(self, nombre):
self.nombre = nombre
print('Hola. Mi nombre es {}.'.format(self.nombre))
def reproduccion(self):
'''Sólo define una interfaz.'''
pass
def __del__(self):
print("El animal {} acaba de fallecer.".format(self.nombre))
class Mamifero(Animal):
'''Clase que incluye actividades de los mamíferos.'''
def reproduccion(self):
'''Es la implementación de la interfaz reproducción de la superclase.'''
print('Toma un cachorro.')
def amamanta(self):
print('Toma un vaso de leche.')
class Reptil(Animal):
'''Clase que incluye actividades de los reptiles.'''
venenoso = True
def reproduccion(self):
'''Es la implementación de la interfaz reproducción de la superclase.'''
print('Toma un huevo.')
def veneno(self):
if self.venenoso:
print("Estás envenenado.")
else:
print("No soy venenoso.")
class Ornitorrinco(Reptil, Mamifero):
'''Los ornitorrincos son animales muy raros.'''
def __init__(self, nombre):
'''Despliega un texto y ejecuta elcódigo del método __init__() de la superclase.'''
super().__init__(nombre)
print('¿Pero qué es esto?')
help(Ornitorrinco)
perry = Ornitorrinco("Agente P")
perry.veneno()
perry.reproduccion()
perry.amamanta()
del perry
issubclass(Ornitorrinco, Reptil)
issubclass(Ornitorrinco, Mamifero)
issubclass(Ornitorrinco, Animal)
```
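Once the classes above are defined, the method resolution order that decides which superclass wins can be inspected directly. This quick check is not part of the original notebook; it only assumes the _Ornitorrinco_, _Reptil_, _Mamifero_, and _Animal_ classes defined above.
```
# Print the method resolution order (MRO) of Ornitorrinco.
# Reptil appears before Mamifero, which is why its reproduccion() is the one used.
for cls in Ornitorrinco.__mro__:
    print(cls.__name__)
```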
```
# K-nearest neighbors API -- sklearn.neighbors.KNeighborsClassifier(n_neighbors=5)
from sklearn.neighbors import KNeighborsClassifier
x = [[0], [1], [10], [20]]
y = [0, 0, 1, 1]
# 实例化API
estimator = KNeighborsClassifier(n_neighbors=2)
# 使用fit方法进行训练
estimator.fit(x, y)
print(estimator.predict([[5]]))
```
## Euclidean Distance
d = sqrt((x1 - x2)^2 + (y1 - y2)^2)
## Manhattan Distance
d = |x1 - x2| + |y1 - y2|
## Chebyshev Distance
d = max(|x1 - x2|, |y1 - y2|)
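As a small illustration (not part of the original notebook), the three distances can be computed directly with NumPy:
```
import numpy as np

p1 = np.array([1.0, 2.0])
p2 = np.array([4.0, 6.0])

# Euclidean: square root of the sum of squared differences
euclidean = np.sqrt(np.sum((p1 - p2) ** 2))
# Manhattan: sum of absolute differences
manhattan = np.sum(np.abs(p1 - p2))
# Chebyshev: largest absolute difference along any axis
chebyshev = np.max(np.abs(p1 - p2))

print(euclidean, manhattan, chebyshev)  # 5.0 7.0 4.0
```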
```
from sklearn.datasets import load_iris, fetch_20newsgroups
iris = load_iris()
# print(iris)
print("*********特征值: ", iris.data)
print("*********目标值: ",iris["target"])
print("*********特折值名字: ",iris.feature_names)
print("*********目标值名字: ",iris.target_names)
print("*********描述: ",iris.DESCR)
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
iris_d = pd.DataFrame(iris["data"], columns = ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'])
iris_d["Species"] = iris.target
def plot_iris(iris, col1, col2):
sns.lmplot(x = col1, y = col2, data = iris, hue = "Species", fit_reg= False)
plt.xlabel(col1)
plt.ylabel(col2)
plt.show()
plot_iris(iris_d, 'sepal width (cm)', "petal length (cm)")
```
## Splitting the dataset
```
from sklearn.model_selection import train_test_split
# test_size 测试集的大小
# random_state是随机数种子,不同的种子会造成不同的随机采样结果。相同的种子采样结果相同
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size = 0.2, random_state = 22)
print(x_train, x_test, y_train, y_test)
```
## Feature engineering
The process of converting raw feature data, through transformation functions, into feature data that is better suited to the algorithm and model.
### Normalization
Transforms the original data so that it is mapped into the range \[0, 1\].

max and min are the maximum and minimum values of a column; mx and mi are the bounds of the target interval, defaulting to mx = 1 and mi = 0.
The maximum and minimum values can change as the data changes and are easily affected by outliers, so this method is not very robust.
```
from sklearn.preprocessing import MinMaxScaler
data = pd.read_csv("./untitled.txt")
print(data)
print()
transfer = MinMaxScaler(feature_range=(2, 3))
data = transfer.fit_transform(data[[data.columns[0], data.columns[1], data.columns[2]]])
print(data)
```
### Standardization
Transforms the original data so that each feature has mean 0 and standard deviation 1.

mean: the column average
σ: the column standard deviation
```
from sklearn.preprocessing import StandardScaler
data = pd.read_csv("./untitled.txt")
print(data)
print()
transfer = StandardScaler()
data = transfer.fit_transform(data[[data.columns[0], data.columns[1], data.columns[2]]])
print(data)
print("每一列的平均值 ", transfer.mean_)
print("每一列特征的方差 ", transfer.var_)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
# 获取数据
iris = load_iris()
# 数据基本处理
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=22)
# 特征工程
transfer = StandardScaler()
x_train = transfer.fit_transform(x_train)
x_test = transfer.transform(x_test)
# KNN
estimator = KNeighborsClassifier(n_neighbors=5)
estimator.fit(x_train, y_train)
# 模型评估
y_pre = estimator.predict(x_test)
print("预测值: ", y_pre)
print("预测值和真实值的对比:", y_pre == y_test)
# 准确率计算
score = estimator.score(x_test, y_test)
print("准确率: ",score)
```
## Cross validation
Split the available training data into training and validation sets. For example, divide the data into 4 parts and use one part as the validation set. After 4 rounds of testing, each time with a different validation set, we obtain 4 sets of model results and take their average as the final result. This is known as 4-fold cross validation.

Purpose: to make the evaluation of the model more trustworthy. A minimal scikit-learn sketch follows below.
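This sketch is not from the original notebook; it simply reuses the iris dataset and a KNN classifier to show k-fold cross validation with `cross_val_score`:
```
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
knn = KNeighborsClassifier(n_neighbors=5)

# 4-fold cross validation: one accuracy score per held-out fold
scores = cross_val_score(knn, iris.data, iris.target, cv=4)
print(scores)
print("mean accuracy:", scores.mean())
```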
## Grid Search
Many parameters, such as n_neighbors=5, have to be specified manually; these are called hyperparameters. Tuning them by hand is tedious, so we predefine several combinations of hyperparameters for the model. Each combination is evaluated with cross validation, and the best combination is then used to build the model.

#### API:
```python
sklearn.model_selection.GridSearchCV(estimator, param_grid=None, cv=None)
```
- estimator: the estimator object
- param_grid: the estimator parameters to search, as a dict, e.g. {"n_neighbors": \[1, 3, 5\]}
- cv: the number of cross-validation folds
```
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
# 获取数据
iris = load_iris()
# 数据基本处理
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=22)
# 特征工程
transfer = StandardScaler()
x_train = transfer.fit_transform(x_train)
x_test = transfer.transform(x_test)
# KNN
estimator = KNeighborsClassifier()
param_dict = {"n_neighbors": [1, 3, 5, 7, 9, 11]}
estimator = GridSearchCV(estimator, param_grid = param_dict, cv=8)
estimator.fit(x_train, y_train)
# 模型评估
y_pre = estimator.predict(x_test)
print("预测值: ", y_pre)
print()
print("预测值和真实值的对比:", y_pre == y_test)
print()
# 准确率计算
score = estimator.score(x_test, y_test)
print("准确率: ",score)
print()
print("交叉验证中验证的最好结果\n", estimator.best_score_)
print()
print("最好的参数模型\n", estimator.best_estimator_)
print()
print("准确率结果:\n", estimator.cv_results_)
```
# Chapter 2: Building Abstractions with Data
## 2.1 Introduction
Native data types have two key properties:
1. There are expressions that evaluate to values of native types, called literals.
2. There are built-in functions and operators to manipulate values of native types.
*Int, float, complex*
* int objects represent integers exactly
* float objects can represent a wide range of fractional numbers
```
type(1)
```
## 2.2 Data Abstraction
Some functions enforce the abstraction barrier. These functions are called by a higher level and implemented using a lower level of abstraction.
An abstraction barrier violation occurs whenever a part of the program that can use a higher level function instead uses a function in a lower level. For example, a function that computes the square of a rational number is best implemented in terms of mul_rational, which does not assume anything about the implementation of a rational number.
**Abstraction barrier makes program easier to maintain and debug.**
In general, we can express abstract data using a collection of selectors and constructors, together with some behavior conditions.
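A minimal sketch of this idea for rational numbers (one possible representation, not the only one): the constructor and selectors form the abstraction barrier, and `mul_rational` is written entirely in terms of them.
```
# Constructor and selectors: the only code that knows the representation
def rational(n, d):
    return [n, d]

def numer(x):
    return x[0]

def denom(x):
    return x[1]

# Higher-level code uses only the constructor and selectors,
# never the underlying list representation
def mul_rational(x, y):
    return rational(numer(x) * numer(y), denom(x) * denom(y))

def square_rational(x):
    return mul_rational(x, x)  # respects the abstraction barrier

half = rational(1, 2)
print(numer(square_rational(half)), denom(square_rational(half)))  # 1 4
```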
## 2.3 Sequences
Sequences are not instances of a particular built-in type or abstract data representation, but instead a collection of behaviors that are shared among several different types of data (a short example follows the list below).
* Length
* Element selection
* Membership
* Slicing
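A quick illustration of these four shared behaviors on a built-in list (a small sketch, not from the original notes):
```
digits = [1, 8, 2, 8]

print(len(digits))    # length: 4
print(digits[3])      # element selection: 8
print(2 in digits)    # membership: True
print(digits[1:3])    # slicing: [8, 2]
```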
The underscore `_`, used as a throwaway variable, is just another name in the environment as far as the interpreter is concerned, but it has a conventional meaning among programmers indicating that the name will not appear in any future expressions.
The general form of list comprehension is
```python
[<map expression> for <name> in <sequence expression> if <filter expression>]
```
* Map
* Filter
* Aggregation
Higher order functions (a combined example follows this list):
* apply to all (map)
* keep if (filter)
* reduce
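The same computation written as a list comprehension and with the map/filter/reduce built-ins; this is a sketch using only the standard library, not part of the original notes.
```
from functools import reduce

nums = [1, 2, 3, 4, 5]

# Map + filter expressed as a list comprehension
squares_of_odds = [x * x for x in nums if x % 2 == 1]

# The same thing with the higher-order built-ins
same_squares = list(map(lambda x: x * x, filter(lambda x: x % 2 == 1, nums)))

# Aggregation with reduce
total = reduce(lambda a, b: a + b, squares_of_odds)

print(squares_of_odds, same_squares, total)  # [1, 9, 25] [1, 9, 25] 35
```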
**String**
Membership checks on strings test for substrings, not just single characters.
Python does not have a separate character type; a single character is just a string of length one.
```
def width(area, height):
assert area % height == 0
return area // height
def perimeter(width, height):
return 2 * width + 2 * height
def minimum_perimeter(area):
heights = divisors(area)
perimeters = [perimeter(width(area, h), h) for h in heights]
return min(perimeters)
def divisors(n):
return [1] + [x for x in range(2, n) if n % x == 0]
area = 80
width(area, 5)
minimum_perimeter(area)
'您好'
import this
s = """UW-Madison
is great!"""
s
```
**Tree**
The tree is a fundamental data abstraction that imposes regularity on how hierarchical values are structured and manipulated.
**This chapter is unfinished!**
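Since this part of the notes is unfinished, here is only a minimal sketch of one common functional tree representation (a constructor plus selectors, in the same spirit as the data-abstraction section above); the names `tree`, `label`, and `branches` are illustrative, not taken from the original notes.
```
# A tree is a label together with a sequence of branches, each itself a tree
def tree(label, branches=()):
    return [label] + list(branches)

def label(t):
    return t[0]

def branches(t):
    return t[1:]

def is_leaf(t):
    return not branches(t)

t = tree(3, [tree(1), tree(2, [tree(1), tree(1)])])
print(label(t), is_leaf(t), [label(b) for b in branches(t)])  # 3 False [1, 2]
```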
## 2.4 Mutable Data
Effective programming also requires organizational principles that can guide us in formulating the overall design of a program. In particular, we need strategies to help us structure large systems to be modular, meaning that they divide naturally into coherent parts that can be separately developed and maintained. One powerful technique for creating modular programs is to incorporate data that may change state over time.
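As one small, hedged illustration of data that changes state over time (not from the original notes), a function can carry mutable local state between calls:
```
def make_counter():
    """Return a function that reports how many times it has been called."""
    count = [0]  # a mutable list cell holding the state
    def counter():
        count[0] += 1
        return count[0]
    return counter

tick = make_counter()
print(tick(), tick(), tick())  # 1 2 3
```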
```
"1234".isnumeric()
"AbC".swapcase()
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/centroid.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/centroid.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=FeatureCollection/centroid.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/centroid.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Create a geodesic polygon.
polygon = ee.Geometry.Polygon([
[[-5, 40], [65, 40], [65, 60], [-5, 60], [-5, 60]]
])
# Compute a buffer of the polygon.
buffer = polygon.buffer(0.1)
# Compute the centroid of the polygon.
centroid = polygon.centroid()
Map.addLayer(buffer, {}, 'buffer')
Map.addLayer(centroid, {'color': 'red'}, 'centroid')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
To train a model, you first need to convert your sequences and targets into the input HDF5 format. Check out my tutorials for how to do that; they're linked from the [main page](../README.md).
For this tutorial, grab a small example HDF5 that I constructed here with 10% of the training sequences and only GM12878 targets for various DNase-seq, ChIP-seq, and CAGE experiments.
```
import os, subprocess
if not os.path.isfile('data/gm12878_l262k_w128_d10.h5'):
subprocess.call('curl -o data/gm12878_l262k_w128_d10.h5 https://storage.googleapis.com/262k_binned/gm12878_l262k_w128_d10.h5', shell=True)
```
Next, you need to decide what sort of architecture to use. This grammar probably needs work; my goal was to enable hyperparameter searches to write the parameters to file so that I could run parallel training jobs to explore the hyperparameter space. I included an example set of parameters that will work well with this data in models/params_small.txt.
Then, run [basenji_train.py](https://github.com/calico/basenji/blob/master/bin/basenji_train.py) to train a model. The program will offer training feedback via stdout and write the model output files to the prefix given by the *-s* parameter.
The most relevant options here are:
| Option/Argument | Value | Note |
|:---|:---|:---|
| --rc | | Process even-numbered epochs as forward, odd-numbered as reverse complemented. Average the forward and reverse complement to assess validation accuracy. |
| -s | models/gm12878 | File path prefix to save the model. |
| params_file | models/params_small.txt | Table of parameters to setup the model architecture and optimization. |
| data_file | data/gm12878_l262k_w128_d10.h5 | HDF5 file containing the training and validation input and output datasets as generated by [basenji_hdf5_single.py](https://github.com/calico/basenji/blob/master/bin/basenji_hdf5_single.py) |
If you want to train, uncomment the following line and run it. Depending on your hardware, it may require many hours.
```
# ! basenji_train.py -s models/gm12878 models/params_small.txt data/gm12878_l262k_w128_d10.h5
```
Alternatively, you can just download a trained model.
```
if not os.path.isfile('models/gm12878_d10.tf.meta'):
subprocess.call('curl -o models/gm12878_d10.tf.index https://storage.googleapis.com/basenji_tutorial_data/model_gm12878_d10.tf.index', shell=True)
subprocess.call('curl -o models/gm12878_d10.tf.meta https://storage.googleapis.com/basenji_tutorial_data/model_gm12878_d10.tf.meta', shell=True)
subprocess.call('curl -o models/gm12878_d10.tf.data-00000-of-00001 https://storage.googleapis.com/basenji_tutorial_data/model_gm12878_d10.tf.data-00000-of-00001', shell=True)
```
models/gm12878_d10.tf (or the prefix you passed to *-s* if you trained your own model) will now specify the name of your saved model to be provided to other programs.
To further benchmark the accuracy (e.g. computing significant "peak" accuracy), use [basenji_test.py](https://github.com/calico/basenji/blob/master/bin/basenji_test.py).
The most relevant options here are:
| Option/Argument | Value | Note |
|:---|:---|:---|
| --rc | | Average the forward and reverse complement to form prediction. |
| -o | data/gm12878_test | Output directory. |
| --ai | 0,1,2 | Make accuracy scatter plots for targets 0, 1, and 2. |
| --ti | 3,4,5 | Make BigWig tracks for targets 3, 4, and 5. |
| -t | data/gm12878_l262k_w128_d10.bed | BED file describing sequence regions for BigWig track output. |
| params_file | models/params_small.txt | Table of parameters to setup the model architecture and optimization. |
| model_file | models/gm12878_d10.tf | Trained saved model prefix. |
| data_file | data/gm12878_l262k_w128_d10.h5 | HDF5 file containing the test input and output datasets as generated by [basenji_hdf5_single.py](https://github.com/calico/basenji/blob/master/bin/basenji_hdf5_single.py) |
```
! basenji_test.py --rc -o data/gm12878_test --ai 0,1,2 -t data/gm12878_l262k_w128_d10.bed --ti 3,4,5 models/params_small.txt models/gm12878_d10.tf data/gm12878_l262k_w128_d10.h5
```
*data/gm12878_test/acc.txt* is a table specifying the loss function value, R2, R2 after log2, and Spearman correlation for each dataset.
```
! cat data/gm12878_test/acc.txt
```
*data/gm12878_test/peaks.txt* is a table specifying the number of peaks called, AUROC, and AUPRC for each dataset.
```
! cat data/gm12878_test/peaks.txt
```
The directories *pr*, *roc*, *violin*, and *scatter* in *data/gm12878_test* contain plots for the targets indexed by 0, 1, and 2 as specified by the --ai option above.
E.g.
```
from IPython.display import IFrame
IFrame('data/gm12878_test/pr/t0.pdf', width=600, height=500)
```
# Supporting notebook for "LIGER_alignment_all_data": UMAP and plots
```
import os, sys
import numpy as np
import pandas as pd
import seaborn as sns
import umap
import pylab
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from copy import deepcopy
%config IPCompleter.use_jedi = False
```
## Import data
### Kim
```
# Tumor data
tumor_annot_df = pd.read_csv(
'../data/Kim/raw/GSE131907_Lung_Cancer_cell_annotation.txt',
sep='\t'
)
tumor_annot_df = tumor_annot_df[['Index', 'Sample', 'Cell_type', 'Sample']]
tumor_annot_df.columns = ['index', 'sample', 'type', 'pool']
tumor_annot_df['specimen'] = 'TUMOR'
tumor_annot_df['char_type'] = tumor_annot_df['type'] == 'Epithelial cells'
is_non_epith = ~ tumor_annot_df['char_type']
tumor_annot_df.loc[tumor_annot_df['char_type'], 'char_type'] = 'LUNG'
tumor_annot_df.loc[is_non_epith, 'char_type'] = 'OTHER'
tumor_umis = pd.read_csv(
'../data/Kim/raw/GSE131907_Lung_Cancer_raw_UMI_matrix.txt',
nrows=1,
header=None,
index_col=0,
sep='\t'
).values[0].astype(str)
tumor_annot_df = tumor_annot_df.set_index('index').loc[tumor_umis].reset_index()
```
### Kinker
```
# Cell line data
cell_line_annot_df = pd.read_csv(
'../data/Kinker/raw/Metadata.txt',
header=[0,1],
sep='\t'
)
cell_line_annot_df.columns = cell_line_annot_df.columns.droplevel(1)
cell_line_annot_df = cell_line_annot_df[['NAME', 'Cell_line', 'Cancer_type', 'Pool_ID']]
cell_line_annot_df.columns = ['index', 'sample', 'type', 'pool']
cell_line_annot_df['specimen'] = 'CELL_LINE'
cell_line_annot_df['char_type'] = cell_line_annot_df['type'] == 'Lung Cancer'
is_non_lung = ~ cell_line_annot_df['char_type']
cell_line_annot_df.loc[cell_line_annot_df['char_type'], 'char_type'] = 'LUNG'
cell_line_annot_df.loc[is_non_lung, 'char_type'] = 'OTHER'
cell_line_cpm = pd.read_csv(
'../data/Kinker/raw/CPM_data.txt',
nrows=1,
header=None,
index_col=0,
sep='\t'
).T
cell_line_cpm.columns = ['index']
cell_line_annot_df = cell_line_cpm.merge(cell_line_annot_df, on='index', how='left')
```
## Analyse integrated
```
tumor_subsample_file = './output/liger/subsampled_tumor_samples.csv'
cell_line_subsample_file = './output/liger/subsampled_cell_lines_samples.csv'
cell_line_corrected_file = './output/liger/matrix_H_cell_lines.csv'
tumor_corrected_file = './output/liger/matrix_H_tumors.csv'
scaled_corrected_file = './output/liger/matrix_H_normalized.csv'
ccle_annot_file = '../data/cell_lines/sample_info.csv'
cell_line_samples = pd.read_csv(cell_line_subsample_file)['x'].values.astype(str)
cell_line_annot_df = cell_line_annot_df.set_index('index').loc[cell_line_samples].reset_index()
tumor_subsamples = pd.read_csv(tumor_subsample_file)['x'].values.astype(str)
tumor_annot_df = tumor_annot_df.set_index('index').loc[tumor_subsamples].reset_index()
annot_df = pd.concat([cell_line_annot_df, tumor_annot_df])
annot_df = annot_df.set_index('index')
ccle_annot_df = pd.read_csv(ccle_annot_file)
combined_quantile_normalized_df = pd.read_csv(scaled_corrected_file, index_col=0)
```
## UMAP
```
metric = 'cosine'
n_neighbors = 15
min_dist = 0.9
n_epochs = 5000
umap_integrated_clf = umap.UMAP(
verbose=5,
n_neighbors=n_neighbors,
metric=metric,
min_dist=min_dist,
n_components=2,
n_epochs=n_epochs)
umap_integrated_proj = umap_integrated_clf.fit_transform(combined_quantile_normalized_df)
umap_integrated_proj_df = pd.DataFrame(
umap_integrated_proj,
index=annot_df.index,
columns=['UMAP 1', 'UMAP 2'])
umap_integrated_proj_df = umap_integrated_proj_df
umap_integrated_proj_df = umap_integrated_proj_df.merge(annot_df, how='left', left_index=True, right_index=True)
umap_integrated_proj_df = umap_integrated_proj_df.merge(ccle_annot_df,
left_on='sample',
right_on='CCLE_Name',
how='left')
umap_integrated_proj_df['is_nsclc'] = (umap_integrated_proj_df['lineage_subtype'] == 'NSCLC') | (umap_integrated_proj_df['type'] == 'Epithelial cells')
umap_integrated_proj_df['str'] = umap_integrated_proj_df['is_nsclc'].apply(lambda x: 'NSCLC' if x else 'Other')
umap_integrated_proj_df['str'] = umap_integrated_proj_df['specimen'] + ' ' + umap_integrated_proj_df['str']
umap_integrated_proj_df['plot_str'] = [
'Cell-line: NSCLC' if x == 'CELL_LINE NSCLC' else (
'Cell-line: other' if x == 'CELL_LINE Other' else (
'Tumor: NSCLC' if x == 'TUMOR NSCLC'
else 'Tumor: micro-environment'
)
)
for x in umap_integrated_proj_df['str']
]
# Save umap
umap_integrated_proj_df.to_csv('./figures/liger/UMAP_df.csv')
# All scatterplot
palette = {
'Cell-line: NSCLC': '#D62728',#'tab:red',
'Cell-line: other': (0.984313725490196, 0.6039215686274509, 0.6),#'#F5A3A5',#'#D6ABAB',#'lightcoral',
'Tumor: NSCLC': (0.12156862745098039, 0.47058823529411764, 0.7058823529411765),
'Tumor: micro-environment': (0.6509803921568628, 0.807843137254902, 0.8901960784313725)#'#B9F1F6'
}
fig = pylab.figure(figsize=(10,10))
figlegend = pylab.figure(figsize=(10,10))
ax = fig.add_subplot(111)
sns.scatterplot(
data=umap_integrated_proj_df.sort_values(['specimen', 'is_nsclc'], ascending=False).sample(frac=1),
x='UMAP 1', y='UMAP 2', hue='plot_str',
alpha=0.8, palette=palette, marker='x', ax=ax
)
ax.set_xlabel('UMAP 1', fontsize=25, color='black')
ax.set_ylabel('UMAP 2', fontsize=25, color='black')
ax.tick_params(labelsize=20, labelcolor='black')
pylab.figlegend(*ax.get_legend_handles_labels(), loc = 'center', ncol=1, fontsize=15)
figlegend.tight_layout()
figlegend.savefig('./figures/liger/UMAP_neighbors_%s_metrics_%s_mindist_%s_epochs_%s_legend.png'%(
n_neighbors, metric, min_dist, n_epochs
), dpi=300)
ax.get_legend().remove()
fig.tight_layout()
fig.savefig('./figures/liger/UMAP_neighbors_%s_metrics_%s_mindist_%s_epochs_%s.png'%(
n_neighbors, metric, min_dist, n_epochs
), dpi=300)
```
## Tumors
```
# Zoomed scatterplot
plot_df = umap_integrated_proj_df[umap_integrated_proj_df['specimen'] == 'TUMOR']
plot_df = plot_df.sample(plot_df.shape[0])
markers = ['o' if x else '+' for x in plot_df['is_nsclc']]
plt.figure(figsize=(10,10))
g = sns.FacetGrid(
plot_df,
col='plot_str',
hue='sample',
palette='colorblind',
sharex=True,
sharey=True,
size=6
)
g.map(
sns.scatterplot,
'UMAP 1',
'UMAP 2',
alpha=0.5,
marker='x'
)
g.set_xlabels('UMAP 1', fontsize=20)
g.set_ylabels('UMAP 2', fontsize=20)
g.set_titles(col_template="{col_name}", row_template="", size=25, color='black')
plt.tight_layout()
plt.savefig('./figures/liger/UMAP_neighbors_tumors_sample_%s_metrics_%s_mindist_%s_epochs_%s.png'%(
n_neighbors, metric, min_dist, n_epochs),
dpi=300)
plt.show()
del plot_df
```
## Cell-lines
```
# Zoomed scatterplot
plt.figure(figsize=(10,10))
g = sns.scatterplot(data=umap_integrated_proj_df[umap_integrated_proj_df['specimen'] == 'CELL_LINE'],
x='UMAP 1',
y='UMAP 2',
hue='sample',
palette='colorblind',
alpha=0.5,
marker='x')
plt.xlabel('UMAP 1', fontsize=20, color='black')
plt.ylabel('UMAP 2', fontsize=20, color='black')
plt.xticks(fontsize=15, color='black')
plt.yticks(fontsize=15, color='black')
plt.legend([],[], frameon=False)
plt.tight_layout()
plt.savefig('./figures/liger/UMAP_neighbors_cell_lines_sample_%s_metrics_%s_mindist_%s_epochs_%s.png'%(
    n_neighbors,
    metric,
    min_dist,
    n_epochs),
    dpi=300)
plt.show()
```
```
# Crash on purpose to get more RAM (uncomment the next line to trigger it):
import torch
# torch.tensor([10.]*10000000000)
import torch
import nlp
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained('t5-small')
# process the examples in input and target text format and the eos token at the end
def add_eos_to_examples(example):
example['input_text'] = 'question: %s context: %s </s>' % (example['question'], example['context'])
example['target_text'] = '%s </s>' % example['answers']['text'][0]
return example
# tokenize the examples
def convert_to_features(example_batch):
input_encodings = tokenizer.batch_encode_plus(example_batch['input_text'], pad_to_max_length=True, max_length=128)
target_encodings = tokenizer.batch_encode_plus(example_batch['target_text'], pad_to_max_length=True, max_length=32)
encodings = {
'input_ids': input_encodings['input_ids'],
'attention_mask': input_encodings['attention_mask'],
'target_ids': target_encodings['input_ids'],
'target_attention_mask': target_encodings['attention_mask']
}
return encodings
# load train and validation split of squad
train_dataset = nlp.load_dataset('squad', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('squad', split=nlp.Split.VALIDATION)
# map add_eos_to_examples function to the dataset example wise
train_dataset = train_dataset.map(add_eos_to_examples)
# map convert_to_features batch wise
train_dataset = train_dataset.map(convert_to_features, batched=True)
valid_dataset = valid_dataset.map(add_eos_to_examples, load_from_cache_file=False)
valid_dataset = valid_dataset.map(convert_to_features, batched=True, load_from_cache_file=False)
# set the tensor type and the columns which the dataset should return
columns = ['input_ids', 'target_ids', 'attention_mask', 'target_attention_mask']
train_dataset.set_format(type='torch', columns=columns)
valid_dataset.set_format(type='torch', columns=columns)
len(train_dataset), len(valid_dataset)
train_dataset[0]
# cache the dataset, so we can load it directly for training
torch.save(train_dataset, 'train_data.pt')
torch.save(valid_dataset, 'valid_data.pt')
```
For more details on how to use the nlp library check out this [notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb).
## Write training script
Using the `Trainer` is pretty straightforward. Here are the 4 basic steps which are needed to use trainer.
1. **Parse the arguments needed**. These are divided into 3 parts for clarity and separation (TrainingArguments, ModelArguments and DataTrainingArguments).
1. **TrainingArguments**: These are basically the training hyperparameters such as learning rate, batch size, weight decay, gradient accumulation steps etc. See all possible arguments [here](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py). These are used by the Trainer.
2. **ModelArguments**: These are the arguments for the model that you want to use such as the model_name_or_path, tokenizer_name etc. You'll need these to load the model and tokenizer.
3. **DataTrainingArguments**: These are as the name suggests arguments needed for the dataset. Such as the directory name where your files are stored etc. You'll need these to load/process the dataset.
TrainingArguments are already defined in the `TrainingArguments` class, you'll need to define `ModelArguments` and `DataTrainingArguments` classes for your task.
2. Load train and eval datasets
3. Initialize the `Trainer`
These are the minimum parameters which you'll need for initializing `Trainer`. For the full list check [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L107)
```
model: PreTrainedModel
args: TrainingArguments
train_dataset: Optional[Dataset]
eval_dataset: Optional[Dataset]
```
4. Start training with `trainer.train`
Call `trainer.train` and let the magic begin!
There are lots of things which the trainer handles for you out of the box, such as gradient accumulation, fp16 training, setting up the optimizer and scheduler, and logging with wandb. I didn't set up wandb for this experiment, but will explore it in a future experiment.
```
import dataclasses
import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Dict, List, Optional
import numpy as np
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, EvalPrediction
from transformers import (
HfArgumentParser,
DataCollator,
Trainer,
TrainingArguments,
set_seed,
)
logger = logging.getLogger(__name__)
# prepares lm_labels from target_ids, returns examples with keys as expected by the forward method
# this is necessary because the trainer directly passes this dict as arguments to the model
# so make sure the keys match the parameter names of the forward method
@dataclass
class T2TDataCollator(DataCollator):
def collate_batch(self, batch: List) -> Dict[str, torch.Tensor]:
"""
Take a list of samples from a Dataset and collate them into a batch.
Returns:
A dictionary of tensors
"""
input_ids = torch.stack([example['input_ids'] for example in batch])
lm_labels = torch.stack([example['target_ids'] for example in batch])
lm_labels[lm_labels[:, :] == 0] = -100
attention_mask = torch.stack([example['attention_mask'] for example in batch])
decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in batch])
return {
'input_ids': input_ids,
'attention_mask': attention_mask,
'lm_labels': lm_labels,
'decoder_attention_mask': decoder_attention_mask
}
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
train_file_path: Optional[str] = field(
default='train_data.pt',
metadata={"help": "Path for cached train dataset"},
)
valid_file_path: Optional[str] = field(
default='valid_data.pt',
metadata={"help": "Path for cached valid dataset"},
)
max_len: Optional[int] = field(
default=512,
metadata={"help": "Max input length for the source text"},
)
target_max_len: Optional[int] = field(
default=32,
metadata={"help": "Max input length for the target text"},
)
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
# we will load the arguments from a json file,
#make sure you save the arguments in at ./args.json
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath('args.json'))
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
training_args.local_rank,
training_args.device,
training_args.n_gpu,
bool(training_args.local_rank != -1),
training_args.fp16,
)
logger.info("Training/evaluation parameters %s", training_args)
# Set seed
set_seed(training_args.seed)
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
tokenizer = T5Tokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
model = T5ForConditionalGeneration.from_pretrained(
model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
# Get datasets
print('loading data')
train_dataset = torch.load(data_args.train_file_path)
valid_dataset = torch.load(data_args.valid_file_path)
print('loading done')
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=valid_dataset,
data_collator=T2TDataCollator(),
prediction_loss_only=True
)
# Training
if training_args.do_train:
trainer.train(
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
)
trainer.save_model()
# For convenience, we also re-save the tokenizer to the same directory,
# so that you can share your model easily on huggingface.co/models =)
if trainer.is_world_master():
tokenizer.save_pretrained(training_args.output_dir)
# Evaluation
results = {}
if training_args.do_eval and training_args.local_rank in [-1, 0]:
logger.info("*** Evaluate ***")
eval_output = trainer.evaluate()
output_eval_file = os.path.join(training_args.output_dir, "eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key in sorted(eval_output.keys()):
logger.info(" %s = %s", key, str(eval_output[key]))
writer.write("%s = %s\n" % (key, str(eval_output[key])))
results.update(eval_output)
return results
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
```
## Train
```
import json
```
Let's write the arguments in a dict and store in a json file. The above code will load this file and parse the arguments.
```
args_dict = {
"num_cores": 8,
'training_script': 'train_t5_squad.py',
"model_name_or_path": 't5-base',
"max_len": 512 ,
"target_max_len": 16,
"output_dir": './models/tpu',
"overwrite_output_dir": True,
"per_gpu_train_batch_size": 8,
"per_gpu_eval_batch_size": 8,
"gradient_accumulation_steps": 4,
"learning_rate": 1e-4,
"tpu_num_cores": 8,
"num_train_epochs": 4,
"do_train": True
}
with open('args.json', 'w') as f:
json.dump(args_dict, f)
```
Start training!
```
import torch_xla.distributed.xla_multiprocessing as xmp
xmp.spawn(_mp_fn, args=(), nprocs=8, start_method='fork')
```
## Eval
There are two gotchas here. First, the metrics functionality in the nlp package is still a work in progress, so we will use the official SQuAD evaluation script. Second, for some reason I couldn't figure out, the `.generate` method does not work on TPU, so we will need to run prediction on the CPU. Predicting the whole validation set takes almost 40 minutes.
```
## SQuAD evaluation script, modified slightly for this notebook
from __future__ import print_function
from collections import Counter
import string
import re
import argparse
import json
import sys
def normalize_answer(s):
"""Lower text and remove punctuation, articles and extra whitespace."""
def remove_articles(text):
return re.sub(r'\b(a|an|the)\b', ' ', text)
def white_space_fix(text):
return ' '.join(text.split())
def remove_punc(text):
exclude = set(string.punctuation)
return ''.join(ch for ch in text if ch not in exclude)
def lower(text):
return text.lower()
return white_space_fix(remove_articles(remove_punc(lower(s))))
def f1_score(prediction, ground_truth):
prediction_tokens = normalize_answer(prediction).split()
ground_truth_tokens = normalize_answer(ground_truth).split()
common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
num_same = sum(common.values())
if num_same == 0:
return 0
precision = 1.0 * num_same / len(prediction_tokens)
recall = 1.0 * num_same / len(ground_truth_tokens)
f1 = (2 * precision * recall) / (precision + recall)
return f1
def exact_match_score(prediction, ground_truth):
return (normalize_answer(prediction) == normalize_answer(ground_truth))
def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
scores_for_ground_truths = []
for ground_truth in ground_truths:
score = metric_fn(prediction, ground_truth)
scores_for_ground_truths.append(score)
return max(scores_for_ground_truths)
def evaluate(gold_answers, predictions):
f1 = exact_match = total = 0
for ground_truths, prediction in zip(gold_answers, predictions):
total += 1
exact_match += metric_max_over_ground_truths(
exact_match_score, prediction, ground_truths)
f1 += metric_max_over_ground_truths(
f1_score, prediction, ground_truths)
exact_match = 100.0 * exact_match / total
f1 = 100.0 * f1 / total
return {'exact_match': exact_match, 'f1': f1}
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import nlp
from transformers import T5ForConditionalGeneration, T5Tokenizer
from tqdm.auto import tqdm
model = T5ForConditionalGeneration.from_pretrained('models/tpu').to('cpu') # because it is loaded on the XLA device by default
tokenizer = T5Tokenizer.from_pretrained('models/tpu')
valid_dataset = torch.load('valid_data.pt')
dataloader = torch.utils.data.DataLoader(valid_dataset, batch_size=32)
answers = []
for batch in tqdm(dataloader):
outs = model.generate(input_ids=batch['input_ids'],
attention_mask=batch['attention_mask'],
max_length=16,
early_stopping=True)
outs = [tokenizer.decode(ids) for ids in outs]
answers.extend(outs)
predictions = []
references = []
for ref, pred in zip(valid_dataset, answers):
predictions.append(pred)
references.append(ref['answers']['text'])
predictions[0], references[0]
evaluate(references, predictions)
```
```
!python3 -m pip install git+https://github.com/zebrabug/tensortrade.git
!git clone https://github.com/zebrabug/tensortrade.git
!cd tensortrade/examples  # note: !cd does not persist between commands, so the paths below stay relative to the repo root
!pip install -r tensortrade/requirements.txt
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
!pip install ta
!pip install tensorforce
import ta
import pandas as pd
from tensortrade.feed.core import Stream, DataFeed, NameSpace
from tensortrade.oms.exchanges import Exchange, ExchangeOptions
from tensortrade.oms.services.execution.simulated import execute_order
from tensortrade.oms.instruments import USD, BTC, RUR
from tensortrade.oms.wallets import Wallet, Portfolio
import tensortrade.env.default as default
from tensortrade.agents import DQNAgent
import matplotlib.pyplot as plt
from finrl.model.models import DRLAgent
from IPython.core.debugger import set_trace
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
df = pd.read_pickle('./tensortrade/examples/data/OHLC_deals_df.pkl')
df = df.loc[df['Close'].notnull(), :]
df.rename(columns = {
'Time':'date',
'Open':'open',
'Close':'close',
'Low':'low',
'High':'high',
'Volume':'volume'
}, inplace = True)
df['h'] = df.date.dt.hour
df['d'] = df.date.dt.date
import numpy as np
N = 30
rand_dates = np.random.choice(df.date.dt.date.unique(), N)
rand_hours = np.random.choice([10,12,13,14,15,16], N)
agent = None
l = []
a = 73000
for i in range(N):
#set_trace()
tmp = df.loc[(df.d == rand_dates[i]) & (df.h >= rand_hours[i]),:]
if tmp.shape[0] < 360:
continue
dataset = ta.add_all_ta_features(tmp, 'open', 'high', 'low', 'close', 'volume', fillna=True)
dataset.drop(columns = ['h', 'd'], inplace = True)
price_history = dataset[['date', 'open', 'high', 'low', 'close', 'volume']] # chart data
dataset.drop(columns=['date', 'open', 'high', 'low', 'close', 'volume'], inplace=True)
micex = Exchange("MICEX",
service=execute_order,
options=ExchangeOptions(commission = 0.003, #0.003,
min_trade_size = 1e-6,
max_trade_size = 1e6,
min_trade_price = 1e-8,
max_trade_price= 1e8,
is_live=False) )(
Stream.source(price_history['close'].tolist(), dtype="float").rename("RUR-USD"))
portfolio = Portfolio(RUR, [
Wallet(micex, 0 * USD),
Wallet(micex, a * RUR),
])
with NameSpace("MICEX"):
streams = [Stream.source(dataset[c].tolist(), dtype="float").rename(c) for c in dataset.columns]
feed = DataFeed(streams)
feed.next()
env = default.create(
portfolio=portfolio,
action_scheme="simple",
reward_scheme="simple",
feed=feed,
renderer="screen-log", # ScreenLogger used with default settings
window_size=20
)
if agent is None:
agent = DQNAgent(env)
else:
agent = DQNAgent(env,policy_network=agent.policy_network)
agent.train(n_episodes=1, n_steps=360, render_interval=10)
perf = pd.DataFrame(portfolio.performance).transpose()[['net_worth']]
#perf.net_worth.plot()
plt.show()
l += [perf['net_worth'][perf.shape[0]-1] - perf['net_worth'][0]]
print('Results:')
print(f'Mean session profit {np.mean(l)}, standard deviation {np.std(l)}, number of sessions {len(l)}')
```
# Test localization and co-localization of two diseases, using network propagation
Test on simulated networks, where we can control how localized and co-localized node sets are
### Author: Brin Rosenthal (sbrosenthal@ucsd.edu)
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn
import networkx as nx
import pandas as pd
import random
import scipy
import mygene
mg = mygene.MyGeneInfo()
# latex rendering of text in graphs
import matplotlib as mpl
mpl.rc('text', usetex = False)
mpl.rc('font', family = 'serif')
import sys
sys.path.append('../source')
import plotting_results
import network_prop
import imp
imp.reload(plotting_results)
imp.reload(network_prop)
% matplotlib inline
```
## First, let's create a random graph
- We will start with the connected Watts Strogatz random graph, created using the NetworkX package. This graph generator will allow us to create random graphs which are guaranteed to be fully connected, and gives us control over how connected the graph is, and how structured it is. Documentation for the function can be found here https://networkx.github.io/documentation/latest/reference/generated/networkx.generators.random_graphs.connected_watts_strogatz_graph.html#networkx.generators.random_graphs.connected_watts_strogatz_graph
<img src="screenshots/connected_watts_strogatz_graph_nx_docs.png" width="600" height="600">
## Control localization
- We can control the localization of nodes by seeding the network propagation with a focal node and that focal node's neighbors. This will guarantee that the seed nodes will be very localized in the graph
- As a first example, let's create a random network, with two localized sets.
- The network contains 100 nodes, with each node first connected to its 5 nearest neighbors.
- Once these first edges are connected, each edge is randomly rewired with probability p = 0.12 (so approximately 12 percent of the edges in the graph will be rewired)
- With this rewiring probability of 0.12, most of the structure in the graph is maintained, but some randomness has been introduced
```
# Create a random connected-Watts-Strogatz graph
Gsim = nx.connected_watts_strogatz_graph(100,5,.12)
seed1 = [0]
seed1.extend(nx.neighbors(Gsim,seed1[0]))
seed2 = [10]
seed2.extend(nx.neighbors(Gsim,seed2[0]))
#seed = list(np.random.choice(Gsim.nodes(),size=6,replace=False))
pos = nx.spring_layout(Gsim)
nx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color = 'blue')
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,node_size=120,alpha=.9,node_color='orange',linewidths=3)
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,node_size=120,alpha=.9,node_color='red',linewidths=3)
nx.draw_networkx_edges(Gsim,pos=pos,alpha=.1)
plt.grid('off')
#plt.savefig('/Users/brin/Google Drive/UCSD/update_16_03/non_colocalization_illustration.png',dpi=300,bbox_inches='tight')
```
- In the network shown above, we plot our random connected Watts-Strogatz graph, highlighting two localized seed node sets, shown in red and orange, with bold outlines.
- These seed node sets were created by selecting two focal nodes, along with each focal node's neighbors, resulting in two node sets which appear highly localized to the eye.
- Since the graph is composed of nearest neighbor relations (with some randomness added on), and it was initiated with node ids ranging from 0 to 99 (these are the default node names; they can be changed using nx.relabel_nodes()), we can control the co-localization of these node sets by selecting seed nodes which are close together, for high co-localization (e.g. 0 and 5), or which are far apart, for low co-localization (e.g. 0 and 50).
- Below, we will display node sets with both high and low co-localization
- Our ability to control the co-localization in this way will become worse as the rewiring probability increases, and the structure in the graph is destroyed.
```
# highly co-localized gene sets
seed1 = [0]
seed1.extend(nx.neighbors(Gsim,seed1[0]))
seed2 = [5]
seed2.extend(nx.neighbors(Gsim,seed2[0]))
#seed = list(np.random.choice(Gsim.nodes(),size=6,replace=False))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
nx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color = 'blue')
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,node_size=120,alpha=.9,node_color='orange',linewidths=3)
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,node_size=120,alpha=.9,node_color='red',linewidths=3)
nx.draw_networkx_edges(Gsim,pos=pos,alpha=.1)
plt.title('High Co-localization',fontsize=16)
plt.grid('off')
# low co-localized gene sets
seed1 = [5]
seed1.extend(nx.neighbors(Gsim,seed1[0]))
seed2 = [30]
seed2.extend(nx.neighbors(Gsim,seed2[0]))
#seed = list(np.random.choice(Gsim.nodes(),size=6,replace=False))
plt.subplot(1,2,2)
nx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color = 'blue')
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,node_size=120,alpha=.9,node_color='orange',linewidths=3)
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,node_size=120,alpha=.9,node_color='red',linewidths=3)
nx.draw_networkx_edges(Gsim,pos=pos,alpha=.1)
plt.title('Low Co-localization',fontsize=16)
plt.grid('off')
```
# Can we quantify this concept of localization?
- Sometimes it's not easy to tell by eye if a node set is localized.
- We can use network propagation simulations to quantify this concept of localization
- Network propagation is a tool which initiates a seed node set with high 'heat', and then over the course of a number of iterations spreads this heat around to nearby nodes.
- At the end of the simulation, nodes with the highest heat are those which are most closely related to the seed nodes.
- We implemented the network propagation method described in Vanunu et al. 2010 (Vanunu, Oron, et al. "Associating genes and protein complexes with disease via network propagation." PLoS Comput Biol 6.1 (2010): e1000641.)
<img src="screenshots/vanunu_abstracg.png">
### Localization using network propagation
- We can use network propagation to evaluate how localized a seed node set is in the network.
- If the seed node set is highly localized, the 'heat' from the network propagation simulation will be bounced around between seed nodes, and less of it will dissipate to distant parts of the network.
- We will evaluate the distribution of the heat from all the nodes, using the kurtosis (the fourth standardized moment), which measures how 'tailed' the distribution is. If our distribution has high kurtosis, this indicates that much of the 'heat' has stayed localized near the seed set. If our distribution has a low kurtosis, this indicates that the 'heat' has not stayed localized, but has diffused to distant parts of the network.
<img src="screenshots/kurtosis.png">
### Random baseline for comparison
- To evaluate localization in this way, we need a baseline to compare to.
- To establish the baseline we take our original network and shuffle the edges while preserving degree (so nodes which originally had 5 neighbors will still have 5 neighbors, although those neighbors will now be spread randomly throughout the graph); a minimal sketch of such a shuffle is given after this list.
- For example, below we show the heat propagation on a non-shuffled graph, from a localized seed set (left), and the heat propagation from the same seed set, on an edge-shuffled graph (right). Each node has the same _number_ of neighbors in both graphs, but the identities of those neighbors differ.
- The total amount of heat in the graph is conserved in both cases, but the heat distributions look very different- the seed nodes retain much less of their original heat in the edge-shuffled case.
<img src="screenshots/L_edge_shuffled.png">
- We will calculate the kurtosis of the heat distribution over a large number of different edge-shuffled networks (below, 1000 repetitions) to build up the baseline distribution of kurtosis values.
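The baseline in the code below is built with `nx.configuration_model`, which returns a multigraph that is then collapsed to a simple graph (so degrees are only approximately preserved). As a hedged alternative sketch, not the notebook's network_prop code, the whole localization test can be written with an exact degree-preserving shuffle based on `nx.double_edge_swap` and the illustrative `propagate` function from above:
```
# Hedged sketch of the localization test: kurtosis of the observed heat vector
# compared against degree-preserving edge-shuffled baselines.
# Uses the illustrative propagate() sketch above, not network_prop.
import numpy as np
import networkx as nx
from scipy import stats

def edge_shuffled_copy(G, swaps_per_edge=10, rng_seed=None):
    # double_edge_swap preserves every node's degree exactly
    G_rand = G.copy()
    n_swaps = swaps_per_edge * G_rand.number_of_edges()
    nx.double_edge_swap(G_rand, nswap=n_swaps, max_tries=10 * n_swaps, seed=rng_seed)
    return G_rand

def localization_z(G, seed_nodes, n_rand=100, alpha=0.5, num_its=20):
    obs = stats.kurtosis(list(propagate(G, seed_nodes, alpha, num_its).values()))
    null = [stats.kurtosis(list(propagate(edge_shuffled_copy(G, rng_seed=r),
                                          seed_nodes, alpha, num_its).values()))
            for r in range(n_rand)]
    return (obs - np.mean(null)) / np.std(null)
```
A large positive z-score from this kind of comparison indicates that the seed set retains more heat (a heavier-tailed heat distribution) than expected by chance, i.e. that it is localized.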
```
Wprime_ring = network_prop.normalized_adj_matrix(Gsim)
Fnew_ring = network_prop.network_propagation(Gsim,Wprime_ring,seed1)
plt.figure(figsize=(18,5))
plt.subplot(1,3,1)
nx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew_ring[Gsim.nodes()],cmap='jet',
vmin=0,vmax=max(Fnew_ring))
nx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)
var_ring = plotting_results.nsf(np.var(Fnew_ring),3)
kurt_ring = plotting_results.nsf(scipy.stats.kurtosis(Fnew_ring),3)
plt.annotate('kurtosis = ' + str(kurt_ring),
xy=(.08,.1),xycoords='figure fraction')
plt.annotate('Heat: original',xy=(.08,.93),xycoords='figure fraction',fontsize=16)
plt.xticks([],[])
plt.yticks([],[])
plt.grid('off')
num_reps = 1000
var_rand_list,kurt_rand_list = [],[]
for r in range(num_reps):
G_temp = nx.configuration_model(Gsim.degree().values())
G_rand = nx.Graph() # collapse the multigraph to a simple graph (parallel edges and self-loops are dropped)
G_rand.add_edges_from(G_temp.edges())
G_rand = nx.relabel_nodes(G_rand,dict(zip(range(len(G_rand.nodes())),Gsim.degree().keys())))
Wprime_rand = network_prop.normalized_adj_matrix(G_rand)
Fnew_rand = network_prop.network_propagation(G_rand,Wprime_rand,seed1)
var_rand_list.append(np.var(Fnew_rand))
kurt_rand_list.append(scipy.stats.kurtosis(Fnew_rand))
plt.subplot(1,3,2)
nx.draw_networkx_nodes(G_rand,pos=pos,node_size=100,alpha=.5,node_color=Fnew_rand[G_rand.nodes()],cmap='jet',
vmin=0,vmax=max(Fnew_ring))
nx.draw_networkx_edges(G_rand,pos=pos,alpha=.2)
var_rand = plotting_results.nsf(np.var(Fnew_rand),3)
kurt_rand = plotting_results.nsf(scipy.stats.kurtosis(Fnew_rand),3)
plt.annotate('kurtosis = ' + str(kurt_rand),
xy=(.40,.1),xycoords='figure fraction')
plt.annotate('Heat: edge-shuffled',xy=(.40,.93),xycoords='figure fraction',fontsize=16)
plt.xticks([],[])
plt.yticks([],[])
plt.grid('off')
plt.subplot(1,3,3)
plt.boxplot(kurt_rand_list)
z_score = (kurt_ring-np.mean(kurt_rand_list))/np.std(kurt_rand_list)
z_score = plotting_results.nsf(z_score,n=2)
plt.plot(1,kurt_ring,'*',color='darkorange',markersize=16,label='original: \nz-score = '+ str(z_score))
plt.annotate('Kurtosis',xy=(.73,.93),xycoords='figure fraction',fontsize=16)
plt.legend(loc='lower left')
#plt.savefig('/Users/brin/Google Drive/UCSD/update_16_03/localization_NWS_p1_variance.png',dpi=300,bbox_inches='tight')
```
- Above (right panel) we see that when a node set is highly localized, it has a higher kurtosis value than would be expected from a non-localized gene set (the orange star represents the kurtosis of the heat distribution on the original graph, and the boxplot represents the distribution of 1000 kurtosis values on edge-shuffled networks). The orange star is significantly higher than the baseline distribution.
# Co-localization using network propagation
- We now build on our understanding of localization using network propagation to establish a measurement of how _co-localized_ two node sets are in a network.
- In the first example we discussed (above), we came up with a general understanding of co-localization, where two node sets were co-localized if they were individually localized, _and_ were nearby in network space.
- In order to measure this co-localization using network propagation, we first run two propagation simulations, one seeded with each node set, and then take the dot product (the sum of the pairwise products) of the resulting heat vectors.
- When node sets are co-localized, there will be more nodes which are hot in both heat vectors (again we compare to a distribution of heat dot-products on degree preserving edge-shuffled graphs)
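The significance test itself is carried out below by the author's `network_prop.calc_3way_colocalization` (source not shown). As a hedged sketch of the idea, reusing the illustrative `propagate` and `edge_shuffled_copy` helpers from earlier rather than the module's API:
```
# Hedged sketch of the co-localization score: dot product of the two heat
# vectors on the original graph, z-scored against edge-shuffled baselines.
import numpy as np

def colocalization_z(G, seed_a, seed_b, n_rand=100, alpha=0.5, num_its=20):
    def score(graph):
        h_a = propagate(graph, seed_a, alpha, num_its)
        h_b = propagate(graph, seed_b, alpha, num_its)
        return sum(h_a[n] * h_b[n] for n in graph.nodes())
    observed = score(G)
    null = [score(edge_shuffled_copy(G, rng_seed=r)) for r in range(n_rand)]
    return (observed - np.mean(null)) / np.std(null)
```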
```
seed1 = Gsim.nodes()[0:5] #nx.neighbors(Gsim,Gsim.nodes()[0])
seed2 = Gsim.nodes()[10:15] #nx.neighbors(Gsim,Gsim.nodes()[5]) #Gsim.nodes()[27:32]
seed3 = Gsim.nodes()[20:25]
Fnew1 = network_prop.network_propagation(Gsim,Wprime_ring,seed1,alpha=.9,num_its=20)
Fnew2 = network_prop.network_propagation(Gsim,Wprime_ring,seed2,alpha=.9,num_its=20)
F12 = Fnew1*Fnew2
F12.sort(ascending=False)
#Fnew1.sort(ascending=False)
#Fnew2.sort(ascending=False)
Fnew1_norm = Fnew1/np.linalg.norm(Fnew1)
Fnew2_norm = Fnew2/np.linalg.norm(Fnew2)
dot_12 = np.sum(F12.head(10))
print(dot_12)
plt.figure(figsize=(18,6))
plt.subplot(1,3,1)
nx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew1[Gsim.nodes()],
cmap='jet', vmin=0,vmax=max(Fnew1))
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,
node_size=100,alpha=.9,node_color=Fnew1[seed1],
cmap='jet', vmin=0,vmax=max(Fnew1),linewidths=3)
nx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)
plt.grid('off')
plt.xticks([],[])
plt.yticks([],[])
plt.annotate('Heat: nodes A ($H_A$)',xy=(.08,.93),xycoords='figure fraction',fontsize=16)
plt.subplot(1,3,2)
nx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew2[Gsim.nodes()],
cmap='jet', vmin=0,vmax=max(Fnew1))
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,
node_size=100,alpha=.9,node_color=Fnew2[seed2],
cmap='jet', vmin=0,vmax=max(Fnew1),linewidths=3)
nx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)
plt.grid('off')
plt.xticks([],[])
plt.yticks([],[])
plt.annotate('Heat: nodes B ($H_B$)',xy=(.4,.93),xycoords='figure fraction',fontsize=16)
plt.subplot(1,3,3)
nx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew1[Gsim.nodes()]*Fnew2[Gsim.nodes()],
cmap='jet', vmin=0,vmax=max(Fnew1*Fnew2))
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,
node_size=100,alpha=.9,node_color=Fnew1[seed2]*Fnew2[seed2],
cmap='jet', vmin=0,vmax=max(Fnew1*Fnew2),linewidths=3)
nx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,
node_size=100,alpha=.9,node_color=Fnew1[seed1]*Fnew2[seed1],
cmap='jet', vmin=0,vmax=max(Fnew1*Fnew2),linewidths=3)
nx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)
plt.grid('off')
plt.xticks([],[])
plt.yticks([],[])
plt.annotate('$H_A \cdot H_B$',xy=(.73,.93),xycoords='figure fraction',fontsize=16)
```
- In the figure above, we show an example of this co-localization concept.
- In the left panel we show the heat vector of the simulation seeded by node set A (warmer colors indicate hotter nodes, and bold outlines indicate seed nodes)
- In the middle panel we show the heat vector of the simulation seeded by node set B
- The right panel shows the pairwise product of the heat vectors (note color scale is different for this panel). The nodes between the two seed sets are the hottest, meaning that these are the nodes most likely related to both seed gene sets.
- If these node sets are truly co-localized, then the sum of the heat product (the dot product) will be higher than random. This is what we will test below.
```
results_dict = network_prop.calc_3way_colocalization(Gsim,seed1,seed2,seed3,num_reps=100,num_genes=5,
replace=False,savefile=False,alpha=.5,print_flag=False,)
import scipy
num_reps = results_dict['num_reps']
dot_sfari_epi=results_dict['sfari_epi']
dot_sfari_epi_rand=results_dict['sfari_epi_rand']
#U,p = scipy.stats.mannwhitneyu(dot_sfari_epi,dot_sfari_epi_rand)
t,p = scipy.stats.ttest_ind(dot_sfari_epi,dot_sfari_epi_rand)
psig_SE = plotting_results.nsf(p,n=2)
plt.figure(figsize=(7,5))
plt.errorbar(-.1,np.mean(dot_sfari_epi_rand),2*np.std(dot_sfari_epi_rand)/np.sqrt(num_reps),fmt='o',
ecolor='gray',markerfacecolor='gray',label='edge-shuffled graph')
plt.errorbar(0,np.mean(dot_sfari_epi),2*np.std(dot_sfari_epi)/np.sqrt(num_reps),fmt='bo',
label='original graph')
plt.xlim(-.8,.5)
plt.legend(loc='lower left',fontsize=12)
plt.xticks([0],['A-B \np='+str(psig_SE)],rotation=45,fontsize=12)
plt.ylabel('$H_{A} \cdot H_{B}$',fontsize=18)
```
- In the figure above, we show the heat dot product of node set A and node set B, on the original graph (blue dot), and on 100 edge-shuffled graphs (gray dot with error bars).
- Using a two-sided independent t-test, we find that the dot product on the original graph is significantly higher than on the edge-shuffled graphs, indicating that node sets A and B are indeed co-localized.
# Can we control how co-localized two node sets are?
- We can use a parameter in our random graph generator function to control the co-localization of two node sets.
- By varying the rewiring probability, we can move from a graph which is highly structured (low p-rewire: mostly nearest neighbor connections), to a graph which is mostly random (high p-rewire: mostly random connections).
- In the following section we will sweep through values of p-rewire, ranging from 0 to 1, and measure the co-localization of the same pair of seed node sets at each value.
```
H12 = []
H12_rand = []
num_G_reps=5
for p_rewire in np.linspace(0,1,5):
print('rewiring probability = ' + str(p_rewire) + '...')
H12_temp = []
H12_temp_rand = []
for r in range(num_G_reps):
Gsim = nx.connected_watts_strogatz_graph(500,5,p_rewire)
seed1 = Gsim.nodes()[0:5]
seed2 = Gsim.nodes()[5:10]
seed3 = Gsim.nodes()[20:30]
results_dict = network_prop.calc_3way_colocalization(Gsim,seed1,seed2,seed3,num_reps=20,num_genes=5,
replace=False,savefile=False,alpha=.5,print_flag=False)
H12_temp.append(np.mean(results_dict['sfari_epi']))
H12_temp_rand.append(np.mean(results_dict['sfari_epi_rand']))
H12.append(np.mean(H12_temp))
H12_rand.append(np.mean(H12_temp_rand))
plt.plot(np.linspace(0,1,5),H12,'r.-',label='original')
plt.plot(np.linspace(0,1,5),H12_rand,'.-',color='gray',label='edge-shuffled')
plt.xlabel('link rewiring probability',fontsize=14)
plt.ylabel('$H_A \cdot H_B$',fontsize=16)
plt.legend(loc='upper right',fontsize=12)
```
- We see above, as expected, that as the rewiring probability increases (on the x-axis), and the graph becomes more random, the heat dot-product (co-localization) decreases (on the y-axis), until the co-localization on the original graph matches the edge-shuffled graph.
- We expect this to be the case because once p-rewire becomes very high, the original graph becomes essentially random, so not much is changed by shuffling the edges.
# Three-way Co-localization
- Finally, we will look at how our co-localization using network propagation method applies to three seed node sets instead of two.
- This could be useful for establishing whether one node set provides a link between two other node sets. For example, one might find that two node sets are individually not co-localized, but each is co-localized with a third node set. This third node set would essentially provide the missing link between the two, as illustrated below, where node sets A and C are far apart, but B is close to A, and B is close to C.
<img src="screenshots/CL_triangle.png">
```
seed1 = Gsim.nodes()[0:5]
seed2 = Gsim.nodes()[5:10]
seed3 = Gsim.nodes()[10:15]
results_dict = network_prop.calc_3way_colocalization(Gsim,seed1,seed2,seed3,num_reps=100,num_genes=5,
replace=False,savefile=False,alpha=.5,print_flag=False,)
import scipy
num_reps = results_dict['num_reps']
dot_sfari_epi=results_dict['sfari_epi']
dot_sfari_epi_rand=results_dict['sfari_epi_rand']
#U,p = scipy.stats.mannwhitneyu(dot_sfari_epi,dot_sfari_epi_rand)
t,p = scipy.stats.ttest_ind(dot_sfari_epi,dot_sfari_epi_rand)
psig_SE = plotting_results.nsf(p,n=2)
dot_sfari_aem=results_dict['sfari_aem']
dot_aem_sfari_rand=results_dict['aem_sfari_rand']
#U,p = scipy.stats.mannwhitneyu(dot_sfari_aem,dot_aem_sfari_rand)
t,p = scipy.stats.ttest_ind(dot_sfari_aem,dot_aem_sfari_rand)
psig_SA = plotting_results.nsf(p,n=2)
dot_aem_epi=results_dict['aem_epi']
dot_aem_epi_rand=results_dict['aem_epi_rand']
#U,p = scipy.stats.mannwhitneyu(dot_aem_epi,dot_aem_epi_rand)
t,p = scipy.stats.ttest_ind(dot_aem_epi,dot_aem_epi_rand)
psig_AE = plotting_results.nsf(p,n=2)
plt.figure(figsize=(7,5))
plt.errorbar(-.1,np.mean(dot_sfari_epi_rand),2*np.std(dot_sfari_epi_rand)/np.sqrt(num_reps),fmt='o',
ecolor='gray',markerfacecolor='gray')
plt.errorbar(0,np.mean(dot_sfari_epi),2*np.std(dot_sfari_epi)/np.sqrt(num_reps),fmt='bo')
plt.errorbar(.9,np.mean(dot_aem_sfari_rand),2*np.std(dot_aem_sfari_rand)/np.sqrt(num_reps),fmt='o',
ecolor='gray',markerfacecolor='gray')
plt.errorbar(1,np.mean(dot_sfari_aem),2*np.std(dot_sfari_aem)/np.sqrt(num_reps),fmt='ro')
plt.errorbar(1.9,np.mean(dot_aem_epi_rand),2*np.std(dot_aem_epi_rand)/np.sqrt(num_reps),fmt='o',
ecolor='gray',markerfacecolor='gray')
plt.errorbar(2,np.mean(dot_aem_epi),2*np.std(dot_aem_epi)/np.sqrt(num_reps),fmt='go')
plt.xticks([0,1,2],['A-B \np='+str(psig_SE),'A-C \np='+str(psig_SA),'B-C\np='+str(psig_AE)],rotation=45,fontsize=12)
plt.xlim(-.5,2.5)
plt.ylabel('$H_{1} \cdot H_{2}$',fontsize=18)
```
- In the figure above, we show how three-way co-localization looks in practice.
- We have selected three node sets, two of which are distant (A and C), and one which is close to both (B).
- We find that indeed $H_A\cdot H_B$ and $H_B\cdot H_C$ (blue dot and green dot) are much higher on the original graph than on the edge shuffled graphs.
- However, we find that $H_A\cdot H_C$ is actually _lower_ than the background noise. This is telling us that node sets A and C are individually localized, but not co-localized at all, because more of the heat remains close to each individual seed set than would be the case if the sets were not individually localized.
# Test the API
This notebook presents some tests that check the API's behavior and performance.
# **1) Initialization**
----
```
import requests
import joblib
model = joblib.load("models/lda_model.joblib")
article2 = "An economy (from Ancient Greek οἰκονομία (oikonomía) 'management of a household, administration'; from οἶκος (oîkos) 'household', and νέμω (némō) 'distribute, allocate') is an area of the production, distribution and trade, as well as consumption of goods and services by different agents. In general, it is defined 'as a social domain that emphasize the practices, discourses, and material expressions associated with the production, use, and management of resources'.[1] A given economy is the result of a set of processes that involves its culture, values, education, technological evolution, history, social organization, political structure and legal systems, as well as its geography, natural resource endowment, and ecology, as main factors. These factors give context, content, and set the conditions and parameters in which an economy functions. In other words, the economic domain is a social domain of interrelated human practices and transactions that does not stand alone. Economic agents can be individuals, businesses, organizations, or governments. Economic transactions occur when two groups or parties agree to the value or price of the transacted good or service, commonly expressed in a certain currency. However, monetary transactions only account for a small part of the economic domain.Economic activity is spurred by production which uses natural resources, labor and capital. It has changed over time due to technology, innovation (new products, services, processes, expanding markets, diversification of markets, niche markets, increases revenue functions) such as, that which produces intellectual property and changes in industrial relations (most notably child labor being replaced in some parts of the world with universal access to education).A market-based economy is one where goods and services are produced and exchanged according to demand and supply between participants (economic agents) by barter or a medium of exchange with a credit or debit value accepted within the network, such as a unit of currency. A command-based economy is one where political agents directly control what is produced and how it is sold and distributed. A green economy is low-carbon, resource efficient and socially inclusive. In a green economy, growth in income and employment is driven by public and private investments that reduce carbon emissions and pollution, enhance energy and resource efficiency, and prevent the loss of biodiversity and ecosystem services.[2] A gig economy is one in which short-term jobs are assigned or chosen via online platforms.[3] New economy is a term that referred to the whole emerging ecosystem where new standards and practices were introduced, usually as a result of technological innovations. The global economy refers to humanity's economic system or systems overall."
url = "http://localhost:5000/api"
```
# **2) API returning a result**
----
```
input_simple= {"text": "Amnesty International (also referred to as Amnesty or AI) is an international non-governmental organization",
"l_keywords":1,
"n_keywords":3,
"n_topics":3
}
res = requests.post(url, json=input_simple)
assert res.status_code == 200
print(res.json())
```
# **3) API not returning a result**
----
```
input_simple= {"text": "Amnesty International (also referred to as Amnesty or AI) is an international non-governmental organization",
#"l_keywords":1,
#"n_keywords":3,
#"n_topics":3
}
res = requests.post(url, json=input_simple)
assert res.status_code == 200
print(res.json())
```
```
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
from scipy.cluster.vq import kmeans, vq, whiten
from numpy import random
import matplotlib.pyplot as plt
import seaborn as sns, pandas as pd
```
## FIFA 18
FIFA 18 is a football video game that was released in 2017 for PC and consoles. The dataset contains data on the 1000 top individual players in the game.
### Explore two columns: eur_wage, the wage of a player in Euros, and eur_value, their current transfer market value
```
fifa = pd.read_csv("../00_DataSets/fifa_18_sample_data.csv")
fifa.head(3)
# Scaling the values of eur_wage and eur_value using the whiten() function.
fifa['scaled_wage'] = whiten(fifa["eur_wage"])
fifa['scaled_value'] = whiten(fifa["eur_value"])
# Plot the scaled wages and transfer values
fifa.plot(x='scaled_wage', y='scaled_value', kind='scatter')
plt.show()
# Check the mean and standard deviation of the scaled data
fifa[['scaled_wage', 'scaled_value']].describe().T
```
### Exploring defenders
In the FIFA 18 dataset, various attributes of players are present. Two such attributes are:
* sliding tackle: a number between 0-99 which signifies how accurately a player is able to perform sliding tackles
* aggression: a number between 0-99 which signifies the commitment and will of a player.
These are typically high in defense-minded players.
```
# scaling the columns
fifa["scaled_sliding_tackle"] = whiten(fifa["sliding_tackle"])
fifa["scaled_aggression"] = whiten(fifa["aggression"])
# Fit the data into a hierarchical clustering algorithm
distance_matrix = linkage(fifa[['scaled_sliding_tackle', 'scaled_aggression']], 'ward')
# Assign cluster labels to each row of data
fifa['cluster_labels'] = fcluster(distance_matrix, 3, criterion='maxclust')
# Display cluster centers of each cluster
print(fifa[['scaled_sliding_tackle', 'scaled_aggression', 'cluster_labels']].groupby('cluster_labels').mean())
# Create a scatter plot through seaborn
sns.scatterplot(x='scaled_sliding_tackle', y='scaled_aggression', hue='cluster_labels', data=fifa)
plt.show()
fifa.columns[fifa.columns.str.contains("phy")]
# scaling the defence and phy columns
fifa["scaled_def"] = whiten(fifa["def"])
fifa["scaled_phy"] = whiten(fifa["phy"])
# Set up a random seed in numpy
random.seed([1000,2000])
# Fit the data into a k-means algorithm
cluster_centers,_ = kmeans(fifa[['scaled_def', 'scaled_phy']], 3)
# Assign cluster labels
fifa['cluster_labels'], _ = vq(fifa[['scaled_def', 'scaled_phy']], cluster_centers)
# Display cluster centers
print(fifa[['scaled_def', 'scaled_phy', 'cluster_labels']].groupby('cluster_labels').mean())
# Create a scatter plot through seaborn
sns.scatterplot(x="scaled_def", y="scaled_phy", hue="cluster_labels", data=fifa)
plt.show()
# seed impacts the clusters above.
```
### Basic checks on clusters
In the FIFA 18 dataset, we have concentrated on defenders in previous exercises. Let us now focus on the attacking attributes of a player. Pace (pac), Dribbling (dri) and Shooting (sho) are features that are prominent in attack-minded players. In this exercise, k-means clustering has already been applied to the data using the scaled values of these three attributes. Try some basic checks on the clusters so formed.
The data is stored in a Pandas data frame, fifa. The scaled column names are present in a list scaled_features. The cluster labels are stored in the cluster_labels column. Recall the .count() and .mean() methods in Pandas help you find the number of observations and mean of observations in a data frame.
```
# Print the size of the clusters
print(fifa.groupby('cluster_labels')['ID'].count())
# Print the mean value of wages in each cluster
print(fifa.groupby('cluster_labels')['eur_wage'].mean())
# Create the remaining scaled columns so that all six features exist
# (assumes the raw pac, sho, pas and dri columns are present, like def and phy)
fifa['scaled_pac'] = whiten(fifa['pac'])
fifa['scaled_sho'] = whiten(fifa['sho'])
fifa['scaled_pas'] = whiten(fifa['pas'])
fifa['scaled_dri'] = whiten(fifa['dri'])
scaled_features = ['scaled_pac',
'scaled_sho',
'scaled_pas',
'scaled_dri',
'scaled_def',
'scaled_phy']
# Create centroids with kmeans for 2 clusters
cluster_centers,_ = kmeans(fifa[scaled_features], 2)
# Assign cluster labels and print cluster centers
fifa['cluster_labels'], _ = vq(fifa[scaled_features], cluster_centers)
print(fifa.groupby('cluster_labels')[scaled_features].mean())
# Plot cluster centers to visualize clusters
fifa.groupby('cluster_labels')[scaled_features].mean().plot(legend=True, kind='bar')
plt.show()
# Get the name column of first 5 players in each cluster
for cluster in fifa['cluster_labels'].unique():
print(cluster, fifa[fifa['cluster_labels'] == cluster]['name'].values[:5])
```
Code that implements global, local, and MTL (multi-task learning) binary classification using an SVM-type objective.
```
import numpy as np
import glob
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from scipy import linalg as LA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
def read_prep_data():
    # The working folder should contain the features and labels for each task in separate CSV files.
    # The files should be named features_<task>.csv and labels_<task>.csv (matching the glob patterns below).
    # This function reads the data and returns two lists, one for features and one for labels.
    # The length of each list equals the number of tasks and each item is a numpy ndarray.
feature_list = glob.glob('features_*.csv')
label_list = glob.glob('labels_*.csv')
    # Note: assert() on a non-empty string never fails, so check the counts explicitly
    assert len(feature_list) == len(label_list), 'mismatched number of feature and label files'
feature_list.sort()
label_list.sort()
X = []
Y = []
for f in feature_list:
X.append(np.genfromtxt(f, delimiter=','))
for el in label_list:
Y.append(np.genfromtxt(el, delimiter=','))
return X,Y
def flatten_tasks(XX,YY):
# flattens the mt data for global modeling
X = XX[0]
Y = YY[0]
for t in range(len(XX)-1):
X = np.append(X,XX[t+1],0)
Y = np.append(Y,YY[t+1],0)
return X,Y
def mt_data_split(X, y, perc, random_state):
m = len(X);
X_train = []
y_train = []
X_test = []
y_test = []
for t in range(m):
Xt_train, Xt_test, yt_train, yt_test = train_test_split(X[t], y[t], test_size=perc)#, stratify=y[t])#, random_state=random_state)
X_train.append(Xt_train);
X_test.append(Xt_test);
y_train.append(yt_train);
y_test.append(yt_test);
return X_train, X_test, y_train, y_test
XX,YY = read_prep_data()
X,Y = flatten_tasks(XX,YY)
```
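If the per-task CSV files are not available, the following minimal sketch (purely synthetic data, not part of the original experiments) generates two tiny tasks in the `features_*.csv` / `labels_*.csv` layout expected by `read_prep_data()`; it would need to run before the data-loading cell above:
```
# Hypothetical synthetic data: two small binary tasks written in the
# features_<task>.csv / labels_<task>.csv layout expected by read_prep_data().
rng = np.random.RandomState(0)
for task in range(2):
    Xt = rng.randn(50, 5)
    yt = np.sign(Xt[:, 0] + 0.2 * rng.randn(50))  # labels in {-1, +1}
    np.savetxt('features_%d.csv' % task, Xt, delimiter=',')
    np.savetxt('labels_%d.csv' % task, yt, delimiter=',')
```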
The following cell defines the class for the MTL classifier.
```
class mtl:
def __init__(self, lam = 1.0 , max_outer_iter = 100, max_inner_iter = 10, max_local_iter = 1, random_seed = 3):
self.lam = lam # lambda: regularization parameter
self.max_outer_iter = max_outer_iter;
self.max_inner_iter = max_inner_iter;
self.max_local_iter = max_local_iter;
self.random_seed = random_seed;
def predict(self, X):
y = []
for t in range(self.num_tasks):
temp = np.sign(np.dot(X[t], self.model[:,t])).reshape((X[t].shape[0],1))
y.append(temp)
return y
def score(self, X, y):
yy = self.predict(X);
score_vec=np.zeros((self.num_tasks,1))
for t in range(self.num_tasks):
score_vec[t] = 1.0-np.sum(yy[t]!=y[t].reshape(y[t].shape[0],1))*1.0/(y[t].shape[0])
return score_vec
def fit(self, X, y):
# initialize
#np.random.seed(self.random_seed) # used for debugging
self.num_tasks = len(X);
self.d = X[0].shape[1]
d = self.d
m = self.num_tasks
self.Sigma = np.eye(m) * (1.0/m);
self.Omega = np.eye(m) * (1.0*m);
rho = 1.0
self.model = np.zeros((self.d,m))
self.alpha = []
W = self.model;
self.n = []
self.totaln = 0
for t in range(m):
temp = y[t].shape[0]
self.n.append(temp)
self.totaln = self.totaln + temp
self.alpha.append(np.zeros((temp,1)));
size_x = np.zeros((max(self.n),m));
for t in range(m):
for i in range(self.n[t]):
curr_x = X[t][i, :].reshape((1,d));
size_x[i,t] = np.dot(curr_x,curr_x.transpose())
self.train_mse_iter = np.zeros((self.max_outer_iter,1));
for h in range(self.max_outer_iter):
self.train_mse_iter[h] = 1.0 - np.mean(self.score(X,y));
## update W
for hh in range(self.max_inner_iter):
deltaW = np.zeros((self.d, m));
deltaB = np.zeros((self.d, m));
## going over tasks
for t in range(m):
alpha_t = self.alpha[t];
curr_sig = self.Sigma[t,t];
perm_t = np.random.permutation(self.n[t])
local_iters_t = round(self.max_local_iter*self.n[t])
for s in range(local_iters_t):
idx = perm_t[(s%self.n[t])];
# get current variables
alpha_old = alpha_t[idx];
curr_x = X[t][idx, :].reshape((1,d));
curr_y = y[t][idx];
size_xx = np.dot(curr_x,curr_x.transpose())
update = (curr_y * np.dot(curr_x, (W[:,t] + rho * deltaW[:, t])));
grad = self.lam * self.n[t] * (1.0 - update) / (curr_sig * rho * size_xx) + (alpha_old * curr_y);
alpha_new = curr_y * max(0.0, min(1.0, grad));
deltaW[:, t] = deltaW[:, t] + curr_sig * (alpha_new - alpha_old) * curr_x.transpose().squeeze()/ (self.lam * self.n[t]);
deltaB[:, t] = deltaB[:, t] + (alpha_new - alpha_old) * curr_x.transpose().squeeze() / self.n[t];
alpha_t[idx] = alpha_new;
# combine updates globally
for t in range(m):
for tt in range(m):
W[:, t] = W[:, t] + deltaB[:, tt] * self.Sigma[t, tt] * (1.0 / self.lam);
# update the Sigmas
epsil = 0.0000001;
A = np.dot(W.transpose(),W)
D, V = LA.eigh(A)
D = (D * (D>epsil)) + epsil*(D<=epsil);
sqm = np.sqrt(D)
s = np.sum(sqm)
sqm = sqm / s;
self.Sigma = (np.dot(np.dot(V, np.diag(sqm)), V.transpose()))
rho = max(np.sum(np.absolute(self.Sigma),0) / np.diag(self.Sigma));
return self
def get_params(self, deep=True):
return {"lam": self.lam, "max_outer_iter": self.max_outer_iter, "max_inner_iter": self.max_inner_iter,
"max_local_iter": self.max_local_iter}
def set_params(self, **parameters):
for parameter, value in parameters.items():
setattr(self, parameter, value)
return self
def data_split(self, X, y, perc, random_state):
self.num_tasks = len(X);
m = self.num_tasks
X_train = []
y_train = []
X_test = []
y_test = []
for t in range(m):
Xt_train, Xt_test, yt_train, yt_test = train_test_split(X[t], y[t], test_size=perc)#, random_state = random_state)
X_train.append(Xt_train);
X_test.append(Xt_test);
y_train.append(yt_train);
y_test.append(yt_test);
return X_train, X_test, y_train, y_test
def cross_validate(self, X, y, folds = 5, lam_range=[.1,1.0,10.0], outer_iters_range = [1,10,20], inner_iters_range = [1,2], local_iters_range = [.1,1.0]):
#print('start running cv',flush=True)
el = len(lam_range);
oi = len(outer_iters_range)
ii = len(inner_iters_range)
eli = len(local_iters_range)
score_results = np.zeros((folds, el, oi ,ii, eli));
perc = 1.0/folds;
for f in range(folds):
random_state = np.random.randint(10000)
X_train, X_test, y_train, y_test = self.data_split(X, y, perc, random_state=random_state)
for el_it in range(el):
self.lam = lam_range[el_it]
for oi_it in range(oi):
self.max_outer_iter = outer_iters_range[oi_it]
for ii_it in range(ii):
self.max_inner_iter = inner_iters_range[ii_it]
for eli_it in range(eli):
self.max_local_iter = local_iters_range[eli_it]
self.fit(X_train, y_train)
score_results[f,el_it,oi_it,ii_it, eli_it] = np.mean(self.score(X_test, y_test));
score_results_avg = np.mean(score_results, axis=0);
score_results_std = np.std(score_results, axis=0);
# finding the best score
arg_max = np.argmax(score_results_avg);
args = np.unravel_index(arg_max, (el,oi,ii,eli))
self.best_lam = lam_range[args[0]];
self.best_outer = outer_iters_range[args[1]];
self.best_inner = inner_iters_range[args[2]];
self.best_local = local_iters_range[args[3]];
self.lam = self.best_lam
self.max_outer_iter = self.best_outer
self.max_inner_iter = self.best_inner
self.max_local_iter = self.best_local
self.fit(X,y)
return self.score(X, y)
```
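As a standalone usage sketch (synthetic tasks invented here for illustration, not part of the original experiments), the classifier can be fit and scored directly:
```
# Illustrative only: fit the MTL classifier on two tiny synthetic tasks
# and print the per-task training accuracy.
rng = np.random.RandomState(1)
X_toy = [rng.randn(40, 5), rng.randn(30, 5)]
Y_toy = [np.sign(X_toy[0][:, 0] + 0.1), np.sign(X_toy[1][:, 0] + 0.1)]
clf = mtl(lam=0.1, max_outer_iter=5, max_inner_iter=1, max_local_iter=1.0)
clf.fit(X_toy, Y_toy)
print(clf.score(X_toy, Y_toy))  # one accuracy value per task
```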
The following cell implements a simple SVM (solved with SDCA) that is used for the local and global baselines.
```
class simple_svm:
def __init__(self, lam = 1.0, max_iter = 10, random_seed = 3):
self.lam = lam # lambda: regularization parameter
self.max_iter = max_iter;
self.random_seed = random_seed;
def fit(self, X,y):
# Simple function for solving an SVM with SDCA
# Used for local & global baselines
# At each point, the objective is g_i(w) = max(0,1-y_i x_i^T w).
        # The overall objective is (1/N) * sum_i g_i(w) + (lambda/2)*||w||^2
        # Inputs
        # X: input training data
        # y: training labels; values should be in {-1, +1}
# Output
# w: the learned model
## initialize
#np.random.seed(self.random_seed)
[n, d] = X.shape;
w = np.zeros((d, 1))
alpha = np.zeros((n, 1))
size_x = np.zeros((n,1))
for i in range(n):
curr_x = X[i, :].reshape((1,d));
size_x[i] = np.dot(curr_x,curr_x.transpose())
for iter in range(self.max_iter):
## update coordinates cyclically
for i in np.random.permutation(n):
# get current variables
alpha_old = alpha[i];
curr_x = X[i, :].reshape((1,d));
curr_y = y[i];
# calculate update
update = self.lam*n*(1.0 - (curr_y*np.dot(curr_x, w)))/size_x[i] + (alpha_old*curr_y)
# apply update
alpha_new = curr_y*max(0, min(1.0, update))
w = w + ((alpha_new - alpha_old) * curr_x.transpose() * (1.0 / (self.lam * n)));
alpha[i] = alpha_new;
self.model = w
self.support_vector = alpha
return self
def predict(self, X):
return np.sign(np.dot(X, self.model)).reshape((X.shape[0],1))
def score(self, X, y):
return 1.0-np.sum(self.predict(X)!=y.reshape(len(y),1))*1.0/len(y)
def get_params(self, deep=True):
return {"lam": self.lam, "max_iter": self.max_iter}
def set_params(self, **parameters):
for parameter, value in parameters.items():
setattr(self, parameter, value)
return self
def run_local(Xtrain,ytrain, Xtest, ytest, cv = 5, lam_range = [1.0], max_iter_range = [10], tol_range = [.001]):
m = len(Xtrain)
results = np.zeros((m,1));
for t in range(m):
# doing the scikitlearn svm
nt = ytrain[t].shape[0]
C_list = []
for lam in lam_range:
C_list.append(1.0/(lam*nt))
param_grid = [{'C': C_list, 'tol': tol_range}]
classifier = SVC(C=1.0, kernel='linear')
train_cv = GridSearchCV(classifier, param_grid, cv = cv)
# doing the simple svm
#classifier = simple_svm()
#param_grid = [{'lam': lam_range, 'max_iter':max_iter_range}]
#train_cv = GridSearchCV(classifier, param_grid, cv = cv)
train_cv.fit(Xtrain[t],ytrain[t]);
results[t] = train_cv.score(Xtest[t],ytest[t]);
print('local method best params for task '+ str(t)+':')
print(train_cv.best_params_)
print('------')
return results
def run_global(Xtrain,ytrain, Xtest, ytest, cv = 5, lam_range = [1.0], max_iter_range = [10], tol_range = [.001]):
m = len(Xtrain)
results = np.zeros((m,1))
Xf, yf = flatten_tasks(Xtrain,ytrain)
# doing the scikitlearn SVC
n = yf.shape[0]
C_list = []
for lam in lam_range:
C_list.append(1.0/(lam*n))
param_grid = [{'C': C_list, 'tol': tol_range}]
classifier = SVC(C=1.0, kernel='linear')
train_cv = GridSearchCV(classifier, param_grid, cv = cv)
# doing the simple svm
#classifier = simple_svm()
#param_grid = [{'lam': lam_range, 'max_iter':max_iter_range}]
#train_cv = GridSearchCV(classifier, param_grid, cv = cv)
train_cv.fit(Xf, yf);
print('global method best params:')
print(train_cv.best_params_)
print('----')
for t in range(m):
results[t] = train_cv.score(Xtest[t],ytest[t]);
return results
def run_mtl(Xtrain, ytrain, Xtest, ytest, cv = 5, lam_range=[1.0],
outer_iters_range = [1], inner_iters_range = [1],
local_iters_range = [1.0]):
mtl_clf = mtl()
mtl_clf.cross_validate(Xtrain,ytrain, folds = cv, lam_range=lam_range,
outer_iters_range = outer_iters_range,
inner_iters_range = inner_iters_range,
local_iters_range = local_iters_range)
print('mtl best parameters:')
print(['lambda:',mtl_clf.best_lam, 'outer:', mtl_clf.best_outer, 'inner:', mtl_clf.best_inner, 'local:', mtl_clf.best_local])
print('----')
return mtl_clf.score(Xtest, ytest)
```
The following cell runs the experiments by calling the methods defined above.
```
num_trials = 10
test_perc = .9
cv = 5;
m = len(XX)
np.random.seed(0)
local_results = np.zeros((m,num_trials))
global_results = np.zeros((m,num_trials))
mtl_results = np.zeros((m,num_trials))
for t in range(num_trials):
Xtrain, Xtest, Ytrain, Ytest = mt_data_split(XX, YY, test_perc, 10000*t+10000)
    # normalization and adding the bias term
for tasks in range(m):
scaler = StandardScaler(copy = False)
scaler.fit(Xtrain[tasks]);
scaler.transform(Xtrain[tasks], copy = False)
scaler.transform(Xtest[tasks], copy = False)
        # prepend a relatively large constant column (10) so that the bias term is effectively not regularized
all_ones = 10*np.ones((Xtrain[tasks].shape[0],1))
Xtrain[tasks] = np.append(all_ones,Xtrain[tasks],1);
all_ones = 10*np.ones((Xtest[tasks].shape[0],1))
Xtest[tasks] = np.append(all_ones,Xtest[tasks],1);
local_lam_range = [10, 1.0, .1, .01, .001, .0001, .00001, .000001]
local_max_iter_range = [1, 5, 10]
local_tol_range = [.1, .01, .001, .0001]
local_results[:,t] = run_local(Xtrain,Ytrain, Xtest, Ytest,
cv = cv, lam_range = local_lam_range,
max_iter_range = local_max_iter_range, tol_range = local_tol_range).squeeze()
global_lam_range = [10, 1.0, .1, .01, .001, .0001, .00001, .000001]
global_max_iter_range = [1, 5, 10]
global_tol_range = [.1, .01, .001, .0001]
global_results[:,t] = run_global(Xtrain,Ytrain, Xtest, Ytest,
cv = cv, lam_range = global_lam_range,
max_iter_range = global_max_iter_range, tol_range = global_tol_range).squeeze()
mtl_lam_range = [1.0, .1, .01, .001, .0001, .00001, .000001]
mtl_outer_iters_range = [5, 10, 50]
mtl_inner_iters_range = [1, 5, 10]
mtl_local_iters_range = [.5, 1.0, 2.0]
mtl_results[:,t] = run_mtl(Xtrain, Ytrain, Xtest, Ytest, cv = cv, lam_range = mtl_lam_range,
outer_iters_range = mtl_outer_iters_range, inner_iters_range = mtl_inner_iters_range,
local_iters_range = mtl_local_iters_range).squeeze()
local_results_avg = np.mean(local_results, axis = 1)
global_results_avg = np.mean(global_results, axis = 1)
mtl_results_avg = np.mean(mtl_results, axis = 1)
print('local score:')
print(local_results_avg)
print('global score:')
print(global_results_avg)
print('mtl score:')
print(mtl_results_avg)
```
Print the results in another form: scores averaged across tasks for each trial, followed by the overall mean and standard deviation for each method.
```
local_results_avg = np.mean(local_results, axis = 0)
global_results_avg = np.mean(global_results, axis = 0)
mtl_results_avg = np.mean(mtl_results, axis = 0)
print('local score:')
print(local_results_avg)
print('global score:')
print(global_results_avg)
print('mtl score:')
print(mtl_results_avg)
print('local mean and std:')
print(np.mean(local_results_avg))
print(np.std(local_results_avg))
print('global mean and std:')
print(np.mean(global_results_avg))
print(np.std(global_results_avg))
print('mtl mean and std:')
print(np.mean(mtl_results_avg))
print(np.std(mtl_results_avg))
```
# Solution for Exercise 02
The goal of this exercise is to evaluate the impact of using an arbitrary
integer encoding for categorical variables along with a linear
classification model such as Logistic Regression.
To do so, let's try to use `OrdinalEncoder` to preprocess the categorical
variables. This preprocessor is assembled in a pipeline with
`LogisticRegression`. The performance of the pipeline can be evaluated as
usual by cross-validation and then compared to the score obtained when using
`OneHotEncoder` or to some other baseline score.
Because `OrdinalEncoder` can raise errors if it sees an unknown category at
prediction time, we need to pre-compute the list of all possible categories
ahead of time:
```python
categories = [data[column].unique()
for column in data[categorical_columns]]
OrdinalEncoder(categories=categories)
```
```
import pandas as pd
df = pd.read_csv(
"https://www.openml.org/data/get_csv/1595261/adult-census.csv")
# Or use the local copy:
# df = pd.read_csv('../datasets/adult-census.csv')
target_name = "class"
target = df[target_name].to_numpy()
data = df.drop(columns=[target_name, "fnlwgt"])
categorical_columns = [
c for c in data.columns if data[c].dtype.kind not in ["i", "f"]]
data_categorical = data[categorical_columns]
categories = [
data[column].unique() for column in data[categorical_columns]]
categories
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.linear_model import LogisticRegression
model = make_pipeline(
OrdinalEncoder(categories=categories),
LogisticRegression(solver='lbfgs', max_iter=1000))
scores = cross_val_score(model, data_categorical, target)
print(f"The different scores obtained are: \n{scores}")
print(f"The accuracy is: {scores.mean():.3f} +- {scores.std():.3f}")
```
Using an arbitrary mapping from string labels to integers, as done here, causes the linear model to make unwarranted assumptions about the relative ordering of categories.
This prevents the model from learning anything predictive, and the cross-validated score is even lower than the baseline we obtain by ignoring the input data and always predicting the most frequent class:
```
from sklearn.dummy import DummyClassifier
scores = cross_val_score(DummyClassifier(strategy="most_frequent"),
data_categorical, target)
print(f"The different scores obtained are: \n{scores}")
print(f"The accuracy is: {scores.mean():.3f} +- {scores.std():.3f}")
```
By comparison, a categorical encoding that does not assume any ordering in the
categories can lead to a significantly higher score:
```
from sklearn.preprocessing import OneHotEncoder
model = make_pipeline(
OneHotEncoder(handle_unknown="ignore"),
LogisticRegression(solver='lbfgs', max_iter=1000))
scores = cross_val_score(model, data_categorical, target)
print(f"The different scores obtained are: \n{scores}")
print(f"The accuracy is: {scores.mean():.3f} +- {scores.std():.3f}")
```
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import pathlib
from time import time
import keras
from keras import layers
from keras import backend as K
from keras.models import Model
from keras.metrics import binary_crossentropy
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255
x_train = x_train.reshape(x_train.shape + (1,)) # (60000, 28, 28) -> (60000, 28, 28, 1)
x_test = x_test.astype('float32') / 255
x_test = x_test.reshape(x_test.shape + (1,))
batch_size = 16
latent_dim = 2
img_shape = (28, 28, 1)
epochs = 10
```
A Variational Autoencoder (VAE) maps an image to two vectors, $\mu$ and $\log(\sigma^2)$, that define a Gaussian distribution over the latent space. We sample from that distribution and then decode the sample into a reconstructed image.
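For reference, the reparameterization step and the closed-form KL term implemented in the cells below are the standard results for a diagonal-Gaussian posterior against a standard-normal prior (the code below uses a mean over latent dimensions and a small weighting constant instead of the exact $\tfrac{1}{2}$ factor):
\begin{align}
z &= \mu + \sigma \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I), \qquad \sigma = e^{\tfrac{1}{2}\log\sigma^2} \\
D_{\mathrm{KL}}\left(\mathcal{N}(\mu, \sigma^2 I)\,\|\,\mathcal{N}(0, I)\right) &= -\frac{1}{2}\sum_{j}\left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right)
\end{align}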
## Encoder
```
img_input = keras.Input(shape=img_shape)
x = layers.Conv2D(32, (3, 3), padding='same', activation='relu')(img_input)
x = layers.Conv2D(64, (3, 3), strides=(2, 2), padding='same', activation='relu')(x)
x = layers.Conv2D(64, (3, 3), padding='same', activation='relu')(x)
x = layers.Conv2D(64, (3, 3), padding='same', activation='relu')(x)
shape_before_flat = K.int_shape(x)
# Flatten to FC layer
x = layers.Flatten()(x)
x = layers.Dense(32, activation='relu')(x)
z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)
def sampling(args):
z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim)) # standard normal noise
    # z_log_var is log(sigma^2), so the standard deviation is exp(0.5 * z_log_var)
    return z_mean + K.exp(0.5 * z_log_var) * epsilon
z = layers.Lambda(sampling)([z_mean, z_log_var])
```
## Decoder
```
decoder_input = layers.Input(K.int_shape(z)[1:])
x = layers.Dense(np.prod(shape_before_flat[1:]),
activation='relu')(decoder_input)
x = layers.Reshape(shape_before_flat[1:])(x)
x = layers.Conv2DTranspose(32, (3, 3), padding='same',
activation='relu', strides=(2, 2))(x)
x = layers.Conv2D(1, (3, 3), padding='same', activation='sigmoid')(x)
decoder = Model(decoder_input, x)
z_decoded = decoder(z)
class CustomVAELayer(layers.Layer):
def vae_loss(self, x, z_decoded):
# Flatten
x = K.flatten(x)
z_decoded = K.flatten(z_decoded)
xent_loss = binary_crossentropy(x, z_decoded)
kl_loss = -5e-4 * K.mean(
1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
return K.mean(xent_loss + kl_loss)
def call(self, inputs):
x = inputs[0]
z_decoded = inputs[1]
loss = self.vae_loss(x, z_decoded)
self.add_loss(loss, inputs=inputs)
return x
y = CustomVAELayer()([img_input, z_decoded])
vae = Model(img_input, y)
vae.compile(optimizer='rmsprop')
vae.summary()
history = vae.fit(x=x_train, y=None, epochs=epochs, batch_size=batch_size, validation_data=(x_test, None))
dirpath = "saved_models/vae/mnist/"
filename = str(round(time())) + ".h5"
pathlib.Path(dirpath).mkdir(parents=True, exist_ok=True)
vae.save(dirpath + filename)
# Display a 2D manifold of the digits
n = 15 # figure with 15x15 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# Linearly spaced coordinates on the unit square were transformed
# through the inverse CDF (ppf) of the Gaussian
# to produce values of the latent variables z,
# since the prior of the latent space is Gaussian
grid_x = norm.ppf(np.linspace(0.05, 0.95, n))
grid_y = norm.ppf(np.linspace(0.05, 0.95, n))
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z_sample = np.array([[xi, yi]])
z_sample = np.tile(z_sample, batch_size).reshape(batch_size, 2)
x_decoded = decoder.predict(z_sample, batch_size=batch_size)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
plt.imshow(figure, cmap='Greys_r')
plt.show()
```
<img src="images/pyladiesmadrid_alargado.png" style="width: 300px;"/>
# Workshop 003 - PyLadies Madrid
# "Functions and Practical Challenges"
## Exercise 1
Write Python code that computes the sum of the numbers from 1 to 650.
Now compute the sum of the first 650 **even** numbers.
## Exercise 2: *FizzBuzz problem*
Write a piece of code that prints the numbers from 1 to 20, one per line. For multiples of 3 it must print "Fizz" instead of the number, and for multiples of 5, "Buzz". For numbers that are multiples of both 3 and 5, it must print "FizzBuzz".
## Exercise 3
Starting from the list of numbers
```
numeros = [1, 5, 6, 13, 56, 45, 111, 0]
```
build:
- A list containing the result of multiplying each number by 4.
- A list containing, instead of each number, as many consecutive dashes (-) as the number has digits.
## Exercise 4
Starting from the list of words
```
palabras = ["Estoy", "aprendiendo", "Python", "y", "casi", "lo", "domino", "ya"]
```
build:
- A list containing only the six-letter words.
- A list containing, instead of each word, a two-element tuple: its first and last letters.
## Exercise 5
With the two lists of numbers
```
numeros1 = [4, 10, 25]
numeros2 = [0, 3, 5]
```
compute all possible sums of pairs of numbers, taking one from each list.
## Exercise 6
Create a dictionary with the numbers from 1 to 15 as keys and the squares of those numbers as the associated values.
## Exercise 7
Starting from the string
```
frase = "La clave para aprender a programar es practicar mucho"
```
create a dictionary whose keys are the letters it contains, each associated with the number of times it appears.
## Exercise 8
Create a function that, given a list, returns the reversed list.
For example, if it receives [1, 3, 5], it must return [5, 3, 1].
## Exercise 9
Create a function that, given a list, returns that same list but with repeated elements removed.
For example, if it receives [1, 2, 1, 2, 3], it must return [1, 2, 3]. If it receives ['a', None, 'b', 2, 'a', None], it must return ['a', None, 'b', 2].
## Exercise 10
Create a function that, given a string, returns the number of distinct vowels it contains. (Keep in mind that both uppercase and lowercase letters may appear, but the same vowel in uppercase or lowercase should only be counted once.)
For example, if it receives "murcielago" it must return 5, if it receives "hola" it must return 2, and if it receives "Ahora" it must return 2.
## Exercise 11
Create a function that, given a list of integers, returns another list containing only the odd numbers.
For example, if it receives [1, 2, 3, 4, 5], it should return [1, 3, 5]. If it receives [0, 5, 2, 3], it should return [5, 3].
## Exercise 12
Write a function that sings the song "Un elefante se balanceaba" for as many iterations as indicated by the number it receives as an argument.
## Exercise 13
Write a function that accepts a number as an argument and prints "El número es par" (the number is even) or "El número es impar" (the number is odd) accordingly.
## Exercise 14
Write a function that receives the name of a text file as an argument and prints:
- The number of lines.
- The number of words.
- The number of characters.
## Exercise 15
Write a **program** (in a .py file) that receives the base and height of a triangle as command-line parameters and prints its area.
##### This notebook was created by: Mabel Delgado and María Medina
```
# This cell applies the styling to the notebook
from IPython.core.display import HTML
css_file = 'styles/pyladiesmadrid_notebook.css'
HTML(open(css_file, "r").read())
```
Linear Algebra Examples
====
This just shows the mechanics of linear algebra calculations with Python. See Lecture 5 for motivation and understanding.
```
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```
Resources
----
- [Tutorial for `scipy.linalg`](http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html)
Exact solution of linear system of equations
----
\begin{align}
x + 2y &= 3 \\
3x + 4y &= 17
\end{align}
```
A = np.array([[1,2],[3,4]])
A
b = np.array([3,17])
b
x = la.solve(A, b)
x
np.allclose(A @ x, b)
A1 = np.random.random((1000,1000))
b1 = np.random.random(1000)
```
### Using solve is faster and more stable numerically than using matrix inversion
```
%timeit la.solve(A1, b1)
%timeit la.inv(A1) @ b1
```
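As a rough illustration of the stability point (a made-up, nearly singular system with a known solution; the exact numbers will vary), one can compare the error of both approaches:
```
# Illustrative only: compare solve() vs. an explicit inverse on an
# ill-conditioned 2x2 system whose true solution is known.
eps = 1e-10
A_ill = np.array([[1.0, 1.0], [1.0, 1.0 + eps]])
x_true = np.array([1.0, 1.0])
b_ill = A_ill @ x_true
print(la.norm(la.solve(A_ill, b_ill) - x_true))
print(la.norm(la.inv(A_ill) @ b_ill - x_true))
```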
### Under the hood (Optional)
The `solve` function uses the `dgesv` Fortran routine to do the actual work. Here is an example of how to call it directly via the `lapack` interface. There is rarely any reason to use `blas` or `lapack` functions directly because the `linalg` package provides more convenient functions that also perform error checking, but you can use Python to experiment with `lapack` or `blas` before using them in a language like C or Fortran.
- [How to interpret lapack function names](http://www.netlib.org/lapack/lug/node24.html)
- [Summary of BLAS functions](http://cvxopt.org/userguide/blas.html)
- [Sumary of Lapack functions](http://cvxopt.org/userguide/lapack.html)
```
import scipy.linalg.lapack as lapack
lu, piv, x, info = lapack.dgesv(A, b)
x
```
Basic information about a matrix
----
```
C = np.array([[1, 2+3j], [3-2j, 4]])
C
C.conjugate()
```
#### Trace
```
def trace(M):
return np.diag(M).sum()
trace(C)
np.allclose(trace(C), la.eigvals(C).sum())
```
#### Determinant
```
la.det(C)
```
#### Rank
```
np.linalg.matrix_rank(C)
```
#### Norm
```
la.norm(C, None) # Frobenius (default)
la.norm(C, 2) # largest singular value
la.norm(C, -2) # smallest singular value
la.svdvals(C)
```
Least-squares solution
----
```
la.solve(A, b)
x, resid, rank, s = la.lstsq(A, b)
x
A1 = np.array([[1,2],[2,4]])
A1
b1 = np.array([3, 17])
b1
try:
la.solve(A1, b1)
except la.LinAlgError as e:
print(e)
x, resid, rank, s = la.lstsq(A1, b1)
x
A2 = np.random.random((10,3))
b2 = np.random.random(10)
try:
la.solve(A2, b2)
except ValueError as e:
print(e)
x, resid, rank, s = la.lstsq(A2, b2)
x
```
### Normal equations
One way to solve least squares equations $X\beta = y$ for $\beta$ is by using the formula $\beta = (X^TX)^{-1}X^Ty$ as you may have learnt in statistical theory classes (or can derive yourself with a bit of calculus). This is implemented below.
Note: This is not how the `la.lstsq` function solves least squares problems, since the normal-equations approach can be inefficient for large matrices.
```
def least_squares(X, y):
return la.solve(X.T @ X, X.T @ y)
least_squares(A2, b2)
```
Matrix Decompositions
----
```
A = np.array([[1,0.6],[0.6,4]])
A
```
### LU
```
p, l, u = la.lu(A)
p
l
u
np.allclose(p@l@u, A)
```
### Cholesky
```
U = la.cholesky(A)
U
np.allclose(U.T @ U, A)
# If working with complex matrices
np.allclose(U.T.conj() @ U, A)
```
### QR
```
Q, R = la.qr(A)
Q
np.allclose((la.norm(Q[:,0]), la.norm(Q[:,1])), (1,1))
np.allclose(Q@R, A)
```
### Spectral
```
u, v = la.eig(A)
u
v
np.allclose((la.norm(v[:,0]), la.norm(v[:,1])), (1,1))
np.allclose(v @ np.diag(u) @ v.T, A)
```
#### Inverting A
```
np.allclose(v @ np.diag(1/u) @ v.T, la.inv(A))
```
#### Powers of A
```
np.allclose(v @ np.diag(u**5) @ v.T, np.linalg.matrix_power(A, 5))
```
### SVD
```
U, s, V = la.svd(A)
U
np.allclose((la.norm(U[:,0]), la.norm(U[:,1])), (1,1))
s
V
np.allclose((la.norm(V[:,0]), la.norm(V[:,1])), (1,1))
np.allclose(U @ np.diag(s) @ V, A)
```
## Prerequisites
### 1. Download data
All data can be found at https://rapidsai.github.io/demos/datasets/mortgage-data
### 2. Download needed jars
* [cudf-21.08.2-cuda11.jar](https://repo1.maven.org/maven2/ai/rapids/cudf/21.08.2/)
* [rapids-4-spark_2.12-21.08.0.jar](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/21.08.0/rapids-4-spark_2.12-21.08.0.jar)
### 3. Start Spark Standalone
Before running the script, please set up Spark in standalone mode.
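For example, a standalone master and worker can typically be started with the scripts shipped with Spark (on older Spark releases the worker script is named `start-slave.sh`); the host and port below are placeholders, not values prescribed by this guide:
```
$ ${SPARK_HOME}/sbin/start-master.sh
$ ${SPARK_HOME}/sbin/start-worker.sh spark://<master-host>:7077
$ export SPARK_MASTER=spark://<master-host>:7077
```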
### 4. Add ENV
```
$ export SPARK_JARS=cudf-21.08.2-cuda11.jar,rapids-4-spark_2.12-21.08.0.jar
$ export PYSPARK_DRIVER_PYTHON=jupyter
$ export PYSPARK_DRIVER_PYTHON_OPTS=notebook
```
### 5. Start Jupyter Notebook with plugin config
```
$ pyspark --master ${SPARK_MASTER} \
--jars ${SPARK_JARS} \
--conf spark.plugins=com.nvidia.spark.SQLPlugin \
--py-files ${SPARK_PY_FILES}
```
## Import Libs
```
import time
from pyspark import broadcast
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark.sql.window import Window
```
## Create Spark Session
```
spark = (SparkSession
.builder
.appName("MortgageETL")
.getOrCreate())
```
## Function Definitions
### 1. Define the constants
* Define input file schema (Performance and Acquisition)
```
# File schema
_csv_perf_schema = StructType([
StructField('loan_id', LongType()),
StructField('monthly_reporting_period', StringType()),
StructField('servicer', StringType()),
StructField('interest_rate', DoubleType()),
StructField('current_actual_upb', DoubleType()),
StructField('loan_age', DoubleType()),
StructField('remaining_months_to_legal_maturity', DoubleType()),
StructField('adj_remaining_months_to_maturity', DoubleType()),
StructField('maturity_date', StringType()),
StructField('msa', DoubleType()),
StructField('current_loan_delinquency_status', IntegerType()),
StructField('mod_flag', StringType()),
StructField('zero_balance_code', StringType()),
StructField('zero_balance_effective_date', StringType()),
StructField('last_paid_installment_date', StringType()),
StructField('foreclosed_after', StringType()),
StructField('disposition_date', StringType()),
StructField('foreclosure_costs', DoubleType()),
StructField('prop_preservation_and_repair_costs', DoubleType()),
StructField('asset_recovery_costs', DoubleType()),
StructField('misc_holding_expenses', DoubleType()),
StructField('holding_taxes', DoubleType()),
StructField('net_sale_proceeds', DoubleType()),
StructField('credit_enhancement_proceeds', DoubleType()),
StructField('repurchase_make_whole_proceeds', StringType()),
StructField('other_foreclosure_proceeds', DoubleType()),
StructField('non_interest_bearing_upb', DoubleType()),
StructField('principal_forgiveness_upb', StringType()),
StructField('repurchase_make_whole_proceeds_flag', StringType()),
StructField('foreclosure_principal_write_off_amount', StringType()),
StructField('servicing_activity_indicator', StringType())])
_csv_acq_schema = StructType([
StructField('loan_id', LongType()),
StructField('orig_channel', StringType()),
StructField('seller_name', StringType()),
StructField('orig_interest_rate', DoubleType()),
StructField('orig_upb', IntegerType()),
StructField('orig_loan_term', IntegerType()),
StructField('orig_date', StringType()),
StructField('first_pay_date', StringType()),
StructField('orig_ltv', DoubleType()),
StructField('orig_cltv', DoubleType()),
StructField('num_borrowers', DoubleType()),
StructField('dti', DoubleType()),
StructField('borrower_credit_score', DoubleType()),
StructField('first_home_buyer', StringType()),
StructField('loan_purpose', StringType()),
StructField('property_type', StringType()),
StructField('num_units', IntegerType()),
StructField('occupancy_status', StringType()),
StructField('property_state', StringType()),
StructField('zip', IntegerType()),
StructField('mortgage_insurance_percent', DoubleType()),
StructField('product_type', StringType()),
StructField('coborrow_credit_score', DoubleType()),
StructField('mortgage_insurance_type', DoubleType()),
StructField('relocation_mortgage_indicator', StringType())])
```
* Define seller name mapping
```
# name mappings
_name_mapping = [
("WITMER FUNDING, LLC", "Witmer"),
("WELLS FARGO CREDIT RISK TRANSFER SECURITIES TRUST 2015", "Wells Fargo"),
("WELLS FARGO BANK, NA" , "Wells Fargo"),
("WELLS FARGO BANK, N.A." , "Wells Fargo"),
("WELLS FARGO BANK, NA" , "Wells Fargo"),
("USAA FEDERAL SAVINGS BANK" , "USAA"),
("UNITED SHORE FINANCIAL SERVICES, LLC D\\/B\\/A UNITED WHOLESALE MORTGAGE" , "United Seq(e"),
("U.S. BANK N.A." , "US Bank"),
("SUNTRUST MORTGAGE INC." , "Suntrust"),
("STONEGATE MORTGAGE CORPORATION" , "Stonegate Mortgage"),
("STEARNS LENDING, LLC" , "Stearns Lending"),
("STEARNS LENDING, INC." , "Stearns Lending"),
("SIERRA PACIFIC MORTGAGE COMPANY, INC." , "Sierra Pacific Mortgage"),
("REGIONS BANK" , "Regions"),
("RBC MORTGAGE COMPANY" , "RBC"),
("QUICKEN LOANS INC." , "Quicken Loans"),
("PULTE MORTGAGE, L.L.C." , "Pulte Mortgage"),
("PROVIDENT FUNDING ASSOCIATES, L.P." , "Provident Funding"),
("PROSPECT MORTGAGE, LLC" , "Prospect Mortgage"),
("PRINCIPAL RESIDENTIAL MORTGAGE CAPITAL RESOURCES, LLC" , "Principal Residential"),
("PNC BANK, N.A." , "PNC"),
("PMT CREDIT RISK TRANSFER TRUST 2015-2" , "PennyMac"),
("PHH MORTGAGE CORPORATION" , "PHH Mortgage"),
("PENNYMAC CORP." , "PennyMac"),
("PACIFIC UNION FINANCIAL, LLC" , "Other"),
("OTHER" , "Other"),
("NYCB MORTGAGE COMPANY, LLC" , "NYCB"),
("NEW YORK COMMUNITY BANK" , "NYCB"),
("NETBANK FUNDING SERVICES" , "Netbank"),
("NATIONSTAR MORTGAGE, LLC" , "Nationstar Mortgage"),
("METLIFE BANK, NA" , "Metlife"),
("LOANDEPOT.COM, LLC" , "LoanDepot.com"),
("J.P. MORGAN MADISON AVENUE SECURITIES TRUST, SERIES 2015-1" , "JP Morgan Chase"),
("J.P. MORGAN MADISON AVENUE SECURITIES TRUST, SERIES 2014-1" , "JP Morgan Chase"),
("JPMORGAN CHASE BANK, NATIONAL ASSOCIATION" , "JP Morgan Chase"),
("JPMORGAN CHASE BANK, NA" , "JP Morgan Chase"),
("JP MORGAN CHASE BANK, NA" , "JP Morgan Chase"),
("IRWIN MORTGAGE, CORPORATION" , "Irwin Mortgage"),
("IMPAC MORTGAGE CORP." , "Impac Mortgage"),
("HSBC BANK USA, NATIONAL ASSOCIATION" , "HSBC"),
("HOMEWARD RESIDENTIAL, INC." , "Homeward Mortgage"),
("HOMESTREET BANK" , "Other"),
("HOMEBRIDGE FINANCIAL SERVICES, INC." , "HomeBridge"),
("HARWOOD STREET FUNDING I, LLC" , "Harwood Mortgage"),
("GUILD MORTGAGE COMPANY" , "Guild Mortgage"),
("GMAC MORTGAGE, LLC (USAA FEDERAL SAVINGS BANK)" , "GMAC"),
("GMAC MORTGAGE, LLC" , "GMAC"),
("GMAC (USAA)" , "GMAC"),
("FREMONT BANK" , "Fremont Bank"),
("FREEDOM MORTGAGE CORP." , "Freedom Mortgage"),
("FRANKLIN AMERICAN MORTGAGE COMPANY" , "Franklin America"),
("FLEET NATIONAL BANK" , "Fleet National"),
("FLAGSTAR CAPITAL MARKETS CORPORATION" , "Flagstar Bank"),
("FLAGSTAR BANK, FSB" , "Flagstar Bank"),
("FIRST TENNESSEE BANK NATIONAL ASSOCIATION" , "Other"),
("FIFTH THIRD BANK" , "Fifth Third Bank"),
("FEDERAL HOME LOAN BANK OF CHICAGO" , "Fedral Home of Chicago"),
("FDIC, RECEIVER, INDYMAC FEDERAL BANK FSB" , "FDIC"),
("DOWNEY SAVINGS AND LOAN ASSOCIATION, F.A." , "Downey Mortgage"),
("DITECH FINANCIAL LLC" , "Ditech"),
("CITIMORTGAGE, INC." , "Citi"),
("CHICAGO MORTGAGE SOLUTIONS DBA INTERFIRST MORTGAGE COMPANY" , "Chicago Mortgage"),
("CHICAGO MORTGAGE SOLUTIONS DBA INTERBANK MORTGAGE COMPANY" , "Chicago Mortgage"),
("CHASE HOME FINANCE, LLC" , "JP Morgan Chase"),
("CHASE HOME FINANCE FRANKLIN AMERICAN MORTGAGE COMPANY" , "JP Morgan Chase"),
("CHASE HOME FINANCE (CIE 1)" , "JP Morgan Chase"),
("CHASE HOME FINANCE" , "JP Morgan Chase"),
("CASHCALL, INC." , "CashCall"),
("CAPITAL ONE, NATIONAL ASSOCIATION" , "Capital One"),
("CALIBER HOME LOANS, INC." , "Caliber Funding"),
("BISHOPS GATE RESIDENTIAL MORTGAGE TRUST" , "Bishops Gate Mortgage"),
("BANK OF AMERICA, N.A." , "Bank of America"),
("AMTRUST BANK" , "AmTrust"),
("AMERISAVE MORTGAGE CORPORATION" , "Amerisave"),
("AMERIHOME MORTGAGE COMPANY, LLC" , "AmeriHome Mortgage"),
("ALLY BANK" , "Ally Bank"),
("ACADEMY MORTGAGE CORPORATION" , "Academy Mortgage"),
("NO CASH-OUT REFINANCE" , "OTHER REFINANCE"),
("REFINANCE - NOT SPECIFIED" , "OTHER REFINANCE"),
("Other REFINANCE" , "OTHER REFINANCE")]
```
* Define categorical (string) columns and numeric columns
```
# String columns
cate_col_names = [
"orig_channel",
"first_home_buyer",
"loan_purpose",
"property_type",
"occupancy_status",
"property_state",
"product_type",
"relocation_mortgage_indicator",
"seller_name",
"mod_flag"
]
# Numeric columns
label_col_name = "delinquency_12"
numeric_col_names = [
"orig_interest_rate",
"orig_upb",
"orig_loan_term",
"orig_ltv",
"orig_cltv",
"num_borrowers",
"dti",
"borrower_credit_score",
"num_units",
"zip",
"mortgage_insurance_percent",
"current_loan_delinquency_status",
"current_actual_upb",
"interest_rate",
"loan_age",
"msa",
"non_interest_bearing_upb",
label_col_name
]
all_col_names = cate_col_names + numeric_col_names
```
### 2. Define ETL Process
Define the functions used in the ETL process
#### 2.1 Define Functions to Read Raw CSV File
* Define function to get quarter from input CSV file name
```
def _get_quarter_from_csv_file_name():
return substring_index(substring_index(input_file_name(), '.', 1), '_', -1)
```
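For example (illustration only, with a hypothetical file name): for a file such as `Performance_2016Q1.txt`, the expression above keeps everything before the first `.` and then everything after the last `_`, yielding `2016Q1`. The plain-Python equivalent of that logic is:
```
# Illustration only -- mirrors the Spark expression above on a hypothetical file name
fname = "Performance_2016Q1.txt"
quarter = fname.split('.', 1)[0].rsplit('_', 1)[-1]
print(quarter)  # 2016Q1
```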
* Define function to read Performance CSV data file
```
def read_perf_csv(spark, path):
return spark.read.format('csv') \
.option('nullValue', '') \
.option('header', 'false') \
.option('delimiter', '|') \
.schema(_csv_perf_schema) \
.load(path) \
.withColumn('quarter', _get_quarter_from_csv_file_name())
```
* Define function to read Acquisition CSV file
```
def read_acq_csv(spark, path):
return spark.read.format('csv') \
.option('nullValue', '') \
.option('header', 'false') \
.option('delimiter', '|') \
.schema(_csv_acq_schema) \
.load(path) \
.withColumn('quarter', _get_quarter_from_csv_file_name())
```
#### 2.2 Define ETL Process
* Define function to parse dates in Performance data
```
def _parse_dates(perf):
return perf \
.withColumn('monthly_reporting_period', to_date(col('monthly_reporting_period'), 'MM/dd/yyyy')) \
.withColumn('monthly_reporting_period_month', month(col('monthly_reporting_period'))) \
.withColumn('monthly_reporting_period_year', year(col('monthly_reporting_period'))) \
.withColumn('monthly_reporting_period_day', dayofmonth(col('monthly_reporting_period'))) \
.withColumn('last_paid_installment_date', to_date(col('last_paid_installment_date'), 'MM/dd/yyyy')) \
.withColumn('foreclosed_after', to_date(col('foreclosed_after'), 'MM/dd/yyyy')) \
.withColumn('disposition_date', to_date(col('disposition_date'), 'MM/dd/yyyy')) \
.withColumn('maturity_date', to_date(col('maturity_date'), 'MM/yyyy')) \
.withColumn('zero_balance_effective_date', to_date(col('zero_balance_effective_date'), 'MM/yyyy'))
```
* Define function to create delinquency data frame from Performance data
```
def _create_perf_deliquency(spark, perf):
aggDF = perf.select(
col("quarter"),
col("loan_id"),
col("current_loan_delinquency_status"),
when(col("current_loan_delinquency_status") >= 1, col("monthly_reporting_period")).alias("delinquency_30"),
when(col("current_loan_delinquency_status") >= 3, col("monthly_reporting_period")).alias("delinquency_90"),
when(col("current_loan_delinquency_status") >= 6, col("monthly_reporting_period")).alias("delinquency_180")) \
.groupBy("quarter", "loan_id") \
.agg(
max("current_loan_delinquency_status").alias("delinquency_12"),
min("delinquency_30").alias("delinquency_30"),
min("delinquency_90").alias("delinquency_90"),
min("delinquency_180").alias("delinquency_180")) \
.select(
col("quarter"),
col("loan_id"),
(col("delinquency_12") >= 1).alias("ever_30"),
(col("delinquency_12") >= 3).alias("ever_90"),
(col("delinquency_12") >= 6).alias("ever_180"),
col("delinquency_30"),
col("delinquency_90"),
col("delinquency_180"))
joinedDf = perf \
.withColumnRenamed("monthly_reporting_period", "timestamp") \
.withColumnRenamed("monthly_reporting_period_month", "timestamp_month") \
.withColumnRenamed("monthly_reporting_period_year", "timestamp_year") \
.withColumnRenamed("current_loan_delinquency_status", "delinquency_12") \
.withColumnRenamed("current_actual_upb", "upb_12") \
.select("quarter", "loan_id", "timestamp", "delinquency_12", "upb_12", "timestamp_month", "timestamp_year") \
.join(aggDF, ["loan_id", "quarter"], "left_outer")
# calculate the 12 month delinquency and upb values
months = 12
monthArray = [lit(x) for x in range(0, 12)]
    # exploding a small amount of data is actually slightly more efficient than a cross join
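    # Note: 24000 (= 2000 * 12) roughly anchors the (year * 12 + month) counter at the year 2000,
    # and month_y shifts the 12-month aggregation window by 0..11 months.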
testDf = joinedDf \
.withColumn("month_y", explode(array(monthArray))) \
.select(
col("quarter"),
floor(((col("timestamp_year") * 12 + col("timestamp_month")) - 24000) / months).alias("josh_mody"),
floor(((col("timestamp_year") * 12 + col("timestamp_month")) - 24000 - col("month_y")) / months).alias("josh_mody_n"),
col("ever_30"),
col("ever_90"),
col("ever_180"),
col("delinquency_30"),
col("delinquency_90"),
col("delinquency_180"),
col("loan_id"),
col("month_y"),
col("delinquency_12"),
col("upb_12")) \
.groupBy("quarter", "loan_id", "josh_mody_n", "ever_30", "ever_90", "ever_180", "delinquency_30", "delinquency_90", "delinquency_180", "month_y") \
.agg(max("delinquency_12").alias("delinquency_12"), min("upb_12").alias("upb_12")) \
.withColumn("timestamp_year", floor((lit(24000) + (col("josh_mody_n") * lit(months)) + (col("month_y") - 1)) / lit(12))) \
.selectExpr('*', 'pmod(24000 + (josh_mody_n * {}) + month_y, 12) as timestamp_month_tmp'.format(months)) \
.withColumn("timestamp_month", when(col("timestamp_month_tmp") == lit(0), lit(12)).otherwise(col("timestamp_month_tmp"))) \
.withColumn("delinquency_12", ((col("delinquency_12") > 3).cast("int") + (col("upb_12") == 0).cast("int")).alias("delinquency_12")) \
.drop("timestamp_month_tmp", "josh_mody_n", "month_y")
return perf.withColumnRenamed("monthly_reporting_period_month", "timestamp_month") \
.withColumnRenamed("monthly_reporting_period_year", "timestamp_year") \
.join(testDf, ["quarter", "loan_id", "timestamp_year", "timestamp_month"], "left") \
.drop("timestamp_year", "timestamp_month")
```
* Define function to create acquisition data frame from Acquisition data
```
def _create_acquisition(spark, acq):
nameMapping = spark.createDataFrame(_name_mapping, ["from_seller_name", "to_seller_name"])
return acq.join(nameMapping, col("seller_name") == col("from_seller_name"), "left") \
.drop("from_seller_name") \
.withColumn("old_name", col("seller_name")) \
.withColumn("seller_name", coalesce(col("to_seller_name"), col("seller_name"))) \
.drop("to_seller_name") \
.withColumn("orig_date", to_date(col("orig_date"), "MM/yyyy")) \
.withColumn("first_pay_date", to_date(col("first_pay_date"), "MM/yyyy"))
```
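As a small illustrative check (hypothetical rows; assumes an active `spark` session), the mapping normalizes known seller aliases while `coalesce` keeps unmatched names unchanged:
```
# Hypothetical miniature Acquisition table -- only the columns the function touches
demo_acq = spark.createDataFrame(
    [(1, "WELLS FARGO BANK, N.A.", "03/2016", "05/2016"),
     (2, "SOME UNKNOWN LENDER", "03/2016", "05/2016")],
    ["loan_id", "seller_name", "orig_date", "first_pay_date"])
_create_acquisition(spark, demo_acq).select("loan_id", "seller_name", "old_name").show(truncate=False)
# Expected seller_name values: "Wells Fargo" and "SOME UNKNOWN LENDER"
```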
#### 2.3 Define Casting Process
This part casts string columns to numeric values.
Example:
```
col_1
"a"
"b"
"c"
"a"
# After String ====> Numeric (ids assigned by descending frequency, starting from 1)
col_1
1
2
3
1
```
* Define function to get column dictionary
Example:
```
col1 = [row(data="a", id=1), row(data="b", id=2)]
```
```
def _gen_dictionary(etl_df, col_names):
cnt_table = etl_df.select(posexplode(array([col(i) for i in col_names])))\
.withColumnRenamed("pos", "column_id")\
.withColumnRenamed("col", "data")\
.filter("data is not null")\
.groupBy("column_id", "data")\
.count()
windowed = Window.partitionBy("column_id").orderBy(desc("count"))
return cnt_table.withColumn("id", row_number().over(windowed)).drop("count")
```
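A minimal usage sketch (hypothetical data; assumes an active `spark` session) showing the shape of the dictionary this produces:
```
demo_df = spark.createDataFrame(
    [("a", "x"), ("b", "x"), ("a", "y")],
    ["orig_channel", "first_home_buyer"])
_gen_dictionary(demo_df, ["orig_channel", "first_home_buyer"]).show()
# Output columns: column_id, data, id -- ids are assigned per column by descending
# frequency (row_number), so the most frequent value in a column gets id 1.
```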
* Define function to convert string columns to numeric
```
def _cast_string_columns_to_numeric(spark, input_df):
cached_dict_df = _gen_dictionary(input_df, cate_col_names).cache()
output_df = input_df
# Generate the final table with all columns being numeric.
for col_pos, col_name in enumerate(cate_col_names):
col_dict_df = cached_dict_df.filter(col("column_id") == col_pos)\
.drop("column_id")\
.withColumnRenamed("data", col_name)
output_df = output_df.join(broadcast(col_dict_df), col_name, "left")\
.drop(col_name)\
.withColumnRenamed("id", col_name)
return output_df
```
#### 2.4 Define Main Function
In this function:
1. Parse dates in the Performance data by calling _parse_dates (parsed_perf)
2. Create the delinquency dataframe (perf_deliqency) from the Performance data by calling _create_perf_deliquency
3. Create the cleaned acquisition dataframe (cleaned_acq) from the Acquisition data by calling _create_acquisition
4. Join the delinquency dataframe (perf_deliqency) with the cleaned acquisition dataframe (cleaned_acq) to get clean_df
5. Cast string columns to numeric in clean_df by calling _cast_string_columns_to_numeric to get casted_clean_df
6. Return casted_clean_df as the final result
```
def run_mortgage(spark, perf, acq):
parsed_perf = _parse_dates(perf)
perf_deliqency = _create_perf_deliquency(spark, parsed_perf)
cleaned_acq = _create_acquisition(spark, acq)
clean_df = perf_deliqency.join(cleaned_acq, ["loan_id", "quarter"], "inner").drop("quarter")
casted_clean_df = _cast_string_columns_to_numeric(spark, clean_df)\
.select(all_col_names)\
.withColumn(label_col_name, when(col(label_col_name) > 0, 1).otherwise(0))\
.fillna(float(0))
return casted_clean_df
```
## Script Settings
### 1. File Path Settings
* Define input file path
```
orig_perf_path='/home/mortgage/data/Performance/'
orig_acq_path='/home/mortgage/data/Acquisition/'
```
* Define temporary folder path
```
tmp_perf_path='/home/mortgage/data/perf/'
tmp_acq_path='/home/mortgage/data/acq/'
```
* Define output folder path
```
output_path='/home/mortgage/data/output/'
```
### 2. Common Spark Settings
```
spark.conf.set('spark.rapids.sql.explain', 'ALL')
spark.conf.set('spark.rapids.sql.incompatibleOps.enabled', 'true')
spark.conf.set('spark.rapids.sql.batchSizeBytes', '512M')
spark.conf.set('spark.rapids.sql.reader.batchSizeBytes', '768M')
```
## Run Part
### Read Raw File and Transcode Data
#### 1. Add additional Spark settings
```
# we want a few big files instead of lots of small files
spark.conf.set('spark.sql.files.maxPartitionBytes', '200G')
```
#### 2. Read Raw File and Transcode to Parquet
```
start = time.time()
# read data and transcode to parquet
acq = read_acq_csv(spark, orig_acq_path)
acq.repartition(12).write.parquet(tmp_acq_path, mode='overwrite')
perf = read_perf_csv(spark, orig_perf_path)
perf.coalesce(96).write.parquet(tmp_perf_path, mode='overwrite')
end = time.time()
print(end - start)
```
### Run ETL
#### 1. Add additional Spark settings
```
# GPU run, set to true
spark.conf.set('spark.rapids.sql.enabled', 'true')
# CPU run, set to false
# spark.conf.set('spark.rapids.sql.enabled', 'false')
spark.conf.set('spark.sql.files.maxPartitionBytes', '1G')
spark.conf.set('spark.sql.shuffle.partitions', '192')
```
#### 2. Read Parquet File, Run ETL Process, and Save the Result
```
start = time.time()
# read parquet
perf = spark.read.parquet(tmp_perf_path)
acq = spark.read.parquet(tmp_acq_path)
# run main function to process data
out = run_mortgage(spark, perf, acq)
# save processed data
out.write.parquet(output_path, mode='overwrite')
# print explain and time
print(out.explain())
end = time.time()
print(end - start)
spark.stop()
```
# Music generation notebook
In this notebook we provide minimal replication guidelines for the music generation part of our work.
The main code to replicate all of our plots and results sits in the highly unstructured music_generation.ipynb.
First, we need to import all relevant packages (the virtual environment needed for all of this to work can be provided upon request):
```
import os
# os.environ["CUDA_VISIBLE_DEVICES"]="-1" # uncomment this if you don't want to use the GPU
import pretty_midi
import midi
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Dense, Input, Lambda, Concatenate, LSTM
from keras.optimizers import Adam
from keras import backend as K
import copy
import tensorflow as tf
import csv
import sys
from sys import stdout
import random
import librosa.display
import pypianoroll
import scipy.stats as st
from os import path
import pickle
################################### Our code
from loading import *
from models import *
from data import *
from midi_to_statematrix import *
%matplotlib inline
print("TensorFlow version: {}".format(tf.__version__))
print("GPU is available: {}".format(tf.test.is_gpu_available()))
```
The above should print:
TensorFlow version: 2.0.0 \
GPU is available: True
## Training model
To train our best model (bi-axial LSTM for both encoder and decoder), run the following in a bash console (training takes around 50 hours):
```{bash}
python3 train_biaxial_long.py -lr 0.001 -bs 64
```
For now the encoder output size is fixed to 32, which can be easily changed in the train_biaxial_long.py script.
## Generating music
Here we give our generation process in one Jupyter notebook cell. The model moves through the target patch one timestep at a time and predicts the entire sequence. We expose the high-level parameters of the generation process, most of which control how the probabilities output by the model are translated into actually played notes:
```
##################### GENERATION PARAMETERS #####################
my_model_name = "biaxial_pn_encoder_concat_deeplstm_cont.h5" # name of model in .h5 format
foldername = 'experiment_switch_order3' # folder where to save the output of generation
# data
what_type = 'test' # can be train or test
train_tms = 40 # length of the context in timesteps
test_tms = 20 # length of the target in timesteps
batch_size = 64 # size of the batch (we will generate batch_size patches)
songs_per_batch = 16 # how many different piano scores we want per batch (must divide batch_size)
seed = 1212 # random seed for replication (applies only to choosing scores and patch start times)
# turn_probabilities_to_notes params
how = 'random' # look into function for more details
normalize = False # whether to normalize the probabilities outputted from the model
remap_to_max = True # whether to divide probabilities by the max probability in that timestep
turn_on_notes = 8 # how many notes we want to turn on maximally at any timestep (humans have 10 fingers)
divide_prob = 2 # value by which we divide probabilities
articulation_prob = 0.0018 # if the probability of striking a note is higher than this and the note was played in the last timestep, then we articulate it
remap_prob = 0.35 # if remap_to_max is True, this is the value we multiply the resulting probabilities by
```
Running the cell below will generate the patches and save them to `foldername`:
```
def load_model(file, curr_batch, modelname, *modelparams):
new_model = modelname(curr_batch, *modelparams)
new_model.load_weights(file)
return new_model
def turn_probabilities_to_notes(prediction,
turn_on,
how = 'random',
normalize = True,
threshold = 0.1,
divide_prob = 2,
remap_to_max = True):
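    # Note on the logic below: for every batch element, only the top `turn_on[batch]` probabilities
    # are kept (the rest are zeroed; elements with `turn_on` <= 1 are silenced entirely). Then:
    #   'random'             - sample each surviving note as an independent Bernoulli draw.
    #   'random_thresholded' - boost probabilities >= `threshold`, zero those below, then sample.
    #   'thresholded'        - deterministically play every note with probability >= `threshold`.
    # `remap_to_max` rescales each element's probabilities by its maximum, times the global `remap_prob`.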
for batch in range(prediction.shape[0]):
if turn_on[batch] <= 1:
prediction[batch, :] = 0
continue
turn_off = prediction[batch, :].argsort()[:-int(turn_on[batch])]
prediction[batch, :][turn_off] = 0
        if normalize:
            prediction[batch, :] = st.norm.cdf(
                (prediction[batch, :] - np.mean(prediction[batch, :][prediction[batch, :] > 0])) /
                np.sqrt(np.var(prediction[batch, :][prediction[batch, :] > 0]))) / divide_prob
            prediction[batch, :][turn_off] = 0
if remap_to_max:
prediction[batch, :] /= prediction[batch, :].max()
prediction[batch, :] *= remap_prob
if how == 'random':
notes = np.random.binomial(1, p=prediction)
elif how == 'random_thresholded':
prediction[prediction >= threshold] += 0.5
prediction[prediction > 1] = 1
prediction[prediction < threshold] = 0
notes = np.random.binomial(1, p=prediction)
elif how == 'thresholded':
prediction[prediction >= threshold] = 1
prediction[prediction < threshold] = 0
notes = prediction
return notes
############################################# LOAD DATA ####################################################
file = 'maestro-v2.0.0/maestro-v2.0.0.csv'
# Get a batch we want to predict
data_test = DataObject(file, what_type = what_type,
train_tms = train_tms, test_tms = test_tms,
fs = 20, window_size = 15,
seed = seed)
# Create a batch class which we will iterate over
test_batch = Batch(data_test, batch_size = batch_size, songs_per_batch = songs_per_batch)
############################################# START GENERATING #############################################
curr_test_batch = copy.deepcopy(test_batch.data)
# Uncomment below line if you want to switch the ordering of the contexts
#curr_test_batch.context[[0,1],:,:,:] = curr_test_batch.context[[1,0],:,:,:]
final_output = np.zeros((test_batch.batch_size,
19+data_test.test_tms+19,
78))
# The next is not necessary but makes the samples a bit better
# Populate from the front
final_output[:,0:19,:] = curr_test_batch.context[0,:,-19:,:]
final_output[:,20,:] = DataObject.drop_articulation3d(curr_test_batch.target[:,0,:,:])
# Populate from the back
final_output[:,-19:,:] = curr_test_batch.context[1,:,0:19,:]
curr_test_batch.target[:,0:20,:,0] = final_output[:,0:20,:]
curr_test_batch.target[:,0:20,:,1] = np.zeros(final_output[:,0:20,:].shape)
curr_test_batch.target_split = 0
curr_test_batch.window_size = 20
curr_test_batch.featurize(use_biaxial = True)
# If you have trained a different model from the models.py file, change the last argument of the next function to that model's name
model = load_model(my_model_name, curr_test_batch, biaxial_pn_encoder_concat_deeplstm)
def take_prediction(t):
if t<20:
return -t
else:
return -20
def take_actual(t):
if t <= test_tms:
return np.arange(19, 19+t, 1)
else:
return np.arange(t-test_tms+19, t-19, 1)
# Start looping over the target patch
for timestep in range(1,test_tms):
stdout.write('\rtimestep {}/{}'.format(timestep, test_tms))
stdout.flush()
prediction = model.predict([tf.convert_to_tensor(curr_test_batch.context, dtype = tf.float32),
tf.convert_to_tensor(curr_test_batch.target_train, dtype = tf.float32)],
steps = 1)[:,take_prediction(timestep):,:]
notes = np.zeros(prediction.shape)
turn_on = [turn_on_notes]*batch_size
    # Loop over timesteps in the predicted window to determine which notes to play
for t in range(notes.shape[1]):
articulation = np.multiply(prediction[:,t,:], final_output[:,20+t,:])
articulation[articulation >= articulation_prob] = 1
articulation[articulation < articulation_prob] = 0
articulated_notes = np.sum(articulation, axis = -1)
play_notes = turn_probabilities_to_notes(prediction[:,t,:],
turn_on = turn_on - articulated_notes,
how = 'random',
normalize = normalize,
divide_prob = divide_prob,
remap_to_max = remap_to_max)
play_notes = play_notes + articulation
play_notes[play_notes >= 1] = 1
play_notes[play_notes < 1] = 0
final_output[:,21+t,:] = play_notes
# Now reinitialize the model and everything (quite an inefficient implementation)
curr_test_batch = copy.deepcopy(test_batch.data)
curr_test_batch.target[:,0:20,:,0] = final_output[:,timestep:(20+timestep)]
curr_test_batch.target_split = 0
curr_test_batch.window_size = 20
curr_test_batch.featurize(use_biaxial = True)
# End of timestep loop
true_batch = copy.deepcopy(test_batch.data)
# This enables us to save the patches with the actual score names they come from
song_names = np.zeros(len(true_batch.link))
song_names = song_names.tolist()
i = 0
for i, link in enumerate(true_batch.link):
with open(data_test.file) as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
line_count = 0
for row in csv_reader:
if line_count == 0:
line_count += 1
else:
if row[4] == link:
name = str(row[0]) + '_' + str(row[1]) + '___' + str(i)
name = name.replace(" ", "-")
name = name.replace("/", "")
song_names[i] = name
break
##########################################################
if path.isdir(foldername):
os.system('rm -r {}'.format(foldername))
if not path.isdir(foldername):
os.mkdir(foldername)
with open('{}/setup.txt'.format(foldername), 'w+') as f:
f.write('what_type = {} \n \
train_tms = {} \n \
test_tms = {} \n \
batch_size = {} \n \
songs_per_batch ={} \n \
how = {} \n \
normalize = {} \n \
turn_on = {} \n \
divide_prob = {} \n \
articulation_prob = {}'.format(what_type,
str(train_tms),
str(test_tms),
str(batch_size),
str(songs_per_batch),
how,
str(normalize),
str(turn_on[0]),
str(divide_prob),
str(articulation_prob)))
##########################################################
true_batch = copy.deepcopy(test_batch.data)
true_batch.target = DataObject.drop_articulation(true_batch.target)
# Combine context
true_sample = np.append(np.squeeze(curr_test_batch.context[:,0,:,:]), true_batch.target, axis = 1)
true_sample = np.append(true_sample, np.squeeze(curr_test_batch.context[:,1,:,:]), axis = 1)
true_sample = np.append(np.expand_dims(true_sample, axis = 3),
np.expand_dims(true_sample, axis = 3), axis = 3)
predicted_sample = np.append(np.squeeze(curr_test_batch.context[:,0,:,:]), final_output[:,20:(20+test_tms),:], axis = 1)
predicted_sample = np.append(predicted_sample, np.squeeze(curr_test_batch.context[:,1,:,:]), axis = 1)
predicted_sample = np.append(np.expand_dims(predicted_sample, axis = 3),
np.expand_dims(predicted_sample, axis = 3), axis = 3)
# Save final midi
save_indices = np.arange(0,test_batch.batch_size)
for idx, i in enumerate(save_indices):
print("saving {}".format(idx))
noteStateMatrixToMidi(true_sample[i,:,:], name = '{}/NO_{}_TRUE_{}'.format(foldername,i,song_names[i]))
noteStateMatrixToMidi(predicted_sample[i,:,:], name = '{}/NO_{}_PRED_{}'.format(foldername,i,song_names[i]))
```
## Plotting functions
```
def plot_batch_element2(batch, fig, which_element = 0, cmap_ctx = 'viridis', cmap_tar = 'Reds', num_subplot = 2):
ax = fig.add_subplot(300 + 10 + num_subplot)
full_segment = combine_pianoroll(batch.context[which_element,0,:,:],
np.zeros(batch.target[which_element,:,:].shape),
batch.context[which_element,1,:,:])
just_target = np.zeros(full_segment.shape)
just_target[40:60, :] = batch.target[which_element,:,:]
plot_pianoroll(ax, full_segment, cmap = cmap_ctx)
plot_pianoroll(ax, just_target, cmap = cmap_tar, alpha = 1)
ax.axvline(data_test.train_tms)
ax.axvline(data_test.train_tms+data_test.test_tms)
return fig, ax
def pad_with_zeros(pianoroll):
return np.pad(pianoroll, ((0,0),(25, 25)), 'constant', constant_values=(0, 0))
def combine_pianoroll(*pianorolls):
for idx, pianoroll in enumerate(pianorolls):
if idx == 0:
new_pianoroll = pianoroll
else:
new_pianoroll = np.append(new_pianoroll, pianoroll, axis = 0)
return new_pianoroll
def plot_batch_element(batch, which_element = 0, cmap_ctx = 'viridis', cmap_tar = 'Reds', num_subplots = 3, figsize = (12,8)):
fig = plt.figure(figsize = figsize)
ax = fig.add_subplot(num_subplots*100 + 11)
full_segment = combine_pianoroll(batch.context[which_element,0,:,:],
np.zeros(DataObject.drop_articulation3d(batch.target[which_element,:,:]).shape),
batch.context[which_element,1,:,:])
just_target = np.zeros(full_segment.shape)
just_target[40:60, :] = DataObject.drop_articulation3d(batch.target[which_element,:,:])
plot_pianoroll(ax, full_segment, cmap = cmap_ctx)
plot_pianoroll(ax, just_target, cmap = cmap_tar, alpha = 1)
ax.axvline(data_test.train_tms)
ax.axvline(data_test.train_tms+data_test.test_tms)
return fig, ax
# The next function is a modified version of a function from the package pypianoroll
def plot_pianoroll(
ax,
pianoroll,
is_drum=False,
beat_resolution=None,
downbeats=None,
preset="default",
cmap="Blues",
xtick="auto",
ytick="octave",
xticklabel=True,
yticklabel="auto",
tick_loc=None,
tick_direction="in",
label="both",
grid="both",
grid_linestyle=":",
grid_linewidth=0.5,
num_notes = 78,
x_start = 1,
alpha = 1,
):
"""
Plot a pianoroll given as a numpy array.
Parameters
----------
ax : matplotlib.axes.Axes object
A :class:`matplotlib.axes.Axes` object where the pianoroll will be
plotted on.
pianoroll : np.ndarray
A pianoroll to be plotted. The values should be in [0, 1] when data type
is float, and in [0, 127] when data type is integer.
- For a 2D array, shape=(num_time_step, num_pitch).
- For a 3D array, shape=(num_time_step, num_pitch, num_channel), where
channels can be either RGB or RGBA.
is_drum : bool
A boolean number that indicates whether it is a percussion track.
Defaults to False.
beat_resolution : int
The number of time steps used to represent a beat. Required and only
effective when `xtick` is 'beat'.
downbeats : list
An array that indicates whether the time step contains a downbeat (i.e.,
the first time step of a bar).
preset : {'default', 'plain', 'frame'}
A string that indicates the preset theme to use.
- In 'default' preset, the ticks, grid and labels are on.
- In 'frame' preset, the ticks and grid are both off.
- In 'plain' preset, the x- and y-axis are both off.
cmap : `matplotlib.colors.Colormap`
The colormap to use in :func:`matplotlib.pyplot.imshow`. Defaults to
'Blues'. Only effective when `pianoroll` is 2D.
xtick : {'auto', 'beat', 'step', 'off'}
A string that indicates what to use as ticks along the x-axis. If 'auto'
is given, automatically set to 'beat' if `beat_resolution` is also given
and set to 'step', otherwise. Defaults to 'auto'.
ytick : {'octave', 'pitch', 'off'}
A string that indicates what to use as ticks along the y-axis.
Defaults to 'octave'.
xticklabel : bool
Whether to add tick labels along the x-axis. Only effective when `xtick`
is not 'off'.
yticklabel : {'auto', 'name', 'number', 'off'}
If 'name', use octave name and pitch name (key name when `is_drum` is
True) as tick labels along the y-axis. If 'number', use pitch number. If
'auto', set to 'name' when `ytick` is 'octave' and 'number' when `ytick`
is 'pitch'. Defaults to 'auto'. Only effective when `ytick` is not
'off'.
tick_loc : tuple or list
The locations to put the ticks. Availables elements are 'bottom', 'top',
'left' and 'right'. Defaults to ('bottom', 'left').
tick_direction : {'in', 'out', 'inout'}
A string that indicates where to put the ticks. Defaults to 'in'. Only
effective when one of `xtick` and `ytick` is on.
label : {'x', 'y', 'both', 'off'}
A string that indicates whether to add labels to the x-axis and y-axis.
Defaults to 'both'.
grid : {'x', 'y', 'both', 'off'}
A string that indicates whether to add grids to the x-axis, y-axis, both
or neither. Defaults to 'both'.
grid_linestyle : str
Will be passed to :meth:`matplotlib.axes.Axes.grid` as 'linestyle'
argument.
grid_linewidth : float
Will be passed to :meth:`matplotlib.axes.Axes.grid` as 'linewidth'
argument.
"""
if pianoroll.ndim not in (2, 3):
raise ValueError("`pianoroll` must be a 2D or 3D numpy array")
if pianoroll.shape[1] != num_notes:
        raise ValueError("The length of the second axis of `pianoroll` must be equal to `num_notes`.")
if xtick not in ("auto", "beat", "step", "off"):
        raise ValueError("`xtick` must be one of {'auto', 'beat', 'step', 'off'}.")
if xtick == "beat" and beat_resolution is None:
raise ValueError("`beat_resolution` must be specified when `xtick` is 'beat'.")
if ytick not in ("octave", "pitch", "off"):
        raise ValueError("`ytick` must be one of {'octave', 'pitch', 'off'}.")
if not isinstance(xticklabel, bool):
raise TypeError("`xticklabel` must be bool.")
if yticklabel not in ("auto", "name", "number", "off"):
raise ValueError(
"`yticklabel` must be one of {'auto', 'name', 'number', 'off'}."
)
if tick_direction not in ("in", "out", "inout"):
raise ValueError("`tick_direction` must be one of {'in', 'out', 'inout'}.")
if label not in ("x", "y", "both", "off"):
raise ValueError("`label` must be one of {'x', 'y', 'both', 'off'}.")
if grid not in ("x", "y", "both", "off"):
raise ValueError("`grid` must be one of {'x', 'y', 'both', 'off'}.")
# plotting
if pianoroll.ndim > 2:
to_plot = pianoroll.transpose(1, 0, 2)
else:
to_plot = pianoroll.T
if np.issubdtype(pianoroll.dtype, np.bool_) or np.issubdtype(
pianoroll.dtype, np.floating
):
ax.imshow(
to_plot,
cmap=cmap,
aspect="auto",
vmin=0,
vmax=1,
origin="lower",
interpolation="none",
alpha = alpha,
)
elif np.issubdtype(pianoroll.dtype, np.integer):
ax.imshow(
to_plot,
cmap=cmap,
aspect="auto",
vmin=0,
vmax=127,
origin="lower",
interpolation="none",
alpha = alpha,
)
else:
raise TypeError("Unsupported data type for `pianoroll`.")
# tick setting
if tick_loc is None:
tick_loc = ("bottom", "left")
if xtick == "auto":
xtick = "beat" if beat_resolution is not None else "step"
if yticklabel == "auto":
yticklabel = "name" if ytick == "octave" else "number"
if preset == "plain":
ax.axis("off")
elif preset == "frame":
ax.tick_params(
direction=tick_direction,
bottom=False,
top=False,
left=False,
right=False,
labelbottom=False,
labeltop=False,
labelleft=False,
labelright=False,
)
else:
ax.tick_params(
direction=tick_direction,
bottom=("bottom" in tick_loc),
top=("top" in tick_loc),
left=("left" in tick_loc),
right=("right" in tick_loc),
labelbottom=(xticklabel != "off"),
labelleft=(yticklabel != "off"),
labeltop=False,
labelright=False,
)
# x-axis
if xtick == "beat" and preset != "frame":
num_beat = pianoroll.shape[0] // beat_resolution
ax.set_xticks(beat_resolution * np.arange(num_beat) - 0.5)
ax.set_xticklabels("")
ax.set_xticks(beat_resolution * (np.arange(num_beat) + 0.5) - 0.5, minor=True)
ax.set_xticklabels(np.arange(x_start, num_beat + 1), minor=True)
ax.tick_params(axis="x", which="minor", width=0)
# y-axis
if ytick == "octave":
ax.set_yticks(np.arange(0, num_notes, 12))
if yticklabel == "name":
ax.set_yticklabels(["C{}".format(i - 2) for i in range(11)])
elif ytick == "step":
ax.set_yticks(np.arange(0, num_notes))
if yticklabel == "name":
if is_drum:
ax.set_yticklabels(
[pretty_midi.note_number_to_drum_name(i) for i in range(num_notes)]
)
else:
ax.set_yticklabels(
[pretty_midi.note_number_to_name(i) for i in range(num_notes)]
)
# axis labels
if label in ("x", "both"):
if xtick == "step" or not xticklabel:
ax.set_xlabel("time (step)")
else:
ax.set_xlabel("time (beat)")
if label in ("y", "both"):
if is_drum:
ax.set_ylabel("key name")
else:
ax.set_ylabel("pitch")
# grid
if grid != "off":
ax.grid(
axis=grid, color="k", linestyle=grid_linestyle, linewidth=grid_linewidth
)
# downbeat boarder
if downbeats is not None and preset != "plain":
for step in downbeats:
ax.axvline(x=step, color="k", linewidth=1)
```
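A minimal self-contained sanity check of the modified `plot_pianoroll` helper on random data (purely illustrative; `yticklabel='number'` is passed to keep the pitch-axis labels consistent with the 78-pitch range):
```
fig = plt.figure(figsize=(10, 3))
ax = fig.add_subplot(111)
# Random binary piano roll with shape (timesteps, 78 pitches) and float values in [0, 1]
dummy_roll = np.random.binomial(1, 0.05, size=(80, 78)).astype(float)
plot_pianoroll(ax, dummy_roll, cmap='viridis', yticklabel='number')
plt.show()
```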
# Lab 4. Keras and Deep Feedforward Network
```
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
import matplotlib.pyplot as plt
from PIL import Image
import tensorflow as tf
import os
import sys
CPU_ONLY = False
if CPU_ONLY: os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
print(f"TensorFlow version: {tf.__version__}")
print("CUDA version:")
print(os.popen('nvcc --version').read())
with_cuda = tf.test.is_built_with_cuda()
print(f"Can build with CUDA: {with_cuda}")
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True)
print("Num GPUs Available: ", len(gpus))
for x in gpus: print(x)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
for i in range(0, 9):
plt.subplot(330 + 1 + i)
plt.imshow(Image.fromarray(x_train[i]))
plt.show()
num_classes = 10
# batch_size = 128
batch_size = 64
# epochs = 20
epochs = sys.maxsize
accuracy_threshold = 0.977
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
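# Architecture note: three fully connected ReLU layers of width 512, each followed by 20%
# dropout for regularization, and a 10-way softmax output over the MNIST digit classes.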
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
from tqdm.keras import TqdmCallback
from tqdm import tqdm_notebook
from livelossplot import PlotLossesKeras
class AccuracyStopping(keras.callbacks.Callback):
def __init__(self, acc_threshold):
super(AccuracyStopping, self).__init__()
self._acc_threshold = acc_threshold
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        train_acc = logs.get('accuracy', 0.0)
        print(f'Training Accuracy Threshold: {train_acc} / {self._acc_threshold}')
        self.model.stop_training = train_acc >= self._acc_threshold
acc_callback = AccuracyStopping(accuracy_threshold)
pbar = TqdmCallback(verbose=1,tqdm_class=tqdm_notebook, leave = True, display = False)
pbar.epoch_bar.ncols=0
pbar.epoch_bar.bar_format='{elapsed} | {n} finished | {rate_fmt}{postfix}'
plot = PlotLossesKeras()
%%time
pbar.display()
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=0,
validation_data=(x_test, y_test),
callbacks=[pbar,plot,acc_callback]
)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
#Plot the Loss Curves
plt.figure(figsize=[8,6])
plt.plot(history.history['loss'],'r',linewidth=3.0)
plt.plot(history.history['val_loss'],'b',linewidth=3.0)
plt.legend(['Training loss', 'Validation Loss'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Loss',fontsize=16)
plt.title('Loss Curves',fontsize=16)
#Plot the Accuracy Curves
plt.figure(figsize=[8,6])
plt.plot(history.history['accuracy'],'r',linewidth=3.0)
plt.plot(history.history['val_accuracy'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
```
# Assignment
```
import matplotlib
matplotlib.rcParams.update(matplotlib.rcParamsDefault)
plt.rcParams["figure.figsize"] = (7,10)
plt.rcParams["figure.facecolor"]=(0, 0, 0, 0)
plt.rcParams["axes.facecolor"]=(0, 0, 0, 0)
import networkx as nx
from networkx.drawing.nx_pydot import graphviz_layout
import pydot
```
Draw the computation graph for the following function:
$$f(a, b, c, d, e) = \frac{1}{(1+(a^b+c^d)\times e)^2}$$
```
G = nx.DiGraph()
G.add_node('a',shape='s', lable='a')
G.add_node('b',shape='s', lable='b')
G.add_node('c',shape='s', lable='c')
G.add_node('d',shape='s', lable='d')
G.add_node('e',shape='s', lable='e')
G.add_node('const_1',shape='d', lable='1')
G.add_node('const_2',shape='d', lable='2')
G.add_node('const_-1',shape='d', lable='-1')
G.add_node('a**b',shape='o', lable='pow')
G.add_node('c**d',shape='o', lable='pow')
G.add_node('ab+cd',shape='o', lable='+')
G.add_node('abcd*e',shape='o', lable='*')
G.add_node('abcde+1',shape='o', lable='+')
G.add_node('abcde1**2',shape='o', lable='pow')
G.add_node('abcde12**-1',shape='o', lable='pow')
G.add_node('f',shape='8', lable='output')
G.add_edges_from([('a','a**b'), ('b','a**b'), ('c','c**d'), ('d','c**d')])
G.add_edges_from([('a**b', 'ab+cd'), ('c**d', 'ab+cd')])
G.add_edges_from([('ab+cd', 'abcd*e'), ('e', 'abcd*e')])
G.add_edges_from([('abcd*e', 'abcde+1'), ('const_1', 'abcde+1')])
G.add_edges_from([('abcde+1', 'abcde1**2'), ('const_2', 'abcde1**2')])
G.add_edges_from([('abcde1**2', 'abcde12**-1'), ('const_-1', 'abcde12**-1')])
G.add_edges_from([('abcde12**-1', 'f')])
plt.axis('off')
pos = graphviz_layout(G, prog="dot", root = 'f')
pos = nx.rescale_layout_dict(pos, scale=10)
nodeShapes = set((aShape[1]["shape"] for aShape in G.nodes(data = True)))
for aShape in nodeShapes:
filteredlist = [sNode for sNode in filter(lambda x: x[1]["shape"]==aShape, G.nodes(data = True))]
nodelist = [n[0] for n in filteredlist]
nodelable = {}
for n in filteredlist: nodelable[n[0]] = n[1]["lable"]
nx.draw_networkx_nodes(G,pos, node_size= 2000, node_shape = aShape, nodelist = nodelist, node_color='#30475e')
nx.draw_networkx_labels(G,pos,nodelable,font_size=12, font_color='#e8e8e8')
nx.draw_networkx(G, pos, arrows=True, arrowstyle= '->', node_size= 2000, nodelist=[], with_labels=False, edge_color='#f05454')
```
Compute the gradient of the function with respect to its inputs at (a, b, c, d, e) = (1, 1, 1, 1, 1) (refer to Lecture 3)
## Code Solution:
```
from scipy.misc import derivative
def partial_derivative(func, var=0, point=[]):
args = point[:]
def wraps(x):
args[var] = x
return func(*args)
return derivative(wraps, point[var], dx = 1e-6)
f = lambda a,b,c,d,e: ((1+(a**b+c**d)*e)**2)**-1
[partial_derivative(f, i, [1]*f.__code__.co_argcount) for i in range(f.__code__.co_argcount)]
```
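For reference, at $(a, b, c, d, e) = (1, 1, 1, 1, 1)$ the denominator is $(1 + (1 + 1)\cdot 1)^3 = 27$ and $\ln 1 = 0$, so the numerical check above should return approximately
$$\nabla f(1,1,1,1,1) = \left(-\tfrac{2}{27},\; 0,\; -\tfrac{2}{27},\; 0,\; -\tfrac{4}{27}\right) \approx \left(-0.0741,\; 0,\; -0.0741,\; 0,\; -0.1481\right).$$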
## Manual Calculation:
$$
\nabla f(a,b,c,d,e) = \nabla \left(\frac{1}{(e(a^b +c^d)+1)^2}\right)=\left(\frac{\delta f}{\delta a},\frac{\delta f}{\delta b},\frac{\delta f}{\delta c},\frac{\delta f}{\delta d},\frac{\delta f}{\delta e}\right)
$$
Function $f(a,b,c,d,e)$ is the composition $F(G(a,b,c,d,e))$ of two functions:
\begin{align*}
F(u) &= \frac{1}{u^2}\\
G(a,b,c,d,e) &= 1+e(a^b +c^d)
\end{align*}
Therefore chain rule can be applied as following:
\begin{align*}
\frac{d}{d a} \left(F{\left(G{\left(a \right)} \right)}\right) &= \frac{d}{d u} \left(F{\left(u \right)}\right) \frac{d}{d a} \left(G{\left(a \right)}\right)\\
\frac{d}{d b} \left(F{\left(G{\left(b \right)} \right)}\right) &= \frac{d}{d u} \left(F{\left(u \right)}\right) \frac{d}{d b} \left(G{\left(b \right)}\right)\\
\frac{d}{d c} \left(F{\left(G{\left(c \right)} \right)}\right) &= \frac{d}{d u} \left(F{\left(u \right)}\right) \frac{d}{d c} \left(G{\left(c \right)}\right)\\
\frac{d}{d d} \left(F{\left(G{\left(d \right)} \right)}\right) &= \frac{d}{d u} \left(F{\left(u \right)}\right) \frac{d}{d d} \left(G{\left(d \right)}\right)\\
\frac{d}{d e} \left(F{\left(G{\left(e \right)} \right)}\right) &= \frac{d}{d u} \left(F{\left(u \right)}\right) \frac{d}{d e} \left(G{\left(e \right)}\right)
\end{align*}
## $\color{#f05454}{\frac{\delta f}{\delta a}}$:
\begin{align*}
\left(\frac{d}{d a} \left(\frac{1}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{2}}\right)\right) &= \left(\frac{d}{d u} \left(\frac{1}{u^{2}}\right) \frac{d}{d a} \left(1 + e \left(a^{b} + c^{d}\right)\right)\right)\\
&= \frac{d}{d a} \left(1 + e \left(a^{b} + c^{d}\right)\right) {\left(- \frac{2}{u^{3}}\right)}\\
&= - \frac{2 \left(\frac{d}{d a} \left(1\right) + {\left(e \frac{d}{d a} \left(a^{b} + c^{d}\right)\right)}\right)}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= - \frac{2 e {\left(\frac{d}{d a} \left(a^{b}\right) + \frac{d}{d a} \left(c^{d}\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= - \frac{2 e \left(b a^{-1 + b} + {\left(0\right)}\right)}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&=\color{#f05454}{-\frac{2ea^{b-1}b}{(e(a^b + c^d)+1)^3}}
\end{align*}
## $\color{#f05454}{\frac{\delta f}{\delta b}}$:
\begin{align*}
{\left(\frac{d}{d b} \left(\frac{1}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{2}}\right)\right)} &= {\left(\frac{d}{d u} \left(\frac{1}{u^{2}}\right) \frac{d}{d b} \left(1 + e \left(a^{b} + c^{d}\right)\right)\right)}\\
&= \frac{d}{d b} \left(1 + e \left(a^{b} + c^{d}\right)\right) {\left(- \frac{2}{u^{3}}\right)}\\
&= - \frac{2 {\left(\frac{d}{d b} \left(1\right) + \frac{d}{d b} \left(e \left(a^{b} + c^{d}\right)\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= - \frac{2 e {\left(\frac{d}{d b} \left(a^{b}\right) + \frac{d}{d b} \left(c^{d}\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= \color{#f05454}{- \frac{2 e a^{b} \ln{\left(a \right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}}
\end{align*}
## $\color{#f05454}{\frac{\delta f}{\delta c}}$:
\begin{align*}
{\left(\frac{d}{d c} \left(\frac{1}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{2}}\right)\right)} &= {\left(\frac{d}{d u} \left(\frac{1}{u^{2}}\right) \frac{d}{d c} \left(1 + e \left(a^{b} + c^{d}\right)\right)\right)}\\
&= \frac{d}{d c} \left(1 + e \left(a^{b} + c^{d}\right)\right) {\left(- \frac{2}{u^{3}}\right)}\\
&= - \frac{2 {\left(\frac{d}{d c} \left(1\right) + \frac{d}{d c} \left(e \left(a^{b} + c^{d}\right)\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= - \frac{2 e {\left(\frac{d}{d c} \left(a^{b}\right) + \frac{d}{d c} \left(c^{d}\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&=\color{#f05454}{ - \frac{2 d e c^{-1 + d}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}}
\end{align*}
## $\color{#f05454}{\frac{\delta f}{\delta d}}$:
\begin{align*}
{\left(\frac{d}{d d} \left(\frac{1}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{2}}\right)\right)} &= {\left(\frac{d}{d u} \left(\frac{1}{u^{2}}\right) \frac{d}{d d} \left(1 + e \left(a^{b} + c^{d}\right)\right)\right)}\\
& = \frac{d}{d d} \left(1 + e \left(a^{b} + c^{d}\right)\right) {\left(- \frac{2}{u^{3}}\right)}\\
&= - \frac{2 {\left(\frac{d}{d d} \left(1\right) + \frac{d}{d d} \left(e \left(a^{b} + c^{d}\right)\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= - \frac{2 e {\left(\frac{d}{d d} \left(a^{b}\right) + \frac{d}{d d} \left(c^{d}\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= \color{#f05454}{- \frac{2 e c^{d} \ln{\left(c \right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}}
\end{align*}
## $\color{#f05454}{\frac{\delta f}{\delta e}}$:
\begin{align*}
{\left(\frac{d}{d e} \left(\frac{1}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{2}}\right)\right)} &= {\left(\frac{d}{d u} \left(\frac{1}{u^{2}}\right) \frac{d}{d e} \left(1 + e \left(a^{b} + c^{d}\right)\right)\right)}\\
&= \frac{d}{d e} \left(1 + e \left(a^{b} + c^{d}\right)\right) {\left(- \frac{2}{u^{3}}\right)}\\
&= - \frac{2 {\left(\frac{d}{d e} \left(1\right) + \frac{d}{d e} \left(e \left(a^{b} + c^{d}\right)\right)\right)}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= - \frac{2 \left(\left(a^{b} + c^{d}\right) {\left(1\right)} + \frac{d}{d e} \left(1\right)\right)}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}\\
&= \color{#f05454}{- \frac{2 a^{b} + 2 c^{d}}{\left(1 + e \left(a^{b} + c^{d}\right)\right)^{3}}}
\end{align*}
\begin{align*}
\nabla f(a,b,c,d,e) &= \nabla \left(\frac{1}{(e(a^b +c^d)+1)^2}\right)\\
&=\left(\frac{\delta f}{\delta a},\frac{\delta f}{\delta b},\frac{\delta f}{\delta c},\frac{\delta f}{\delta d},\frac{\delta f}{\delta e}\right)\\
&=\left(- \frac{2 a^{b - 1} b e}{\left(e \left(a^{b} + c^{d}\right) + 1\right)^{3}},- \frac{2 a^{b} e \ln{\left(a \right)}}{\left(e \left(a^{b} + c^{d}\right) + 1\right)^{3}},- \frac{2 c^{d - 1} d e}{\left(e \left(a^{b} + c^{d}\right) + 1\right)^{3}},- \frac{2 c^{d} e \ln{\left(c \right)}}{\left(e \left(a^{b} + c^{d}\right) + 1\right)^{3}},- \frac{2 a^{b} + 2 c^{d}}{\left(e \left(a^{b} + c^{d}\right) + 1\right)^{3}}\right)
\end{align*}
# $$\color{#f05454}{\nabla \left(\frac{1}{\left(e \left(a^{b} + c^{d}\right) + 1\right)^{2}}\right)|_{\left(a,b,c,d,e\right)=\left(1,1,1,1,1\right)}=\left(- \frac{2}{27},0,- \frac{2}{27},0,- \frac{4}{27}\right)}$$
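As a quick cross-check of the manual derivation above, the gradient can also be evaluated symbolically. This is a small sketch that assumes `sympy` is available in the environment (it is not used elsewhere in this notebook):
```
import sympy as sp

a, b, c, d, e = sp.symbols('a b c d e')
f_sym = 1 / (e * (a**b + c**d) + 1)**2
gradient = [sp.diff(f_sym, v) for v in (a, b, c, d, e)]
point = {a: 1, b: 1, c: 1, d: 1, e: 1}
# Expected from the derivation above: [-2/27, 0, -2/27, 0, -4/27]
print([sp.simplify(g.subs(point)) for g in gradient])
```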
|
github_jupyter
|
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
import matplotlib.pyplot as plt
from PIL import Image
import tensorflow as tf
import os
import sys  # needed for sys.maxsize, used as the epoch cap below
CPU_ONLY = False
if CPU_ONLY: os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
print(f"TensorFlow version: {tf.__version__}")
print("CUDA version:")
print(os.popen('nvcc --version').read())
with_cuda = tf.test.is_built_with_cuda()
print(f"Can build with CUDA: {with_cuda}")
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True)
print("Num GPUs Available: ", len(gpus))
for x in gpus: print(x)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
for i in range(0, 9):
plt.subplot(330 + 1 + i)
plt.imshow(Image.fromarray(x_train[i]))
plt.show()
num_classes = 10
# batch_size = 128
batch_size = 64
# epochs = 20
epochs = sys.maxsize
accuracy_threshold = 0.977
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
from tqdm.keras import TqdmCallback
from tqdm import tqdm_notebook
from livelossplot import PlotLossesKeras
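# Custom callback: stop training once the training accuracy reaches the given threshold
# (needed because `epochs` below is effectively unbounded).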
class AccuracyStopping(keras.callbacks.Callback):
def __init__(self, acc_threshold):
super(AccuracyStopping, self).__init__()
self._acc_threshold = acc_threshold
def on_epoch_end(self, epoch, logs={}):
train_acc = logs.get('accuracy') != None and logs.get('accuracy')
print(f'Training Accuracy Threshold: {train_acc} / {self._acc_threshold}')
self.model.stop_training = train_acc >= self._acc_threshold
acc_callback = AccuracyStopping(accuracy_threshold)
pbar = TqdmCallback(verbose=1,tqdm_class=tqdm_notebook, leave = True, display = False)
pbar.epoch_bar.ncols=0
pbar.epoch_bar.bar_format='{elapsed} | {n} finished | {rate_fmt}{postfix}'
plot = PlotLossesKeras()
%%time
pbar.display()
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=0,
validation_data=(x_test, y_test),
callbacks=[pbar,plot,acc_callback]
)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
#Plot the Loss Curves
plt.figure(figsize=[8,6])
plt.plot(history.history['loss'],'r',linewidth=3.0)
plt.plot(history.history['val_loss'],'b',linewidth=3.0)
plt.legend(['Training loss', 'Validation Loss'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Loss',fontsize=16)
plt.title('Loss Curves',fontsize=16)
#Plot the Accuracy Curves
plt.figure(figsize=[8,6])
plt.plot(history.history['accuracy'],'r',linewidth=3.0)
plt.plot(history.history['val_accuracy'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
import matplotlib
matplotlib.rcParams.update(matplotlib.rcParamsDefault)
plt.rcParams["figure.figsize"] = (7,10)
plt.rcParams["figure.facecolor"]=(0, 0, 0, 0)
plt.rcParams["axes.facecolor"]=(0, 0, 0, 0)
import networkx as nx
from networkx.drawing.nx_pydot import graphviz_layout
import pydot
G = nx.DiGraph()
G.add_node('a',shape='s', lable='a')
G.add_node('b',shape='s', lable='b')
G.add_node('c',shape='s', lable='c')
G.add_node('d',shape='s', lable='d')
G.add_node('e',shape='s', lable='e')
G.add_node('const_1',shape='d', lable='1')
G.add_node('const_2',shape='d', lable='2')
G.add_node('const_-1',shape='d', lable='-1')
G.add_node('a**b',shape='o', lable='pow')
G.add_node('c**d',shape='o', lable='pow')
G.add_node('ab+cd',shape='o', lable='+')
G.add_node('abcd*e',shape='o', lable='*')
G.add_node('abcde+1',shape='o', lable='+')
G.add_node('abcde1**2',shape='o', lable='pow')
G.add_node('abcde12**-1',shape='o', lable='pow')
G.add_node('f',shape='8', lable='output')
G.add_edges_from([('a','a**b'), ('b','a**b'), ('c','c**d'), ('d','c**d')])
G.add_edges_from([('a**b', 'ab+cd'), ('c**d', 'ab+cd')])
G.add_edges_from([('ab+cd', 'abcd*e'), ('e', 'abcd*e')])
G.add_edges_from([('abcd*e', 'abcde+1'), ('const_1', 'abcde+1')])
G.add_edges_from([('abcde+1', 'abcde1**2'), ('const_2', 'abcde1**2')])
G.add_edges_from([('abcde1**2', 'abcde12**-1'), ('const_-1', 'abcde12**-1')])
G.add_edges_from([('abcde12**-1', 'f')])
plt.axis('off')
pos = graphviz_layout(G, prog="dot", root = 'f')
pos = nx.rescale_layout_dict(pos, scale=10)
nodeShapes = set((aShape[1]["shape"] for aShape in G.nodes(data = True)))
for aShape in nodeShapes:
filteredlist = [sNode for sNode in filter(lambda x: x[1]["shape"]==aShape, G.nodes(data = True))]
nodelist = [n[0] for n in filteredlist]
nodelable = {}
for n in filteredlist: nodelable[n[0]] = n[1]["lable"]
nx.draw_networkx_nodes(G,pos, node_size= 2000, node_shape = aShape, nodelist = nodelist, node_color='#30475e')
nx.draw_networkx_labels(G,pos,nodelable,font_size=12, font_color='#e8e8e8')
nx.draw_networkx(G, pos, arrows=True, arrowstyle= '->', node_size= 2000, nodelist=[], with_labels=False, edge_color='#f05454')
from scipy.misc import derivative
def partial_derivative(func, var=0, point=[]):
args = point[:]
def wraps(x):
args[var] = x
return func(*args)
return derivative(wraps, point[var], dx = 1e-6)
f = lambda a,b,c,d,e: ((1+(a**b+c**d)*e)**2)**-1
[partial_derivative(f, i, [1]*f.__code__.co_argcount) for i in range(f.__code__.co_argcount)]
| 0.717705 | 0.789923 |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Name" data-toc-modified-id="Name-1"><span class="toc-item-num">1 </span>Name</a></span></li><li><span><a href="#Search" data-toc-modified-id="Search-2"><span class="toc-item-num">2 </span>Search</a></span><ul class="toc-item"><li><span><a href="#Load-Cached-Results" data-toc-modified-id="Load-Cached-Results-2.1"><span class="toc-item-num">2.1 </span>Load Cached Results</a></span></li><li><span><a href="#Build-Model-From-Google-Images" data-toc-modified-id="Build-Model-From-Google-Images-2.2"><span class="toc-item-num">2.2 </span>Build Model From Google Images</a></span></li></ul></li><li><span><a href="#Analysis" data-toc-modified-id="Analysis-3"><span class="toc-item-num">3 </span>Analysis</a></span><ul class="toc-item"><li><span><a href="#Gender-cross-validation" data-toc-modified-id="Gender-cross-validation-3.1"><span class="toc-item-num">3.1 </span>Gender cross validation</a></span></li><li><span><a href="#Face-Sizes" data-toc-modified-id="Face-Sizes-3.2"><span class="toc-item-num">3.2 </span>Face Sizes</a></span></li><li><span><a href="#Screen-Time-Across-All-Shows" data-toc-modified-id="Screen-Time-Across-All-Shows-3.3"><span class="toc-item-num">3.3 </span>Screen Time Across All Shows</a></span></li><li><span><a href="#Appearances-on-a-Single-Show" data-toc-modified-id="Appearances-on-a-Single-Show-3.4"><span class="toc-item-num">3.4 </span>Appearances on a Single Show</a></span></li></ul></li><li><span><a href="#Persist-to-Cloud" data-toc-modified-id="Persist-to-Cloud-4"><span class="toc-item-num">4 </span>Persist to Cloud</a></span><ul class="toc-item"><li><span><a href="#Save-Model-to-Google-Cloud-Storage" data-toc-modified-id="Save-Model-to-Google-Cloud-Storage-4.1"><span class="toc-item-num">4.1 </span>Save Model to Google Cloud Storage</a></span></li><li><span><a href="#Save-Labels-to-DB" data-toc-modified-id="Save-Labels-to-DB-4.2"><span class="toc-item-num">4.2 </span>Save Labels to DB</a></span><ul class="toc-item"><li><span><a href="#Commit-the-person-and-labeler" data-toc-modified-id="Commit-the-person-and-labeler-4.2.1"><span class="toc-item-num">4.2.1 </span>Commit the person and labeler</a></span></li><li><span><a href="#Commit-the-FaceIdentity-labels" data-toc-modified-id="Commit-the-FaceIdentity-labels-4.2.2"><span class="toc-item-num">4.2.2 </span>Commit the FaceIdentity labels</a></span></li></ul></li></ul></li></ul></div>
```
from esper.prelude import *
from esper.identity import *
from esper import embed_google_images
```
# Name
Please add the person's name and their expected gender below (Male/Female).
```
name = 'Juan Williams'
gender = 'Male'
```
# Search
## Load Cached Results
Reads cached identity model from local disk. Run this if the person has been labelled before and you only wish to regenerate the graphs. Otherwise, if you have never created a model for this person, please see the next section.
```
assert name != ''
results = FaceIdentityModel.load(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(results)
```
## Build Model From Google Images
Run this section if you do not have a cached model and precision curve estimates. This section will grab images using Google Image Search and score each of the faces in the dataset. We will interactively build the precision vs score curve.
It is important that the images that you select are accurate. If you make a mistake, rerun the cell below.
```
assert name != ''
# Grab face images from Google
img_dir = embed_google_images.fetch_images(name)
# If the images returned are not satisfactory, rerun the above with extra params:
# query_extras='' # additional keywords to add to search
# force=True # ignore cached images
face_imgs = load_and_select_faces_from_images(img_dir)
face_embs = embed_google_images.embed_images(face_imgs)
assert(len(face_embs) == len(face_imgs))
reference_imgs = tile_imgs([cv2.resize(x[0], (200, 200)) for x in face_imgs if x], cols=10)
def show_reference_imgs():
print('User selected reference images for {}.'.format(name))
imshow(reference_imgs)
plt.show()
show_reference_imgs()
# Score all of the faces in the dataset (this can take a minute)
face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs)
precision_model = PrecisionModel(face_ids_by_bucket)
```
Now we will validate which of the images in the dataset are of the target identity.
__Hover over a face with the mouse and press S to select it. Press F to expand the frame.__
```
show_reference_imgs()
print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance '
'to your selected images. (The first page is more likely to have non "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
lower_widget = precision_model.get_lower_widget()
lower_widget
show_reference_imgs()
print(('Mark all images that ARE {}. Thumbnails are ordered by ASCENDING distance '
'to your selected images. (The first page is more likely to have "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
upper_widget = precision_model.get_upper_widget()
upper_widget
```
Run the following cell after labelling to compute the precision curve. Do not forget to re-enable jupyter shortcuts.
```
# Compute the precision from the selections
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}
results = FaceIdentityModel(
name=name,
face_ids_by_bucket=face_ids_by_bucket,
face_ids_to_score=face_ids_to_score,
precision_by_bucket=precision_by_bucket,
model_params={
'images': list(zip(face_embs, face_imgs))
}
)
plot_precision_and_cdf(results)
```
The next cell persists the model locally.
```
results.save()
```
# Analysis
## Gender cross validation
Situations where the identity model disagrees with the gender classifier may be cause for alarm. As a sanity check, we verify that instances of the person have the expected gender. This section shows the breakdown of the identity instances and their labels from the gender classifier.
```
gender_breakdown = compute_gender_breakdown(results)
print('Expected counts by gender:')
for k, v in gender_breakdown.items():
print(' {} : {}'.format(k, int(v)))
print()
print('Percentage by gender:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
print(' {} : {:0.1f}%'.format(k, 100 * v / denominator))
print()
```
Situations where the identity detector returns high confidence but the gender is not the expected gender indicate an error by either the identity detector or the gender detector. The following visualization shows randomly sampled images for which the identity detector returns high confidence, grouped by the gender label.
```
high_probability_threshold = 0.8
show_gender_examples(results, high_probability_threshold)
```
## Face Sizes
Faces shown on-screen vary in size. A host may be shown in a full-body shot or as a face in a box, while faces in the background or in side graphics can be much smaller. When calculating screentime for a person, we would like to know whether the results represent time the person was featured, as opposed to merely appearing in the background or as a tiny thumbnail in some graphic.
The next cell plots the distribution of face sizes. Possible anomalies include distributions containing only very small or only very large faces.
```
plot_histogram_of_face_sizes(results)
```
The histogram above shows the distribution of face sizes, but not how those sizes occur in the dataset. For instance, one might ask why some faces are so large or whether the small faces are actually errors. The following cell groups example faces, which are of the target identity with high probability, by their size in terms of screen area.
```
high_probability_threshold = 0.8
show_faces_by_size(results, high_probability_threshold, n=10)
```
## Screen Time Across All Shows
One question that we might ask about a person is whether they received a significantly different amount of screentime on different shows. The following section visualizes the amount of screentime by show, both in total minutes and as a proportion of the show's total time. For a celebrity or political figure such as Donald Trump, we would expect significant screentime on many shows. For a show host such as Wolf Blitzer, we expect the screentime to be high for the shows that he hosts.
```
screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
```
## Appearances on a Single Show
For people such as hosts, we would like to examine in greater detail the screen time allotted for a single show. First, fill in a show below.
```
show_name = 'The Five'
# Compute the screen time for each video of the show
screen_time_by_video_id = compute_screen_time_by_video(results, show_name)
```
One question we might ask about a host is how long they are shown on screen in an episode. Likewise, we might ask in how many episodes the host is not present, due to being on vacation or on assignment elsewhere. The following cell plots a histogram of the distribution of the length of the person's appearances in videos of the chosen show.
```
plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
```
For a host, we expect screentime over time to be consistent as long as the person remains a host. For figures such as Hillary Clinton, we expect the screentime to track events in the real world, such as the lead-up to the 2016 election, and then to drop afterwards. The following cell plots a time series of the person's screentime over time. Each dot is a video of the chosen show. Red Xs are videos for which the face detector did not run.
```
plot_screentime_over_time(name, show_name, screen_time_by_video_id)
```
We hypothesized that a host is more likely to appear at the beginning of a video and then also appear throughout the video. The following plot visualizes the distribution of shot beginning times for videos of the show.
```
plot_distribution_of_appearance_times_by_video(results, show_name)
```
In Section 3.3, we saw that some shows may have much larger variance in the screen time estimates than others. This may be because a host or frequent guest appears similar to the target identity. Alternatively, the images of the identity may be consistently low quality, leading to lower scores. The next cell plots a histogram of the probabilities for faces in a show.
```
plot_distribution_of_identity_probabilities(results, show_name)
```
# Persist to Cloud
The remaining code in this notebook uploads the built identity model to Google Cloud Storage and adds the FaceIdentity labels to the database.
## Save Model to Google Cloud Storage
```
gcs_model_path = results.save_to_gcs()
```
To ensure that the model stored to Google Cloud is valid, we load it and print the precision and cdf curve below.
```
gcs_results = FaceIdentityModel.load_from_gcs(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(gcs_results)
```
## Save Labels to DB
If you are satisfied with the model, we can commit the labels to the database.
```
from django.core.exceptions import ObjectDoesNotExist
def standardize_name(name):
return name.lower()
person_type = ThingType.objects.get(name='person')
try:
person = Thing.objects.get(name=standardize_name(name), type=person_type)
print('Found person:', person.name)
except ObjectDoesNotExist:
person = Thing(name=standardize_name(name), type=person_type)
print('Creating person:', person.name)
labeler = Labeler(name='face-identity-{}'.format(person.name), data_path=gcs_model_path)
```
### Commit the person and labeler
The labeler and person have been created but not yet saved to the database. If a person was created, please make sure that the name is correct before saving.
```
person.save()
labeler.save()
```
### Commit the FaceIdentity labels
Now, we are ready to add the labels to the database. We will create a FaceIdentity for each face whose probability exceeds the minimum threshold.
```
commit_face_identities_to_db(results, person, labeler, min_threshold=0.001)
print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count()))
```
|
github_jupyter
|
from esper.prelude import *
from esper.identity import *
from esper import embed_google_images
name = 'Juan Williams'
gender = 'Male'
assert name != ''
results = FaceIdentityModel.load(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(results)
assert name != ''
# Grab face images from Google
img_dir = embed_google_images.fetch_images(name)
# If the images returned are not satisfactory, rerun the above with extra params:
# query_extras='' # additional keywords to add to search
# force=True # ignore cached images
face_imgs = load_and_select_faces_from_images(img_dir)
face_embs = embed_google_images.embed_images(face_imgs)
assert(len(face_embs) == len(face_imgs))
reference_imgs = tile_imgs([cv2.resize(x[0], (200, 200)) for x in face_imgs if x], cols=10)
def show_reference_imgs():
print('User selected reference images for {}.'.format(name))
imshow(reference_imgs)
plt.show()
show_reference_imgs()
# Score all of the faces in the dataset (this can take a minute)
face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs)
precision_model = PrecisionModel(face_ids_by_bucket)
show_reference_imgs()
print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance '
'to your selected images. (The first page is more likely to have non "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
lower_widget = precision_model.get_lower_widget()
lower_widget
show_reference_imgs()
print(('Mark all images that ARE {}. Thumbnails are ordered by ASCENDING distance '
'to your selected images. (The first page is more likely to have "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
upper_widget = precision_model.get_upper_widget()
upper_widget
# Compute the precision from the selections
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}
results = FaceIdentityModel(
name=name,
face_ids_by_bucket=face_ids_by_bucket,
face_ids_to_score=face_ids_to_score,
precision_by_bucket=precision_by_bucket,
model_params={
'images': list(zip(face_embs, face_imgs))
}
)
plot_precision_and_cdf(results)
results.save()
gender_breakdown = compute_gender_breakdown(results)
print('Expected counts by gender:')
for k, v in gender_breakdown.items():
print(' {} : {}'.format(k, int(v)))
print()
print('Percentage by gender:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
print(' {} : {:0.1f}%'.format(k, 100 * v / denominator))
print()
high_probability_threshold = 0.8
show_gender_examples(results, high_probability_threshold)
plot_histogram_of_face_sizes(results)
high_probability_threshold = 0.8
show_faces_by_size(results, high_probability_threshold, n=10)
screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
show_name = 'The Five'
# Compute the screen time for each video of the show
screen_time_by_video_id = compute_screen_time_by_video(results, show_name)
plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
plot_screentime_over_time(name, show_name, screen_time_by_video_id)
plot_distribution_of_appearance_times_by_video(results, show_name)
plot_distribution_of_identity_probabilities(results, show_name)
gcs_model_path = results.save_to_gcs()
gcs_results = FaceIdentityModel.load_from_gcs(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(gcs_results)
from django.core.exceptions import ObjectDoesNotExist
def standardize_name(name):
return name.lower()
person_type = ThingType.objects.get(name='person')
try:
person = Thing.objects.get(name=standardize_name(name), type=person_type)
print('Found person:', person.name)
except ObjectDoesNotExist:
person = Thing(name=standardize_name(name), type=person_type)
print('Creating person:', person.name)
labeler = Labeler(name='face-identity-{}'.format(person.name), data_path=gcs_model_path)
person.save()
labeler.save()
commit_face_identities_to_db(results, person, labeler, min_threshold=0.001)
print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count()))
| 0.63624 | 0.948728 |
```
# Using different models to find the optimal model
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error
import lightgbm as lgb
from sklearn.model_selection import train_test_split,KFold
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='darkgrid')
df=pd.read_csv('new_dataset.csv')
df.duplicated().any()
train=df[:3865].drop(['unique_id','galaxy','target'],axis=1)
test=df[3865:].drop(['unique_id','galaxy','target'],axis=1).reset_index().drop('index',axis=1)
target=df[:3865].target
# Check whether any remaining features are categorical (object dtype)
[feat for feat in train.columns if train[feat].dtypes =='object' ]
X_train, X_valid, y_train, y_valid = train_test_split(train,target,test_size=0.25, random_state=1)
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_valid, label=y_valid)
param = {'objective': 'regression',
'boosting': 'gbdt',
'metric': 'rmse',
'learning_rate': 0.05,
'num_iterations': 7500,
'max_depth': -1,
'min_data_in_leaf': 15,
'bagging_fraction': 0.8,
'bagging_freq': 1,
'feature_fraction': 0.8
}
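# GBDT regression with RMSE metric and row/feature subsampling; early_stopping_rounds=100
# in lgb.train below can stop training before the 7500-iteration cap.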
clf = lgb.train(params=param,
early_stopping_rounds=100,
verbose_eval=100,
train_set=train_data,
valid_sets=[test_data])
Xtest=test.copy()
X=train
y=target
errlgb = []
y_pred_totlgb = []
fold = KFold(n_splits=10, shuffle=True, random_state=42)
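# 10-fold cross-validation: train on 9 folds, report RMSE on the held-out fold, and
# collect test-set predictions from each fold to average afterwards.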
for train_index, test_index in fold.split(X):
X_train, X_test = X.loc[train_index], X.loc[test_index]
y_train, y_test = y[train_index], y[test_index]
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_test, label=y_test)
clf = lgb.train(params=param,
early_stopping_rounds=100,
verbose_eval=100,
train_set=train_data,
valid_sets=[test_data])
y_pred = clf.predict(X_test)
print("RMSE: ", np.sqrt(mean_squared_error(y_test, y_pred)))
errlgb.append(np.sqrt(mean_squared_error(y_test, y_pred)))
p = clf.predict(Xtest)
y_pred_totlgb.append(p)
np.mean(y_pred_totlgb,0)
np.mean(errlgb, 0)
feature_imp = pd.DataFrame(sorted(zip(clf.feature_importance(), X.columns), reverse=True)[:15], columns=['Value','Feature'])
plt.figure(figsize=(8,8))
sns.barplot(x="Value", y="Feature", data=feature_imp.sort_values(by="Value", ascending=False))
plt.title('Top 15 Most Important LightGBM Features')
plt.tight_layout()
plt.show()
y_pred = np.mean(y_pred_totlgb,0)
#Creating the Submission file
test['index']=test.index
test['pred']=y_pred
sub=test[['pred','existence expectancy index']]
test
opt_pred=[]
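# Flag test rows whose existence expectancy index is below 0.7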
for i in range(890):
if test['existence expectancy index'][i] < 0.7:
opt_pred.append(True)
else:
opt_pred.append(False)
sub['opt_pred']=opt_pred
sub['opt_pred'].value_counts()
#Distributing the 50,000 Zillion Liter of
map_p={False:52.75, True:99}
sub['opt_pred']=sub['opt_pred'].map(map_p)
sub.drop('existence expectancy index',axis=1,inplace=True)
sub['index']=sub.index
#Following submission format
sub=sub[['index','pred','opt_pred']]
sub.to_csv('my_latest_sub.csv',index=False)
```
|
github_jupyter
|
# Using different models to find the optimal model
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error
import lightgbm as lgb
from sklearn.model_selection import train_test_split,KFold
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='darkgrid')
df=pd.read_csv('new_dataset.csv')
df.duplicated().any()
train=df[:3865].drop(['unique_id','galaxy','target'],axis=1)
test=df[3865:].drop(['unique_id','galaxy','target'],axis=1).reset_index().drop('index',axis=1)
target=df[:3865].target
#All our features are categoricall features
[feat for feat in train.columns if train[feat].dtypes =='object' ]
X_train, X_valid, y_train, y_valid = train_test_split(train,target,test_size=0.25, random_state=1)
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_valid, label=y_valid)
param = {'objective': 'regression',
'boosting': 'gbdt',
'metric': 'rmse',
'learning_rate': 0.05,
'num_iterations': 7500,
'max_depth': -1,
'min_data_in_leaf': 15,
'bagging_fraction': 0.8,
'bagging_freq': 1,
'feature_fraction': 0.8
}
clf = lgb.train(params=param,
early_stopping_rounds=100,
verbose_eval=100,
train_set=train_data,
valid_sets=[test_data])
Xtest=test.copy()
X=train
y=target
errlgb = []
y_pred_totlgb = []
fold = KFold(n_splits=10, shuffle=True, random_state=42)
for train_index, test_index in fold.split(X):
X_train, X_test = X.loc[train_index], X.loc[test_index]
y_train, y_test = y[train_index], y[test_index]
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_test, label=y_test)
clf = lgb.train(params=param,
early_stopping_rounds=100,
verbose_eval=100,
train_set=train_data,
valid_sets=[test_data])
y_pred = clf.predict(X_test)
print("RMSE: ", np.sqrt(mean_squared_error(y_test, y_pred)))
errlgb.append(np.sqrt(mean_squared_error(y_test, y_pred)))
p = clf.predict(Xtest)
y_pred_totlgb.append(p)
np.mean(y_pred_totlgb,0)
np.mean(errlgb, 0)
feature_imp = pd.DataFrame(sorted(zip(clf.feature_importance(), X.columns), reverse=True)[:15], columns=['Value','Feature'])
plt.figure(figsize=(8,8))
sns.barplot(x="Value", y="Feature", data=feature_imp.sort_values(by="Value", ascending=False))
plt.title('Top 15 Most Important LightGBM Features')
plt.tight_layout()
plt.show()
y_pred = np.mean(y_pred_totlgb,0)
#Creating the Submission file
test['index']=test.index
test['pred']=y_pred
sub=test[['pred','existence expectancy index']]
test
opt_pred=[]
for i in range(890):
if test['existence expectancy index'][i] < 0.7:
opt_pred.append(True)
else:
opt_pred.append(False)
sub['opt_pred']=opt_pred
sub['opt_pred'].value_counts()
#Distributing the 50,000 Zillion Liter of
map_p={False:52.75, True:99}
sub['opt_pred']=sub['opt_pred'].map(map_p)
sub.drop('existence expectancy index',axis=1,inplace=True)
sub['index']=sub.index
#Following submission format
sub=sub[['index','pred','opt_pred']]
sub.to_csv('my_latest_sub.csv',index=False)
| 0.559771 | 0.551332 |
# Classifying Business Documents using Deep Learning
## IBM Coursera Advanced Data Science Capstone - Model Building
## Sumudu Tennakoon
```
# -*- coding: utf-8 -*-
"""
Created on Sat Feb 16 2019
@author: Sumudu Tennakoon
@licence:Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""
import pandas as pd
import numpy as np
import sys
import os
import re
import matplotlib.pyplot as plt
from datetime import date
from sklearn.model_selection import train_test_split
import tensorflow as tf
import tensorflow.keras as keras
print('TensorFlow Version: ', tf.__version__)
from DocumentClassifierV1 import * # Custom library created for the Capstone project.
```
## 1. Read Pre-saved Input dataset
```
DocumentFilesData = pd.read_pickle('Data/DocumentClassification_IBM_ADV_DS_Capstone_TrainSample_128x128_20190316.pkl')
ClassLabels = list(DocumentFilesData.FileClass.unique())
ClassNumbers = list(range(len(ClassLabels)))
ClassLabelMap = list((zip(ClassLabels, ClassNumbers)))
print(ClassLabelMap)
for clm in ClassLabelMap:
DocumentFilesData.loc[DocumentFilesData['FileClass']==clm[0] , 'ClassNumber'] = clm[1]
NClasses = len(ClassLabels)
imgRows = 128
imgCols = 128
```
## 2. Pre-Process Modeling Dataset
```
TestSize= 0.3
ResponseColumn='ClassNumber'
TrainData, TestData = train_test_split(DocumentFilesData, test_size=TestSize, random_state=42)
x_train = TrainData['DocumentMatrix'].values
x_train = np.asarray(list(x_train), dtype ='int')
y_train = TrainData[ResponseColumn].values
x_test = TestData['DocumentMatrix'].values
x_test = np.asarray(list(x_test), dtype ='int')
y_test = TestData[ResponseColumn].values
#Modeling parameters
NClasses = len(ClassLabels)
#Shape of datasets
print(x_train.shape)
print(y_train.shape)
#Plot sample image with scale
plt.imshow(x_train[0])
plt.colorbar()
if keras.backend.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, imgRows, imgCols)
x_test = x_test.reshape(x_test.shape[0], 1, imgRows, imgCols)
input_shape = (1, imgRows, imgCols)
else:
x_train = x_train.reshape(x_train.shape[0], imgRows, imgCols, 1)
x_test = x_test.reshape(x_test.shape[0], imgRows, imgCols, 1)
input_shape = (imgRows, imgCols, 1)
x_train = x_train.astype('float32') # convert integer image tensor to float
x_test = x_test.astype('float32') # convert integer image tensor to float
x_train = x_train/255 # Normalize grayscale to a number between 0 and 1
x_test = x_test/255 # Normalize grayscale to a number between 0 and 1
# convert class vectors to binary class matrices (One-hot encoding)
y_train = keras.utils.to_categorical(y_train, NClasses)
y_test = keras.utils.to_categorical(y_test, NClasses)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
```
## 3. Setup CNN Model
```
ModelID='DocumentClassification_IBM_ADV_DS_Capstone'
ModelVersion = 'CNN_V03' # 'LGR_V01' # 'DFF_V01' # 'CNN_V01' # 'CNN_V02' #
batch_size = 64 # 128 was used in GPU development environment (delivered model)
epochs = 10 # 40 was used in GPU development environment (delivered model)
optimizer=keras.optimizers.Adadelta()
BuiltDate= '20190316' #date.today().strftime('%Y%m%d')
ModelFileName = '{}_{}_{}x{}_{}'.format(ModelID, ModelVersion, imgRows, imgCols, BuiltDate)
print('Building Model {}'.format(ModelFileName))
model = keras.Sequential()
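# Architecture: two Conv-Conv-MaxPool-Dropout blocks, then two dense layers with dropout
# and a softmax output over the document classes.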
model.add(keras.layers.Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=input_shape)) #, strides=(2, 2)
model.add(keras.layers.Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(0.25))
model.add(keras.layers.Conv2D(32, kernel_size=(3,3), activation='relu'))
model.add(keras.layers.Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(0.25))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(NClasses, activation='softmax')) # Output layer
```
### Model summary
```
print(model.summary())
```
### Compile model
```
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=optimizer,
metrics=['accuracy'])
```
## 4. Fit Model
```
# Create Tensorflow Session
sess = tf.Session(config=tf.ConfigProto(intra_op_parallelism_threads=16, inter_op_parallelism_threads=16))
keras.backend.set_session(sess)
# Fit model
hist = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
ModelHistory = pd.DataFrame(data = hist.history)
# Plot the model fitting history
fig, axes = plt.subplots(nrows=2, ncols=1)
fig.subplots_adjust(hspace=0.5)
ModelHistory[['val_loss', 'loss']].plot(linewidth=2, figsize=(10, 6), ax=axes[0])
ModelHistory[['val_acc', 'acc']].plot(linewidth=2, figsize=(10, 6),ax=axes[1])
plt.title('Model {} fitting history'.format(ModelFileName))
plt.xlabel('Epochs')
```
## 5. Analyze model fitting History and Evaluate Performance
```
# Print model fitting summary
score = model.evaluate(x_test, y_test, verbose=0)
print(ModelFileName)
print(model.summary())
print('train samples:', x_train.shape[0])
print('test samples:', x_test.shape[0])
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
## 6. Save Model
```
model.save('{}.h5'.format(ModelFileName))
print('TF Model {}.h5 is saved...'.format(ModelFileName))
```
## 7. Test saved model
```
ModelLoaded = keras.models.load_model('{}.h5'.format(ModelFileName))
predictions_test = ModelLoaded.predict(x_test)
N =len(predictions_test)
PredictedClassNumbers = [None] * N
PredictedClassConfidence = [None] * N
ActualClassNumbers = [None] * N
for i in range(N):
p = np.argmax(predictions_test[i])
PredictedClassNumbers[i] = p
PredictedClassConfidence[i] = predictions_test[i][p]
for i in range(N):
p = np.argmax(y_test[i])
ActualClassNumbers[i] = p
TestResults = pd.DataFrame(data = predictions_test)
TestResults['PredictedClassNumber'] = PredictedClassNumbers
TestResults['ActualClassNumber'] = ActualClassNumbers
```
### Print model fitting summary and performance
```
print(ModelFileName)
print('train samples:', x_train.shape[0])
print('test samples:', x_test.shape[0])
print('Test loss:', score[0])
print('Test accuracy:', score[1])
print(pd.crosstab(TestResults['ActualClassNumber'], TestResults['PredictedClassNumber'], margins=True))
import seaborn as sns
sns.heatmap(pd.crosstab(TestResults.ActualClassNumber, TestResults.PredictedClassNumber, margins=False), annot=True)
```
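Beyond the confusion matrix, a per-class precision/recall breakdown can help identify weak classes. This is an optional sketch using scikit-learn (already a dependency of this notebook) on the `TestResults` frame built above:
```
from sklearn.metrics import classification_report

print(classification_report(TestResults['ActualClassNumber'],
                            TestResults['PredictedClassNumber']))
```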
<hr>
<p> This notebook and related materials were developed by <b> Sumudu Tennakoon</b> for the capstone project in partial fulfillment of the requirements for the <b> Advanced Data Science with IBM Specialization</b>. <br>
March 2019. <br>
Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)</p>
|
github_jupyter
|
# -*- coding: utf-8 -*-
"""
Created on Sat Feb 16 2019
@author: Sumudu Tennakoon
@licence:Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""
import pandas as pd
import numpy as np
import sys
import os
import re
import matplotlib.pyplot as plt
from datetime import date
from sklearn.model_selection import train_test_split
import tensorflow as tf
import tensorflow.keras as keras
print('TensorFlow Version: ', tf.__version__)
from DocumentClassifierV1 import * # Custom library created for the Capstone project.
DocumentFilesData = pd.read_pickle('Data/DocumentClassification_IBM_ADV_DS_Capstone_TrainSample_128x128_20190316.pkl')
ClassLabels = list(DocumentFilesData.FileClass.unique())
ClassNumbers = list(range(len(ClassLabels)))
ClassLabelMap = list((zip(ClassLabels, ClassNumbers)))
print(ClassLabelMap)
for clm in ClassLabelMap:
DocumentFilesData.loc[DocumentFilesData['FileClass']==clm[0] , 'ClassNumber'] = clm[1]
NClasses = len(ClassLabels)
imgRows = 128
imgCols = 128
TestSize= 0.3
ResponseColumn='ClassNumber'
TrainData, TestData = train_test_split(DocumentFilesData, test_size=TestSize, random_state=42)
x_train = TrainData['DocumentMatrix'].values
x_train = np.asarray(list(x_train), dtype ='int')
y_train = TrainData[ResponseColumn].values
x_test = TestData['DocumentMatrix'].values
x_test = np.asarray(list(x_test), dtype ='int')
y_test = TestData[ResponseColumn].values
#Modeling parameters
NClasses = len(ClassLabels)
#Shape of datasets
print(x_train.shape)
print(y_train.shape)
#Plot sample image with scale
plt.imshow(x_train[0])
plt.colorbar()
if keras.backend.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, imgRows, imgCols)
x_test = x_test.reshape(x_test.shape[0], 1, imgRows, imgCols)
input_shape = (1, imgRows, imgCols)
else:
x_train = x_train.reshape(x_train.shape[0], imgRows, imgCols, 1)
x_test = x_test.reshape(x_test.shape[0], imgRows, imgCols, 1)
input_shape = (imgRows, imgCols, 1)
x_train = x_train.astype('float32') #convert interger image tensor to float
x_test = x_test.astype('float32') #convert interger image tensor to float
x_train = x_train/255 # Normalize grayscale to a number between 0 and 1
x_test = x_test/255 # Normalize grayscale to a number between 0 and 1
# convert class vectors to binary class matrices (One-hot encoding)
y_train = keras.utils.to_categorical(y_train, NClasses)
y_test = keras.utils.to_categorical(y_test, NClasses)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
ModelID='DocumentClassification_IBM_ADV_DS_Capstone'
ModelVersion = 'CNN_V03' # 'LGR_V01' # 'DFF_V01' # 'CNN_V01' # 'CNN_V02' #
batch_size = 64 # 128 was used in GPU development environment (delivered model)
epochs = 10 # 40 was used in GPU development environment (delivered model)
optimizer=keras.optimizers.Adadelta()
BuiltDate= '20190316' #date.today().strftime('%Y%m%d')
ModelFileName = '{}_{}_{}x{}_{}'.format(ModelID, ModelVersion, imgRows, imgCols, BuiltDate)
print('Building Model {}'.format(ModelFileName))
model = keras.Sequential()
model.add(keras.layers.Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=input_shape)) #, strides=(2, 2)
model.add(keras.layers.Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(0.25))
model.add(keras.layers.Conv2D(32, kernel_size=(3,3), activation='relu'))
model.add(keras.layers.Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(0.25))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(NClasses, activation='softmax')) # Output layer
print(model.summary())
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=optimizer,
metrics=['accuracy'])
# Create Tensorflow Session
sess = tf.Session(config=tf.ConfigProto(intra_op_parallelism_threads=16, inter_op_parallelism_threads=16))
keras.backend.set_session(sess)
# Fit model
hist = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
ModelHistory = pd.DataFrame(data = hist.history)
# Plot model history and save
fig, axes = plt.subplots(nrows=2, ncols=1)
fig.subplots_adjust(hspace=0.5)
ModelHistory[['val_loss', 'loss']].plot(linewidth=2, figsize=(10, 6), ax=axes[0])
ModelHistory[['val_acc', 'acc']].plot(linewidth=2, figsize=(10, 6),ax=axes[1])
plt.title('Model {} fitting history'.format(ModelFileName))
plt.xlabel('Epocs')
# Print model fitting summury
score = model.evaluate(x_test, y_test, verbose=0)
print(ModelFileName)
print(model.summary())
print('train samples:', x_train.shape[0])
print('test samples:', x_test.shape[0])
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.save('{}.h5'.format(ModelFileName))
print('TF Model {}.h5 is saved...'.format(ModelFileName))
ModelLoaded = keras.models.load_model('{}.h5'.format(ModelFileName))
predictions_test = ModelLoaded.predict(x_test)
N =len(predictions_test)
PredictedClassNumbers = [None] * N
PredictedClassConfidence = [None] * N
ActualClassNumbers = [None] * N
for i in range(N):
p = np.argmax(predictions_test[i])
PredictedClassNumbers[i] = p
PredictedClassConfidence[i] = predictions_test[i][p]
for i in range(N):
p = np.argmax(y_test[i])
ActualClassNumbers[i] = p
TestResults = pd.DataFrame(data = predictions_test)
TestResults['PredictedClassNumber'] = PredictedClassNumbers
TestResults['ActualClassNumber'] = ActualClassNumbers
print(ModelFileName)
print('train samples:', x_train.shape[0])
print('test samples:', x_test.shape[0])
print('Test loss:', score[0])
print('Test accuracy:', score[1])
print(pd.crosstab(TestResults['ActualClassNumber'], TestResults['PredictedClassNumber'], margins=True))
import seaborn as sns
sns.heatmap(pd.crosstab(TestResults.ActualClassNumber, TestResults.PredictedClassNumber, margins=False), annot=True)
| 0.496582 | 0.887058 |
Lambda School Data Science
*Unit 2, Sprint 1, Module 2*
---
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
#else:
#DATA_PATH = '../data/'
```
# Module Project: Regression II
In this project, you'll continue working with the New York City rent dataset you used in the last module project.
## Directions
The tasks for this project are as follows:
- **Task 1:** Import `csv` file using `wrangle` function.
- **Task 2:** Conduct exploratory data analysis (EDA), and modify `wrangle` function to engineer two new features.
- **Task 3:** Split data into feature matrix `X` and target vector `y`.
- **Task 4:** Split feature matrix `X` and target vector `y` into training and test sets.
- **Task 5:** Establish the baseline mean absolute error for your dataset.
- **Task 6:** Build and train a `LinearRegression` model.
- **Task 7:** Calculate the training and test mean absolute error for your model.
- **Task 8:** Calculate the training and test $R^2$ score for your model.
- **Stretch Goal:** Determine the three most important features for your linear regression model.
**Note**
You should limit yourself to the following libraries for this project:
- `matplotlib`
- `numpy`
- `pandas`
- `sklearn`
# I. Wrangle Data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.linear_model import LinearRegression
from math import floor, ceil
from ipywidgets import interactive, IntSlider, FloatSlider
import datetime as dt
def wrangle(filepath):
df = pd.read_csv(filepath,
parse_dates=['created'],
index_col='created')
#drop_col = df.select_dtypes(include='object').columns
#df.drop(columns=drop_col, inplace=True)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
df['total_bedrooms'] = df['bathrooms'] + df['bedrooms']
df['length_of_description'] = df['description'].str.len()
#df['created'] = df.index
#df['Year'] = df['created'].year
df.dropna(inplace=True)
return df
df = wrangle(DATA_PATH + 'apartments/renthop-nyc.csv')
df.isnull().sum().sum()
```
**Task 1:** Add the following functionality to the above `wrangle` function.
- The `'created'` column will parsed as a `DateTime` object and set as the `index` of the DataFrame.
- Rows with `NaN` values will be dropped.
Then use your modified function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
```
df.shape
df.head()
```
**Task 2:** Using your `pandas` and dataviz skills decide on two features that you want to engineer for your dataset. Next, modify your `wrangle` function to add those features.
**Note:** You can learn more about feature engineering [here](https://en.wikipedia.org/wiki/Feature_engineering). Here are some ideas for new features:
- Does the apartment have a description?
- Length of description.
- Total number of perks that apartment has.
- Are cats _or_ dogs allowed?
- Are cats _and_ dogs allowed?
- Total number of rooms (beds + baths).
```
df.select_dtypes('object')
df['description']
# Conduct your exploratory data analysis here,
# and then modify the function above.
# Make feature for if each apartment has a description or not
#df['length_of_description'] = df['description'].str.len()
# Make feature for total number of rooms utilizing bedrooms and bathrooms
#df['total_bedrooms'] = df['bathrooms'] + df['bedrooms']
#df[['description', 'length_of_description']]
```
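As a quick illustration of one idea from the list above, the cell below checks how many listings have a non-empty description. It is exploratory only and does not assign anything back to `df`, so the features used later are unchanged:
```
# Exploratory check: does each apartment have a (non-empty) description?
has_description = df['description'].str.strip().str.len() > 0
has_description.value_counts()
```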
# II. Split Data
**Task 3:** Split your DataFrame `df` into a feature matrix `X` and the target vector `y`. You want to predict `'price'`.
**Note:** In contrast to the last module project, this time you should include _all_ the numerical features in your dataset.
```
X = df.drop(['description', 'display_address', 'street_address', 'interest_level'], axis=1)
y = df['price']
X
```
**Task 4:** Split `X` and `y` into a training set (`X_train`, `y_train`) and a test set (`X_test`, `y_test`).
- Your training set should include data from April and May 2016.
- Your test set should include data from June 2016.
```
cutoff = '2016-06-01 00:00:00'  # training: April and May 2016; test: June 2016
mask = X.index < cutoff
X_train, y_train = X.loc[mask], y.loc[mask]
X_test, y_test = X.loc[~mask], y.loc[~mask]
X_test.tail()
```
# III. Establish Baseline
**Task 5:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y_train`. Next, create a list `y_pred` that has the same length as `y_train` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
```
y_pred = [y_train.mean()]*len(y_train)
baseline_mae = mean_absolute_error(y_train, y_pred)
print('Baseline MAE:', baseline_mae)
```
# IV. Build Model
**Task 6:** Build and train a `LinearRegression` model named `model` using your feature matrix `X_train` and your target vector `y_train`.
```
# Step 1: Import predictor class
# Step 2: Instantiate predictor
model = LinearRegression()
# Step 3: Fit predictor on the (training) data
model.fit(X_train, y_train)
```
# V. Check Metrics
**Task 7:** Calculate the training and test mean absolute error for your model.
```
def whats_my_rent(bedrooms):
    # Note: this helper assumes a model fit on a single 'bedrooms' feature.
    target_predict = model.predict([[bedrooms]])
    estimate = target_predict[0]
    coef = model.coef_[0]
    result = f'${estimate:.2f} for a {bedrooms}-bedroom apartment,'
    explanation = f'plus about ${coef:.2f} for each additional room.'
    return result, explanation

training_mae = mean_absolute_error(y_train, model.predict(X_train))
test_mae = mean_absolute_error(y_test, model.predict(X_test))
print('Training MAE:', training_mae)
print('Test MAE:', test_mae)
```
**Task 8:** Calculate the training and test $R^2$ score for your model.
```
training_r2 = model.score(X_train, y_train)
test_r2 = model.score(X_test, y_test)
print('Training R^2:', training_r2)
print('Test R^2:', test_r2)
```
# VI. Communicate Results
**Stretch Goal:** What are the three most influential coefficients in your linear model? You should consider the _absolute value_ of each coefficient, so that it doesn't matter if it's positive or negative.
```
intercept = (model.intercept_)
print(intercept)
coef = (model.coef_)
print(coef)
f'Price = {intercept} + {coef} *latitude'
X_train.columns
```
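A sketch for the stretch goal: pair each coefficient with its feature name and rank by absolute value. It uses only `model` and `X_train`, which are already in memory:
```
# Three most influential features by absolute coefficient value
coefficients = pd.Series(model.coef_, index=X_train.columns)
print(coefficients.abs().sort_values(ascending=False).head(3))
```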
|
github_jupyter
|
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
#else:
#DATA_PATH = '../data/'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.linear_model import LinearRegression
from math import floor, ceil
from ipywidgets import interactive, IntSlider, FloatSlider
import datetime as dt
def wrangle(filepath):
df = pd.read_csv(filepath,
parse_dates=['created'],
index_col='created')
#drop_col = df.select_dtypes(include='object').columns
#df.drop(columns=drop_col, inplace=True)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
df['total_bedrooms'] = df['bathrooms'] + df['bedrooms']
df['length_of_description'] = df['description'].str.len()
#df['created'] = df.index
#df['Year'] = df['created'].year
df.dropna(inplace=True)
return df
filepath = wrangle(DATA_PATH + 'apartments/renthop-nyc.csv')
df = filepath
df.isnull().sum().sum()
df.shape
df.head()
df.select_dtypes('object')
df['description']
# Conduct your exploratory data analysis here,
# and then modify the function above.
# Make feature for if each apartment has a description or not
#df['length_of_description'] = df['description'].str.len()
# Make feature for total number of rooms utilizing bedrooms and bathrooms
#df['total_bedrooms'] = df['bathrooms'] + df['bedrooms']
#df[['description', 'length_of_description']]
X = df.drop(['description', 'display_address', 'street_address', 'interest_level'], axis=1)
y = df['price']
X
cutoff = '2016-06-01 00:00:00'  # training: April and May 2016; test: June 2016
mask = X.index < cutoff
X_train, y_train = X.loc[mask], y.loc[mask]
X_test, y_test = X.loc[~mask], y.loc[~mask]
X_test.tail()
y_pred = [y_train.mean()]*len(y_train)
baseline_mae = mean_absolute_error(y_train, y_pred)
print('Baseline MAE:', baseline_mae)
# Step 1: Import predictor class
# Step 2: Instantiate predictor
model = LinearRegression()
# Step 3: Fit predictor on the (training) data
model.fit(X_train, y_train)
def whats_my_rent(bedrooms):
    # Note: this helper assumes a model fit on a single 'bedrooms' feature.
    target_predict = model.predict([[bedrooms]])
    estimate = target_predict[0]
    coef = model.coef_[0]
    result = f'${estimate:.2f} for a {bedrooms}-bedroom apartment,'
    explanation = f'plus about ${coef:.2f} for each additional room.'
    return result, explanation

training_mae = mean_absolute_error(y_train, model.predict(X_train))
test_mae = mean_absolute_error(y_test, model.predict(X_test))
print('Training MAE:', training_mae)
print('Test MAE:', test_mae)
training_r2 = model.score(X_train, y_train)
test_r2 = model.score(X_test, y_test)
print('Training R^2:', training_r2)
print('Test R^2:', test_r2)
intercept = (model.intercept_)
print(intercept)
coef = (model.coef_)
print(coef)
f'Price = {intercept} + {coef} *latitude'
X_train.columns
| 0.372277 | 0.964888 |
```
import os
# Find the latest version of spark 3.0 from http://www.apache.org/dist/spark/ and enter as the spark version
# For example:
# spark_version = 'spark-3.2.0'
spark_version = 'spark-3.2.0'
os.environ['SPARK_VERSION']=spark_version
# Install Spark and Java
!apt-get update
!apt-get install openjdk-11-jdk-headless -qq > /dev/null
!wget -q http://www.apache.org/dist/spark/$SPARK_VERSION/$SPARK_VERSION-bin-hadoop2.7.tgz
!tar xf $SPARK_VERSION-bin-hadoop2.7.tgz
!pip install -q findspark
# Set Environment Variables
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.environ["SPARK_HOME"] = f"/content/{spark_version}-bin-hadoop2.7"
# Start a SparkSession
import findspark
findspark.init()
!wget https://jdbc.postgresql.org/download/postgresql-42.2.9.jar
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("Video_DVD_v1_00").config("spark.driver.extraClassPath","/content/postgresql-42.2.9.jar").getOrCreate()
from pyspark import SparkFiles
# Load in data from S3 into a DataFrame
url = "https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Video_DVD_v1_00.tsv.gz"
spark.sparkContext.addFile(url)
video_dvd_df = spark.read.option('header', 'true').csv(SparkFiles.get("amazon_reviews_us_Video_DVD_v1_00.tsv.gz"), inferSchema=True, sep='\t')
video_dvd_df.show()
# Count the number of rows in the DataFrame
video_dvd_df.count()
# Drop rows containing null values
video_dvd_df = video_dvd_df.dropna()
video_dvd_df.show()
#Customer table
customer_df = video_dvd_df.select(["customer_id"])
customer_df.show()
customer_df = customer_df.groupBy("customer_id").count()
customer_df = customer_df.orderBy("customer_id").select(["customer_id", "count"])  # assign the result; otherwise this line has no effect
customer_df.show()
customer_df = customer_df.withColumnRenamed("count", "customer_count")
customer_df.show()
customer_df.printSchema()
#Products table
products_df = video_dvd_df.select(["product_id", "product_title"])
products_df.show()
products_df = products_df.dropDuplicates(["product_id"])
products_df.printSchema()
#Review ID table
review_id_df = video_dvd_df.select(["review_id", "customer_id", "product_id", "product_parent", "review_date"])
review_id_df.show()
review_id_df = review_id_df.dropDuplicates(["review_id"])
from pyspark.sql.functions import to_date
review_id_df = review_id_df.withColumn("review_date", to_date("review_date", "yyyy-MM-dd"))  # 'MM' is month; lowercase 'mm' means minutes
review_id_df.show()
review_id_df.printSchema()
#Vine table
vine_df = video_dvd_df.select(["review_id", "star_rating", "helpful_votes", "total_votes"])
vine_df.show()
vine_df = vine_df.dropDuplicates(["review_id"])
vine_df.count()
vine_df.printSchema()
mode = "append"
jdbc_url = "jdbc:postgresql://mypostgresdb.cqxo6vsypirs.us-east-1.rds.amazonaws.com:5432/mypostgresdb"
config = {"user": "root", "password":"password1", "driver": "org.postgresql.Driver"}
customer_df.write.jdbc(url=jdbc_url, table="customers", mode = mode, properties = config)
products_df.write.jdbc(url=jdbc_url, table="products", mode = mode, properties = config)
review_id_df.write.jdbc(url=jdbc_url, table="review_id_table", mode = mode, properties = config)
vine_df.write.jdbc(url=jdbc_url, table="vine_table", mode = mode, properties = config)
```
|
github_jupyter
|
| 0.494141 | 0.147463 |
# Lesson 3 Exercise 2 Solution: Focus on Primary Key
<img src="images/cassandralogo.png" width="250" height="250">
### Walk through the basics of creating a table with a good Primary Key in Apache Cassandra, inserting rows of data, and doing a simple CQL query to validate the information.
#### We will use the Python driver (the `cassandra-driver` package, imported as `cassandra`) to run the Apache Cassandra queries. It should already be preinstalled; if you ever need to install it locally, run this command in a notebook:
! pip install cassandra-driver
#### More documentation can be found here: https://datastax.github.io/python-driver/
#### Import Apache Cassandra python package
```
import cassandra
```
### Create a connection to the database
```
from cassandra.cluster import Cluster
try:
cluster = Cluster(['127.0.0.1']) #If you have a locally installed Apache Cassandra instance
session = cluster.connect()
except Exception as e:
print(e)
```
### Create a keyspace to work in
```
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS udacity
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
```
#### Connect to the Keyspace. Compare this to how we had to create a new session in PostgreSQL.
```
try:
session.set_keyspace('udacity')
except Exception as e:
print(e)
```
### Imagine you need to create a new Music Library of albums
### Here is the query we need the data to answer:
### 1. Give every album in the music library that was created by a given artist
`select * from music_library WHERE artist_name='The Beatles'`
### Here is the Collection of Data
<img src="images/table3.png" width="650" height="350">
### How should we model these data?
#### What should our Primary Key and Partition Key be? Since the query filters on ARTIST, let's start with that. Is partitioning our data by artist a good idea? In this case our dataset is very small. If we had a larger dataset of albums, partitioning by artist might be a fine choice, but we would need to check the dataset to make sure the data is spread evenly across the partitions.
`Table Name: music_library
column 1: Year
column 2: Artist Name
column 3: Album Name
Column 4: City
PRIMARY KEY(artist_name)`
```
query = "CREATE TABLE IF NOT EXISTS music_library"
query = query + "(year int, artist_name text, album_name text, city text, PRIMARY KEY (artist_name))"
try:
session.execute(query)
except Exception as e:
print(e)
```
### Insert the data into the tables
```
query = "INSERT INTO music_library (year, artist_name, album_name, city)"
query = query + " VALUES (%s, %s, %s, %s)"
try:
session.execute(query, (1970, "The Beatles", "Let it Be", "Liverpool"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Beatles", "Rubber Soul", "Oxford"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Who", "My Generation", "London"))
except Exception as e:
print(e)
try:
session.execute(query, (1966, "The Monkees", "The Monkees", "Los Angeles"))
except Exception as e:
print(e)
try:
session.execute(query, (1970, "The Carpenters", "Close To You", "San Diego"))
except Exception as e:
print(e)
```
### Let's validate our data model -- did it work? If we look for albums from The Beatles we should expect to see 2 rows.
`select * from music_library WHERE artist_name='The Beatles'`
```
query = "select * from music_library WHERE artist_name='The Beatles'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
```
### That didn't work out as planned! Why is that? Because our primary key was not unique: with PRIMARY KEY (artist_name) alone, each artist maps to a single row, so the second Beatles insert simply overwrote the first (inserts in Cassandra are upserts).
### Let's try again. This time focus on making the PRIMARY KEY unique.
### Looking at the dataset, what makes each row unique?
### We have a couple of options (City and Album Name), but on their own those won't support the query we need, which looks up albums by a particular artist. Let's make a composite key of the `ARTIST NAME` and `ALBUM NAME`. This assumes that an album name is unique to the artist who created it (not a bad bet). But remember, this is just an exercise; in practice you will need to understand your dataset fully (no betting!)
```
query = "CREATE TABLE IF NOT EXISTS music_library1 "
query = query + "(artist_name text, album_name text, year int, city text, PRIMARY KEY (artist_name, album_name))"
try:
session.execute(query)
except Exception as e:
print(e)
query = "INSERT INTO music_library1 (artist_name, album_name, year, city)"
query = query + " VALUES (%s, %s, %s, %s)"
try:
session.execute(query, ("The Beatles", "Let it Be", 1970, "Liverpool"))
except Exception as e:
print(e)
try:
session.execute(query, ("The Beatles", "Rubber Soul", 1965, "Oxford"))
except Exception as e:
print(e)
try:
session.execute(query, ("The Who", "My Generation", 1965, "London"))
except Exception as e:
print(e)
try:
session.execute(query, ("The Monkees", "The Monkees", 1966, "Los Angeles"))
except Exception as e:
print(e)
try:
session.execute(query, ("The Carpenters", "Close To You", 1970, "San Diego"))
except Exception as e:
print(e)
```
### Validate the data model -- did it work? If we look for albums from The Beatles we should expect to see 2 rows.
`select * from music_library1 WHERE artist_name='The Beatles'`
```
query = "select * from music_library1 WHERE artist_name='The Beatles'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
```
### Success, it worked! We created a unique primary key, and the data is now distributed across partitions by artist.
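To sanity-check how the rows are spread across partitions, one option is a quick client-side count per partition key. This is only a sketch and assumes the `session` and the `music_library1` table created above:
```
# Sketch: count rows per partition key (artist_name) on the client side
from collections import Counter

try:
    rows = session.execute("SELECT artist_name FROM music_library1")
    counts = Counter(row.artist_name for row in rows)
    for artist, n in counts.items():
        print(artist, n)
except Exception as e:
    print(e)
```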
### Drop the tables
```
query = "drop table music_library"
try:
rows = session.execute(query)
except Exception as e:
print(e)
query = "drop table music_library1"
try:
rows = session.execute(query)
except Exception as e:
print(e)
```
### Close the session and cluster connection
```
session.shutdown()
cluster.shutdown()
```
|
github_jupyter
|
| 0.201813 | 0.935405 |
Center for Continuing Education
# Program "Python for Automation and Data Analysis"
*Ян Пиле*
# Exercises on regular expressions
### Exercise: abbreviations
Vladimir started a job at a very important place. In the very first document he could not understand a thing:
it was full of things like ФГУП НИЦ ГИДГЕО, ФГОУ ЧШУ АПК, and so on. So he decided to collect all the abbreviations in order to look up their meanings later at http://sokr.ru/. Help him.
We will treat as an abbreviation only words made up of capital letters (at least two of them). If several such words are separated by spaces, they
count as a single abbreviation.
**Input**: Это курс информатики соответствует ФГОС и ПООП, это подтверждено ФГУ ФНЦ НИИСИ РАН\
**Output**: ФГОС, ПООП, ФГУ ФНЦ НИИСИ РАН
```
import re
# Your solution
s = 'Это курс информатики соответствует ФГОС и ПООП, это подтверждено ФГУ ФНЦ НИИСИ РАН'
```
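One possible solution sketch (it assumes an abbreviation is a run of two or more uppercase Cyrillic or Latin letters, with space-separated runs merged into one):
```
import re
# Sketch: runs of 2+ uppercase letters, possibly joined by whitespace, count as one abbreviation
abbrs = re.findall(r'\b[А-ЯЁA-Z]{2,}(?:\s+[А-ЯЁA-Z]{2,})*\b', s)
print(', '.join(abbrs))  # ФГОС, ПООП, ФГУ ФНЦ НИИСИ РАН
```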
### Exercise: converting camelCase to snake_case
We have already talked quite a bit about how companies may have naming conventions for variables. What if you wrote some code whose variables are named in camelCase, but you need snake_case? It is worth automating that process. Let's write a function that implements this behaviour.
```
#Camel case to snake case
v = 'camelCaseVar'
import re
```
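One possible sketch: insert an underscore before every uppercase letter (except at the start of the name) and lowercase the result:
```
import re
# Sketch: camelCase -> snake_case
def camel_to_snake(name):
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

print(camel_to_snake(v))  # camel_case_var
```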
### Exercise: counting words
Words may consist of letters, or of letters joined around a hyphen (во-первых, чуть-чуть, давай-ка). Output the list of words.
```
text = '''
- Дельный, что и говорить,
Был старик тот самый,
Что придумал суп варить
На колесах прямо.
Суп - во-первых. Во-вторых,
Кашу в норме прочной.
Нет, старик он был старик
Чуткий - это точно.
'''
```
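A possible sketch: treat a word as a run of letters, optionally joined by hyphens, and collect them with `findall`:
```
import re
# Sketch: words are letter runs, optionally joined by hyphens (во-первых, чуть-чуть)
words = re.findall(r'\b[а-яёА-ЯЁa-zA-Z]+(?:-[а-яёА-ЯЁa-zA-Z]+)*\b', text)
print(len(words), words)
```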
### Exercise: finding words starting with a and e
Find the words in the text that begin with a or e.
```
import re
# Input.
text = "The following example creates an ArrayList with a capacity of 50 elements.\
Four elements are then added to the ArrayList and the ArrayList is trimmed accordingly."
```
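A possible sketch for this text (case-insensitive, so 'ArrayList' also counts as starting with a):
```
import re
# Sketch: words beginning with 'a' or 'e'
print(re.findall(r'\b[ae]\w*', text, flags=re.IGNORECASE))
```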
**Example 2**
Find the words in the text that begin with a or e.
```
import re
# Input.
text = '''
Für den folgenden Bericht gibt es einige Neben- und
drei Hauptquellen, die hier am Anfang einmal genannt, dann aber nicht mehr erwähnt werden. Die
Hauptquellen: Vernehmungsprotokolle der Polizeibehörde, Rechtsanwalt Dr. Hubert Bloma, sowie
dessen Schul- und Studienfreund, der Staatsanwalt Peter
Hach, der - vertraulich, versteht sich - die Vernehmungsprotokolle, gewisse Maßnahmen der
Untersuchungsbehörde und Ergebnisse von Recherchen, soweit sie nicht in den Protokollen auftauchten,
ergänzte;
'''
```
### Let's work through a real example
Let's take a translation of the novel The Idiot, extract the text of the first chapter from it, and then count the number of occurrences of the word the. Link: 'https://www.gutenberg.org/files/2638/2638-0.txt'
Note that the can also be part of another word! We need to extract exactly the **word the**.
```
import re
import requests
the_idiot_url = 'https://www.gutenberg.org/files/2638/2638-0.txt'
raw = requests.get(the_idiot_url).text
# Index of the start of the first chapter
start = re.search(r'\*\*\* START OF THIS PROJECT GUTENBERG EBOOK THE IDIOT \*\*\*', raw).end()
# Index of the end of the first chapter
end = re.search(r'II', raw).start()
end
```
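With the chapter boundaries computed above, one way to count the standalone word the is a sketch like this (the word boundaries keep 'the' from matching inside words such as 'there'):
```
import re
# Sketch: count occurrences of the word 'the' between the start/end indices found above
# (drop IGNORECASE if capitalized 'The' should not be counted)
chapter_one = raw[start:end]
print(len(re.findall(r'\bthe\b', chapter_one, flags=re.IGNORECASE)))
```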
## About times (google it)
Vovochka prepared a very important letter, but everywhere he put the wrong time.
So every occurrence of a time must be replaced with the string (TBD). A time is a string of the form HH:MM:SS or HH:MM, where HH is a number from 00 to 23, and MM and SS are numbers from 00 to 59.
Input:
Уважаемые! Если вы к 09:00 не вернёте
чемодан, то уже в 09:00:01 я за себя не отвечаю.
PS. С отношением 25:50 всё нормально!
Output:
Уважаемые! Если вы к (TBD) не вернёте
чемодан, то уже в (TBD) я за себя не отвечаю.
PS. С отношением 25:50 всё нормально!
```
inp = """Уважаемые! Если вы к 09:00 не вернёте
чемодан, то уже в 09:00:01 я за себя не отвечаю.
PS. С отношением 25:50 всё нормально!"""
```
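One possible sketch: match HH:MM with an optional :SS, restricting the digit ranges so that a string like 25:50 is left untouched:
```
import re
# Sketch: HH in 00-23, MM/SS in 00-59; replace valid times with (TBD)
time_re = r'\b(?:[01]\d|2[0-3]):[0-5]\d(?::[0-5]\d)?\b'
print(re.sub(time_re, '(TBD)', inp))
```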
## About financial reports (google it)
Vladimir urgently needed to obfuscate some financial documentation, but in a way that would be reversible.
He could not think of anything better than replacing every integer (sequence of digits) with its cube. Help him.
Input:
Было закуплено 12 единиц техники
по 410.37 рублей.
Output:
Было закуплено 1728 единиц техники
по 68921000.50653 рублей.
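One possible sketch (using the example text above; note that 410.37 is treated as the two integers 410 and 37, which reproduces the expected output):
```
import re
# Sketch: replace every run of digits with its cube
report = 'Было закуплено 12 единиц техники\nпо 410.37 рублей.'
print(re.sub(r'\d+', lambda m: str(int(m.group()) ** 3), report))
```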
|
github_jupyter
|
| 0.165526 | 0.938011 |
# The Multiplier Effect
```
%matplotlib inline
import ipywidgets as widgets
from ipywidgets import interactive, fixed
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['figure.figsize'] = [30/2.54, 20/2.54]
def f(r): # autonome Ausgaben und marginale Konsumneigung
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_aspect('equal', adjustable='box')
# Berechne Werte für x-Achse der Grafik
Y = np.linspace(0.0001, 2000, num=10) # Einkommen (Y)
A=500 # Ausgangsgleichgewicht A sei fix
c1=0.55 # Marginale Konsumneigung sei fix:
Z_0= A-c1*A # Berechne autonome Ausgaben für c1 und A
ZZ= Z_0+c1*Y # Werte für ZZ-Kurve (aggregierte Nachfrage)
delta_Z_0=500 # Erhöhung der Staatsausgaben
ZZ_p = Z_0 + delta_Z_0 + c1*Y # neue aggregierte Nachfrage
A_p= (Z_0 + delta_Z_0)/(1-c1) # Berechne neues Gleichgewicht A'
# Zeichne Kurven in Grafik ein
plt.plot(Y, ZZ, color="red")
plt.plot(Y,ZZ_p, color="red", linestyle="dashed")
plt.plot(Y,Y, color="gray")
# Annotation für A und A'
plt.annotate("A",(A+25,A-75))
plt.annotate("A'",(A_p+25,A_p-75))
# Hilfskurven für A and A'
plt.plot([500,500],[0,A], color="grey", linestyle="dashed")
plt.plot([0,500],[500,500], color="grey", linestyle="dashed")
plt.plot([A_p, A_p],[0,A_p], color="grey", linestyle="dashed")
plt.plot([0,A_p],[A_p,A_p], color="grey", linestyle="dashed")
# Definiere Vektoren für Pfeile:
dx=np.array([0, 1*delta_Z_0, 0, c1*delta_Z_0, 0, c1**2*delta_Z_0, 0, c1**3*delta_Z_0, 0, c1**4*delta_Z_0])
dy=np.array([1*delta_Z_0, 0, c1*delta_Z_0, 0, c1**2*delta_Z_0, 0, c1**3*delta_Z_0, 0, c1**4*delta_Z_0, 0])
# Werte auf der x-Achse für verschiedene Runden
x=np.array([A, A , A+dx[1], A+dx[1], A+dx[1]+dx[3], A+dx[1]+dx[3], A+dx[1]+dx[3]+dx[5],
A+dx[1]+dx[3]+dx[5], A+dx[1]+dx[3]+dx[5]+dx[7], A+dx[1]+dx[3]+dx[5]+dx[7]])
# Werte auf der y-Achse für verschiedene Runden
y=np.array([A, A+dy[0], A+dy[0], A+dy[0]+dy[2], A+dy[0]+dy[2], A+dy[0]+dy[2]+dy[4], A+dy[0]+dy[2]+dy[4],
A+dy[0]+dy[2]+dy[4]+dy[6], A+dy[0]+dy[2]+dy[4]+dy[6], A+dy[0]+dy[2]+dy[4]+dy[6]+dy[8]])
# Pfeile für Multiplikatoreffekt
    R = np.arange(0, r, dtype=int)  # use the built-in int; np.int has been removed from recent NumPy releases
for i in R:
plt.arrow(x[i], y[i], dx[i], dy[i], length_includes_head=True, head_width = 25, width = 0.05, color="black")
# Texte für Nachfragekurven
plt.annotate("ZZ",(Y[-1],ZZ[-1]-75))
plt.annotate("ZZ'",(Y[-1],ZZ_p[-1]-75))
# Achsenbeschriftung
plt.xlabel('Einkommen (Y)') # plt.xlabel('Label der x-Achse')
plt.ylabel('Nachfrage (Z), Produktion (Y)') # plt.ylabel('Label der y-Achse')
# Legende
plt.show()
# Erstelle Slider
interactive_plot = interactive(f, r=widgets.BoundedFloatText(value=0, min=0,max=10, step=1, description='Runden:', disabled=False))
output = interactive_plot.children[-1]
#output.layout.height = '350px'
interactive_plot
%matplotlib inline
import ipywidgets as widgets
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['figure.figsize'] = [30/2.54, 20/2.54]
def f(c1): # Multiplikatoreffekt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_aspect('equal', adjustable='box')
# Berechne Werte für x-Achse der Grafik
Y = np.linspace(0.0001, 2000, num=10) # Einkommen (Y)
# Ausgangsgleichgewicht A sei fix
A=500
# Berechne autonome Ausgaben für c1 und A
Z_0= A-c1*A
ZZ= Z_0+c1*Y
delta_Z_0=500
ZZ_p = Z_0 + delta_Z_0 + c1*Y
# Berechne neues GGW A'
A_p= (Z_0 + delta_Z_0)/(1-c1)
# Zeichne Kurven in Grafik ein
plt.plot(Y, ZZ, color="red")
plt.plot(Y,ZZ_p, color="red", linestyle="dashed")
plt.plot(Y,Y, color="gray")
# Annotation for A and A'
plt.annotate("A",(A+25,A-75))
plt.annotate("A'",(A_p+25,A_p-75))
# Dashed lines for A and A'
plt.plot([500,500],[0,A], color="grey", linestyle="dashed")
plt.plot([0,500],[500,500], color="grey", linestyle="dashed")
plt.plot([A_p, A_p],[0,A_p], color="grey", linestyle="dashed")
plt.plot([0,A_p],[A_p,A_p], color="grey", linestyle="dashed")
# Schwarze Pfeile für Multiplikatoreffekt
plt.arrow(500, 500, 0, 500, length_includes_head=True, head_width = 25, width = 0.05, color="black")
plt.arrow(500, 1000, 500,0, length_includes_head=True, head_width = 25, width = 0.05, color="black")
plt.arrow(1000, 1000, 0, c1*500, length_includes_head=True, head_width = 25, width = 0.05, color="black")
plt.arrow(1000, 1000+c1*500, c1*500,0, length_includes_head=True, head_width = 25, width = 0.05, color="black")
plt.arrow(1000+c1*500, 1000+c1*500, 0, c1*c1*500, length_includes_head=True, head_width = 25, width = 0.05, color="black")
plt.arrow(1000+c1*500, 1000+c1*500+c1*c1*500, c1*c1*500,0, length_includes_head=True, head_width = 25, width = 0.05, color="black")
# Beschriftung für Nachfragekurven
plt.annotate("ZZ",(Y[-1],ZZ[-1]-75))
plt.annotate("ZZ'",(Y[-1],ZZ_p[-1]-75))
# Rote Pfeile für Gesamteffekt:
plt.arrow(A_p,A_p-500,0,500, width=20,length_includes_head=True, color="red")
plt.annotate("Steigerung um "+str(delta_Z_0),(A_p+50,A_p-250) )
plt.arrow(A,A,A_p-A,0, width=20,length_includes_head=True, color="red")
plt.annotate(r'Steigerung um $\frac{1}{1-c_1}*$'+ str(delta_Z_0)+ "$= $"+ str(round(1/(1-c1)*delta_Z_0,0)),(A+100,A-100))
# Achsenbeschriftung
plt.xlabel('Einkommen (Y)') # plt.xlabel('Label der x-Achse')
plt.ylabel('Nachfrage (Z), Produktion (Y)') # plt.ylabel('Label der y-Achse')
# Legende
plt.show()
# Erstelle Slider
interactive_plot = interactive(f,c1=widgets.FloatSlider(value=0.35, description='$c_1$', max=0.66, min=0, step=0.01))
# Name der Funktion, die die Grafik erstellt
# Slider für die Inputs der Grafik-Funktion, Namen müssen mit Argumenten für f() übereinstimmen
output = interactive_plot.children[-1]
#output.layout.height = '350px'
interactive_plot
```
|
github_jupyter
|
| 0.347426 | 0.913484 |
### Mapping and Reducing
#### *map* and *starmap*
You should already know the `map` built-in function and the `reduce` function (from the `functools` module), so let's quickly review them:
The `map` function applies a given function (that takes a single argument) to an iterable of values and yields (lazily) the result of applying the function to each element of the iterable.
Let's see a simple example that calculates the square of values in an iterable:
```
maps = map(lambda x: x**2, range(5))
list(maps)
```
Keep in mind that `map` returns an iterator, so it will become exhausted:
```
list(maps)
```
Of course, we can supply multiple values to a function by using an iterable of iterables (e.g. tuples) and unpacking the tuple in the function - but we still only use a single argument:
```
def add(t):
return t[0] + t[1]
list(map(add, [(0,0), [1,1], range(2,4)]))
```
Remember how we can unpack an iterable into separate positional arguments?
```
def add(x, y):
return x + y
t = (2, 3)
add(*t)
```
It would be nice if we could do that with the `map` function as well.
For example, it would be nice to do the following:
```
list(map(add, [(0,0), (1,1), (2,2)]))
```
But of course that is not going to work, since `add` expects two arguments, and only a single one (the tuple) was provided.
This is where `starmap` comes in - it will essentially `*` each element of the iterable before passing it to the function defined in the map:
```
from itertools import starmap
list(starmap(add, [(0,0), (1,1), (2,2)]))
```
#### Accumulation
You should already know the `sum` function - it simply calculates the sum of all the elements in an iterable:
```
sum([10, 20, 30])
```
It simply returns the final sum.
Sometimes we want to perform other operations than just summing up the values. Maybe we want to find the product of all the values in an iterable.
To do so, we would then use the `reduce` function available in the `functools` module. You should already be familiar with that function, but let's review it quickly.
The `reduce` function requires a `binary` function (a function that takes two arguments). It then applies that binary function to the first two elements of the iterable, obtains a result, then continues applying the binary function using the previous result and the next item in the iterable.
Optionally we can specify a seed value that is used as the 'first' element.
For example, to obtain the product of all values in an iterable:
```
from functools import reduce
reduce(lambda x, y: x*y, [1, 2, 3, 4])
```
We can even specify a "start" value:
```
reduce(lambda x, y: x*y, [1, 2, 3, 4], 10)
```
You'll note that with both `sum` and `reduce`, only the final result is shown - none of the intermediate results are available.
Sometimes we want to see the intermediate results as well.
Let's see how we might try it with the `sum` function:
```
def sum_(iterable):
it = iter(iterable)
acc = next(it)
yield acc
for item in it:
acc += item
yield acc
```
And we can use it as follows:
```
for item in sum_([10, 20, 30]):
print(item)
```
Of course, this is only going to work for a sum.
We may want the same functionality with arbitrary binary functions, just like `reduce` was more general than `sum`.
We could try doing it ourselves as follows:
```
def running_reduce(fn, iterable, start=None):
it = iter(iterable)
if start is None:
accumulator = next(it)
else:
accumulator = start
yield accumulator
for item in it:
accumulator = fn(accumulator, item)
yield accumulator
```
Let's try a running sum first.
We'll use the `operator` module instead of using lambdas.
```
import operator
list(running_reduce(operator.add, [10, 20, 30]))
```
Now we can also use other binary operators, such as multiplication:
```
list(running_reduce(operator.mul, [1, 2, 3, 4]))
```
And of course, we can even set a "start" value:
```
list(running_reduce(operator.mul, [1, 2, 3, 4], 10))
```
While this certainly works, we really don't need to code this ourselves - that's exactly what the `accumulate` function in `itertools` does for us.
The order of the arguments, however, is different: the iterable is defined first - that's because the binary function is optional and defaults to addition if we don't specify it. Also, older versions of Python do not have a "start" value option (Python 3.8 added an `initial` keyword argument for this). If you need that feature on an older version, you could use the technique I just showed you.
```
from itertools import accumulate
list(accumulate([10, 20, 30]))
```
We can find the running product of an iterable:
```
list(accumulate([1, 2, 3, 4], operator.mul))
```
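On Python 3.8 and later, the seeded accumulation can also be done directly with the `initial` keyword mentioned above (a small sketch):
```
from itertools import accumulate
import operator

# Python 3.8+: accumulate accepts an initial value, similar to the 'start' of running_reduce
list(accumulate([1, 2, 3, 4], operator.mul, initial=10))
# -> [10, 10, 20, 60, 240]
```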
|
github_jupyter
|
| 0.361052 | 0.989013 |
Given an input string (s) and a pattern (p), implement regular expression matching with support for '.' and '*'.
- '.' Matches any single character.
- '*' Matches zero or more of the preceding element.
- The matching should cover the entire input string (not partial).
Note:
- s could be empty and contains only lowercase letters a-z.
- p could be empty and contains only lowercase letters a-z, and characters like . or *.
Example 1:
```
Input:
s = "aa"
p = "a"
Output: false
```
Explanation: "a" does not match the entire string "aa".
Example 2:
```
Input:
s = "aa"
p = "a*"
Output: true
```
Explanation: '*' means zero or more of the preceding element, 'a'. Therefore, by repeating 'a' once, it becomes "aa".
Example 3:
```
Input:
s = "ab"
p = ".*"
Output: true
```
Explanation: ".*" means "zero or more (*) of any character (.)".
Example 4:
```
Input:
s = "aab"
p = "c*a*b"
Output: true
```
Explanation: c can be repeated 0 times and a can be repeated 2 times. Therefore it matches "aab".
Example 5:
```
Input:
s = "mississippi"
p = "mis*is*p*."
Output: false
```
```
# Solution 1: java way of handle, recursively
class Solution(object):
def isMatch(self, s, p):
"""
:type s: str
:type p: str
:rtype: bool
"""
return self.isMatched(s,0,p,0)
def isMatched(self,s,i,p,j):
"""
:type s: str
:type p: str
:type i: int
:type j: int
:rtype: bool
"""
if j == len(p): # end of pattern
return i == len(s) # end of string
if (j < len(p)-1 and p[j+1] != '*') or j == len(p) -1: # p[j+1] is not '*', or j is the last
return(i < len(s) and (s[i] == p[j] or p[j] == '.') # exact match or p[j] is '.'
) and self.isMatched(s,i+1,p,j+1) # check next
# p[j+1] is '*', and s[i] and p[j] matches
print(str(i) + ' ' + str(j))
while (i<len(s) and s[i]==p[j]) or (p[j]=='.' and i<len(s)): # only increase i
print(str(i) + ' ' + str(j))
if self.isMatched(s,i,p,j+2): # jump '*' and check
return True
i = i+1
return self.isMatched(s,i,p,j+2) # when p[i]!= s[i], and jump '*'
if __name__ == "__main__":
s = "aaa" # "aba"
p = "a*a" # "ac*"
print(Solution().isMatch(s,p))
# Solution 2: python way of handle, recursively, too slow
'''
Time Complexity: $O\big((T+P)2^{T + \frac{P}{2}}\big)$
Space Complexity: $O\big((T+P)2^{T + \frac{P}{2}}\big)$
'''
class Solution(object):
def isMatch(self, text, pattern):
if not pattern:
return not text
first_match = bool(text) and pattern[0] in {text[0], '.'}
if len(pattern) >=2 and pattern[1] == '*':
return (self.isMatch(text, pattern[2:]) or
first_match and self.isMatch(text[1:], pattern))
else:
return first_match and self.isMatch(text[1:], pattern[1:])
# Solution 3: Dynamic programming, top-down variation
# Time & Space Complexity: O(T*P). With recursion (memoized, top-down)
# L(i,j) -- whether s[i:] matches p[j:]
# L(i,j) = case 0: i and j are both at the end    => True
#          case 1: p[j+1] == '*'                  => L(i,j+2) or (s[i] matches p[j] and L(i+1,j))
#          {case 2: p[j] == '.' or p[j] == s[i]   => L(i+1, j+1)
#           case 3: p[j] != s[i] and p[j] != '.'  => False}  The last two can be joined together
class Solution(object):
def isMatch(self, text, pattern):
memo = {}
def dp(i, j):
if (i, j) not in memo:
if j == len(pattern): #end
ans = i == len(text)
else:
first_match = i < len(text) and pattern[j] in {text[i], '.'}
if j+1 < len(pattern) and pattern[j+1] == '*': # '*'
ans = dp(i, j+2) or first_match and dp(i+1, j)
else:
ans = first_match and dp(i+1, j+1)
memo[i, j] = ans
return memo[i, j]
return dp(0, 0)
# Solution 4: Dynamic programming, bottom-up variation
# Time & Space Complexity: O(TP). No recursion
class Solution(object):
def isMatch(self, text, pattern):
dp = [[False] * (len(pattern) + 1) for _ in range(len(text) + 1)] # empty list of n*m
dp[-1][-1] = True # last one is True
for i in range(len(text), -1, -1):
for j in range(len(pattern) - 1, -1, -1):
first_match = i < len(text) and pattern[j] in {text[i], '.'}
if j+1 < len(pattern) and pattern[j+1] == '*':
dp[i][j] = dp[i][j+2] or first_match and dp[i+1][j]
else:
dp[i][j] = first_match and dp[i+1][j+1]
return dp[0][0]
```
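As a quick sanity check, here is a small sketch that runs the last `Solution` defined above (the bottom-up DP) against the five examples from the problem statement:
```
# Sketch: exercise the most recently defined Solution class on the documented examples
cases = [("aa", "a", False), ("aa", "a*", True), ("ab", ".*", True),
         ("aab", "c*a*b", True), ("mississippi", "mis*is*p*.", False)]
solver = Solution()
for s, p, expected in cases:
    assert solver.isMatch(s, p) == expected, (s, p)
print("all examples pass")
```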
|
github_jupyter
|
| 0.562898 | 0.90599 |
# Train on Fer2013 and show the results
```
from data import Fer2013, Jaffe, CK
from keras.utils import to_categorical
expressions, x_train, y_train = Fer2013().gen_train()
expressions, x_valid, y_valid = Fer2013().gen_valid()
# One-hot encode the targets
import numpy as np
y_train = to_categorical(y_train).reshape(y_train.shape[0], -1)
y_valid = to_categorical(y_valid).reshape(y_valid.shape[0], -1)
# To unify the several datasets, an all-zero column must be added
y_train = np.hstack((y_train, np.zeros((y_train.shape[0], 1))))
print(y_train.shape)
y_valid = np.hstack((y_valid, np.zeros((y_valid.shape[0], 1))))
print(y_valid.shape)
from model import CNN3
model = CNN3(input_shape=(48, 48, 1), n_classes=8)
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from keras.optimizers import SGD
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
callback = [
ModelCheckpoint('../models/cnn2_best_weights.h5', monitor='val_acc', verbose=True, save_best_only=True, save_weights_only=True)]
epochs = 200
batch_size = 128
from keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(rotation_range=10,
width_shift_range=0.05,
height_shift_range=0.05,
horizontal_flip=True,
shear_range=0.2,
zoom_range=0.2).flow(x_train, y_train, batch_size=batch_size)
valid_generator = ImageDataGenerator().flow(x_valid, y_valid, batch_size=batch_size)
history_fer2013 = model.fit_generator(train_generator,
steps_per_epoch=len(y_train)//batch_size,
epochs=epochs,
validation_data=valid_generator,
validation_steps=len(y_valid)//batch_size,
callbacks=callback)
_, x_test, y_test = Fer2013().gen_test()
pred = model.predict(x_test)
pred = np.argmax(pred, axis=1)
print(pred)
print(np.sum(pred.reshape(-1) == y_test.reshape(-1)) / y_test.shape[0])
print(np.max(history_fer2013.history['val_acc']))
```
# Train on Jaffe and show the results
```
import numpy as np
expressions, x, y = Jaffe().gen_train()
from keras.utils import to_categorical
y = to_categorical(y).reshape(y.shape[0], -1)
# To unify the several datasets, an all-zero column must be added
y = np.hstack((y, np.zeros((y.shape[0], 1))))
from sklearn.model_selection import train_test_split
# Split into training and validation sets
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.2, random_state=2019)
print(x_train.shape, y_train.shape)
from keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(rotation_range=10,
width_shift_range=0.05,
height_shift_range=0.05,
horizontal_flip=True,
shear_range=0.2,
zoom_range=0.2).flow(x_train, y_train, batch_size=32)
valid_generator = ImageDataGenerator().flow(x_valid, y_valid, batch_size=32)
model = CNN3()
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
callback = [
ModelCheckpoint('../models/cnn3_best_weights.h5', monitor='val_acc', verbose=True, save_best_only=True, save_weights_only=True)]
epochs = 200
batch_size = 32
history_jaffe = model.fit_generator(train_generator, steps_per_epoch=len(y_train)//batch_size, epochs=epochs, validation_data=valid_generator, validation_steps=len(y_valid)//batch_size, callbacks=callback)
```
# Train on CK+ and show the results
```
model = CNN3()
import numpy as np
expr, x, y = CK().gen_train()
from keras.utils import to_categorical
y = to_categorical(y).reshape(y.shape[0], -1)
from sklearn.model_selection import train_test_split
# Split into training and validation sets
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.2, random_state=2019)
print(x_train.shape, y_train.shape)
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
callback = [
ModelCheckpoint('../models/cnn3_best_weights.h5', monitor='val_acc', verbose=True, save_best_only=True, save_weights_only=True)]
epochs = 200
batch_size = 64
from keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(rotation_range=10,
width_shift_range=0.05,
height_shift_range=0.05,
horizontal_flip=True,
shear_range=0.2,
zoom_range=0.2).flow(x_train, y_train, batch_size=32)
valid_generator = ImageDataGenerator().flow(x_valid, y_valid, batch_size=32)
history_ck = model.fit_generator(train_generator, steps_per_epoch=len(y_train)//batch_size, epochs=epochs, validation_data=valid_generator, validation_steps=len(y_valid)//batch_size, callbacks=callback)
```
# Plot the loss and accuracy curves
```
import matplotlib.pyplot as plt
# Loss
plt.figure(figsize=(20, 8))
plt.subplot(1, 2, 1)
plt.plot(np.arange(len(history_fer2013.history['loss'])), history_fer2013.history['loss'], label='fer2013 train loss')
plt.plot(np.arange(len(history_jaffe.history['loss'])), history_jaffe.history['loss'], label='jaffe train loss')
plt.plot(np.arange(len(history_ck.history['loss'])), history_ck.history['loss'], label='ck+ train loss')
plt.plot(np.arange(len(history_fer2013.history['val_loss'])), history_fer2013.history['val_loss'], label='fer2013 valid loss')
plt.plot(np.arange(len(history_jaffe.history['val_loss'])), history_jaffe.history['val_loss'], label='jaffe valid loss')
plt.plot(np.arange(len(history_ck.history['val_loss'])), history_ck.history['val_loss'], label='ck+ valid loss')
plt.legend(loc='best')
# Accuracy
plt.subplot(1, 2, 2)
plt.plot(np.arange(len(history_fer2013.history['acc'])), history_fer2013.history['acc'], label='fer2013 train accuracy')
plt.plot(np.arange(len(history_jaffe.history['acc'])), history_jaffe.history['acc'], label='jaffe train accuracy')
plt.plot(np.arange(len(history_ck.history['acc'])), history_ck.history['acc'], label='ck+ train accuracy')
plt.plot(np.arange(len(history_fer2013.history['val_acc'])), history_fer2013.history['val_acc'], label='fer2013 valid accuracy')
plt.plot(np.arange(len(history_jaffe.history['val_acc'])), history_jaffe.history['val_acc'], label='jaffe valid accuracy')
plt.plot(np.arange(len(history_ck.history['val_acc'])), history_ck.history['val_acc'], label='ck+ valid accuracy')
plt.legend(loc='best')
plt.savefig('loss.png')
plt.show()
```
# Evaluate the model's generalization ability
```
from model import CNN3
model = CNN3()
model.load_weights('../models/cnn2_best_weights.h5')
from data import Jaffe, CK
_, x_jaffe, y_jaffe = Jaffe().gen_train()
_, x_ck, y_ck = CK().gen_train()
jaffe_pred = model.predict(x_jaffe)
ck_pred = model.predict(x_ck)
print(np.sum(np.argmax(jaffe_pred, axis=1) == y_jaffe) / y_jaffe.shape[0])
print(np.sum(np.argmax(ck_pred, axis=1) == y_ck) / y_ck.shape[0])
```
# Feature visualization
```
from model import CNN3
model = CNN3()
model.load_weights('../models/cnn3_best_weights.h5')
print(model.summary())
def get_feature_map(model, layer_index, channels, input_img):
from keras import backend as K
layer = K.function([model.layers[0].input], [model.layers[layer_index].output])
feature_map = layer([input_img])[0]
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 8))
for i in range(channels):
img = feature_map[:, :, :, i]
plt.subplot(4, 8, i+1)
plt.imshow(img[0], cmap='gray')
plt.savefig('rst.png')
plt.show()
import cv2
img = cv2.cvtColor(cv2.imread('../data/demo.jpg'), cv2.COLOR_BGR2GRAY)
img.shape = (1, 48, 48, 1)
get_feature_map(model, 4, 32, img)
get_feature_map(model, 9, 32, img)
```
|
github_jupyter
|
from data import Fer2013, Jaffe, CK
from keras.utils import to_categorical
expressions, x_train, y_train = Fer2013().gen_train()
expressions, x_valid, y_valid = Fer2013().gen_valid()
# target编码
import numpy as np
y_train = to_categorical(y_train).reshape(y_train.shape[0], -1)
y_valid = to_categorical(y_valid).reshape(y_valid.shape[0], -1)
# 为了统一几个数据集,必须增加一列为0的
y_train = np.hstack((y_train, np.zeros((y_train.shape[0], 1))))
print(y_train.shape)
y_valid = np.hstack((y_valid, np.zeros((y_valid.shape[0], 1))))
print(y_valid.shape)
from model import CNN3
model = CNN3(input_shape=(48, 48, 1), n_classes=8)
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from keras.optimizers import SGD
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
callback = [
ModelCheckpoint('../models/cnn2_best_weights.h5', monitor='val_acc', verbose=True, save_best_only=True, save_weights_only=True)]
epochs = 200
batch_size = 128
from keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(rotation_range=10,
width_shift_range=0.05,
height_shift_range=0.05,
horizontal_flip=True,
shear_range=0.2,
zoom_range=0.2).flow(x_train, y_train, batch_size=batch_size)
valid_generator = ImageDataGenerator().flow(x_valid, y_valid, batch_size=batch_size)
history_fer2013 = model.fit_generator(train_generator,
steps_per_epoch=len(y_train)//batch_size,
epochs=epochs,
validation_data=valid_generator,
validation_steps=len(y_valid)//batch_size,
callbacks=callback)
_, x_test, y_test = Fer2013().gen_test()
pred = model.predict(x_test)
pred = np.argmax(pred, axis=1)
print(pred)
print(np.sum(pred.reshape(-1) == y_test.reshape(-1)) / y_test.shape[0])
print(np.max(history_fer2013.history['val_acc']))
import numpy as np
expressions, x, y = Jaffe().gen_train()
from keras.utils import to_categorical
y = to_categorical(y).reshape(y.shape[0], -1)
# 为了统一几个数据集,必须增加一列为0的
y = np.hstack((y, np.zeros((y.shape[0], 1))))
from sklearn.model_selection import train_test_split
# 划分训练集验证集
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.2, random_state=2019)
print(x_train.shape, y_train.shape)
from keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(rotation_range=10,
width_shift_range=0.05,
height_shift_range=0.05,
horizontal_flip=True,
shear_range=0.2,
zoom_range=0.2).flow(x_train, y_train, batch_size=32)
valid_generator = ImageDataGenerator().flow(x_valid, y_valid, batch_size=32)
model = CNN3()
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
callback = [
ModelCheckpoint('../models/cnn3_best_weights.h5', monitor='val_acc', verbose=True, save_best_only=True, save_weights_only=True)]
epochs = 200
batch_size = 32
history_jaffe = model.fit_generator(train_generator, steps_per_epoch=len(y_train)//batch_size, epochs=epochs, validation_data=valid_generator, validation_steps=len(y_valid)//batch_size, callbacks=callback)
model = CNN3()
import numpy as np
expr, x, y = CK().gen_train()
from keras.utils import to_categorical
y = to_categorical(y).reshape(y.shape[0], -1)
from sklearn.model_selection import train_test_split
# 划分训练集验证集
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.2, random_state=2019)
print(x_train.shape, y_train.shape)
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
callback = [
ModelCheckpoint('../models/cnn3_best_weights.h5', monitor='val_acc', verbose=True, save_best_only=True, save_weights_only=True)]
epochs = 200
batch_size = 64
from keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(rotation_range=10,
width_shift_range=0.05,
height_shift_range=0.05,
horizontal_flip=True,
shear_range=0.2,
zoom_range=0.2).flow(x_train, y_train, batch_size=32)
valid_generator = ImageDataGenerator().flow(x_valid, y_valid, batch_size=32)
history_ck = model.fit_generator(train_generator, steps_per_epoch=len(y_train)//batch_size, epochs=epochs, validation_data=valid_generator, validation_steps=len(y_valid)//batch_size, callbacks=callback)
import matplotlib.pyplot as plt
# 损失
plt.figure(figsize=(20, 8))
plt.subplot(1, 2, 1)
plt.plot(np.arange(len(history_fer2013.history['loss'])), history_fer2013.history['loss'], label='fer2013 train loss')
plt.plot(np.arange(len(history_jaffe.history['loss'])), history_jaffe.history['loss'], label='jaffe train loss')
plt.plot(np.arange(len(history_ck.history['loss'])), history_ck.history['loss'], label='ck+ train loss')
plt.plot(np.arange(len(history_fer2013.history['val_loss'])), history_fer2013.history['val_loss'], label='fer2013 valid loss')
plt.plot(np.arange(len(history_jaffe.history['val_loss'])), history_jaffe.history['val_loss'], label='jaffe valid loss')
plt.plot(np.arange(len(history_ck.history['val_loss'])), history_ck.history['val_loss'], label='ck+ valid loss')
plt.legend(loc='best')
# Accuracy
plt.subplot(1, 2, 2)
plt.plot(np.arange(len(history_fer2013.history['acc'])), history_fer2013.history['acc'], label='fer2013 train accuracy')
plt.plot(np.arange(len(history_jaffe.history['acc'])), history_jaffe.history['acc'], label='jaffe train accuracy')
plt.plot(np.arange(len(history_ck.history['acc'])), history_ck.history['acc'], label='ck+ train accuracy')
plt.plot(np.arange(len(history_fer2013.history['val_acc'])), history_fer2013.history['val_acc'], label='fer2013 valid accuracy')
plt.plot(np.arange(len(history_jaffe.history['val_acc'])), history_jaffe.history['val_acc'], label='jaffe valid accuracy')
plt.plot(np.arange(len(history_ck.history['val_acc'])), history_ck.history['val_acc'], label='ck+ valid accuracy')
plt.legend(loc='best')
plt.savefig('loss.png')
plt.show()
from model import CNN3
model = CNN3()
model.load_weights('../models/cnn2_best_weights.h5')
from data import Jaffe, CK
_, x_jaffe, y_jaffe = Jaffe().gen_train()
_, x_ck, y_ck = CK().gen_train()
jaffe_pred = model.predict(x_jaffe)
ck_pred = model.predict(x_ck)
print(np.sum(np.argmax(jaffe_pred, axis=1) == y_jaffe) / y_jaffe.shape[0])
print(np.sum(np.argmax(ck_pred, axis=1) == y_ck) / y_ck.shape[0])
from model import CNN3
model = CNN3()
model.load_weights('../models/cnn3_best_weights.h5')
print(model.summary())
def get_feature_map(model, layer_index, channels, input_img):
from keras import backend as K
layer = K.function([model.layers[0].input], [model.layers[layer_index].output])
feature_map = layer([input_img])[0]
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 8))
for i in range(channels):
img = feature_map[:, :, :, i]
plt.subplot(4, 8, i+1)
plt.imshow(img[0], cmap='gray')
plt.savefig('rst.png')
plt.show()
import cv2
img = cv2.cvtColor(cv2.imread('../data/demo.jpg'), cv2.cv2.COLOR_BGR2GRAY)
img.shape = (1, 48, 48, 1)
get_feature_map(model, 4, 32, img)
get_feature_map(model, 9, 32, img)
```
import numpy as np
from scipy import linalg
from scipy.special import erf as sperf
import matplotlib.pyplot as plt
def infer_LAD(x, y, tol=1e-8, max_iter=5000):
    ## 2019.12.26: Jungmin's code
    # Iteratively reweighted least-squares style solver for least-absolute-deviations (LAD) regression.
    # Returns the intercepts b_sol (1 x n_targets) and weights w_sol (n_features x n_targets).
    weights_limit = sperf(1e-10)*1e10
s_sample, s_pred = x.shape
s_sample, s_target = y.shape
w_sol = 0.0*(np.random.rand(s_pred,s_target) - 0.5)
b_sol = np.random.rand(1,s_target) - 0.5
# print(weights.shape)
for index in range(s_target):
error, old_error = np.inf, 0
weights = np.ones((s_sample, 1))
cov = np.cov(np.hstack((x,y[:,index][:,None])), rowvar=False, \
ddof=0, aweights=weights.reshape(s_sample))
cov_xx, cov_xy = cov[:s_pred,:s_pred],cov[:s_pred,s_pred:(s_pred+1)]
# print(cov.shape, cov_xx.shape, cov_xy.shape)
counter = 0
while np.abs(error-old_error) > tol and counter < max_iter:
counter += 1
old_error = np.mean(np.abs(b_sol[0,index] + x.dot(w_sol[:,index]) - y[:,index]))
# old_error = np.mean(np.abs(b_sol[0,index] + x_test.dot(w_sol[:,index]) - y_test[:,index]))
# print(w_sol[:,index].shape, npl.solve(cov_xx, cov_xy).reshape(s_pred).shape)
w_sol[:,index] = np.linalg.solve(cov_xx,cov_xy).reshape(s_pred)
b_sol[0,index] = np.mean(y[:,index]-x.dot(w_sol[:,index]))
weights = (b_sol[0,index] + x.dot(w_sol[:,index]) - y[:,index])
sigma = np.std(weights)
error = np.mean(np.abs(weights))
# error = np.mean(np.abs(b_sol[0,index] + x_test.dot(w_sol[:,index]) - y_test[:,index]))
weights_eq_0 = np.abs(weights) < 1e-10
weights[weights_eq_0] = weights_limit
weights[~weights_eq_0] = sigma*sperf(weights[~weights_eq_0]/sigma)/weights[~weights_eq_0]
#weights = sigma*sperf(weights/sigma)/weights
cov = np.cov(np.hstack((x,y[:,index][:,None])), rowvar=False, \
ddof=0, aweights=weights.reshape(s_sample))
cov_xx, cov_xy = cov[:s_pred,:s_pred],cov[:s_pred,s_pred:(s_pred+1)]
# print(old_error,error)
return b_sol,w_sol
n_seq = 200
n_var = 10
# generate x, w, h0
x = np.random.rand(n_seq,n_var)-0.5
print(x.shape)
w = np.random.rand(n_var) - 0.5
print(w.shape)
h0 = np.random.rand() - 0.5
print('h0:',h0)
# h = h0 + w*x
h = h0 + x.dot(w)
h0_pred,w_pred = infer_LAD(x, h[:,np.newaxis])
plt.plot([-1,1],[-1,1],'r--')
plt.plot(w,w_pred,'ko')
print('h0_pred:',h0_pred)
```
```
import requests
import pickle
import time
from bs4 import BeautifulSoup
import datetime
with open('../data/yelp_businesses.pickle', 'rb') as pickleReader:
business_dict = pickle.load(pickleReader)
business_ids = [k for k, v in business_dict.items() if v['location']['state']=='AZ' ]
business_dict[business_ids[0]]['location']['state']=='AZ'
len(business_ids)
business_dict[business_ids[0]]
reviews_dict = {}
url = business_dict[business_ids[0]]['url']
r = requests.get(url)
r.status_code
soup = BeautifulSoup(r.content, "lxml")
def get_review_ids(soup):
'''
    Convert the request content into a string and extract every data-review-id on the Yelp page.
    INPUT: BeautifulSoup object (lxml-parsed page)
    OUTPUT: list of data-review-id strings from the Yelp page
'''
soup_string = str(soup)
data_review_ids = []
x = soup_string.find('data-review-id')
next_id=x
while next_id > 0:
y=x+soup_string[x:100+x].find('"')
z=y+soup_string[y+1:100+y].find('"')
data_review_id = soup_string[y+1:z+1]
if data_review_id not in data_review_ids:
data_review_ids.append(data_review_id)
next_id = soup_string[x+1:].find('data-review-id')
x += next_id+1
if x>1:
x+=1
return data_review_ids
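# Added note (illustrative sketch, not part of the original scraper): the string search above is
# brittle. Assuming the reviews are still wrapped in <div data-review-id="..."> tags, BeautifulSoup
# can extract the same ids directly from the parsed tree:
def get_review_ids_bs(soup):
    '''Alternative extractor: return the data-review-id of every div that carries the attribute.'''
    return [div['data-review-id'] for div in soup.find_all('div', attrs={'data-review-id': True})]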
def append_review(review_id_list, reviews_dict):
'''
    Look up each review id in the globally parsed page (`soup`) and store the review text in reviews_dict.
    INPUT: list of data-review-ids, dict of reviews collected so far
    OUTPUT: updated dict mapping review id -> review text
'''
for review in review_id_list:
div = soup.find("div", {"data-review-id": review})
        if div is not None:
if div.find("p").text != '\n Was this review …?\n ':
reviews_dict[review] = div.find("p").text
return reviews_dict
reviews_dict = append_review(review_ids_list, reviews_dict)
for k, v in reviews_dict.items():
print(k,'\n', v, '\n\n')
def pull_yelp_review(nbr_key, reviews_dict):
if 'reviews_pulled' not in business_dict[business_ids[nbr_key]].keys():
url = business_dict[business_ids[nbr_key]]['url']
r = requests.get(url)
if r.status_code == 200:
soup = BeautifulSoup(r.content, "lxml")
review_ids_list = get_review_ids(soup)
new_reviews_dict = append_review(review_ids_list, reviews_dict)
business_dict[business_ids[nbr_key]]['reviews_pulled'] = True
return new_reviews_dict, 'done'
else:
print('yelp page request error')
return reviews_dict, 'error'
else:
print('review previously pulled.')
return reviews_dict, 'previously'
reviews_dict, status = pull_yelp_review(4, reviews_dict)
nbr_key = 0
nbr_key += 1
ds = open('../data/data_scraping.txt', 'a')
while nbr_key < 2300:
if 'reviews_pulled' not in business_dict[business_ids[nbr_key]].keys():
url = business_dict[business_ids[nbr_key]]['url']
r = requests.get(url)
print('Key:', nbr_key)
print('Request Status:', r.status_code)
if r.status_code == 200:
time.sleep(45)
soup = BeautifulSoup(r.content, "lxml")
review_ids_list = get_review_ids(soup)
reviews_dict = append_review(review_ids_list, reviews_dict)
business_dict[business_ids[nbr_key]]['reviews_pulled'] = True
if ds.closed:
ds = open('../data/data_scraping.txt', 'a')
print('opened data scraping log file.')
ds.write("{date}\t{key}\t{status}\n".format(date=datetime.datetime.now(), key=nbr_key, status=r.status_code))
print(business_ids[nbr_key] , 'Done', business_dict[business_ids[nbr_key]]['reviews_pulled'])
print(len(reviews_dict), status)
nbr_key += 1
print('Next Key:', nbr_key)
ds.close()
with open('../data/yelp_review.pickle', 'wb') as pickleWriter:
pickle.dump(reviews_dict, pickleWriter, protocol=2)
print('Batch Done at Key:', nbr_key)
print('Key:', nbr_key)
ds.close()
ds.write("{date}\t{key}\t{status}\n".format(date=datetime.datetime.now(), key=nbr_key, status=r.status_code))
if ds.closed:
ds = open('../data/data_scraping.txt', 'a')
print('opened data scraping log file.')
url = business_dict[business_ids[nbr_key]]['url']
r = requests.get(url)
print('server status', r.status_code)
soup = BeautifulSoup(r.content, "lxml")
review_ids_list = get_review_ids(soup)
reviews_dict = append_review(review_ids_list, reviews_dict)
business_dict[business_ids[nbr_key]]['reviews_pulled'] = True
time.sleep(30)
print((len(business_ids)*30)/60/60, 'hours')
print((len(business_ids)*30)/60/60/24, 'days')
with open('../data/yelp_review.pickle', 'wb') as pickleWriter:
pickle.dump(reviews_dict, pickleWriter, protocol=2)
with open('../data/yelp_review.pickle', 'rb') as pickleReader:
test = pickle.load(pickleReader)
for k in reviews_dict.keys():
print(k)
print(reviews_dict['8jYsEl63Lm3ckUEvYkW4kQ'])
print()
print(reviews_dict['AHG4DZ2GYjJXcg7LyvPMhQ'])
print()
print(reviews_dict['JVITPvg5vCJ6AUeY8yQC6g'])
print()
print(reviews_dict['2jlf6NA5ZetNOO4jpVLWqg'])
print()
print(reviews_dict['99end7AF6NiJsn2AUBRzdw'])
print()
print(reviews_dict['pVQql9djXWWXO6NyZDf9HQ'])
print()
print(reviews_dict['z9jmAT4uV72TmMeAu0Ixng'])
with open('../data/yelp_businesses.pickle', 'rb') as pickleReader:
business_dict = pickle.load(pickleReader)
with open('../data/yelp_businesses.pickle', 'wb') as pickleWriter:
pickle.dump(business_dict, pickleWriter, protocol=2)
print('Businesses:',len(business_dict))
print('Reviews:', len(reviews_dict))
nbr_key= 0
while nbr_key < 2300:
business_dict[business_ids[nbr_key]]['reviews_pulled'] = True
nbr_key += 1
nbr_key = 0
for bid in business_ids:
if 'reviews_pulled' in business_dict[bid].keys():
nbr_key += 1
print('Key:', nbr_key)
soup = BeautifulSoup(r.content, "lxml")
soup
review = 'Op_IdFSIbwC1gKU5ueABfA'
div = soup.find("div", {"data-review-id": review})
div.find('div', {'class': 'i-stars'})['title']
review_ids_list
```
```
%matplotlib inline
import sys
sys.path.insert(0, "../..") # Adds the module to path
```
# Example 2. Single particle tracking
## 1. Setup
Imports the objects needed for this example.
```
import numpy as np
import matplotlib.pyplot as plt
import deeptrack as dt
import datasets
datasets.load("ParticleTracking")
IMAGE_SIZE = 51
```
## 2. Defining the dataset
### 2.1 Defining the training set
The training set consists of simulated 51 by 51 pixel images, each containing a single particle. The particles are simulated as spheres with a radius of 300 nm and a refractive index drawn uniformly between 1.37 and 1.42. The in-plane position is drawn uniformly within 3 pixels of the image center, and the distance along the axis normal to the camera plane is drawn uniformly between 0 and 5 pixels.
```
particle = dt.MieSphere(
position=lambda: np.random.uniform(IMAGE_SIZE / 2 - 3, IMAGE_SIZE / 2 + 3, 2) * dt.units.pixel,
z=lambda: np.random.uniform(0, 5) * dt.units.pixel,
radius=300e-9 ,
refractive_index=lambda: np.random.uniform(1.37, 1.42),
position_unit="pixel",
L=10
)
```
The particle is imaged using a brightfield microscope with NA 0.8 and illumination wavelengths between 500 and 700 nm. To simulate the broad spectrum, we define five individual optical devices, each imaging the particle at a single wavelength. The results are then averaged.
```
spectrum = np.linspace(500e-9, 700e-9, 5)
imaged_particle_list = []
for wavelength in spectrum:
single_wavelength_optics = dt.Brightfield(
NA=0.8,
resolution=1e-6,
magnification=15,
wavelength=wavelength,
padding=(32, 32, 32, 32),
output_region=(0, 0, IMAGE_SIZE, IMAGE_SIZE),
)
imaged_particle_list.append(
single_wavelength_optics(particle)
)
dataset = sum(imaged_particle_list) / len(imaged_particle_list)
```
### 2.2 Defining the training label
The training label is extracted directly from the image as the `position` property divided by the image size and shifted by 0.5, so that the possible values lie between -0.5 and 0.5.
```
def get_label(image):
px = np.array(image.get_property("position")) / IMAGE_SIZE - 0.5
return px
```
### 2.3 Visualizing the dataset
We resolve and show four images, each with a green circle indicating the particle position.
```
NUMBER_OF_IMAGES = 4
for _ in range(NUMBER_OF_IMAGES):
dataset.update()
image_of_particle = dataset.resolve()
position_of_particle = get_label(image_of_particle) * IMAGE_SIZE + IMAGE_SIZE / 2
plt.imshow(image_of_particle[..., 0], cmap="gray")
plt.colorbar()
plt.scatter(position_of_particle[1], position_of_particle[0], s=120, facecolors='none', edgecolors="g", linewidth=2)
plt.show()
```
### 2.4 Augmenting dataset
Simulating Mie particles is slow. To speed up training, we use augmentation: each simulated image is reused several times and flipped/mirrored. Note that DeepTrack ensures that the position label is still correct after the augmentation.
```
augmented_dataset = dt.Reuse(dataset, 8) >> dt.FlipLR() >> dt.FlipUD() >> dt.FlipDiagonal()
```
We add noise after the augmentation, which makes the augmented images more distinct from one another.
```
gradient = dt.IlluminationGradient(
gradient=lambda: np.random.randn(2) * 1e-4,
)
noise = dt.Poisson(
min_snr=5,
max_snr=100,
snr=lambda min_snr, max_snr: min_snr + np.random.rand() * (max_snr - min_snr),
background=1
)
normalization = dt.NormalizeMinMax(lambda: np.random.rand() * 0.2, lambda: 0.8 + np.random.rand() * 0.2)
data_pipeline = augmented_dataset >> gradient >> noise >> normalization
NUMBER_OF_IMAGES = 4
for _ in range(NUMBER_OF_IMAGES):
data_pipeline.update()
image_of_particle = data_pipeline()
position_of_particle = get_label(image_of_particle) * IMAGE_SIZE + IMAGE_SIZE / 2
print(position_of_particle)
plt.imshow(image_of_particle[..., 0], cmap="gray")
plt.colorbar()
plt.scatter(position_of_particle[1], position_of_particle[0], s=120, facecolors='none', edgecolors="g", linewidth=2)
plt.show()
```
## 3. Defining the network
The network is a convolutional network trained with mean squared error (MSE) as the loss.
```
import tensorflow.keras.backend as K
import tensorflow.keras.optimizers as optimizers
def pixel_error(T, P):
return K.mean(K.sqrt(K.sum(K.square(T - P), axis=-1))) * IMAGE_SIZE
model = dt.models.Convolutional(
input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1),
conv_layers_dimensions=(16, 32, 64),
dense_layers_dimensions=(32, 32),
steps_per_pooling=1,
number_of_outputs=2,
loss="mse",
metrics=[pixel_error],
optimizer="adam",
dense_block=dt.layers.DenseBlock(activation="relu"),
pooling_block=dt.layers.PoolingBlock(padding="valid")
)
model.summary()
```
## 4. Training the network
We use the `ContinuousGenerator` to generate the images. It creates a new thread and generates images while the model is training.
Set TRAIN_MODEL to True to train the model, otherwise a pretrained model is downloaded.
```
TRAIN_MODEL = True
from tensorflow.keras.callbacks import EarlyStopping
validation_set_size = 200
validation_set = [data_pipeline.update().resolve() for _ in range(validation_set_size)]
validation_labels = [get_label(image) for image in validation_set]
if TRAIN_MODEL:
generator = dt.generators.ContinuousGenerator(
data_pipeline & (data_pipeline >> get_label),
min_data_size=int(1e3),
max_data_size=int(2e3),
batch_size=64,
max_epochs_per_sample=25
)
histories = []
with generator:
h = model.fit(
generator,
validation_data=(
np.array(validation_set),
np.array(validation_labels)
),
epochs=250
)
plt.plot(h.history["loss"], 'g')
plt.plot(h.history["val_loss"], 'r')
plt.legend(["loss", "val_loss"])
plt.yscale('log')
plt.show()
else:
model_path = datasets.load_model("ParticleTracking")
model.load_weights(model_path)
```
## 5. Evaluating the network
### 5.1 Prediction vs actual
We show the prediction of each output versus the ground truth
```
validation_prediction = model.predict(np.array(validation_set))
labels = np.array(validation_labels)
for col in range(validation_prediction.shape[-1]):
label_col = labels[:, col]
prediction_col = validation_prediction[:, col]
plt.scatter(label_col, prediction_col, alpha=0.1)
plt.plot([np.min(label_col), np.max(label_col)],
[np.min(label_col), np.max(label_col)], c='k')
plt.show()
```
### 5.2 Prediction vs property value
We show the pixel error as a function of selected properties.
```
properties = ["position", "z", "snr", "gradient", "refractive_index", "NA"]
validation_prediction = model.predict(np.array(validation_set))
snr = [image.get_property("snr") for image in validation_set]
validation_error = np.mean(np.abs(validation_prediction - validation_labels), axis=-1) * IMAGE_SIZE  # convert normalized error to pixel units
for property_name in properties:
property_values = np.array([image.get_property(property_name) for image in validation_set])
if property_values.ndim == 1:
property_values = np.expand_dims(property_values, axis=-1)
for col in range(property_values.shape[1]):
values = property_values[:, col]
plt.subplot(1, property_values.shape[1], col + 1)
plt.scatter(values, validation_error, alpha=0.1)
plt.xlim([np.min(values), np.max(values)])
plt.ylim([np.min(validation_error), np.max(validation_error)])
plt.yscale("log")
plt.ylabel("Pixel error")
plt.xlabel("{0}[{1}]".format(property_name, col))
plt.show()
```
### 5.3 Experimental data
We run the model on experimental videos and compare the DeepTrack predictions with the radial center method.
```
import cv2
import IPython
import numpy as np
from radialcenter import radialcenter
import matplotlib.pyplot as plt
def track_video(video, frames_to_track):
video_width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
video_height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
# Initialize variables
predicted_positions = np.zeros((frames_to_track, 2))
predicted_positions_radial = np.zeros((frames_to_track, 2))
# Track the positions of the particles frame by frame
for i in range(frames_to_track):
# Read the current frame from the video
(ret, frame) = video.read()
frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)
# Convert color image to grayscale.
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) / 255
radial_x, radial_y = radialcenter(frame)
predicted_positions_radial[i, 0] = radial_x
predicted_positions_radial[i, 1] = radial_y
### Resize the frame
frame_resize = cv2.resize(frame, (51, 51))
predicted_position = model.predict(np.reshape(frame_resize, (1, 51, 51, 1)))
predicted_position_y = predicted_position[0,0] * video_width + video_width / 2 + 1
predicted_position_x = predicted_position[0,1] * video_height + video_height / 2 + 1
predicted_positions[i, 0] = predicted_position_x
predicted_positions[i, 1] = predicted_position_y
IPython.display.clear_output(wait=True)
plt.imshow(frame, cmap="gray")
plt.scatter(predicted_position_x, predicted_position_y, marker='o', s=360, edgecolor='b', facecolor='none')
plt.scatter(radial_x, radial_y, marker='x', s=240, c="g")
plt.show()
return predicted_positions, predicted_positions_radial
```
Here, the blue circle marks the DeepTrack prediction and the green cross marks the radial-center estimate.
```
video = cv2.VideoCapture("./datasets/ParticleTracking/ideal.avi")
p, pr = track_video(video, 100)
video = cv2.VideoCapture("./datasets/ParticleTracking/bad.avi")
p, pr = track_video(video, 100)
```
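As a rough follow-up (my own illustrative addition, not part of the original example), the two trackers can be compared numerically through the per-frame distance between the positions returned by `track_video`:
```
# Per-frame distance (in pixels) between the DeepTrack and radial-center estimates (illustrative)
per_frame_distance = np.linalg.norm(p - pr, axis=1)
print(per_frame_distance.mean(), per_frame_distance.max())
```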
# LSTM Stock Predictor Using Closing Prices
In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin closing prices to predict the 11th day closing price.
You will need to:
1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model
## Data Preparation
In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.
You will need to:
1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:
```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
```
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous closing prices
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 1
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
scaler = MinMaxScaler()
scaler.fit(X)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
scaler.fit(y)
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
```
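As a quick, illustrative sanity check (toy numbers, not the Bitcoin data), a window of 3 on a single column produces overlapping 3-value feature rows with the following value as the target:
```
# Toy check of window_data (hypothetical data, for illustration only)
toy = pd.DataFrame({"Close": [10, 11, 12, 13, 14, 15]})
X_toy, y_toy = window_data(toy, 3, 0, 0)
print(X_toy)   # [[10 11 12]
               #  [11 12 13]]
print(y_toy)   # [[13]
               #  [14]]
```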
---
## Build and Train the LSTM RNN
In this section, you will design a custom LSTM RNN and fit (train) it using the training data.
You will need to:
1. Define the model architecture
2. Compile the model
3. Fit the model to the training data
### Hints:
You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
model = Sequential()
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units = number_units,
return_sequences = True,
input_shape = (X_train.shape[1],1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(
units = number_units,
return_sequences = False,
))
model.add(Dropout(dropout_fraction))
model.add(Dropout(dropout_fraction))
model.add(Dense(1))
# Compile the model
model.compile(optimizer="sgd", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
epochs = 50
batch_size = 5
model.fit(X_train, y_train, epochs=epochs, shuffle=False, batch_size=batch_size, verbose=1)
```
---
## Model Performance
In this section, you will evaluate the model using the test data.
You will need to:
1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart
### Hints
Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
```
# Evaluate the model
model.evaluate(X_test, y_test)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
stocks.hvplot.line(xlabel="Date", ylabel="Price")
```
```
#!conda install -y -c conda-forge pyarrow
import seaborn as sns
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.transforms import blended_transform_factory
import dask.dataframe as dd
import pandas as pd
import re
%matplotlib inline
%config Completer.use_jedi = False
```
**Set parameters**
```
plt.rcParams["axes.labelweight"] = "bold"
sns.set_palette("deep")
sns.set_style("white")
sns.set_context("paper", font_scale = 2.0, rc={"grid.linewidth": 2.5, 'fontweight':'bold'})
```
**Load GPU failures**
```
FAILURES = '/gpfs/alpine/stf218/proj-shared/data/lake/summit_gpu_failures/gpu_failures.csv'
NODE = 'hostname'
TIME = 'timestamp'
XID = 'xid'
failures = pd.read_csv(FAILURES)[[NODE, XID]]
# Remove data for login and batch nodes.
failures = failures[~failures[NODE].str.startswith('login') & ~failures[NODE].str.startswith('batch')]
failures[failures[NODE].str.startswith('login') | failures[NODE].str.startswith('batch')][NODE].unique()
xid_names = {
31: 'Memory page fault', 13: 'Graphics engine exception', 43: 'Stopped processing', 74: 'NVLINK error',
63: 'Page retirement event', 64: 'Page retirement failure', 48: 'Double-bit error', 45: 'Preemptive cleanup',
61: 'Internal microcontroller warning', 44: 'Graphics engine fault', 79: 'Fallen off the bus', 62: 'Internal microcontroller halt',
38: 'Driver firmware error', 32: 'Corrupted push buffer stream', 12: 'Driver error handling exception', 69: 'Graphics engine class error'}
failures['name'] = failures[XID].apply(xid_names.get)
len(failures)
failures.groupby('name')[XID].count()
```
**Obtain failure frequencies in nodes**
```
FREQ = 'freq'
FAILURE = 'failure'
freq_per_node = failures.groupby([XID, NODE], as_index=False).size().rename(columns={'size': FREQ, XID: FAILURE})
freq_per_node[FAILURE] = freq_per_node[FAILURE].apply(xid_names.get)
freq_per_node.head()
xid_counts = failures.groupby(XID)[XID].count()
xids = [xid_names[xid] for xid in xid_counts[xid_counts >= 20].index.values]
freq_per_node.groupby(FAILURE)[FREQ].sum().sort_values()
xid_freqs = freq_per_node.pivot(index=NODE, columns=FAILURE, values=FREQ)
xid_freqs = freq_per_node[freq_per_node[FAILURE].isin(xids)].pivot(index=NODE, columns=FAILURE, values=FREQ)
xid_freqs = xid_freqs.fillna(0)
corrs = xid_freqs.corr(method='pearson')
is_na = corrs.isna().all(axis=0)
corrs = corrs[~is_na][corrs.columns[~is_na]]
```
**Plots**
```
import scipy.spatial.distance as dist
import scipy.stats as ss
p_values = dist.squareform(dist.pdist(xid_freqs.T, lambda x, y: ss.pearsonr(x, y)[1]))
p_values.shape
is_significant = p_values < 0.05 / 13 / 14  # Bonferroni-style threshold for the 13*14 pairwise tests (hard-coded)
captions = np.empty_like(corrs, dtype=np.dtype('U4'))
captions[is_significant] = np.vectorize(lambda corr: f'{corr:.2f}'.replace('.' + '00', '').replace('0.', '.').replace('-0', '0'))(corrs)[is_significant]
fig = plt.figure(figsize=(20, 15))
mask = np.triu(np.ones_like(corrs, dtype=bool), k=1)
cmap = sns.diverging_palette(230, 20, as_cmap=True)
ax = sns.heatmap(corrs, mask=mask, cmap=cmap, vmax=1, vmin=-1, center=0, square=True, linewidths=.5,
annot=captions, fmt='', cbar_kws={"shrink": .8}, cbar=False)
ax.set_xlabel('')
ax.set_ylabel('')
plt.xticks(rotation=45, ha='right')
fig.tight_layout()
fig.savefig(f'../plots/gpu_failure_corr.pdf')
```
# CSC 421 - Agents
### Instructor: George Tzanetakis
## Agents
**EMPHASIS**: Agents as a unifying concept for thinking about AI and software
During this lecture we will cover the following topics:
1. Agents
2. Performance, environments, actuators, sensors
3. Agent architectures
4. Learning
## WORKPLAN
The section number is based on the 4th edition of the AIMA textbook and is the suggested
reading for this week. Each list entry provides just the additional sections. For example, the Expected reading includes the sections listed under Basic as well as the sections listed under Expected. Some additional readings are suggested for Advanced.
1. Basic: Sections **2.1**, **2.3**, **2.4** and Summary
2. Expected: **2.2**
3. Advanced: Bibligraphical and historical notes
## Agents and Environments
**Agents** perceive their **environment** through **sensors** and act upon that environment through **actuators**.
Terminology:
1. Percept
2. Percept sequence
3. Agent function (an abstract mathematical description; in many cases an infinite tabulation)
4. Agent program (concrete implementation running on a phyisical system)
What makes an agent effective, good, intelligent?
Any area of engineering can be viewed through the lenses of agents. What makes AI unique is the significant computational resources that can be employed by the agent and the non-trivial decision making that the task
environment requires.
Let's consider some examples - what are the possible percepts, environments, sensors and actuators for these
agents:
1. Human
2. Robot
3. Vacuum cleaner world
4. Single chess piece valid chessboard moves
5. Self-driving car
6. Ant
7. NPC in game
8. Chess playing program
## TASK ENVIRONMENTS
Specifying the task environment (essentially the problem to which rational agents are the solutions):
1. Performance
2. Environment
3. Actuators
4. Sensors
Properties of task environments (for each one, think of examples or consider the examples mentioned above):
1. Fully observable vs partially observable
2. Single-agent vs multiagent
1. Competitive multiagent (chess) vs co-operative multiagent (self-driving cars avoiding collisions)
3. Deterministic vs nondeterministic
**Agent = architecture + program**
## Structure of Agents
1. Reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
Goals alone are not enough. Utility is an internal representation of the performance measure.
Example:
NPC in a graph-based text adventure
IMPORTANT: CLEAR SEPARATION OF AGENT AND ENVIRONMENT WHEN DOING SIMULATIONS
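To make that separation concrete, here is a minimal sketch (my own illustration, not textbook code) of a simple reflex agent in a two-square vacuum-cleaner world: the environment owns the state and the performance measure, while the agent only sees percepts and returns actions.
```
# Minimal reflex-agent sketch for the two-square vacuum world (illustrative only).
def reflex_vacuum_agent(percept):
    location, status = percept          # the agent sees only its current square
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

class VacuumEnvironment:
    """Owns the world state and the performance measure; the agent never touches these."""
    def __init__(self):
        self.status = {'A': 'Dirty', 'B': 'Dirty'}
        self.location = 'A'
        self.score = 0
    def percept(self):
        return (self.location, self.status[self.location])
    def execute(self, action):
        if action == 'Suck':
            if self.status[self.location] == 'Dirty':
                self.score += 1                      # reward cleaning a dirty square
            self.status[self.location] = 'Clean'
        elif action == 'Right':
            self.location = 'B'
        elif action == 'Left':
            self.location = 'A'

env = VacuumEnvironment()
for _ in range(4):                                   # simple simulation loop
    env.execute(reflex_vacuum_agent(env.percept()))
print(env.status, env.score)                         # {'A': 'Clean', 'B': 'Clean'} 2
```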
### LEARNING AGENTS
All types of agent architectures can benefit from learning.
## World Representations
Atomic representation, factored representation, structured representation, distributed representation
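A rough sketch (my own illustration, not from the textbook) of the first three in Python, using the same hypothetical delivery-truck world state:
```
# The same hypothetical world state under increasingly expressive representations (illustrative)
atomic_state = "truck_in_city_B"                                 # atomic: an indivisible label
factored_state = {"city": "B", "fuel": 0.4, "raining": True}     # factored: named variables with values
structured_state = {                                             # structured: objects and relations
    "objects": ["truck1", "city_A", "city_B"],
    "relations": [("at", "truck1", "city_B"), ("road", "city_A", "city_B")],
}
```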