An alternative approach Suppose that we wanted to simulate the Solow model with different parameter values so that we could compare the simulations. Since we'd be doing the same basic steps multiple times using different numbers, it would make sense to define a function so that we could avoid repetition. The code below defines a function called solow_example() that simulates the Solow model with exogenous labor growth. solow_example() takes as arguments the parameters of the Solow model $A$, $\alpha$, $\delta$, $s$, and $n$; the initial values $K_0$ and $L_0$; and the number of simulation periods $T$. solow_example() returns a Pandas DataFrame with computed values for aggregate and per worker quantities.
def solow_example(A,alpha,delta,s,n,K0,L0,T):
    '''Returns DataFrame with simulated values for a Solow model with labor growth and constant TFP'''

    # Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
    capital = np.zeros(T+1)
    capital[0] = K0

    # Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
    labor = np.zeros(T+1)
    labor[0] = L0

    # Compute all capital and labor values by iterating over t from 0 through T
    for t in np.arange(T):
        labor[t+1] = (1+n)*labor[t]
        capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]

    # Store the simulated capital and labor series in a pandas DataFrame called df
    df = pd.DataFrame({'capital':capital,'labor':labor})

    # Create columns in the DataFrame to store computed values of the other endogenous variables
    df['output'] = df['capital']**alpha*df['labor']**(1-alpha)
    df['consumption'] = (1-s)*df['output']
    df['investment'] = df['output'] - df['consumption']

    # Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
    df['capital_pw'] = df['capital']/df['labor']
    df['output_pw'] = df['output']/df['labor']
    df['consumption_pw'] = df['consumption']/df['labor']
    df['investment_pw'] = df['investment']/df['labor']

    return df
winter2017/econ129/python/Econ129_Class_09.ipynb
letsgoexploring/teaching
mit
With solow_example() defined, we can redo the previous exercise quickly:
# Create the DataFrame with simulated values
df = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)

# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))

ax = fig.add_subplot(2,2,1)
ax.plot(df['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')

ax = fig.add_subplot(2,2,2)
ax.plot(df['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')

ax = fig.add_subplot(2,2,3)
ax.plot(df['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')

ax = fig.add_subplot(2,2,4)
ax.plot(df['investment_pw'],lw=3)
ax.grid()
ax.set_title('Investment per worker')
winter2017/econ129/python/Econ129_Class_09.ipynb
letsgoexploring/teaching
mit
solow_example() can be used to perform multiple simulations. For example, suppose we want to see the effect of having two different initial values of capital: $k_0 = 20$ and $k_0'=10$.
df1 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)
df2 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)

# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))

ax = fig.add_subplot(2,2,1)
ax.plot(df1['capital_pw'],lw=3)
ax.plot(df2['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')

ax = fig.add_subplot(2,2,2)
ax.plot(df1['output_pw'],lw=3)
ax.plot(df2['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')

ax = fig.add_subplot(2,2,3)
ax.plot(df1['consumption_pw'],lw=3)
ax.plot(df2['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')

ax = fig.add_subplot(2,2,4)
ax.plot(df1['investment_pw'],lw=3,label='$k_0=20$')
ax.plot(df2['investment_pw'],lw=3,label='$k_0=10$')
ax.grid()
ax.set_title('Investment per worker')
ax.legend(loc='lower right')
winter2017/econ129/python/Econ129_Class_09.ipynb
letsgoexploring/teaching
mit
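Because solow_example() exposes all of the model parameters, the same pattern extends to other experiments. Below is a minimal sketch comparing two saving rates; the parameter values are chosen only for illustration and it assumes the same numpy/pandas/matplotlib imports used above.

# Two simulations that differ only in the saving rate s (illustrative values)
df_low_s = solow_example(A=10, alpha=0.35, delta=0.1, s=0.15, n=0.01, K0=20, L0=1, T=100)
df_high_s = solow_example(A=10, alpha=0.35, delta=0.1, s=0.30, n=0.01, K0=20, L0=1, T=100)

# Plot capital per worker for both saving rates
fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(df_low_s['capital_pw'], lw=3, label='$s=0.15$')
ax.plot(df_high_s['capital_pw'], lw=3, label='$s=0.30$')
ax.grid()
ax.set_title('Capital per worker')
ax.legend(loc='lower right')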
This dataset is borrowed from the PyCon tutorial of Brandon Rhodes (so all credit to him!). You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
cast = pd.read_csv('data/cast.csv')
cast.head()

titles = pd.read_csv('data/titles.csv')
titles.head()
solved - 03b - Some more advanced indexing.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Setting columns as the index

Why is it useful to have an index?

- Giving meaningful labels to your data -> easier to remember which data are where
- Unleash some powerful methods, eg with a DatetimeIndex for time series
- Easier and faster selection of data

It is this last one we are going to explore here! Setting the title column as the index:
c = cast.set_index('title')
c.head()
solved - 03b - Some more advanced indexing.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
Instead of doing:
%%time
cast[cast['title'] == 'Hamlet']
solved - 03b - Some more advanced indexing.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
we can now do:
%%time
c.loc['Hamlet']
solved - 03b - Some more advanced indexing.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
But you can also have multiple columns as the index, leading to a multi-index or hierarchical index:
c = cast.set_index(['title', 'year'])
c.head()

%%time
c.loc[('Hamlet', 2000),:]

c2 = c.sort_index()

%%time
c2.loc[('Hamlet', 2000),:]
solved - 03b - Some more advanced indexing.ipynb
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
bsd-2-clause
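A sorted MultiIndex also supports partial selection on a single level; a small sketch using the c2 frame created above:

# Select every row whose first index level (title) is 'Hamlet', regardless of year
c2.loc['Hamlet'].head()

# Cross-section on the second level only: every title from the year 2000
c2.xs(2000, level='year').head()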
Target information
names, p, v, w = load_clubs('clubs.csv') cpi = 40e-3 T = 12 t_sim = np.arange(0, T, cpi) t1, p1, v1 = calc_traj(p[0, :], v[0, :], w[0, :], t_sim) t2, p2, v2 = calc_traj(p[-1, :], v[-1, :], w[-1, :], t_sim) sensor_locations = np.array([[-10, 28.5, 1], [-15, 30.3, 3], [200, 30, 1.5], [220, -31, 2], [-30, 0, 0.5], [150, 10, 0.6]]) rd_1 = range_doppler(sensor_locations, p1, v1) pm_1 = multilateration(sensor_locations, rd_1[:, :, 1]) vm_1 = determine_velocity(t1, pm_1, rd_1[:, :, 0]) rd_2 = range_doppler(sensor_locations, p2, v2) pm_2 = multilateration(sensor_locations, rd_2[:, :, 1]) vm_2 = determine_velocity(t2, pm_2, rd_2[:, :, 0])
tracking/tracking.ipynb
scjrobertson/xRange
gpl-3.0
The Kalman Filter Model
N = 6

# Truncate both tracks to the length of the shorter one
if pm_1.shape < pm_2.shape:
    M, _ = pm_1.shape
    pm_2 = pm_2[:M]
    vm_2 = vm_2[:M]
else:
    M, _ = pm_2.shape
    pm_1 = pm_1[:M]
    vm_1 = vm_1[:M]
print(M)

dt = cpi
g = 9.81
sigma_r = 2.5
sigma_q = 0.5
prior_var = 1
tracking/tracking.ipynb
scjrobertson/xRange
gpl-3.0
Motion and measurement models
A = np.identity(N) A[0, 3] = A[1, 4] = A[2, 5] = dt B = np.zeros((N, N)) B[2, 2] = B[5, 5] = 1 R = np.identity(N)*sigma_r C = np.identity(N) Q = np.identity(N)*sigma_q u = np.zeros((6, 1)) u[2] = -0.5*g*(dt**2) u[5] = -g*dt
tracking/tracking.ipynb
scjrobertson/xRange
gpl-3.0
Priors
# Object 1
mu0_1 = np.zeros((N, 1))
mu0_1[:3, :] = p1[0, :].reshape(3, 1)
mu0_1[3:, :] = v[0, :].reshape(3, 1)
prec0_1 = np.linalg.inv(prior_var*np.identity(N))
h0_1 = (prec0_1)@(mu0_1)
g0_1 = -0.5*(mu0_1.T)@(prec0_1)@(mu0_1) - 3*np.log(2*np.pi)

# Object 2
mu0_2 = np.zeros((N, 1))
mu0_2[:3, :] = p2[0, :].reshape(3, 1)
mu0_2[3:, :] = v2[0, :].reshape(3, 1)
prec0_2 = np.linalg.inv(prior_var*np.identity(N))
h0_2 = (prec0_2)@(mu0_2)
g0_2 = -0.5*(mu0_2.T)@(prec0_2)@(mu0_2) - 3*np.log(2*np.pi)

print(h0_1)
tracking/tracking.ipynb
scjrobertson/xRange
gpl-3.0
Linear Kalman Filtering Creating the model
z_t = np.empty((M, N)) z_t[:, :3] = pm_1 z_t[:, 3:] = vm_1 R_in = np.linalg.inv(R) P_pred = np.bmat([[R_in, -(R_in)@(A)], [-(A.T)@(R_in), (A.T)@(R_in)@(A)]]) M_pred = np.zeros((2*N, 1)) M_pred[:N, :] = (B)@(u) h_pred = (P_pred)@(M_pred) g_pred = -0.5*(M_pred.T)@(P_pred)@(M_pred).flatten() -0.5*np.log( np.linalg.det(2*np.pi*R)) Q_in = np.linalg.inv(Q) P_meas = np.bmat([[(C.T)@(Q_in)@(C), -(C.T)@(Q_in)], [-(Q_in)@(C), Q_in]]) h_meas = np.zeros((2*N, 1)) g_meas = -0.5*np.log( np.linalg.det(2*np.pi*Q)) L, _ = z_t.shape X = np.arange(0, L) Z = np.arange(L-1, 2*L-1) C_X = [CG([X[0]], [N], h0_1, prec0_1, g0_1)] C_Z = [CG([X[0]], [N], h0_1, prec0_1, g0_1)] for i in np.arange(1, L): C_X.append(CG([X[i], X[i-1]], [N, N], h_pred, P_pred, g_pred)) C_Z.append(CG([X[i], Z[i]], [N, N], h_meas, P_meas, g_meas))
tracking/tracking.ipynb
scjrobertson/xRange
gpl-3.0
The Kalman Filter algorithm: Gaussian belief propagation
message_out = [C_X[0]] prediction = [C_X[0]] mean = np.zeros((N, L)) for i in np.arange(1, L): #Kalman Filter Algorithm C_Z[i].introduce_evidence([Z[i]], z_t[i, :]) marg = (message_out[i-1]*C_X[i]).marginalize([X[i-1]]) message_out.append(marg*C_Z[i]) mean[:, i] = (np.linalg.inv(message_out[i]._prec)@(message_out[i]._info)).reshape((N, )) #For plotting only prediction.append(marg) p_e = mean[:3, :] fig = plt.figure(figsize=(25, 25)) ax = plt.axes(projection='3d') ax.plot(p1[:, 0], p1[:, 1], p1[:, 2]) ax.plot(p_e[0, :], p_e[1, :], p_e[2, :], 'or') ax.set_xlabel('x (m)', fontsize = '20') ax.set_ylabel('y (m)', fontsize = '20') ax.set_zlabel('z (m)', fontsize = '20') ax.set_title('Kalman Filtering', fontsize = '20') ax.set_ylim([-1, 1]) ax.legend(['Actual Trajectory', 'Estimated trajectory']) plt.show() D = 100 t = np.linspace(0, 2*np.pi, D) xz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D)) gaussians = message_out + prediction + C_Z ellipses = [] for g in gaussians: g._vars = [1, 2, 3, 4] g._dims = [1, 1, 1, 3] c = g.marginalize([2, 4]) cov = np.linalg.inv(c._prec) mu = (cov)@(c._info) U, S, _ = np.linalg.svd(cov) L = np.diag(np.sqrt(S)) ellipses.append(np.dot((U)@(L), xz) + mu) for i in np.arange(0, M): plt.figure(figsize= (15, 15)) message_out = ellipses[i] prediction = ellipses[i+M] measurement = ellipses[i+2*M] plt.plot(p1[:, 0], p1[:, 2], 'k--', label='Trajectory') plt.plot(message_out[0, :], message_out[1, :], 'r', label='After measurement update') plt.plot(prediction[0, :], prediction[1, :], 'b', label = 'Recursive prediction') plt.plot(measurement[0, :], measurement[1, :], 'g', label='Measurement') plt.xlim([-3.5, 250]) plt.ylim([-3.5, 35]) plt.grid(True) plt.xlabel('x (m)') plt.ylabel('z (m)') plt.legend(loc='upper left') plt.title('x-z position for t = %d'%i) plt.savefig('images/kalman/%d.png'%i, format = 'png') plt.close()
tracking/tracking.ipynb
scjrobertson/xRange
gpl-3.0
<img src="images/kalman/kalman_hl.gif"> Two object tracking
fig = plt.figure(figsize=(25, 25)) ax = plt.axes(projection='3d') ax.plot(p1[:, 0], p1[:, 1], p1[:, 2]) ax.plot(p2[:, 0], p2[:, 1], p2[:, 2], 'or') ax.set_xlabel('x (m)', fontsize = '20') ax.set_ylabel('y (m)', fontsize = '20') ax.set_zlabel('z (m)', fontsize = '20') ax.set_title('', fontsize = '20') ax.set_ylim([-20, 20]) ax.legend(['Target 1', 'Target 2']) plt.show() L = 10 X_1 = np.arange(0, L).tolist() X_2 = np.arange(L, 2*L).tolist() Z_1 = np.arange(2*L, 3*L).tolist() Z_2 = np.arange(3*L, 4*L).tolist() z_1 = np.empty((M, N)) z_1[:, :3] = pm_1 z_1[:, 3:] = vm_1 z_2 = np.empty((M, N)) z_2[:, :3] = pm_2 z_2[:, 3:] = vm_2 C_X = [CG([X_1[0]], [N], h0_1, prec0_1, g0_1)*CG([X_2[0]], [N], h0_2, prec0_2, g0_2)] for i in np.arange(1, L): C_X.append(CG([X_1[i], X_1[i-1]], [N, N], h_pred, P_pred, g_pred) *CG([X_2[i], X_2[i-1]], [N, N], h_pred, P_pred, g_pred)) C_Z = [None] Z_11 = CG([X_1[1], Z_1[1]], [N, N], h_meas, P_meas, g_meas) Z_11.introduce_evidence([Z_1[1]], z_1[1, :]) Z_22 = CG([X_2[1], Z_2[1]], [N, N], h_meas, P_meas, g_meas) Z_22.introduce_evidence([Z_2[1]], z_2[1, :]) C_Z.append(Z_11*Z_22) for i in np.arange(2, L): Z_11 = CG([X_1[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas) Z_11.introduce_evidence([Z_1[i]], z_1[i, :]) Z_22 = CG([X_2[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas) Z_22.introduce_evidence([Z_2[i]], z_2[i, :]) Z_12 = CG([X_1[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas) Z_12.introduce_evidence([Z_2[i]] ,z_2[i, :]) Z_21 = CG([X_2[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas) Z_21.introduce_evidence([Z_1[i]], z_1[i, :]) C_Z.append(GMM([0.5*(Z_11*Z_22), 0.5*(Z_12*Z_21)])) predict = [C_X[0]] for i in np.arange(1, L): marg = (C_X[i]*predict[i-1]).marginalize([X_1[i-1], X_2[i-1]]) predict.append(C_Z[i]*marg) D = 100 t = np.linspace(0, 2*np.pi, D) xz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D)) ellipses = [] norms = [] i = 0 for p in predict: if isinstance(p, GMM): mix = p._mix else: mix = [p] time_step = [] for m in mix: m._vars = [1, 2, 3, 4] m._dims = [1, 1, 1, 9] c = m.marginalize([2, 4]) cov = np.linalg.inv(c._prec) mu = (cov)@(c._info) if i == 0: print(cov) i = 1 U, S, _ = np.linalg.svd(cov) lambda_ = np.diag(np.sqrt(S)) norms.append(c._norm) time_step.append(np.dot((U)@(lambda_), xz) + mu) ellipses.append(time_step) for i in np.arange(0, L): plt.figure(figsize= (15, 15)) plt.plot(p1[1:, 0], p1[1:, 2], 'or', label='Trajectory 1') plt.plot(p2[1:, 0], p2[1:, 2], 'og', label='Trajectory 2') for e in ellipses[i]: plt.plot(e[0, :], e[1, :], 'b') plt.xlim([-3.5, 25]) plt.ylim([-3.5, 15]) plt.grid(True) plt.legend(loc='upper left') plt.xlabel('x (m)') plt.ylabel('z (m)') plt.title('x-z position for t = %d'%(i)) plt.savefig('images/two_objects/%d.png'%i, format = 'png') plt.close()
tracking/tracking.ipynb
scjrobertson/xRange
gpl-3.0
Make Some Toy Data
import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy as np x = np.random.randn(100) * 5 y = np.random.randn(100) z = np.random.randn(100) points = np.vstack([y,x,z])
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
Add Some Outliers to Make Life Difficult
outliers = np.tile([15,-10,10], 10).reshape((-1,3)) pts = np.vstack([points.T, outliers]).T
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
Compute SVD on both the clean data and the outliery data
U,s,Vt = np.linalg.svd(points) U_n,s_n,Vt_n = np.linalg.svd(pts)
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
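To see what the next plot is claiming numerically, we can compare the leading singular vector with and without the outliers; a small sketch reusing U and U_n from above:

# Leading principal direction of the clean data vs. the contaminated data
print(U[:, 0])    # clean
print(U_n[:, 0])  # with outliers (typically pulled toward the outlier cluster)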
Just 10 outliers can really screw up our line fit!
def randrange(n, vmin, vmax): return (vmax-vmin)*np.random.rand(n) + vmin fig = plt.figure() ax = fig.add_subplot(111, projection='3d') n = 100 for c, m, zl, zh in [('r', 'o', -50, -25), ('b', '^', -30, -5)]: xs = randrange(n, 23, 32) ys = randrange(n, 0, 100) zs = randrange(n, zl, zh) ax.scatter(xs, ys, zs, c=c, marker=m) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show()
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
Now the robust pca version!
import rpca
from importlib import reload  # reload was a builtin on Python 2; on Python 3 it must be imported
reload(rpca)

import logging
logger = logging.getLogger(rpca.__name__)
logger.setLevel(logging.INFO)
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
Factor the matrix into L (low rank) and S (sparse) parts
L,S = rpca.rpca(pts, eps=0.0000001, r=1)
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
Run SVD on the Low Rank Part
U,s,Vt = np.linalg.svd(L)
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
And have a look at this!
plt.ylim([-20,20]) plt.xlim([-20,20]) plt.scatter(*pts) pts0 = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2))) plt.plot(*pts0) plt.scatter(*L, c='red')
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
Have a look at the factored components...
plt.ylim([-20,20]) plt.xlim([-20,20]) plt.scatter(*L) plt.scatter(*S, c='red')
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
It really does add back to the original matrix!
plt.ylim([-20,20]) plt.xlim([-20,20]) plt.scatter(*(L+S))
RPCA_Testing-3d.ipynb
fivetentaylor/rpyca
mit
We can open the FITS file with fits.open() and check the info of the file with .info()
hdu_list = fits.open(image_file) hdu_list.info()
notebooks/Feb2017/Astronomy/Astropy - Load fits.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
We get the data by the following command
image_data = hdu_list[0].data
print(image_data)
notebooks/Feb2017/Astronomy/Astropy - Load fits.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
We get the header by the following command
image_header = hdu_list[0].header
print(list(image_header.items()))
notebooks/Feb2017/Astronomy/Astropy - Load fits.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
We can get individual header items by calling it as dictionary
print(image_header['CRVAL1'])
print(image_header['CRVAL2'])
notebooks/Feb2017/Astronomy/Astropy - Load fits.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
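Headers behave like Python dictionaries in other ways too; for example, we can loop over keywords. A small sketch reusing image_header from above:

# Print the first few header keywords and their values
for key in list(image_header.keys())[:5]:
    print(key, '=', image_header[key])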
This document is based heavily on this excellent resource from UCLA http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm A categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means. That is, each test statistic for these variables amounts to testing whether the mean for that level is statistically significantly different from the mean of the base category. This dummy coding is called Treatment coding in R parlance, and we will follow this convention. There are, however, different coding methods that amount to different sets of linear hypotheses. In fact, the dummy coding is not technically a contrast coding. This is because the dummy variables add to one and are not functionally independent of the model's intercept. On the other hand, a set of contrasts for a categorical variable with k levels is a set of k-1 functionally independent linear combinations of the factor level means that are also independent of the sum of the dummy variables. The dummy coding isn't wrong per se. It captures all of the coefficients, but it complicates matters when the model assumes independence of the coefficients such as in ANOVA. Linear regression models do not assume independence of the coefficients and thus dummy coding is often the only coding that is taught in this context. To have a look at the contrast matrices in Patsy, we will use data from UCLA ATS. First let's load the data. Example Data
import pandas as pd url = 'https://stats.idre.ucla.edu/stat/data/hsb2.csv' hsb2 = pd.read_table(url, delimiter=",") hsb2.head(10)
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
It will be instructive to look at the mean of the dependent variable, write, for each level of race (1 = Hispanic, 2 = Asian, 3 = African American, and 4 = Caucasian).
hsb2.groupby('race')['write'].mean()
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
Treatment (Dummy) Coding Dummy coding is likely the most well known coding scheme. It compares each level of the categorical variable to a base reference level. The base reference level is the value of the intercept. It is the default contrast in Patsy for unordered categorical factors. The Treatment contrast matrix for race would be
from patsy.contrasts import Treatment levels = [1,2,3,4] contrast = Treatment(reference=0).code_without_intercept(levels) print(contrast.matrix)
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
Here we used reference=0, which implies that the first level, Hispanic, is the reference category against which the other level effects are measured. As mentioned above, the columns do not sum to zero and are thus not independent of the intercept. To be explicit, let's look at how this would encode the race variable.
hsb2.race.head(10) print(contrast.matrix[hsb2.race-1, :][:20]) sm.categorical(hsb2.race.values)
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
This is a bit of a trick, as the race category conveniently maps to zero-based indices. If it does not, this conversion happens under the hood, so this won't work in general but nonetheless is a useful exercise to fix ideas. The below illustrates the output using the three contrasts above
from statsmodels.formula.api import ols mod = ols("write ~ C(race, Treatment)", data=hsb2) res = mod.fit() print(res.summary())
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
We explicitly gave the contrast for race; however, since Treatment is the default, we could have omitted this. Simple Coding Like Treatment Coding, Simple Coding compares each level to a fixed reference level. However, with simple coding, the intercept is the grand mean of all the levels of the factors. Patsy doesn't have the Simple contrast included, but you can easily define your own contrasts. To do so, write a class that contains a code_with_intercept and a code_without_intercept method that returns a patsy.contrast.ContrastMatrix instance
from patsy.contrasts import ContrastMatrix

def _name_levels(prefix, levels):
    return ["[%s%s]" % (prefix, level) for level in levels]

class Simple(object):
    def _simple_contrast(self, levels):
        nlevels = len(levels)
        contr = -1./nlevels * np.ones((nlevels, nlevels-1))
        contr[1:][np.diag_indices(nlevels-1)] = (nlevels-1.)/nlevels
        return contr

    def code_with_intercept(self, levels):
        contrast = np.column_stack((np.ones(len(levels)),
                                    self._simple_contrast(levels)))
        return ContrastMatrix(contrast, _name_levels("Simp.", levels))

    def code_without_intercept(self, levels):
        contrast = self._simple_contrast(levels)
        return ContrastMatrix(contrast, _name_levels("Simp.", levels[:-1]))

hsb2.groupby('race')['write'].mean().mean()

contrast = Simple().code_without_intercept(levels)
print(contrast.matrix)

mod = ols("write ~ C(race, Simple)", data=hsb2)
res = mod.fit()
print(res.summary())
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
Sum (Deviation) Coding Sum coding compares the mean of the dependent variable for a given level to the overall mean of the dependent variable over all the levels. That is, it uses contrasts between each of the first k-1 levels and level k. In this example, level 1 is compared to all the others, level 2 to all the others, and level 3 to all the others.
from patsy.contrasts import Sum contrast = Sum().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(race, Sum)", data=hsb2) res = mod.fit() print(res.summary())
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
This corresponds to a parameterization that forces all the coefficients to sum to zero. Notice that the intercept here is the grand mean where the grand mean is the mean of means of the dependent variable by each level.
hsb2.groupby('race')['write'].mean().mean()
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
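To check this against the fitted model, we can compare the regression intercept under Sum coding with the grand mean computed above; a small sketch reusing res from the Sum-coded fit:

# Under Sum (deviation) coding the intercept equals the mean of the level means
print(res.params['Intercept'])
print(hsb2.groupby('race')['write'].mean().mean())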
Backward Difference Coding In backward difference coding, the mean of the dependent variable for a level is compared with the mean of the dependent variable for the prior level. This type of coding may be useful for a nominal or an ordinal variable.
from patsy.contrasts import Diff contrast = Diff().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(race, Diff)", data=hsb2) res = mod.fit() print(res.summary())
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
For example, here the coefficient on level 1 is the mean of write at level 2 compared with the mean at level 1. That is,
res.params["C(race, Diff)[D.1]"] hsb2.groupby('race').mean()["write"][2] - \ hsb2.groupby('race').mean()["write"][1]
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
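The same check can be done for all of the backward-difference coefficients at once; a small sketch reusing the Diff-coded res from above:

# Differences between consecutive level means of write
level_means = hsb2.groupby('race')['write'].mean()
print(level_means.diff().dropna())

# Should match the D.1, D.2, D.3 coefficients (up to floating point)
print(res.params.filter(like='D.'))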
Helmert Coding Our version of Helmert coding is sometimes referred to as Reverse Helmert Coding. The mean of the dependent variable for a level is compared to the mean of the dependent variable over all previous levels. Hence, the name 'reverse' being sometimes applied to differentiate from forward Helmert coding. This comparison does not make much sense for a nominal variable such as race, but we would use the Helmert contrast like so:
from patsy.contrasts import Helmert contrast = Helmert().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(race, Helmert)", data=hsb2) res = mod.fit() print(res.summary())
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
To illustrate, the comparison on level 4 is the mean of the dependent variable at the previous three levels taken from the mean at level 4
grouped = hsb2.groupby('race') grouped.mean()["write"][4] - grouped.mean()["write"][:3].mean()
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
As you can see, these are only equal up to a constant. Other versions of the Helmert contrast give the actual difference in means. Regardless, the hypothesis tests are the same.
k = 4 1./k * (grouped.mean()["write"][k] - grouped.mean()["write"][:k-1].mean()) k = 3 1./k * (grouped.mean()["write"][k] - grouped.mean()["write"][:k-1].mean())
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
Orthogonal Polynomial Coding The coefficients taken on by polynomial coding for k=4 levels are the linear, quadratic, and cubic trends in the categorical variable. The categorical variable here is assumed to be represented by an underlying, equally spaced numeric variable. Therefore, this type of encoding is used only for ordered categorical variables with equal spacing. In general, the polynomial contrast produces polynomials of order k-1. Since race is not an ordered factor variable let's use read as an example. First we need to create an ordered categorical from read.
hsb2['readcat'] = np.asarray(pd.cut(hsb2.read, bins=3)) hsb2.groupby('readcat').mean()['write'] from patsy.contrasts import Poly levels = hsb2.readcat.unique().tolist() contrast = Poly().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(readcat, Poly)", data=hsb2) res = mod.fit() print(res.summary())
examples/notebooks/contrasts.ipynb
ChadFulton/statsmodels
bsd-3-clause
Simple Layouts In order to add widgets or have multiple plots that are linked together, you must first be able to create documents that contain these separate objects. It is possible to accomplish this in your own custom templates using bokeh.embed.components. But, Bokeh also provides simple layout capability for grid plots, vplots, and hplots (that can be nested). An example using gridplot is shown below:
from bokeh.plotting import figure from bokeh.io import gridplot x = list(range(11)) y0, y1, y2 = x, [10-i for i in x], [abs(i-5) for i in x] # create a new plot s1 = figure(width=250, plot_height=250) s1.circle(x, y0, size=10, color="navy", alpha=0.5) # create another one s2 = figure(width=250, height=250) s2.triangle(x, y1, size=10, color="firebrick", alpha=0.5) # create and another s3 = figure(width=250, height=250) s3.square(x, y2, size=10, color="olive", alpha=0.5) # put all the plots in an HBox p = gridplot([[s1, s2, s3]], toolbar_location=None) # show the results show(p) # EXERCISE: create a gridplot of your own
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
Bokeh also provides the vplot and hplot functions to arrange plot objects in vertical or horizontal layouts.
# EXERCISE: use vplot to arrange a few plots vertically
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
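One possible way to approach the exercise above — a sketch that assumes the same, older Bokeh version used throughout this notebook, where vplot is importable from bokeh.io:

from bokeh.io import vplot

# Reuse the x, y0, y2 data from the gridplot example above
v1 = figure(width=250, height=250)
v1.circle(x, y0, size=10, color="navy", alpha=0.5)

v2 = figure(width=250, height=250)
v2.square(x, y2, size=10, color="olive", alpha=0.5)

# Stack the two plots vertically and display them
show(vplot(v1, v2))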
Linked Interactions It is possible to link various interactions between different Bokeh plots. For instance, the ranges of two (or more) plots can be linked, so that when one of the plots is panned (or zoomed, or otherwise has its range changed) the other plots will update in unison. It is also possible to link selections between two plots, so that when items are selected on one plot, the corresponding items on the second plot also become selected. Linked panning Linked panning (when multiple plots have ranges that stay in sync) is simple to spell with Bokeh. You simply share the appropriate range objects between two (or more) plots. The example below shows how to accomplish this by linking the ranges of three plots in various ways:
plot_options = dict(width=250, plot_height=250, title=None, tools='pan') # create a new plot s1 = figure(**plot_options) s1.circle(x, y0, size=10, color="navy") # create a new plot and share both ranges s2 = figure(x_range=s1.x_range, y_range=s1.y_range, **plot_options) s2.triangle(x, y1, size=10, color="firebrick") # create a new plot and share only one range s3 = figure(x_range=s1.x_range, **plot_options) s3.square(x, y2, size=10, color="olive") p = gridplot([[s1, s2, s3]]) # show the results show(p) # EXERCISE: create two plots in a gridplot, and link their ranges
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
Linked brushing Linking selections is accomplished in a similar way, by sharing data sources between plots. Note that normally with bokeh.plotting and bokeh.charts, creating a default data source for simple plots is handled automatically. However, to share a data source between plots, we must create it by hand and pass it explicitly. This is illustrated in the example below:
from bokeh.models import ColumnDataSource x = list(range(-20, 21)) y0, y1 = [abs(xx) for xx in x], [xx**2 for xx in x] # create a column data source for the plots to share source = ColumnDataSource(data=dict(x=x, y0=y0, y1=y1)) TOOLS = "box_select,lasso_select,help" # create a new plot and add a renderer left = figure(tools=TOOLS, width=300, height=300) left.circle('x', 'y0', source=source) # create another new plot and add a renderer right = figure(tools=TOOLS, width=300, height=300) right.circle('x', 'y1', source=source) p = gridplot([[left, right]]) show(p) # EXERCISE: create two plots in a gridplot, and link their data sources
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
Hover Tools Bokeh has a Hover Tool that allows additional information to be displayed in a popup whenever the user hovers over a specific glyph. Basic hover tool configuration amounts to providing a list of (name, format) tuples. The full details can be found in the User's Guide here. The example below shows some basic usage of the Hover tool with a circle glyph:
from bokeh.models import HoverTool source = ColumnDataSource( data=dict( x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7], desc=['A', 'b', 'C', 'd', 'E'], ) ) hover = HoverTool( tooltips=[ ("index", "$index"), ("(x,y)", "($x, $y)"), ("desc", "@desc"), ] ) p = figure(plot_width=300, plot_height=300, tools=[hover], title="Mouse over the dots") p.circle('x', 'y', size=20, source=source) # Also show custom hover from utils import get_custom_hover show(gridplot([[p, get_custom_hover()]]))
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
IPython Interactors It is possible to use native IPython notebook interactors together with Bokeh. In the interactor update function, the push_notebook method can be used to update a data source (presumably based on the interactor widget values) to cause a plot to update. Warning: The current implementation of push_notebook leaks memory. It is suitable for interactive exploration but not for long-running or streaming use cases. The problem will be resolved in future releases. The example below shows a "trig function" explorer using IPython interactors:
import numpy as np from bokeh.models import Line x = np.linspace(0, 2*np.pi, 2000) y = np.sin(x) source = ColumnDataSource(data=dict(x=x, y=y)) p = figure(title="simple line example", plot_height=300, plot_width=600, y_range=(-5, 5)) p.line(x, y, color="#2222aa", alpha=0.5, line_width=2, source=source, name="foo") def update(f, w=1, A=1, phi=0): if f == "sin": func = np.sin elif f == "cos": func = np.cos elif f == "tan": func = np.tan source.data['y'] = A * func(w * x + phi) source.push_notebook() show(p) from ipywidgets import interact interact(update, f=["sin", "cos", "tan"], w=(0,10, 0.1), A=(0,5, 0.1), phi=(0, 10, 0.1))
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
Widgets Bokeh supports direct integration with a small basic widget set. These can be used in conjunction with a Bokeh Server, or with CustomJS models to add more interactive capability to your documents. You can see a complete list, with example code, in the Adding Widgets section of the User's Guide. To use the widgets, include them in a layout like you would a plot object:
from bokeh.models.widgets import Slider from bokeh.io import vform slider = Slider(start=0, end=10, value=1, step=.1, title="foo") show(vform(slider)) # EXERCISE: create and show a Select widget
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
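A possible sketch for the Select exercise, again assuming the same Bokeh version as the rest of this notebook (the option values here are arbitrary):

from bokeh.models.widgets import Select

# A simple dropdown with three arbitrary colour options
select = Select(title="Colour:", value="navy", options=["navy", "firebrick", "olive"])
show(vform(select))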
Callbacks
from bokeh.models import TapTool, CustomJS, ColumnDataSource callback = CustomJS(code="alert('hello world')") tap = TapTool(callback=callback) p = figure(plot_width=600, plot_height=300, tools=[tap]) p.circle('x', 'y', size=20, source=ColumnDataSource(data=dict(x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7]))) show(p)
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
Lots of places to add callbacks

- Widgets - Button, Toggle, Dropdown, TextInput, AutocompleteInput, Select, Multiselect, Slider, (DateRangeSlider), DatePicker
- Tools - TapTool, BoxSelectTool, HoverTool
- Selection - ColumnDataSource, AjaxDataSource, BlazeDataSource, ServerDataSource
- Ranges - Range1d, DataRange1d, FactorRange

Callbacks for widgets Widgets that have values associated with them can have small JavaScript actions attached to them. These actions (also referred to as "callbacks") are executed whenever the widget's value is changed. In order to make it easier to refer to specific Bokeh models (e.g., a data source, or a glyph) from JavaScript, the CustomJS object also accepts a dictionary of "args" that maps names to Python Bokeh models. The corresponding JavaScript models are made available automatically to the CustomJS code. The example below shows an action attached to a slider that updates a data source whenever the slider is moved:
from bokeh.io import vform from bokeh.models import CustomJS, ColumnDataSource, Slider x = [x*0.005 for x in range(0, 200)] y = x source = ColumnDataSource(data=dict(x=x, y=y)) plot = figure(plot_width=400, plot_height=400) plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6) callback = CustomJS(args=dict(source=source), code=""" var data = source.get('data'); var f = cb_obj.get('value') x = data['x'] y = data['y'] for (i = 0; i < x.length; i++) { y[i] = Math.pow(x[i], f) } source.trigger('change'); """) slider = Slider(start=0.1, end=4, value=1, step=.1, title="power", callback=callback) layout = vform(slider, plot) show(layout)
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
Callbacks for selections It's also possible to make JavaScript actions that execute whenever a user selection (e.g., box, point, lasso) changes. This is done by attaching the same kind of CustomJS object to whatever data source the selection is made on. The example below is a bit more sophisticated, and demonstrates updating one glyph's data source in response to another glyph's selection:
from random import random x = [random() for x in range(500)] y = [random() for y in range(500)] color = ["navy"] * len(x) s = ColumnDataSource(data=dict(x=x, y=y, color=color)) p = figure(plot_width=400, plot_height=400, tools="lasso_select", title="Select Here") p.circle('x', 'y', color='color', size=8, source=s, alpha=0.4) s2 = ColumnDataSource(data=dict(ym=[0.5, 0.5])) p.line(x=[0,1], y='ym', color="orange", line_width=5, alpha=0.6, source=s2) s.callback = CustomJS(args=dict(s2=s2), code=""" var inds = cb_obj.get('selected')['1d'].indices; var d = cb_obj.get('data'); var ym = 0 if (inds.length == 0) { return; } for (i = 0; i < d['color'].length; i++) { d['color'][i] = "navy" } for (i = 0; i < inds.length; i++) { d['color'][inds[i]] = "firebrick" ym += d['y'][inds[i]] } ym /= inds.length s2.get('data')['ym'] = [ym, ym] cb_obj.trigger('change'); s2.trigger('change'); """) show(p)
04 - interactions.ipynb
flaxandteal/python-course-lecturer-notebooks
mit
k-means k-means clustering is an unsupervised machine learning algorithm that can be used to group items into clusters. So far we have only worked with supervised algorithms. Supervised algorithms have training data with labels that identify the numeric value or class for each item. These algorithms use labeled data to build a model that can be used to make predictions. k-means clustering is different. The training data is not labeled. Unlabeled training data is fed into the model, which attempts to find relationships in the data and create clusters based on those relationships. Once these clusters are formed, predictions can be made about which cluster new data items belong to. The clusters can't easily be labeled in many cases. The clusters are "emergent clusters" and are created by the algorithm. They don't always map to groupings that you might expect. Example: Groups of Mushrooms Let's start by looking at a real world use case involving mushrooms. The University of California Irvine has a dataset containing various attributes of mushrooms. One of those attributes is the edibility of the mushroom: Is it edible or is it poisonous? We want to see if we can find clusters of mushroom attributes that can be used to determine if a mushroom is edible or not. Load the Data For this example we'll load the mushroom classification data. The dataset contains attributes of over 8,000 different mushrooms. Upload your kaggle.json file and run the code block below.
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
And then use the Kaggle API to download the dataset.
! kaggle datasets download uciml/mushroom-classification ! ls
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Unzip the Data.
! unzip mushroom-classification.zip ! ls
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
And finally, load the training data into a DataFrame.
import pandas as pd data = pd.read_csv('mushrooms.csv') data.sample(n=10)
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exploratory Data Analysis Let's take a closer look at the data that we'll be working with, starting with a simple describe.
data.describe(include='all')
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
It doesn't look like any columns are missing data since we see counts of 8,124 for every column. It does look like all of the data is categorical. We'll need to convert it into numeric values for the model to work. Let's do it for every column except class. We aren't trying to predict class, but we do want to see if we can get pure clusters of one type of class. So we don't want it included in our training data. Also, it is the only feature that isn't observable without having dire consequences!
columns = [c for c in data.columns.values if c != 'class'] id_to_value_mappings = {} value_to_id_mappings = {} for column in columns: i_to_v = sorted(data[column].unique()) v_to_i = { v:i for i, v in enumerate(i_to_v)} numeric_column = column + '-id' data[numeric_column] = [v_to_i[v] for v in data[column]] value_to_id_mappings[column] = v_to_i id_to_value_mappings[numeric_column] = i_to_v numeric_columns = id_to_value_mappings.keys() data[numeric_columns].describe()
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
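Keeping the explicit value_to_id/id_to_value mappings makes it easy to translate clusters back into the original letter codes later. As an aside, pandas can generate equivalent integer codes directly; a small sketch (not used in the rest of the walkthrough):

# Equivalent integer encoding via pandas categorical codes (illustrative only)
encoded = data[columns].apply(lambda col: col.astype('category').cat.codes)
encoded.describe()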
Perform Clustering We now have numeric data that a model can handle. To run k-means clustering on the data, we simply load k-means from scikit-learn and ask the model to find a specific number of clusters for us. Notice that we are scaling the data. The class IDs are integer values, and some columns have many more classes than others. Scaling helps make sure that columns with more classes don't have an undue influence on the model.
from sklearn.cluster import KMeans from sklearn.preprocessing import scale model = KMeans(n_clusters=10) model.fit(scale(data[numeric_columns])) print(model.inertia_)
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We asked scikit-learn to create 10 clusters for us, and then we printed out the inertia_ for the resultant clusters. Inertia is the sum of the squared distances of samples to their closest cluster center. Typically, the smaller the inertia the better. But why did we choose 10 clusters? And is the inertia that we received reasonable? Find the Optimal Number of Clusters With just one run of the algorithm, it is difficult to tell how many clusters we should have and what an appropriate inertia value is. k-means is trying to discover things about your data that you do not know. Picking a number of clusters at random isn't the best way to use k-means. Instead, you should experiment with a few different cluster values and measure the inertia of each. As you increase the number of clusters, your inertia should decrease.
from sklearn.cluster import KMeans from sklearn.preprocessing import scale import matplotlib.pyplot as plt clusters = list(range(5, 50, 5)) inertias = [] scaled_data = scale(data[numeric_columns]) for c in clusters: model = KMeans(n_clusters=c) model = model.fit(scaled_data) inertias.append(model.inertia_) plt.plot(clusters, inertias) plt.show()
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
The resulting graph should start high and to the left and curve down as the number of clusters grows. The initial slope is steep, but begins to level off. Your optimal number of clusters is somewhere in the ["elbow" of the graph](https://en.wikipedia.org/wiki/Elbow_method_(clustering)) as the slope levels off. Once you have this number, you need to then check to see if the number is reasonable for your use case. Say that the 'optimal' number of clusters for our mushroom identification is 15. Is that a reasonable number of clusters to deal with? If we have too many, we can overfit and make the model poor at generalizing. And what are the purposes of the clusters? If you are clustering mushrooms and want to find clusters that are definitely safe to eat, 15 or more clusters might be perfectly fine. If you are clustering customers for different advertising campaigns, 15 different campaigns might be more than your marketing department can handle. Clustering the data is often just the start of your journey. Once you have clusters, you'll need to look at each group and try to determine what makes them similar. What patterns did the clustering find? And will that clustering be useful to you? Examining Clusters Let's say that 15 is a reasonable number of clusters. We can rebuild the model using that setting.
from sklearn.cluster import KMeans from sklearn.preprocessing import scale model = KMeans(n_clusters=15) model.fit(scale(data[numeric_columns])) print(model.inertia_)
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Now let's see if we have any 'pure' clusters. These are clusters with all-edible or all-poisonous mushrooms.
import numpy as np for cluster in sorted(np.unique(model.labels_)): num_edible = np.sum(data[model.labels_ == cluster]['class'] == 'e') total = np.sum(model.labels_ == cluster) print(cluster, num_edible / total)
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
In our model, clusters 0, 1, 6, and 10 were 100% edible, clusters 2, 4, 7, and 12 were all poisonous, and the remaining clusters were a mix of the two. (The exact cluster numbers will vary from run to run, since k-means starts from a random initialization.) Knowing this, let's look at one of the all-edible clusters and see what attributes we could look for to have confidence that we have an edible mushroom.
edible = data[model.labels_ == 1] for column in edible.columns: if column.endswith('-id'): continue print(column, edible[column].unique())
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
The mapping of the letter codes to more descriptive text can be found in the dataset description. Example: Classification of Digits Clustering for data exploration purposes can lead to interesting insights into your data, but clustering can also be used for classification purposes. In the example below, we'll try to use k-means clustering to predict handwritten digits. Load the Data We'll load the digits dataset packaged with scikit-learn.
from sklearn.datasets import load_digits digits = load_digits()
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Scale the Data It is good practice to scale the data to ensure that outliers don't have too big of an impact on the clustering.
from sklearn.preprocessing import scale scaled_digits = scale(digits.data)
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Fit a Model We can then create a k-means model with 10 clusters. (We know there are 10 digits from 0 through 9.)
from sklearn.cluster import KMeans model = KMeans(n_clusters=10) model = model.fit(scaled_digits)
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Make Predictions We can then use the model to predict which category a data point belongs to. In the case below, we'll just use some of the data that we trained with for illustrative purposes. The prediction will provide a numeric value.
cluster = model.predict([scaled_digits[0]])[0] cluster
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
What is this value? Is it the predicted digit? No. This number is the cluster that the model thinks the digit belongs to. To determine the predicted digit, we'll need to see what other digits are in the cluster and choose the most popular one for our classification.
import numpy as np labels = digits.target cluster_to_digit = [ np.argmax( np.bincount( np.array( [labels[i] for i in range(len(model.labels_)) if model.labels_[i] == cluster] ) ) ) for cluster in range(10) ] cluster_to_digit
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
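With this cluster-to-digit mapping we can turn the model's cluster assignments into digit predictions and get a rough sense of accuracy on the training data; a short sketch reusing labels and model from above:

# Replace each cluster assignment with that cluster's majority digit
predicted_digits = np.array([cluster_to_digit[c] for c in model.labels_])

# Fraction of training samples whose majority-vote digit matches the true label
print(np.mean(predicted_digits == labels))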
Here we can see the digit that each cluster represents. Measure Model Quality If we do have labeled data, as is the case with our digits data, then we can measure the quality of our model using the homogeneity score and the completeness score.
from sklearn.metrics import homogeneity_score from sklearn.metrics import completeness_score homogeneity = homogeneity_score(labels, model.labels_) completeness = completeness_score(labels, model.labels_) homogeneity, completeness
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercises Exercise 1 Load the iris dataset, create a k-means model with three clusters, and then find the homogeneity and completeness scores for the model. Student Solution
# Your code goes here
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercise 2 Load the iris dataset, and then create a k-means model with three clusters using only two features. (Try to find the best two features for clustering.) Create a plot of the two features. For each datapoint in the chart, use a marker to encode the actual/correct species. For instance, use a triangle for Setosa, a square for Versicolour, and a circle for Virginica. Color each marker green if the predicted class matches the actual. Color each marker red if the classes don't match. Student Solution
# Your code goes here
content/06_other_models/01_k_means/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
The tables define the value and error as a string: val (err) which is a pain in the ass because now I have to parse the strings, which always takes much longer than it should because data wrangling is hard sometimes. I define a function that takes a column name and a data frame and strips the output.
def strip_parentheses(col, df):
    '''
    Splits single-column strings of "value (error)" into two columns of value and error.

    input:
    - string name of column to split in two
    - dataframe to apply to

    returns dataframe
    '''
    out1 = df[col].str.replace(")","").str.split(pat="(")
    df_out = out1.apply(pd.Series)

    # Split the column name on the whitespace to get the base name
    base, sufx = col.split(" ")
    df[base] = df_out[0].copy()
    df[base+"_e"] = df_out[1].copy()
    del df[col]

    return df
notebooks/Patten2006.ipynb
BrownDwarf/ApJdataFrames
mit
Table 1 - Basic data on sources
names = ["Name","R.A. (J2000.0)","Decl. (J2000.0)","Spectral Type","SpectralType Ref.","Parallax (error)(arcsec)", "Parallax Ref.","J (error)","H (error)","Ks (error)","JHKRef.","PhotSys"] tbl1 = pd.read_csv("http://iopscience.iop.org/0004-637X/651/1/502/fulltext/64991.tb1.txt", sep='\t', names=names, na_values='\ldots') cols_to_fix = [col for col in tbl1.columns.values if "(error)" in col] for col in cols_to_fix: print col tbl1 = strip_parentheses(col, tbl1) tbl1.head()
notebooks/Patten2006.ipynb
BrownDwarf/ApJdataFrames
mit
Table 3- IRAC photometry
names = ["Name","Spectral Type","[3.6] (error)","n1","[4.5] (error)","n2", "[5.8] (error)","n3","[8.0] (error)","n4","[3.6]-[4.5]","[4.5]-[5.8]","[5.8]-[8.0]","Notes"] tbl3 = pd.read_csv("http://iopscience.iop.org/0004-637X/651/1/502/fulltext/64991.tb3.txt", sep='\t', names=names, na_values='\ldots') cols_to_fix = [col for col in tbl3.columns.values if "(error)" in col] cols_to_fix for col in cols_to_fix: print col tbl3 = strip_parentheses(col, tbl3) tbl3.head() pd.options.display.max_columns = 50 del tbl3["Spectral Type"] #This is repeated patten2006 = pd.merge(tbl1, tbl3, how="outer", on="Name") patten2006.head()
notebooks/Patten2006.ipynb
BrownDwarf/ApJdataFrames
mit
Convert spectral type to number
import gully_custom patten2006["SpT_num"], _1, _2, _3= gully_custom.specTypePlus(patten2006["Spectral Type"])
notebooks/Patten2006.ipynb
BrownDwarf/ApJdataFrames
mit
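gully_custom is the author's personal helper module. If it is not installed, a rough stand-in can be written by hand; a minimal sketch (the regular expression and the M0 = 0, L0 = 10, T0 = 20 offsets are assumptions, chosen only to match how SpT_num is used in the plot below):

import re

def spectral_type_to_num(spt):
    '''Map strings like "M4.5", "L2", or "T0" to a number with M0 = 0, L0 = 10, T0 = 20.'''
    offsets = {'M': 0, 'L': 10, 'T': 20}
    match = re.match(r'([MLT])(\d+(\.\d+)?)', str(spt).strip())
    if match is None:
        return float('nan')
    return offsets[match.group(1)] + float(match.group(2))

# patten2006["SpT_num"] = patten2006["Spectral Type"].apply(spectral_type_to_num)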
Make a plot of mid-IR colors as a function of spectral type.
sns.set_context("notebook", font_scale=1.5) for color in ["[3.6]-[4.5]", "[4.5]-[5.8]", "[5.8]-[8.0]"]: plt.plot(patten2006["SpT_num"], patten2006[color], '.', label=color) plt.xlabel(r'Spectral Type (M0 = 0)') plt.ylabel(r'$[3.6]-[4.5]$') plt.title("IRAC colors as a function of spectral type") plt.legend(loc='best')
notebooks/Patten2006.ipynb
BrownDwarf/ApJdataFrames
mit
Save the cleaned data.
patten2006.to_csv('../data/Patten2006/patten2006.csv', index=False)
notebooks/Patten2006.ipynb
BrownDwarf/ApJdataFrames
mit
Part 1: Load Example Dataset We start this exercise by using a small dataset that is easy to visualize.
ex7data1 = scipy.io.loadmat('ex7data1.mat') X = ex7data1['X'] def plot_data(X, ax): ax.plot(X[:,0], X[:,1], 'bo') fig, ax = plt.subplots() plot_data(X, ax) def normalize_features(X): mu = np.mean(X, 0) X_norm = X - mu sigma = np.std(X_norm, 0) X_norm = X_norm / sigma return X_norm, mu, sigma X_norm, mu, sigma = normalize_features(X)
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Part 2: Principal Component Analysis You should now implement PCA, a dimension reduction technique. You should complete the following code.
def pca(X): #PCA Run principal component analysis on the dataset X # [U, S] = pca(X) computes eigenvectors of the covariance matrix of X # Returns the eigenvectors U, the eigenvalues in S # m, n = X.shape # You need to return the following variables correctly. U = np.zeros((n, n)) S = np.zeros(n) # ====================== YOUR CODE HERE ====================== # Instructions: You should first compute the covariance matrix. Then, you # should use the "scipy.linalg.svd" function to compute the eigenvectors # and eigenvalues of the covariance matrix. # # Note: When computing the covariance matrix, remember to divide by m (the # number of examples). # # ========================================================================= return U, S U, S = pca(X_norm) U
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Draw the eigenvectors centered at mean of data. These lines show the directions of maximum variations in the dataset.
def draw_line(a, b, ax, *args): ax.plot([a[0], b[0]], [a[1], b[1]], *args) fig, ax = plt.subplots(figsize=(5,5)) ax.set_ylim(2, 8) ax.set_xlim(0.5, 6.5) ax.set_aspect('equal') plot_data(X, ax) ax.plot(mu[0], mu[1]) draw_line(mu, mu + 1.5 * S[0] * U[0, :], ax, '-k') draw_line(mu, mu + 1.5 * S[1] * U[1, :], ax, '-k')
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
The top eigenvector should be [-0.707107, -0.707107].
U[0]
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Part 3: Dimension Reduction You should now implement the projection step to map the data onto the first k eigenvectors. The code will then plot the data in this reduced dimensional space. This will show you what the data looks like when using only the corresponding eigenvectors to reconstruct it. You should complete the code in project_data.
def project_data(X, U, K): #PROJECTDATA Computes the reduced data representation when projecting only #on to the top k eigenvectors # Z = projectData(X, U, K) computes the projection of # the normalized inputs X into the reduced dimensional space spanned by # the first K columns of U. It returns the projected examples in Z. # # You need to return the following variables correctly. Z = np.zeros((X.shape[0], K)) # ====================== YOUR CODE HERE ====================== # Instructions: Compute the projection of the data using only the top K # eigenvectors in U (first K columns). # For the i-th example X(i,:), the projection on to the k-th # eigenvector is given as follows: # x = X[i, :].T # projection_k = x.T.dot(U(:, k)); # # ============================================================= return Z
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Projection of the first example: (should be about 1.49631261)
K = 1 Z = project_data(X_norm, U, K) Z[0,0] def recover_data(Z, U, K): #RECOVERDATA Recovers an approximation of the original data when using the #projected data # X_rec = RECOVERDATA(Z, U, K) recovers an approximation the # original data that has been reduced to K dimensions. It returns the # approximate reconstruction in X_rec. # # You need to return the following variables correctly. X_rec = np.zeros((Z.shape[0], U.shape[0])) # ====================== YOUR CODE HERE ====================== # Instructions: Compute the approximation of the data by projecting back # onto the original space using the top K eigenvectors in U. # # For the i-th example Z(i,:), the (approximate) # recovered data for dimension j is given as follows: # v = Z(i, :)'; # recovered_j = v' * U(j, 1:K)'; # # Notice that U(j, 1:K) is a row vector. # # ============================================================= return X_rec
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Approximation of the first example: (should be about [-1.05805279, -1.05805279])
X_rec = recover_data(Z, U, K) X_rec[0]
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Draw lines connecting the projected points to the original points
fig, ax = plt.subplots(figsize=(5,5)) ax.set_ylim(-3, 3) ax.set_xlim(-3, 3) ax.set_aspect('equal') plot_data(X_norm, ax) ax.plot(X_rec[:,0], X_rec[:,1], 'ro') for x_norm, x_rec in zip(X_norm, X_rec): draw_line(x_norm, x_rec, ax, '--k')
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Part 4: Loading and Visualizing Face Data We start the exercise by first loading and visualizing the dataset. The following code will load the dataset into your environment, and later display the first 100 faces in the dataset.
X = scipy.io.loadmat('ex7faces.mat')['X'] X.shape def display_faces(X, example_width=None): example_size = len(X[0]) if example_width is None: example_width = int(np.sqrt(example_size)) num_examples = len(X) figures_row_length = int(np.sqrt(num_examples)) fig, axes = plt.subplots(nrows=figures_row_length, ncols=figures_row_length, figsize=(6,6)) fig.subplots_adjust(wspace=0, hspace=0) for i, j in itertools.product(range(figures_row_length), range(figures_row_length)): ax = axes[i][j] ax.set_axis_off() ax.set_aspect('equal') example = X[i*figures_row_length + j].reshape(example_size//example_width, example_width).T ax.imshow(example, cmap='Greys_r') display_faces(X[:100])
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Part 5: PCA on Face Data: Eigenfaces Run PCA and visualize the eigenvectors, which in this case are eigenfaces. We display the first 64 eigenfaces. Before running PCA, it is important to first normalize X.
X_norm, mu, sigma = normalize_features(X) U, S = pca(X_norm) display_faces(U[:, :64].T)
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Part 6: Dimension Reduction for Faces Project images to the eigen space using the top k eigenvectors
K = 100 Z = project_data(X_norm, U, K) Z.shape
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
Part 7: Visualization of Faces after PCA Dimension Reduction Project images to the eigen space using the top K eigen vectors and visualize only using those K dimensions. Compare to the original input.
X_rec = recover_data(Z, U, K) X_rec.shape display_faces(X_rec[:100])
ex7/ml-ex7-pca.ipynb
noammor/coursera-machinelearning-python
mit
.. _tut_erp:

EEG processing and Event Related Potentials (ERPs)

For a generic introduction to the computation of ERP and ERF see :ref:tut_epoching_and_averaging. Here we cover the specifics of EEG, namely:

- setting the reference
- using standard montages :func:mne.channels.Montage
- Evoked arithmetic (e.g. differences)
import mne from mne.datasets import sample
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Setup for reading the raw data
data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' raw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=True, preload=True)
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
And it's actually possible to plot the channel locations using the :func:mne.io.Raw.plot_sensors method
raw.plot_sensors() raw.plot_sensors('3d') # in 3D
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Setting EEG montage In the case where your data don't have locations you can set them using a :func:mne.channels.Montage. MNE comes with a set of default montages. To read one of them do:
montage = mne.channels.read_montage('standard_1020') print(montage)
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To apply a montage on your data use the :func:mne.io.set_montage function. Here we don't actually call this function as our demo dataset already contains good EEG channel locations. Next we'll explore the definition of the reference. Setting EEG reference Let's first remove the reference from our Raw object. This explicitly prevents MNE from adding a default EEG average reference required for source localization.
raw_no_ref, _ = mne.io.set_eeg_reference(raw, [])
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Average reference: This is normally added by default, but can also be added explicitly.
raw_car, _ = mne.io.set_eeg_reference(raw) evoked_car = mne.Epochs(raw_car, **epochs_params).average() del raw_car # save memory title = 'EEG Average reference' evoked_car.plot(titles=dict(eeg=title)) evoked_car.plot_topomap(times=[0.1], size=3., title=title)
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Custom reference: Use the mean of channels EEG 001 and EEG 002 as a reference
raw_custom, _ = mne.io.set_eeg_reference(raw, ['EEG 001', 'EEG 002']) evoked_custom = mne.Epochs(raw_custom, **epochs_params).average() del raw_custom # save memory title = 'EEG Custom reference' evoked_custom.plot(titles=dict(eeg=title)) evoked_custom.plot_topomap(times=[0.1], size=3., title=title)
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Next, we create averages of stimulation-left vs stimulation-right trials. We can use basic arithmetic to, for example, construct and plot difference ERPs.
left, right = epochs["left"].average(), epochs["right"].average() (left - right).plot_joint() # create and plot difference ERP
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Note that by default, this is a trial-weighted average. If you have imbalanced trial numbers, consider either equalizing the number of events per condition (using Epochs.equalize_event_counts), or the combine_evoked function. As an example, first, we create individual ERPs for each condition.
aud_l = epochs["auditory", "left"].average() aud_r = epochs["auditory", "right"].average() vis_l = epochs["visual", "left"].average() vis_r = epochs["visual", "right"].average() all_evokeds = [aud_l, aud_r, vis_l, vis_r] # This could have been much simplified with a list comprehension: # all_evokeds = [epochs[cond] for cond in event_id] # Then, we construct and plot an unweighted average of left vs. right trials. mne.combine_evoked(all_evokeds, weights=(1, -1, 1, -1)).plot_joint()
0.12/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set Up Verta
HOST = 'app.verta.ai' PROJECT_NAME = 'Film Review Classification' EXPERIMENT_NAME = 'spaCy CNN' # import os # os.environ['VERTA_EMAIL'] = # os.environ['VERTA_DEV_KEY'] = from verta import Client from verta.utils import ModelAPI client = Client(HOST, use_git=False) proj = client.set_project(PROJECT_NAME) expt = client.set_experiment(EXPERIMENT_NAME) run = client.set_experiment_run()
client/workflows/examples/text_classification_spacy.ipynb
mitdbg/modeldb
mit