prompt
stringlengths 501
4.98M
| target
stringclasses 1
value | chunk_prompt
bool 1
class | kind
stringclasses 2
values | prob
float64 0.2
0.97
⌀ | path
stringlengths 10
394
⌀ | quality_prob
float64 0.4
0.99
⌀ | learning_prob
float64 0.15
1
⌀ | filename
stringlengths 4
221
⌀ |
---|---|---|---|---|---|---|---|---|
<h2> How to run this file </h2>
To run code in each cell, use shift enter or tinker with cell part of the bar menu above
You may come across run issues, look up online references or tinker and figure out, failure of the
code is not a big deal as you can always delete a file and start afresh.
<h2> About this block </h2>
This is a markdown cell, it was created by using the cell menu above and selecting the markdown option.
For more on the usage of Markdown scriptlook up the corresponding wikipedia page.
The cell below defines two lines, shows the resuluting plot and gives the intersection point if it is unique.
Feel free to tinker with the code and rerurn using shift enter.
```
# Comments are given on the right side in blue after the # prompt.
# Please note that indentation is used for nested statements without using "end".
# If you have a line equations of the form ax + by = c, bring them to y = mx + c form to use this code.
import numpy as np # Python library for efficient computation
import matplotlib.pyplot as plt # Plot library
x = np.arange(-10, 10, 0.5) # defining the x range and step size
m1 = 2 # m1 is slope for line 1
c1 = 3 # c1 is intercept for line 1
m2 = 2 # m2 is slope for line 2
c2 = -2 # c2 is intercept for line 2
def y(x,m,c): # The definition of the line
return m*x + c
plt.plot(x, y(x,m1,c1), 'b', x, y(x,m2,c2), 'p') # Plotting the two lines in two colors, blue 'b' and green 'g'
plt.show() # After creating the plot object, it has to be shown.
```
Create a cell below using the insert menu at the top. Declare the cell of type "markdown" by selecting "cell menu -> cell type" and
write some really important information like "Computing is cool"
```
# Solving the two equations, check for special cases
x0 = "not defined"
y0 = "not defined"
if (m1*m2)==0 : #If at least one of the line is horizontal.
if m1 != 0:
y0 = c2
x0 = (c2 - c1)/m1
elif m2 != 0 :
y0 = c1
x0 = (c1 - c2)/m2
elif c1 == c2:
print("The lines are the same, zero slope")
print("infinite number of solutions")
else:
print("parallel lines zero slope, solution doesn't exist")
elif (m1-m2) != 0 :
x0 = (c2-c1)/(m1 - m2)
y0 = m1*x0 + c1
elif (c1 == c2) :
print("The lines are the same")
print("infinite number of solutions")
elif c1 != c2 :
print("parallel lines, solutions doesn't exist")
(x0,y0)
```
Matrix form of the equation is
$$
\left(\begin{array}{cc}
m1 & -1\\
m2 & -1
\end{array}\right)
\left(\begin{array}{c}
x \\
y
\end{array}\right)
=
\left(\begin{array}{c}
-c1 \\
-c2
\end{array}\right)
$$
```
# Define two lines and draw them
m1p = 2
m2p = -1
c1p = 1
c2p = -2
plt.plot(x, y(x,m1p,c1p), 'pink', x, y(x,m2p,c2p), 'c')
plt.show()
# Repeat the exercise to draw three lines
# Write the code below
```
Write the matrix equation representing 3 intersecting lines, How many variables (unknowns) are there?, how many equations are there? What does the solution mean geometrically? How many different types of solutions can there be? Insert a cell below and answer the questions.
<h2> Notation </h2>
We use uparrow after a symbol to denote a column vector (like, $x \uparrow$) and a horizontal arrow above the symbol
(like, $\vec{x}$) to denote a row vector.
<h2> The row and column pictures </h2>
Consider a matrix equation $ A x\uparrow = b \uparrow $ as follows,
$$
\left(\begin{array}{cc}
a_{11} & a_{12}\\
a_{21} & a_{22}
\end{array}\right)
\left(\begin{array}{c}
x \\
y
\end{array}\right)
=
\left(\begin{array}{c}
b_1 \\
b_2
\end{array}\right)
$$
The question of finding solutions of the above matrix equation can be thought of as that of finding the intersection point(s) for a set of lines. This gemetric picture is also known as row picture of the matrix equation.
The equations in the row picture are written as,
$$
a_{11} x + a_{12} y = b_1 \\
a_{21} x + a_{22} y = b_2.
$$
One can also interprete the matrix equation in terms of addition of vectors as follows,
$$
x
\left(\begin{array}{c}
a_{11} \\
a_{21}
\end{array}\right)
+ y
\left(\begin{array}{c}
a_{12} \\
a_{22}
\end{array}\right)
=
\left(\begin{array}{c}
b_1 \\
b_2
\end{array}\right)
$$
Here the question is to look for scaling facctors $x$ and $y$ so that the vector sum of the two given scaled vectors
equals the RHS. This geometric picture is known as the column picture. A detailed discussion and exercises can be found
in "Linear Algebra and its applications" (chapter 1) by Gilbert Strang.
The matrix equation can also be interpreted in a third scenario where the matrix is to be thought of a linear
transformation and the question reduces to that of finding a vector $x\uparrow$ whose linear transformation gives the
vector $b\uparrow$.
Write a three dimensional version of the matrix equation $ A x\uparrow = b \uparrow $ and the corresponding row and column picture representations below.
| true |
code
| 0.518059 | null | null | null | null |
|
```
import sys
if "google.colab" in sys.modules:
branch = "master" # change to the branch you want
! git clone --single-branch --branch $branch https://github.com/OpenMined/PySyft.git
! cd PySyft && ./scripts/colab.sh # fixes some colab python issues
sys.path.append("/content/PySyft/src") # prevents needing restart
import syft as sy
```
## Join the Duet Server the Data Owner 1 connected to
```
duet1 = sy.join_duet(loopback=True)
duet1.store.pandas
```
## Join the Duet Server the Data Owner 2 connected to
```
duet2 = sy.join_duet(loopback=True)
duet2.store.pandas
```
## Linear regression
```
data1_ptr = duet1.store[0]
target1_ptr = duet1.store[1]
#data2_ptr = duet2.store[0]
#target2_ptr = duet2.store[1]
print(data1_ptr)
print(target1_ptr)
#print(data2_ptr)
#print(target2_ptr)
```
### Create Base Model
```
import torch
in_dim = 8
out_dim = 5
class SyNet(sy.Module):
def __init__(self, torch_ref):
super(SyNet, self).__init__(torch_ref=torch_ref)
self.lin1 = self.torch_ref.nn.Linear(in_dim, 256)
self.act1 = self.torch_ref.nn.ReLU()
self.lin2 = self.torch_ref.nn.Linear(256, 64)
self.act2 = self.torch_ref.nn.ReLU()
self.lin3 = self.torch_ref.nn.Linear(64, out_dim)
self.sm = self.torch_ref.nn.Softmax(dim=1)
def forward(self, x):
x = self.lin1(x)
x = self.act1(x)
x = self.lin2(x)
x = self.act2(x)
x = self.lin3(x)
return x
def inference(self, x):
x = self.forward(x)
x = self.sm(x)
return x
combined_model = SyNet(torch)
```
### Training
```
def train(epochs, model, torch_ref, optim, data_ptr, target_ptr, criterion):
losses = []
for epoch in range(epochs):
optim.zero_grad()
output = model(data_ptr)
loss = criterion(output, target_ptr)
loss_item = loss.item()
loss_value = loss_item.get(
reason="To evaluate training progress",
request_block=True,
timeout_secs=5,
)
#if epoch % 5 == 0:
print("Epoch", epoch, "loss", loss_value)
losses.append(loss_value)
loss.backward()
optim.step()
return losses
```
#### Send one copy of the model to each data owner or client and train remotely
```
import torch as th
import numpy as np
```
Train on Data Owner 1 data
```
local_model1 = SyNet(torch)
print(local_model1.parameters())
remote_model1 = local_model1.send(duet1)
remote_torch1 = duet1.torch
params = remote_model1.parameters()
optim1 = remote_torch1.optim.SGD(params=params, lr=0.01)
```
Dummy target data
```
#target1_ptr = th.FloatTensor(np.array([5, 10, 15, 22, 30, 38]).reshape(-1, 1))
#target1_ptr
print(remote_torch1)
epochs= 20
criterion = remote_torch1.nn.CrossEntropyLoss()
losses = train(epochs, remote_model1, remote_torch1, optim1,
data1_ptr, target1_ptr, criterion)
```
Train on Data Owner 2 data
```
data2_ptr = duet2.store[0]
target2_ptr = duet2.store[1]
print(data2_ptr)
print(target2_ptr)
local_model2 = SyNet(torch)
print(local_model2.parameters())
remote_model2 = local_model2.send(duet2)
remote_torch2 = duet2.torch
params = remote_model2.parameters()
optim2 = remote_torch2.optim.SGD(params=params, lr=0.01)
```
Dummy Target data
```
#target2_ptr = th.FloatTensor(np.array([35, 40, 45, 55, 60]).reshape(-1, 1))
#target2_ptr
epochs = 20
criterion = remote_torch2.nn.CrossEntropyLoss()
losses = train(epochs, remote_model2, remote_torch2, optim2, data2_ptr, target2_ptr, criterion)
```
### Averaging Model Updates
Ideally, there will be a coordinator server who will get the model updates from different clients and make an aggregation. For the case of simplicity, in this example we will make THIS server the coordinator.
```
from collections import OrderedDict
## Little sanity check!
param1 = remote_model1.parameters().get(request_block=True)
param2 = remote_model2.parameters().get(request_block=True)
print("Local model1 parameters:")
print(local_model1.parameters())
print("Remote model1 parameters:")
print(param1)
print()
print("Local model2 parameters:")
print(local_model2.parameters())
print("Remote model2 parameters:")
print(param2)
remote_model1_updates = remote_model1.get(
request_block=True
).state_dict()
print(remote_model1_updates)
remote_model2_updates = remote_model2.get(
request_block=True
).state_dict()
print(remote_model2_updates)
avg_updates = OrderedDict()
avg_updates["linear.weight"] = (
remote_model1_updates["linear.weight"] + remote_model2_updates["linear.weight"]
) / 2
avg_updates["linear.bias"] = (
remote_model1_updates["linear.bias"] + remote_model2_updates["linear.bias"]
) / 2
print(avg_updates)
```
### Load aggregated weights
```
combined_model.load_state_dict(avg_updates)
del avg_updates
test_data = th.FloatTensor(np.array([17, 25, 32, 50, 80]).reshape(-1, 1))
test_target = th.FloatTensor(np.array([12, 15, 20, 30, 50]).reshape(-1, 1))
preds = []
with torch.no_grad():
for i in range(len(test_data)):
sample = test_data[i]
y_hat = combined_model(sample)
print(f"Prediction: {y_hat.item()} Ground Truth: {test_target[i].item()}")
preds.append(y_hat)
```
## Comparison to classical linear regression on centralised data
```
import torch
import numpy as np
in_dim = 1
out_dim = 1
class ClassicalLR(torch.nn.Module):
def __init__(self, torch):
super(ClassicalLR, self).__init__()
self.linear = torch.nn.Linear(in_dim, out_dim)
def forward(self, x):
x = self.linear(x)
return x
classical_model = ClassicalLR(torch)
data = torch.FloatTensor(
np.array([5, 15, 25, 35, 45, 55, 60, 65, 75, 85, 95]).reshape(-1, 1)
)
target = torch.FloatTensor(
np.array([5, 10, 15, 22, 30, 38, 35, 40, 45, 55, 60]).reshape(-1, 1)
)
def classic_train(epochs, model, torch, optim, data, target, criterion):
losses = []
for i in range(epochs):
optim.zero_grad()
output = model(data)
loss = criterion(output, target)
loss_item = loss.item()
if i % 10 == 0:
print("Epoch", i, "loss", loss_item)
losses.append(loss_item)
loss.backward()
optim.step()
return losses
params = classical_model.parameters()
optim = torch.optim.SGD(params=params, lr=0.01)
criterion = torch.nn.MSELoss()
epochs = 20
losses = classic_train(
epochs, classical_model, torch, optim, data, target, criterion
)
test_data = th.FloatTensor(np.array([17, 25, 32, 50, 80]).reshape(-1, 1))
test_target = th.FloatTensor(np.array([12, 15, 20, 30, 50]).reshape(-1, 1))
preds = []
with torch.no_grad():
for i in range(len(test_data)):
sample = test_data[i]
y_hat = classical_model(sample)
print(f"Prediction: {y_hat.item()} Ground Truth: {test_target[i].item()}")
preds.append(y_hat)
```
| true |
code
| 0.709925 | null | null | null | null |
|
# Reference:
Implemented:
https://towardsdatascience.com/detection-of-price-support-and-resistance-levels-in-python-baedc44c34c9
Alternative:
https://medium.com/@judopro/using-machine-learning-to-programmatically-determine-stock-support-and-resistance-levels-9bb70777cf8e
```
import pandas as pd
import numpy as np
import yfinance
from mplfinance.original_flavor import candlestick_ohlc
import matplotlib.dates as mpl_dates
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12, 7]
plt.rc('font', size=14)
# Download S&P 500 daily data
ticker = yfinance.Ticker('SPY')
df = ticker.history(interval="1d", start="2020-01-01", end="2021-02-15")
df['Date'] = pd.to_datetime(df.index)
df['Date'] = df['Date'].apply(mpl_dates.date2num)
df = df.loc[:,['Date', 'Open', 'High', 'Low', 'Close']]
df.head()
# Two functions that identify the 4-candles fractals
def isSupport(df,i):
support = df['Low'][i] < df['Low'][i-1] and df['Low'][i] < df['Low'][i+1] and df['Low'][i+1] < df['Low'][i+2] and df['Low'][i-1] < df['Low'][i-2]
return support
def isResistance(df,i):
resistance = df['High'][i] > df['High'][i-1] and df['High'][i] > df['High'][i+1] and df['High'][i+1] > df['High'][i+2] and df['High'][i-1] > df['High'][i-2]
return resistance
# Create a list that will contain the levels we find. Each level is a tuple whose first element is the index of the signal candle and the second element is the price value.
levels = []
for i in range(2,df.shape[0]-2):
if isSupport(df,i):
levels.append((i,df['Low'][i]))
elif isResistance(df,i):
levels.append((i,df['High'][i]))
# Define a function that plots price and key levels together
def plot_all():
fig, ax = plt.subplots()
candlestick_ohlc(ax,df.values,width=0.6, \
colorup='green', colordown='red', alpha=0.8)
date_format = mpl_dates.DateFormatter('%d %b %Y')
ax.xaxis.set_major_formatter(date_format)
fig.autofmt_xdate()
fig.tight_layout()
for level in levels:
plt.hlines(level[1],xmin=df['Date'][level[0]],\
xmax=max(df['Date']),colors='blue')
fig.show()
# Plot and see
plot_all()
```
We have been able to detect the major rejection levels, but there’s still some noise. Some levels are over others, but they are essentially the same level.
We can clean this noise modifying the function that detects key levels. If a level is near another one, it will be discarded. We must decide what “near” means, then. We can say that a level is near another one if their distance is less than the average candle size in our chart (i.e. the average difference between high and low prices in a candle). This will give us a rough estimate of volatility.
```
s = np.mean(df['High'] - df['Low'])
# Define a function that, given a price value, returns False if it is near some previously discovered key level.
def isFarFromLevel(l):
return np.sum([abs(l-x) < s for x in levels]) == 0
# Scan the price history looking for key levels using this function as a filter.
levels = []
for i in range(2,df.shape[0]-2):
if isSupport(df,i):
l = df['Low'][i]
if isFarFromLevel(l):
levels.append((i,l))
elif isResistance(df,i):
l = df['High'][i]
if isFarFromLevel(l):
levels.append((i,l))
plot_all()
```
| true |
code
| 0.442335 | null | null | null | null |
|
# Parcels Tutorial
Welcome to a quick tutorial on Parcels. This is meant to get you started with the code, and give you a flavour of some of the key features of Parcels.
In this tutorial, we will first cover how to run a set of particles [from a very simple idealised field](#Running-particles-in-an-idealised-field). We will show how easy it is to run particles in [time-backward mode](#Running-particles-in-backward-time). Then, we will show how to [add custom behaviour](#Adding-a-custom-behaviour-kernel) to the particles. Then we will show how to [run particles in a set of NetCDF files from external data](#Reading-in-data-from-arbritrary-NetCDF-files). Then we will show how to use particles to [sample a field](#Sampling-a-Field-with-Particles) such as temperature or sea surface height. And finally, we will show how to [write a kernel that tracks the distance travelled by the particles](#A-second-example-kernel:-calculating-distance-travelled).
Let's start with importing the relevant modules. The key ones are all in the `parcels` package.
```
%matplotlib inline
from parcels import FieldSet, ParticleSet, Variable, JITParticle, AdvectionRK4, plotTrajectoriesFile
import numpy as np
import math
from datetime import timedelta
from operator import attrgetter
```
## Running particles in an idealised field
The first step to running particles with Parcels is to define a `FieldSet` object, which is simply a collection of hydrodynamic fields. In this first case, we use a simple flow of two idealised moving eddies. That field is saved in NetCDF format in the directory `examples/MovingEddies_data`. Since we know that the files are in what's called Parcels FieldSet format, we can call these files using the function `FieldSet.from_parcels()`.
```
fieldset = FieldSet.from_parcels("MovingEddies_data/moving_eddies")
```
The `fieldset` can then be visualised with the `show()` function. To show the zonal velocity (`U`), give the following command
```
fieldset.U.show()
```
The next step is to define a `ParticleSet`. In this case, we start 2 particles at locations (330km, 100km) and (330km, 280km) using the `from_list` constructor method, that are advected on the `fieldset` we defined above. Note that we use `JITParticle` as `pclass`, because we will be executing the advection in JIT (Just-In-Time) mode. The alternative is to run in `scipy` mode, in which case `pclass` is `ScipyParticle`
```
pset = ParticleSet.from_list(fieldset=fieldset, # the fields on which the particles are advected
pclass=JITParticle, # the type of particles (JITParticle or ScipyParticle)
lon=[3.3e5, 3.3e5], # a vector of release longitudes
lat=[1e5, 2.8e5]) # a vector of release latitudes
```
Print the `ParticleSet` to see where they start
```
print(pset)
```
This output shows for each particle the (longitude, latitude, depth, time). Note that in this case the time is `not_yet_set`, that is because we didn't specify a `time` when we defined the `pset`.
To plot the positions of these particles on the zonal velocity, use the following command
```
pset.show(field=fieldset.U)
```
The final step is to run (or 'execute') the `ParticelSet`. We run the particles using the `AdvectionRK4` kernel, which is a 4th order Runge-Kutte implementation that comes with Parcels. We run the particles for 6 days (using the `timedelta` function from `datetime`), at an RK4 timestep of 5 minutes. We store the trajectory information at an interval of 1 hour in a file called `EddyParticles.nc`. Because `time` was `not_yet_set`, the particles will be advected from the first date available in the `fieldset`, which is the default behaviour.
```
output_file = pset.ParticleFile(name="EddyParticles.nc", outputdt=timedelta(hours=1)) # the file name and the time step of the outputs
pset.execute(AdvectionRK4, # the kernel (which defines how particles move)
runtime=timedelta(days=6), # the total length of the run
dt=timedelta(minutes=5), # the timestep of the kernel
output_file=output_file)
```
The code should have run, which can be confirmed by printing and plotting the `ParticleSet` again
```
print(pset)
pset.show(field=fieldset.U)
```
Note that both the particles (the black dots) and the `U` field have moved in the plot above. Also, the `time` of the particles is now 518400 seconds, which is 6 days.
The trajectory information of the particles can be written to the `EddyParticles.nc` file by using the `.export()` method on the output file. The trajectory can then be quickly plotted using the `plotTrajectoriesFile` function.
```
output_file.export()
plotTrajectoriesFile('EddyParticles.nc');
```
The `plotTrajectoriesFile` function can also be used to show the trajectories as an animation, by specifying that it has to run in `movie2d_notebook` mode. If we pass this to our function above, we can watch the particles go!
```
plotTrajectoriesFile('EddyParticles.nc', mode='movie2d_notebook')
```
The `plotTrajectoriesFile` can also be used to display 2-dimensional histograms (`mode=hist2d`) of the number of particle observations per bin. Use the `bins` argument to control the number of bins in the longitude and latitude direction. See also the [matplotlib.hist2d](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist2d.html) page.
```
plotTrajectoriesFile('EddyParticles.nc', mode='hist2d', bins=[30, 20]);
```
Now one of the neat features of Parcels is that the particles can be plotted as a movie *during execution*, which is great for debugging. To rerun the particles while plotting them on top of the zonal velocity field (`fieldset.U`), first reinitialise the `ParticleSet` and then re-execute. However, now rather than saving the output to a file, display a movie using the `moviedt` display frequency, in this case with the zonal velocity `fieldset.U` as background
```
# THIS DOES NOT WORK IN THIS IPYTHON NOTEBOOK, BECAUSE OF THE INLINE PLOTTING.
# THE 'SHOW_MOVIE' KEYWORD WILL WORK ON MOST MACHINES, THOUGH
# pset = ParticleSet(fieldset=fieldset, size=2, pclass=JITParticle, lon=[3.3e5, 3.3e5], lat=[1e5, 2.8e5])
# pset.execute(AdvectionRK4,
# runtime=timedelta(days=6),
# dt=timedelta(minutes=5),
# moviedt=timedelta(hours=1),
# movie_background_field=fieldset.U)
```
## Running particles in backward time
Running particles in backward time is extremely simple: just provide a `dt` < 0.
```
output_file = pset.ParticleFile(name="EddyParticles_Bwd.nc", outputdt=timedelta(hours=1)) # the file name and the time step of the outputs
pset.execute(AdvectionRK4,
dt=-timedelta(minutes=5), # negative timestep for backward run
runtime=timedelta(days=6), # the run time
output_file=output_file)
```
Now print the particles again, and see that they (except for some round-off errors) returned to their original position
```
print(pset)
pset.show(field=fieldset.U)
```
## Adding a custom behaviour kernel
A key feature of Parcels is the ability to quickly create very simple kernels, and add them to the execution. Kernels are little snippets of code that are run during exection of the particles.
In this example, we'll create a simple kernel where particles obtain an extra 2 m/s westward velocity after 1 day. Of course, this is not very realistic scenario, but it nicely illustrates the power of custom kernels.
```
def WestVel(particle, fieldset, time):
if time > 86400:
uvel = -2.
particle.lon += uvel * particle.dt
```
Now reset the `ParticleSet` again, and re-execute. Note that we have now changed `kernel` to be `AdvectionRK4 + k_WestVel`, where `k_WestVel` is the `WestVel` function as defined above cast into a `Kernel` object (via the `pset.Kernel` call).
```
pset = ParticleSet.from_list(fieldset=fieldset, pclass=JITParticle, lon=[3.3e5, 3.3e5], lat=[1e5, 2.8e5])
k_WestVel = pset.Kernel(WestVel) # casting the WestVel function to a kernel object
output_file = pset.ParticleFile(name="EddyParticles_WestVel.nc", outputdt=timedelta(hours=1))
pset.execute(AdvectionRK4 + k_WestVel, # simply add kernels using the + operator
runtime=timedelta(days=2),
dt=timedelta(minutes=5),
output_file=output_file)
```
And now plot this new trajectory file
```
output_file.export()
plotTrajectoriesFile('EddyParticles_WestVel.nc');
```
## Reading in data from arbritrary NetCDF files
In most cases, you will want to advect particles within pre-computed velocity fields. If these velocity fields are stored in NetCDF format, it is fairly easy to load them into the `FieldSet.from_netcdf()` function.
The `examples` directory contains a set of [GlobCurrent](http://globcurrent.ifremer.fr/products-data/products-overview) files of the region around South Africa.
First, define the names of the files containing the zonal (U) and meridional (V) velocities. You can use wildcards (`*`) and the filenames for U and V can be the same (as in this case)
```
filenames = {'U': "GlobCurrent_example_data/20*.nc",
'V': "GlobCurrent_example_data/20*.nc"}
```
Then, define a dictionary of the variables (`U` and `V`) and dimensions (`lon`, `lat` and `time`; note that in this case there is no `depth` because the GlobCurrent data is only for the surface of the ocean)
```
variables = {'U': 'eastward_eulerian_current_velocity',
'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat',
'lon': 'lon',
'time': 'time'}
```
Finally, read in the fieldset using the `FieldSet.from_netcdf` function with the above-defined `filenames`, `variables` and `dimensions`
```
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
```
Now define a `ParticleSet`, in this case with 5 particle starting on a line between (28E, 33S) and (30E, 33S) using the `ParticleSet.from_line` constructor method
```
pset = ParticleSet.from_line(fieldset=fieldset, pclass=JITParticle,
size=5, # releasing 5 particles
start=(28, -33), # releasing on a line: the start longitude and latitude
finish=(30, -33)) # releasing on a line: the end longitude and latitude
```
And finally execute the `ParticleSet` for 10 days using 4th order Runge-Kutta
```
output_file = pset.ParticleFile(name="GlobCurrentParticles.nc", outputdt=timedelta(hours=6))
pset.execute(AdvectionRK4,
runtime=timedelta(days=10),
dt=timedelta(minutes=5),
output_file=output_file)
```
Now visualise this simulation using the `plotParticles` script again. Note you can plot the particles on top of one of the velocity fields using the `tracerfile`, `tracerfield`, etc keywords.
```
output_file.export()
plotTrajectoriesFile('GlobCurrentParticles.nc',
tracerfile='GlobCurrent_example_data/20020101000000-GLOBCURRENT-L4-CUReul_hs-ALT_SUM-v02.0-fv01.0.nc',
tracerlon='lon',
tracerlat='lat',
tracerfield='eastward_eulerian_current_velocity');
```
## Sampling a Field with Particles
One typical use case of particle simulations is to sample a Field (such as temperature, vorticity or sea surface hight) along a particle trajectory. In Parcels, this is very easy to do, with a custom Kernel.
Let's read in another example, the flow around a Peninsula (see [Fig 2.2.3 in this document](http://archimer.ifremer.fr/doc/00157/26792/24888.pdf)), and this time also load the Pressure (`P`) field, using `extra_fields={'P': 'P'}`. Note that, because this flow does not depend on time, we need to set `allow_time_extrapolation=True` when reading in the fieldset.
```
fieldset = FieldSet.from_parcels("Peninsula_data/peninsula", extra_fields={'P': 'P'}, allow_time_extrapolation=True)
```
Now define a new `Particle` class that has an extra `Variable`: the pressure. We initialise this by sampling the `fieldset.P` field.
```
class SampleParticle(JITParticle): # Define a new particle class
p = Variable('p', initial=fieldset.P) # Variable 'p' initialised by sampling the pressure
```
Now define a `ParticleSet` using the `from_line` method also used above in the GlobCurrent data. Plot the `pset` and print their pressure values `p`
```
pset = ParticleSet.from_line(fieldset=fieldset, pclass=SampleParticle,
start=(3000, 3000), finish=(3000, 46000), size=5, time=0)
pset.show(field='vector')
print('p values before execution:', [p.p for p in pset])
```
Now create a custom function that samples the `fieldset.P` field at the particle location. Cast this function to a `Kernel`.
```
def SampleP(particle, fieldset, time): # Custom function that samples fieldset.P at particle location
particle.p = fieldset.P[time, particle.depth, particle.lat, particle.lon]
k_sample = pset.Kernel(SampleP) # Casting the SampleP function to a kernel.
```
Finally, execute the `pset` with a combination of the `AdvectionRK4` and `SampleP` kernels, plot the `pset` and print their new pressure values `p`
```
pset.execute(AdvectionRK4 + k_sample, # Add kernels using the + operator.
runtime=timedelta(hours=20),
dt=timedelta(minutes=5))
pset.show(field=fieldset.P, show_time=0)
print('p values after execution:', [p.p for p in pset])
```
And see that these pressure values `p` are (within roundoff errors) the same as the pressure values before the execution of the kernels. The particles thus stay on isobars!
## Calculating distance travelled
As a second example of what custom kernels can do, we will now show how to create a kernel that logs the total distance that particles have travelled.
First, we need to create a new `Particle` class that includes three extra variables. The `distance` variable will be written to output, but the auxiliary variables `prev_lon` and `prev_lat` won't be written to output (can be controlled using the `to_write` keyword)
```
class DistParticle(JITParticle): # Define a new particle class that contains three extra variables
distance = Variable('distance', initial=0., dtype=np.float32) # the distance travelled
prev_lon = Variable('prev_lon', dtype=np.float32, to_write=False,
initial=attrgetter('lon')) # the previous longitude
prev_lat = Variable('prev_lat', dtype=np.float32, to_write=False,
initial=attrgetter('lat')) # the previous latitude.
```
Now define a new function `TotalDistance` that calculates the sum of Euclidean distances between the old and new locations in each RK4 step
```
def TotalDistance(particle, fieldset, time):
# Calculate the distance in latitudinal direction (using 1.11e2 kilometer per degree latitude)
lat_dist = (particle.lat - particle.prev_lat) * 1.11e2
# Calculate the distance in longitudinal direction, using cosine(latitude) - spherical earth
lon_dist = (particle.lon - particle.prev_lon) * 1.11e2 * math.cos(particle.lat * math.pi / 180)
# Calculate the total Euclidean distance travelled by the particle
particle.distance += math.sqrt(math.pow(lon_dist, 2) + math.pow(lat_dist, 2))
particle.prev_lon = particle.lon # Set the stored values for next iteration.
particle.prev_lat = particle.lat
```
*Note:* here it is assumed that the latitude and longitude are measured in degrees North and East, respectively. However, some datasets (e.g. the `MovingEddies` used above) give them measured in (kilo)meters, in which case we must *not* include the factor `1.11e2`.
We will run the `TotalDistance` function on a `ParticleSet` containing the five particles within the `GlobCurrent` fieldset from above. Note that `pclass=DistParticle` in this case.
```
filenames = {'U': "GlobCurrent_example_data/20*.nc",
'V': "GlobCurrent_example_data/20*.nc"}
variables = {'U': 'eastward_eulerian_current_velocity',
'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat',
'lon': 'lon',
'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
pset = ParticleSet.from_line(fieldset=fieldset,
pclass=DistParticle,
size=5, start=(28, -33), finish=(30, -33))
```
Again define a new kernel to include the function written above and execute the `ParticleSet`.
```
k_dist = pset.Kernel(TotalDistance) # Casting the TotalDistance function to a kernel.
pset.execute(AdvectionRK4 + k_dist, # Add kernels using the + operator.
runtime=timedelta(days=6),
dt=timedelta(minutes=5),
output_file=pset.ParticleFile(name="GlobCurrentParticles_Dist.nc", outputdt=timedelta(hours=1)))
```
And finally print the distance in km that each particle has travelled (note that this is also stored in the `EddyParticles_Dist.nc` file)
```
print([p.distance for p in pset]) #the distances in km travelled by the particles
```
| true |
code
| 0.560974 | null | null | null | null |
|
IN DEVELOPMENT
# Part 2: Training an RBM *with* a phase
## Getting Started
The following imports are needed to run this tutorial.
```
from rbm_tutorial import RBM_Module, ComplexRBM
import torch
import cplx
import unitary_library
import numpy as np
import csv
```
*rbm_tutorial.py* contains the child class **ComplexRBM** that inherits properties and functions from the parent class **RBM_Module**.
Pytorch (torch) is used as a replacement for doing some algebra that would normally be done with numpy. Pytorch also allows one to take advantage of GPU acceleration among many other things. Don't worry if you don't have a GPU on your machine; the tutorial will run in no time on a CPU.
One downfall of pytorch is that it currently does not have complex number support, so we have written our own complex algebra library (cplx.py). For more information on this library's contents, please refer to [here](../cplx.rst). We hope that pytorch will implement complex numbers soon!
*unitary_library* is a package that will create a dictionary of the unitaries needed in order to train a ComplexRBM object (more later).
## Training
Let's go through training a complex wavefunction. To evaluate how the RBM is training, we will compute the fidelity between the true wavefunction of the system and the wavefunction the RBM reconstructs. We first need to load our training data and the true wavefunction of this system. However, we also need the corresponding file that contains all of the measurements that each site is in. The dummy dataset we will train our RBM on is a two qubit system who's wavefunction is $\psi =\left.\frac{1}{2}\right\vert+,+\rangle - \left.\frac{1}{2}\right\vert+,-\rangle + \left.\frac{i}{2}\right\vert-,+\rangle - \left.\frac{i}{2}\right\vert-,-\rangle$, where $+$ and $-$ represent spin-up and spin-down, respectively.
```
train_set2 = np.loadtxt('2qubits_train_samples.txt', dtype= 'float32')
psi_file = np.loadtxt('2qubits_psi.txt')
true_psi2 = torch.tensor([psi_file[:,0], psi_file[:,1]], dtype = torch.double)
bases = np.loadtxt('2qubits_train_bases.txt', dtype = str)
```
The following arguments are required to construct a **ComplexRBM** object.
1. **A dictionary containing 2x2 unitaries, unitaries**. We will create this dictionary in the next block with the hand of the module we imported called *unitary_library*.
2. **The number of visible units, num_visible**. This is 2 for the case of our dataset.
3. **The number of hidden units in the amplitude hidden layer of the RBM, num_hidden_amp**. It's recommended that the number of hidden units stay equal to the number of visible units (2 in the case of our dummy dataset).
4. **The number of hidden units in the phase hidden layer of the RBM, num_hidden_amp**. It's recommended that the number of hidden units stay equal to the number of visible units (2 in the case of our dummy dataset).
```
unitaries = unitary_library.create_dict()
'''If you would like to add your own quantum gates from your experiment to "unitaries", do:
unitaries = unitary_library.create_dict(name='your_name',
unitary=torch.tensor([[real part], [imaginary part]], dtype=torch.double)
For example:
unitaries = unitary_library.create_dict(name='qucumber', unitary=torch.tensor([ [[1.,0.],[0.,1.]]
[[0.,0.],[0.,0.]] ], dtype=torch.double))
By default, unitary_library.create_dict() contains the hadamard and K gates with keys X and Y, respectively.'''
num_visible = train_set2.shape[-1] # 2
num_hidden_amp = train_set2.shape[-1] # 2
num_hidden_phase = train_set2.shape[-1] # 2
```
A **ComplexRBM** object has a function called *fit* that performs the training. *fit* takes the following arguments.
1. ***train_set***. Needed for selecting mini batches of the data.
2. ***bases***. Needed for calculating gradients (performing the correct rotations).
2. ***true_psi***. Only needed here to compute the fidelity.
3. **The number of epochs, *epochs***. The number of training cycles that will be performed. 15 should be fine.
4. **The mini batch size, *batch_size***. The number of data points that each mini batch will contain. We'll go with 10.
5. **The number of contrastive divergence steps, *k***. One contrastive divergence step seems to be good enough in most cases.
6. **The learning rate, *lr***. We will use a learning rate of 0.01 here.
7. **How often you would like the program to update you during training, *log_every***. Every 10 epochs the program will print out the fidelity.
```
epochs = 15
batch_size = 10
k = 1
lr = 0.01
log_every = 5
rbm_complex = ComplexRBM(num_visible, num_hidden_amp, num_hidden_phase)
rbm_complex.fit(train_set2, bases, true_psi2, unitaries, epochs, batch_size, k, lr, log_every)
```
### After Training
After training your RBM, the *fit* function will have saved your trained weights and biases for the amplitude and the phase. Now, you have the option to generate new data from the trained RBM. The *rbm_real* object has a *sample* function that takes the following arguments.
1. The number of samples you wish to generate, *num_samples*.
2. The number of contrastive divergence steps performed to generate the samples, *k*.
```
num_samples = 100
k = 10
samples = rbm_complex.sample(num_samples, k)
```
You will now find the *generated_samples_complexRBM.pkl* file in your directory that contains your new samples.
| true |
code
| 0.641675 | null | null | null | null |
|
[View in Colaboratory](https://colab.research.google.com/github/adowaconan/Deep_learning_fMRI/blob/master/3_1_some_concepts_of_CNN.ipynb)
# Reference
## [How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native](https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3)
## [Convolutional Neural Networks - Basics](https://mlnotebook.github.io/post/CNN1/)
## [reference within reference](https://github.com/rcmalli/keras-squeezenet/blob/master/examples/example_keras_squeezenet.ipynb)
## [Understanding Activation Functions in Neural Networks](https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0)
## [Activation Functions: Neural Networks](https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6)
# Kernels
## come matrices that could only do addition and multiplication
## feature extracters
## filter
```
from IPython.display import Image
Image(url='https://mlnotebook.github.io/img/CNN/convSobel.gif')
```
The kernel shown above is a kernel with size (3,3) and stride of (1,1), no activation introduced
# Activation functions
```
print('step function')
Image(url='https://cdn-images-1.medium.com/max/800/0*8U8_aa9hMsGmzMY2.')
print('linear function')
Image(url='https://cdn-images-1.medium.com/max/800/1*tldIgyDQWqm-sMwP7m3Bww.png')
print('sigmoid function')
Image(url='https://cdn-images-1.medium.com/max/800/0*5euYS7InCmDP08ir.')
print('tanh function')
Image(url='https://cdn-images-1.medium.com/max/800/0*YJ27cYXmTAUFZc9Z.')
print('ratified function')
Image(url='https://cdn-images-1.medium.com/max/800/0*vGJq0cIuvTB9dvf5.')
print('Leaky ratified function')
Image(url='https://cdn-images-1.medium.com/max/800/1*A_Bzn0CjUgOXtPCJKnKLqA.jpeg')
```
Softmax is different...
# pooling
## [What is wrong with Convolutional neural networks ?](https://towardsdatascience.com/what-is-wrong-with-convolutional-neural-networks-75c2ba8fbd6f)
```
Image(url='https://cdn-images-1.medium.com/max/800/1*lbUtgiANqLoO1GFSc9pHTg.gif')
Image(url='https://cdn-images-1.medium.com/max/800/1*wsf4tsOH77T1lpylPUIhbA.png')
```
# Dropout
## [Dropout in (Deep) Machine learning](https://medium.com/@amarbudhiraja/https-medium-com-amarbudhiraja-learning-less-to-learn-better-dropout-in-deep-machine-learning-74334da4bfc5)
```
Image(url='https://cdn-images-1.medium.com/max/800/1*iWQzxhVlvadk6VAJjsgXgg.png')
```
# Batch Normalization
## [Batch normalization in Neural Networks](https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c)
## [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167)
### Batch normalization reduces the amount by what the hidden unit values shift around (covariance shift)
### batch normalization allows each layer of a network to learn by itself a little bit more independently of other layers
#### VGG nets don't have batch normalization layers, but they still work well. They don't have one is because batch normalization was not invented yet.
#### Pytorch VGG net is trained with batch normalization while tensorflow VGG net is not.
# Here is the reason we don't want to train proposed deep neural nets from scratch
# Let's compare these models:
1. Xception net
2. VGG19
3. ResNet50
4. Inception V3
5. InceptionResNet V2
6. MobileNet
7. DenseNet
8. NASNet
```
import pandas as pd
df = {}
df['Model']=['Xception','VGG16','VGG19','ResNet50','InceptionV3','InceptionResNetV2',
'MobileNet','DenseNet121','DenseNet169','DenseNet201']
df['Size']=[88,528,549,99,92,215,17,33,57,80]
df['Top1 Accuracy']=[.79,.715,.727,.759,.788,.804,.665,.745,.759,.77]
df['Top5 Accuracy']=[.945,.901,.910,.929,.944,.953,.871,.918,.928,.933]
df['Parameters']=[22910480,138357544,143667240,25636712,23851784,55873736,4253864,
8062504,14307880,20242984]
df['Depth']=[126,23,26,168,159,572,88,121,169,201]
df['min input size']=['150x150','48x48','48x48','197x197','139x139','139x139',
'32x32','?','?','?']
df = pd.DataFrame(df)
df[['Model','Size','Top1 Accuracy','Top5 Accuracy','Parameters','Depth','min input size']]
```
# With trasfer learning, building a model ontop of VGG19, we only need to train 1026 parameters with 128x128x3 (number of featuers = 49152) pixel values of each image
```
from keras.applications import VGG19
model_vgg19 = VGG19(include_top=False, # do not include the classifier
weights='imagenet', # get the pretrained weights
input_tensor=None, # don't know what this is
input_shape=(128,128,3), # decide the input shape
pooling='avg', # use global average for the pooling
classes=1000)# doesen't matter
from keras.models import Model
from keras.layers import Dense,Dropout
from keras import optimizers,metrics,losses
for i,layer in enumerate(model_vgg19.layers[:-2]):
layer.trainable = False
Encoder = model_vgg19.output
Encoder = Dropout(0.5)(Encoder)
output = Dense(2,activation='softmax',)(Encoder)
model = Model(model_vgg19.input,output)
model.compile(optimizer=optimizers.adam(),
loss=losses.binary_crossentropy,
metrics=['acc'])
model.summary()
```
# Image argumentation
## flip the images
## stratch the images
## rotate the images
## shear the images
## take out some pixels
## and so on ...
```
Image(url='https://cdn-images-1.medium.com/max/800/1*RVV70qYkWJ1Uw8hvALjV4A.png')
```
| true |
code
| 0.661896 | null | null | null | null |
|
# Nutria
In this Notebook we'll consider the population growth of the Nutria species. The data has been taken from .. . We'll begin importing the data and visualizing it.
```
import pandas as pd
from pyfilter import __version__
print(__version__)
data = pd.read_csv("nutria.txt", sep='\t').iloc[:, 0].rename("nutria")
data.plot(figsize=(16, 9), title="Nutria population")
```
Next, we'll specify the model to use for inference. We'll use the flexible Allee model, found in .. . However, instead of considering the actual population, we'll use the logarithm.
```
from pyfilter.timeseries import LinearGaussianObservations, AffineProcess
from torch.distributions import Normal, Gamma, TransformedDistribution, AffineTransform, PowerTransform
import torch
from pyfilter.distributions import Prior, DistributionWrapper
def f(x, a, b, c, d):
exped = x.values.exp()
return x.values + a + b * exped + c * exped ** 2
def g(x, a, b, c, d):
return d.sqrt()
def build_invgamma(concentration, rate, power, **kwargs):
return TransformedDistribution(Gamma(alpha, rate, **kwargs), PowerTransform(power))
alpha = data.shape[0] / 2
beta = 2 * (alpha - 1) / 10
invgamma_prior = Prior(
build_invgamma,
concentration=alpha,
rate=beta,
power=-1.0
)
norm_prior = Prior(Normal, loc=0.0, scale=1.0)
h_priors = norm_prior, norm_prior, norm_prior, invgamma_prior
dist = DistributionWrapper(Normal, loc=0.0, scale=1.0)
hidden = AffineProcess((f, g), h_priors, dist, dist)
model = LinearGaussianObservations(hidden, 1., invgamma_prior)
```
Next, we'll use SMC2 together with APF to perform inference on the logged dataset.
```
from pyfilter.inference.sequential import SMC2
from pyfilter.filters.particle import APF, proposals as p
import numpy as np
logged_data = torch.from_numpy(data.values).float().log()
algs = list()
for i in range(2):
filt = APF(model.copy(), 250)
alg = SMC2(filt, 1000, n_steps=5).cuda()
state = alg.fit(logged_data)
algs.append((state, alg))
```
Next, let's visualize the filtered means of the state.
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(16, 9))
data.plot(ax=ax)
for state, _ in algs:
ax.plot(state.filter_state.filter_means.mean(dim=1)[1:].exp().cpu().numpy(), label="Filtered")
ax.legend()
```
Next, let's visualize the posterior distributions of the parameters.
```
import pandas as pd
from arviz import plot_posterior
fig, ax = plt.subplots(5, figsize=(16, 9))
colors = ["gray", "salmon"]
names = "a, b, c, d, \sigma".split(", ")
for j, (state, alg) in enumerate(algs):
w = state.normalized_weights()
for i, param in enumerate(alg.filter.ssm.parameters()):
plot_posterior(param.squeeze().cpu().numpy(), ax=ax[i], color=colors[j], point_estimate=None, hdi_prob='hide')
ax[i].set_title(f"${names[i]}$")
plt.tight_layout()
```
| true |
code
| 0.646962 | null | null | null | null |
|
```
import rasterio as rio
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
from matplotlib_scalebar.scalebar import ScaleBar
```
# Sensitivity of ASO Snow cover mask to snow depth threshold
David Shean
May 2, 2020
_(modified by Tony Cannistra, Jan 30, 2021)_
**Purpose**: Examine the choice of snow depth threshold used to binarize 3 m ASO snow depth data.
```
aso_sd_fn = '/Volumes/wrangell-st-elias/research/planet/ASO_3M_SD_USCATE_20180528.tif'
aso_sd_ds = rio.open(aso_sd_fn)
aso_sd = aso_sd_ds.read(1, masked=True)
def imshow_stretch(ax,a,clim=None,perc=(2,98),sym=False,cmap='inferno',dx=aso_sd_ds.res[0],cbar=True):
if sym:
cmap = 'RdBu'
if clim is None:
vmin,vmax = np.percentile(a.compressed(),perc)
#vmin,vmax = np.percentile(a,perc)
if sym:
vmax = np.max(np.abs([vmin,vmax]))
vmin = -vmax
clim = (vmin, vmax)
m = ax.imshow(a, vmin=clim[0], vmax=clim[1], cmap=cmap, interpolation='None')
ax.add_artist(ScaleBar(dx))
cbar_obj=None
if cbar:
cbar_obj = plt.colorbar(m, ax=ax)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_facecolor('0.5')
return clim, cbar_obj
f, ax = plt.subplots(figsize=(10,10))
clim, cbar = imshow_stretch(ax, aso_sd)
plt.title("ASO 3m Snow Depth")
cbar.set_label("Snow Depth (m)")
```
## General Statistics
```
display.Markdown(f"**Number of snow depth pixels:** {aso_sd.count():.1E}")
display.Markdown(f"**Maxiumum snow depth**: {aso_sd.max():.2f}m")
# 1 cm snow depth bins from 1 cm to 300 cm
bins = np.arange(0.01,3.01,0.01)
f,ax = plt.subplots(dpi=150)
ax.hist(aso_sd.compressed(), bins=bins)
ax.set_xlabel('Snow Depth (m)')
ax.set_ylabel('Bin count (px)')
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
```
## Examine the effect of multiple thresholds on binary snow pixel assignment
```
# Possible thresholds
# 1 cm to 20 cm in 1 cm increments
sd_thresh_list = np.arange(0.01, 0.21, 0.01)
sd_thresh_list
# count the number of pixels >= each threshold
snow_mask_count_list = []
for sd_thresh in sd_thresh_list:
print(sd_thresh)
snow_mask = aso_sd >= sd_thresh
snow_mask_count = snow_mask.sum()
snow_mask_count_list.append(snow_mask_count)
f, ax = plt.subplots(dpi=150)
ax.plot(sd_thresh_list*100, snow_mask_count_list)
ax.set_ylabel('Snowcover pixel count')
ax.set_xlabel('Snow Depth Threshold (cm)')
ax.set_title("Effect of SD Thresh on Pixel Count")
ax.axvline(10.0, linestyle=':', linewidth=1, color='red', label='10 cm Threshold')
plt.legend()
snow_mask_area_list = (np.array(snow_mask_count_list) * aso_sd_ds.res[0] * aso_sd_ds.res[1])/1E6
f, ax = plt.subplots(dpi=150)
ax.plot(sd_thresh_list*100, snow_mask_area_list, linewidth=1, color='grey')
ax.set_ylabel('Snowcover Area (km$^2$)')
ax.set_xlabel('Snow Depth Threshold (cm)')
ax.set_title("Effect of SD Threshold on SCA")
#10cm vs 8cm
cm10 = 0.10
cm8 =0.08
cm10_sca = snow_mask_area_list[np.where(np.isclose(sd_thresh_list,cm10))][0]
cm8_sca = snow_mask_area_list[np.where(np.isclose(sd_thresh_list,cm8))][0]
ax.hlines(cm8_sca, cm8*100, cm10*100, linestyle='--', linewidth=1)
ax.vlines(cm10*100, cm8_sca, cm10_sca, linestyle='--', linewidth=1, label='Differences')
bottom = min(snow_mask_area_list)
ax.vlines(cm10*100, bottom, cm10_sca, linestyle=':', linewidth=1, color='red', label=f'{cm10*100:0.0f} cm Threshold')
ax.vlines(cm8*100, bottom, cm8_sca, linestyle=':', linewidth=1, color='green', label=f'{cm8*100:0.0f} cm Threshold')
ax.annotate(f"{(cm8_sca - cm10_sca):.2f} km$^2$ Difference", (cm10*100, cm10_sca + (cm8_sca - cm10_sca)/2),
(cm10*100 + 2.2, cm10_sca + (cm8_sca - cm10_sca)/2), xycoords='data',
ha="left", va="center",
size=10,
arrowprops=dict(arrowstyle='-[',
shrinkA=5,
shrinkB=5,
fc="k", ec="k",
),
bbox=dict(boxstyle="square", fc="w"))
ax.annotate('ASO_3M_SD_USCATE_20180528.tif', (0.02, 0.02), xycoords='axes fraction', color='grey', size=6)
plt.legend()
```
## Visual + Quantitative Comparison of Specific Thresholds
1cm, 10cm and 20cm.
```
snow_mask_01cm = aso_sd >= 0.01
snow_mask_08cm = aso_sd >= 0.08
snow_mask_10cm = aso_sd >= 0.10
snow_mask_20cm = aso_sd >= 0.20
f, axa = plt.subplots(1,3, figsize=(14,8))
imshow_stretch(axa[0], snow_mask_01cm, cbar=False)
imshow_stretch(axa[1], snow_mask_10cm, cbar=False)
imshow_stretch(axa[2], snow_mask_20cm, cbar=False)
axa[0].set_title('SD Thresh %0.2f m' % 0.01);
axa[1].set_title('SD Thresh %0.2f m' % 0.10);
axa[2].set_title('SD Thresh %0.2f m' % 0.20);
def snowmask_comparison(a,b):
#All valid pixels
a_all_count = a.count()
print(a_all_count, "all pixel count in a")
#All valid snow pixels in a
a_snow_count = a.sum()
print(a_snow_count, "snow pixel count in a")
a_snow_count_perc = 100*a_snow_count/a_all_count
print("%0.2f%% snow pixel count in a" % a_snow_count_perc)
#All valid snow pixels in b
b_snow_count = b.sum()
print(b_snow_count, "snow pixel count in b")
b_snow_count_perc = 100*b_snow_count/a_all_count
print("%0.2f%% snow pixel count in b" % b_snow_count_perc)
ab_snow_count_diff = np.abs(a_snow_count - b_snow_count)
print(ab_snow_count_diff, "snow pixel count difference between a and b")
ab_snow_count_diff_perc = 100*ab_snow_count_diff/np.mean([a_snow_count,b_snow_count])
#ab_snow_count_diff_perc = 100*ab_snow_count_diff/a_snow_count
print("%0.2f%% snow pixel count percent difference between a and b" % ab_snow_count_diff_perc)
#Boolean disagreement for snow
ab_snow_disagree = ~(a == b)
#Count of snow pixels that agree
#print(ab_snow_disagree.sum())
return ab_snow_disagree
```
### Percentage Difference in SCA Between Specific Thresholds
**1cm and 10cm**:
```
snow_mask_01cm_10cm = snowmask_comparison(snow_mask_01cm, snow_mask_10cm)
display.Markdown(f"_Area Difference: {(1982066 * 3.0 * 3.0) / 1E6:.2f} km$^2$_")
```
**8 cm and 10 cm**
```
snow_mask_01cm_10cm = snowmask_comparison(snow_mask_08cm, snow_mask_10cm)
display.Markdown(f"_Area Difference: {(491801 * 3.0 * 3.0) / 1E6:.2f} km$^2$_")
```
**10 cm and 20cm**
```
snow_mask_10cm_20cm = snowmask_comparison(snow_mask_10cm, snow_mask_20cm)
display.Markdown(f"_Area Difference: {(2678151 * 3.0 * 3.0) / 1E6:.2f} km$^2$_")
```
**1 cm and 20 cm**
```
snow_mask_01cm_20cm = snowmask_comparison(snow_mask_01cm, snow_mask_20cm)
display.Markdown(f"_Area Difference: {(4660217 * 3.0 * 3.0) / 1E6:.2f} km$^2$_")
```
### Visualization of SCA differences with 3 thresholds
```
f,axa = plt.subplots(1,3, figsize=(16,10), sharex=True, sharey=True)
imshow_stretch(axa[0], snow_mask_01cm_10cm, clim=(0,1), cbar=False)
axa[0].set_title('1 cm vs. 10 cm')
imshow_stretch(axa[1], snow_mask_10cm_20cm, clim=(0,1), cbar=False)
axa[1].set_title('10 cm vs. 20 cm')
imshow_stretch(axa[2], snow_mask_01cm_20cm, clim=(0,1), cbar=False)
axa[2].set_title('1 cm vs. 20 cm')
```
## Analysis
We assessed a broad range of thresholds to determine the sensitivity of ASO-derived snow covered area to choice of threshold. [Raleigh and Small, 2017)][rs] suggest a range of vertical accuracy of lidar-based snow depth measurements between 2-30 cm, so we chose to evaluate a subset ranging from 1 cm to 20 cm.
We observed a **9.28% difference in SCA** across the widest range of thresholds (e.g. comparing 1 cm to 20 cm). This amounts to a difference of $4660217$ pixels, which represents $\mathrm{41.9 km}^2$.
We also evaluated several specific thresholds and their relationships to one another. [Painter et al., 2016][painter16] suggest an 8 cm vertical accuracy for Airborne Snow Observatory-derived snow depth measurements, based on an assessment of open (e.g. unforested) terrain without topographic complexity. [Currier et al., 2016], when comparing ALS snow depth to both terrestrial lidar and ground-based snow probe surveys at open and forested sites, observed a range of vertical RMSD values between 8 cm and 16 cm.
Our assessment of this literature motivated the comparison of **2 cm, 8 cm, 10 cm, and 20 cm** thresholds. Focusing our attention on the center of the known vertical accuracy range, we found small differences in SCA between 8 cm and 10 cm (**0.97% SCA difference, 4.43 km$^2$**).
Taking into account the homogeneous nature of the Tuolumne watershed studied here, particularly with regard to topogrpahic complexity and forested regions, we believe a 10 cm is the threshold value that takes into account these sources of uncertainty present in this watershed while still being representative of our current understanding of ALS vertical accuracy.
[rs]: https://doi.org/10.1002/2016GL071999
[painter16]: https://doi.org/10.1016/j.rse.2016.06.018
[currier]: https://doi.org/10.1029/2018WR024533
| true |
code
| 0.542682 | null | null | null | null |
|
# Collect NISMOD2 results for NIC resilience - demand scenarios
- water demand
- energy demand
- transport OD matrix, trip distribution, energy consumption
```
import glob
import os
import re
from datetime import datetime, timedelta
import pandas
import geopandas
from pandas.api.types import CategoricalDtype
from tqdm.notebook import tqdm
```
## Water demand
```
water_demand_files = glob.glob("../results/nic_w*/water_demand/decision_0/*.csv")
dfs = []
for fn in water_demand_files:
demand_scenario = re.search("__(\w+)", fn).group(1)
year = re.search("2\d+", fn).group(0)
df = pandas.read_csv(fn, dtype={
'water_resource_zones': 'category'
})
df['timestep'] = int(year)
df.timestep = df.timestep.astype('int16')
df['demand_scenario'] = demand_scenario
df.demand_scenario = df.demand_scenario.astype(CategoricalDtype(['BL', 'FP']))
dfs.append(df)
water_demand = pandas.concat(dfs)
del dfs
water_demand.head()
water_demand.dtypes
water_demand.to_parquet('nic_water_demand.parquet')
```
## Energy demand
```
energy_demand_files = glob.glob("../results/nic_ed_unconstrained/energy_demand_unconstrained/decision_0/*2050.parquet")
dfs = []
for n, fn in enumerate(tqdm(energy_demand_files)):
output = re.search("output_(\w+)_timestep", fn).group(1)
year = re.search("2\d+", fn).group(0)
sector = re.match("[^_]*", output).group(0)
service = output.replace(sector + "_", "")
fuel = re.match("hydrogen|oil|solid_fuel|gas|electricity|biomass|heat", service).group(0)
df = pandas.read_parquet(
fn
).rename(columns={
output: 'energy_demand'
})
df['fuel'] = fuel
df['sector'] = sector
dfs.append(df)
energy_demand = pandas.concat(dfs)
del dfs
energy_demand.head()
ed_heat_elec = energy_demand[energy_demand.fuel.isin(('heat', 'electricity'))] \
.groupby(['fuel', 'lad_uk_2016', 'hourly']) \
.sum() \
.reset_index()
ed_heat_elec
# set date values
ed_heat_elec['date'] = ed_heat_elec.hourly.apply(lambda h: datetime(2050, 1, 1) + timedelta(hours=h-1))
ed_heat_elec = ed_heat_elec.set_index('date')
ed_heat_elec
# national dated
ed_national = ed_heat_elec \
.groupby('hourly') \
.sum() \
.reset_index()
ed_national['date'] = ed_national.hourly.apply(lambda h: datetime(2050, 1, 1) + timedelta(hours=h-1))
ed_national = ed_national.set_index('date')
ed_national
# find max demand day
daily = ed_national.drop(columns=['hourly']).resample('D').sum()
daily.loc[daily.energy_demand.idxmax()]
# find max demand hour
ed_national.loc[ed_national.energy_demand.idxmax()]
# select from max day
max_day = ed_heat_elec.loc['2050-01-20']
max_day
max_day \
.groupby(['fuel', 'hourly']) \
.sum() \
.reset_index() \
.pivot(columns='fuel', index='hourly') \
.plot()
max_day.to_parquet('nic_energy_demand_heat_electricity_2050_max_day.parquet')
ed_heat_elec.to_parquet('nic_energy_demand_heat_electricity_2050.parquet')
```
## Transport energy
```
def hours_to_int(h):
"""Convert from string-named hours to 24-hour clock integers
"""
lu = {
'MIDNIGHT': 0,
'ONEAM': 1,
'TWOAM': 2,
'THREEAM': 3,
'FOURAM': 4,
'FIVEAM': 5,
'SIXAM': 6,
'SEVENAM': 7,
'EIGHTAM': 8,
'NINEAM': 9,
'TENAM': 10,
'ELEVENAM': 12,
'NOON': 11,
'ONEPM': 13,
'TWOPM': 14,
'THREEPM': 15,
'FOURPM': 16,
'FIVEPM': 17,
'SIXPM': 18,
'SEVENPM': 19,
'EIGHTPM': 20,
'NINEPM': 21,
'TENPM': 22,
'ELEVENPM': 23,
}
return lu[h]
ev_paths = glob.glob("../results/nic_ed_tr/transport/decision_0/*vehicle*")
dfs = []
for fn in ev_paths:
output = re.search("output_(\w+)_timestep", fn).group(1)
year = re.search("2\d+", fn).group(0)
df = pandas.read_parquet(fn).rename(columns={
output: 'value'
})
df['timestep'] = int(year)
df['key'] = output
dfs.append(df)
ev_demand = pandas.concat(dfs) \
.reset_index()
del dfs
ev_demand.annual_day_hours = ev_demand.annual_day_hours.apply(hours_to_int)
ev_demand = ev_demand \
.pivot_table(
index=['timestep', 'lad_gb_2016', 'annual_day_hours'],
columns='key',
values='value'
) \
.reset_index()
del ev_demand.columns.name
ev_demand.head()
ev_demand.dtypes
ev_demand.to_parquet('nic_ev_demand.parquet')
```
## Transport trips
```
tr_data_path = "../results/nic_ed_tr/transport-raw_data_results_nic_ed_tr/"
# 2015 estimated tempro OD
tempro15 = pandas.read_csv(tr_data_path + "data/csvfiles/temproMatrixListBased198WithMinor4.csv")
tempro15
# 2015 aggregated LAD OD
lad15 = pandas.read_csv(tr_data_path + "data/csvfiles/ladFromTempro198ODMWithMinor4.csv") \
.sort_values(by=['origin', 'destination'])
lad15
# 2050 predicted LAD OD - to disaggregate
lad50 = pandas.read_csv(tr_data_path + "output/2050/predictedODMatrix.csv") \
.melt(id_vars='origin', var_name='destination', value_name='flow') \
.sort_values(by=['origin', 'destination'])
lad50
# tempro zones shapefile - with LAD codes already attached
tempro_lad = geopandas.read_file(tr_data_path + "data/shapefiles/tempro2.shp") \
.rename(columns={
'Zone_Name': 'tempro_name',
'Zone_Code': 'tempro',
'LAD_Code': 'lad',
'Local_Auth': 'lad_name'
}) \
[['lad', 'lad_name', 'tempro', 'tempro_name']] \
.sort_values(by=['lad', 'tempro'])
tempro_lad_codes = tempro_lad[['lad', 'tempro']]
tempro_lad
# start with tempro 2015 OD
# merge on LAD codes for tempro origins
df = tempro15 \
.rename(columns={'flow': 'tempro2015'}) \
.merge(tempro_lad_codes, left_on='origin', right_on='tempro') \
.drop(columns='tempro') \
.rename(columns={'lad': 'origin_lad'})
# merge on LAD codes for tempro destinations
df = df \
.merge(tempro_lad_codes, left_on='destination', right_on='tempro') \
.drop(columns='tempro') \
.rename(columns={'lad': 'destination_lad'})
# merge on LAD 2015 flows
df = df \
.merge(lad15, left_on=['origin_lad', 'destination_lad'], right_on=['origin', 'destination'], suffixes=('', '_y')) \
.drop(columns=['origin_y', 'destination_y']) \
.rename(columns={'flow': 'lad2015'})
# merge on LAD 2050 flows
df = df \
.merge(lad50, left_on=['origin_lad', 'destination_lad'], right_on=['origin', 'destination'], suffixes=('', '_y')) \
.drop(columns=['origin_y', 'destination_y']) \
.rename(columns={'flow': 'lad2050'})
df
# Disaggregation calculation
df['tempro2050'] = (df.tempro2015 * (df.lad2050 / df.lad2015)) \
.round() \
.astype(int)
# Quick check
df[(df.origin_lad == 'E09000007') & (df.destination_lad == 'E09000029')]
df = df.drop(columns=['lad2015', 'lad2050', 'origin_lad', 'destination_lad'])
df
df.to_parquet('nic_transport_trips.parquet')
```
| true |
code
| 0.312131 | null | null | null | null |
|
# Regression Week 4: Ridge Regression (gradient descent)
In this notebook, you will implement ridge regression via gradient descent. You will:
* Convert an SFrame into a Numpy array
* Write a Numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty
# Fire up Turi Create
Make sure you have the latest version of Turi Create
```
import turicreate
```
# Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
```
sales = turicreate.SFrame('home_data.sframe/')
```
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
# Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste `get_numpy_data()` from the second notebook of Week 2.
```
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
```
Also, copy and paste the `predict_output()` function to compute the predictions for an entire matrix of features given the matrix and the weights:
```
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
```
# Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
```
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
```
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to `w[i]` can be written as:
```
2*SUM[ error*[feature_i] ].
```
The derivative of the regularization term with respect to `w[i]` is:
```
2*l2_penalty*w[i].
```
Summing both, we get
```
2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i].
```
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus `2*l2_penalty*w[i]`.
**We will not regularize the constant.** Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the `2*l2_penalty*w[0]` term).
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus `2*l2_penalty*w[i]`.
With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when to we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call `feature_is_constant` which you should set to `True` when computing the derivative of the constant and `False` otherwise.
```
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
if feature_is_constant == True:
derivative = 2 * np.dot(errors, feature)
# Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
else:
derivative = 2 * np.dot(errors, feature) + 2 * l2_penalty * weight
return derivative
```
To test your feature derivartive run the following:
```
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print (feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False))
print (np.sum(errors*example_features[:,1])*2+20.)
print ('')
# next two lines should print the same values
print (feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True))
print (np.sum(errors)*2.)
```
# Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease* and we're trying to *minimize* a cost function.
The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a **maximum number of iterations** and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
```
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
print ('Starting gradient descent with l2_penalty = ' + str(l2_penalty))
weights = np.array(initial_weights) # make sure it's a numpy array
iteration = 0 # iteration counter
print_frequency = 1 # for adjusting frequency of debugging output
#while not reached maximum number of iterations:
while iteration <= max_iterations:
iteration += 1 # increment iteration counter
### === code section for adjusting frequency of debugging output. ===
if iteration == 10:
print_frequency = 10
if iteration == 100:
print_frequency = 100
if iteration%print_frequency==0:
print('Iteration = ' + str(iteration))
### === end code section ===
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
# from time to time, print the value of the cost function
if iteration%print_frequency==0:
print ('Cost function = ', str(np.dot(errors,errors) + l2_penalty*(np.dot(weights,weights) - weights[0]**2)))
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
if i == 0:
derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], 0.0, True)
else:
derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], l2_penalty, False)
# subtract the step size times the derivative from the current weight
weights[i] -= step_size*derivative
print ('Done with gradient descent at iteration ', iteration)
print ('Learned weights = ', str(weights))
return weights
```
# Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:
```
simple_features = ['sqft_living']
my_output = 'price'
```
Let us split the dataset into training set and test set. Make sure to use `seed=0`:
```
train_data,test_data = sales.random_split(.8,seed=0)
```
In this part, we will only use `'sqft_living'` to predict `'price'`. Use the `get_numpy_data` function to get a Numpy versions of your data with only this feature, for both the `train_data` and the `test_data`.
```
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
```
Let's set the parameters for our optimization:
```
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
```
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`simple_weights_0_penalty`
we'll use them later.
```
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix,
output, initial_weights,
step_size, 0.0, max_iterations = 100)
simple_weights_0_penalty
```
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`simple_weights_high_penalty`
we'll use them later.
```
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix,
output, initial_weights,
step_size, 1e11, max_iterations = 100)
simple_weights_high_penalty
```
This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix,output,'k.',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
```
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
```
print (((test_output - predict_output(simple_test_feature_matrix, initial_weights))**2).sum())
print (predict_output(simple_test_feature_matrix, initial_weights)[0])
print (((test_output - predict_output(simple_test_feature_matrix, simple_weights_0_penalty))**2).sum())
print (predict_output(simple_test_feature_matrix, simple_weights_0_penalty)[0])
print (((test_output - predict_output(simple_test_feature_matrix, simple_weights_high_penalty))**2).sum())
print (predict_output(simple_test_feature_matrix, simple_weights_high_penalty)[0])
```
***QUIZ QUESTIONS***
1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper? no regularization was steeper
3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
initial: 1784273282524564.0
no regularization: 275723643923134.44
high regularization: 694653077641343.2
# Running a multiple regression with L2 penalty
Let us now consider a model with 2 features: `['sqft_living', 'sqft_living15']`.
First, create Numpy versions of your training and test data with these two features.
```
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
```
We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
```
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
```
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`multiple_weights_0_penalty`
```
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix,
output, initial_weights,
step_size, 0.0, max_iterations)
multiple_weights_0_penalty
```
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`multiple_weights_high_penalty`
```
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix,
output, initial_weights,
step_size, 1e11, max_iterations)
multiple_weights_high_penalty
```
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
```
((test_output - predict_output(test_feature_matrix, initial_weights))**2).sum()
((test_output - predict_output(test_feature_matrix, multiple_weights_0_penalty))**2).sum()
((test_output - predict_output(test_feature_matrix, multiple_weights_high_penalty))**2).sum()
```
Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
```
test_output[0]
mult_0_predictions_test = predict_output(test_feature_matrix, multiple_weights_0_penalty)
mult_0_predictions_test[0]
mult_high_predictions_test = predict_output(test_feature_matrix, multiple_weights_high_penalty)
mult_high_predictions_test[0]
```
***QUIZ QUESTIONS***
1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? 1784273282524564.0, 274067694347184.56, 500404796858030.0
3. We make prediction for the first house in the test set using two sets of weights (no regularization vs high regularization). Which weights make better prediction <u>for that particular house</u>? no regularization
| true |
code
| 0.510192 | null | null | null | null |
|
### Demonstration of triangle slicing
Here are some Python based functions for slicing
sets of triangles given in an STL file relative to
different tool shapes.
A "barmesh" is an efficient way of encoding a continuous
mesh of triangles using forward-right and back-left pointers
from each edge that makes the triangles trivial to infer.
```
_____NF
/|
^ |
/ |BFR
| /-> |
| <-/ |
BBL| / |
| /
|/___
NB
```
```
import time
time.process_time()
# quick access to the library (I know this is not done properly)
import sys
sys.path.append("..")
# load the triangles into the efficient encoding structure
# (there is a numpy based version of this, which we should use in future)
from tribarmes import TriangleBarMesh
fname = "../stlsamples/frameguide.stl"
tbm = TriangleBarMesh(fname)
# Quick and dirty plot of this triangle mesh in 3D
%matplotlib inline
from basicgeo import P3
from mpl_toolkits import mplot3d
from matplotlib import pyplot as plt
fig = plt.figure()
axes = mplot3d.Axes3D(fig)
vs = tbm.GetBarMeshTriangles()
cs = mplot3d.art3d.Poly3DCollection(vs)
# need to shade the triangles according to normal vectors
cm = plt.get_cmap('cool')
def col(t):
n = P3.ZNorm(P3.Cross(t[1]-t[0], t[2]-t[0]))
if n[2] < 0:
n = -n
return cm(n[2]*0.8 + n[0]*0.6)
cs.set_facecolor([col(t) for t in vs])
axes.auto_scale_xyz([t[0][0] for t in vs], [t[0][1] for t in vs], [t[0][2] for t in vs])
axes.add_collection3d(cs)
plt.show()
# This builds the initial 2D mesh which will be used for the basis of the
# slicing of the STL shape above
from basicgeo import P2, P3, Partition1, Along
import barmesh
rad = 2.5
rex = rad + 2.5
xpart = Partition1(tbm.xlo-rex, tbm.xhi+rex, 19)
ypart = Partition1(tbm.ylo-rex, tbm.yhi+rex, 17)
zlevel = Along(0.1, tbm.zlo, tbm.zhi)
bm = barmesh.BarMesh()
bm.BuildRectBarMesh(xpart, ypart, zlevel)
# show the mesh as just a regular rectangular array of line segments
from matplotlib.collections import LineCollection
segments = [[(bar.nodeback.p[0], bar.nodeback.p[1]), (bar.nodefore.p[0], bar.nodefore.p[1])] for bar in bm.bars if not bar.bbardeleted ]
lc = LineCollection(segments)
plt.gca().add_collection(lc)
rex2 = rex + 4
plt.xlim(tbm.xlo-rex2, tbm.xhi+rex2)
plt.ylim(tbm.ylo-rex2, tbm.yhi+rex2)
plt.show()
import implicitareaballoffset
iaoffset = implicitareaballoffset.ImplicitAreaBallOffset(tbm)
# Here we actually make the slice of the triangle mesh by inserting mid-points
# into the segments and adding more joining segments where needed to model the
# contours to tolerance
from barmeshslicer import BarMeshSlicer
rd2 = max(xpart.vs[1]-xpart.vs[0], ypart.vs[1]-ypart.vs[0], rad*1.5) + 0.1
bms = BarMeshSlicer(bm, iaoffset, rd=rad, rd2=rd2, contourdotdiff=0.95, contourdelta=0.05, lamendgap=0.001)
bms.fullmakeslice()
# Plot the in and out parts of each segments in red and blue
plt.figure(figsize=(11,11))
segmentswithin = [ ]
segmentsbeyond = [ ]
for bar in bm.bars:
if not bar.bbardeleted:
p0within, p1within = None, None
p0beyond, p1beyond = None, None
if bar.nodeback.pointzone.izone == barmesh.PZ_WITHIN_R and bar.nodefore.pointzone.izone == barmesh.PZ_WITHIN_R:
p0within, p1within = bar.nodeback.p, bar.nodefore.p
elif bar.nodeback.pointzone.izone == barmesh.PZ_BEYOND_R and bar.nodefore.pointzone.izone == barmesh.PZ_BEYOND_R:
p0beyond, p1beyond = bar.nodeback.p, bar.nodefore.p
elif bar.nodeback.pointzone.izone == barmesh.PZ_WITHIN_R and bar.nodefore.pointzone.izone == barmesh.PZ_BEYOND_R:
p0within, p1within = bar.nodeback.p, bar.nodemid.p
p0beyond, p1beyond = bar.nodemid.p, bar.nodefore.p
elif bar.nodeback.pointzone.izone == barmesh.PZ_BEYOND_R and bar.nodefore.pointzone.izone == barmesh.PZ_WITHIN_R:
p0beyond, p1beyond = bar.nodeback.p, bar.nodemid.p
p0within, p1within = bar.nodemid.p, bar.nodefore.p
if p0within:
segmentswithin.append([(p0within[0], p0within[1]), (p1within[0], p1within[1])])
if p0beyond:
segmentsbeyond.append([(p0beyond[0], p0beyond[1]), (p1beyond[0], p1beyond[1])])
lc = LineCollection(segmentswithin, color="red")
plt.gca().add_collection(lc)
lc = LineCollection(segmentsbeyond, color="blue")
plt.gca().add_collection(lc)
rex2 = rex + 4
plt.xlim(tbm.xlo-rex2, tbm.xhi+rex2)
plt.ylim(tbm.ylo-rex2, tbm.yhi+rex2)
plt.show()
# Now extract the contours by following the round the cells keeping the WITHIN
# and BEYOND sides on one side of a series of nodemids
from mainfunctions import BarMeshContoursF, NestContours
conts, topbars = BarMeshContoursF(bm, barmesh.PZ_BEYOND_R)
contnest = NestContours(topbars, barmesh.PZ_BEYOND_R)
mconts = dict((topbar.midcontournumber, cont) for cont, topbar in zip(conts, topbars))
cnswithin = [cn for cn, (izone, outxn, innlist) in contnest.items() if izone == barmesh.PZ_WITHIN_R]
cnsbeyond = [cn for cn, (izone, outxn, innlist) in contnest.items() if izone == barmesh.PZ_BEYOND_R]
plt.figure(figsize=(11,11))
lc = LineCollection([[(p[0], p[1]) for p in mconts[cn]] for cn in cnswithin], color="red")
plt.gca().add_collection(lc)
lc = LineCollection([[(p[0], p[1]) for p in mconts[cn]] for cn in cnsbeyond], color="blue")
plt.gca().add_collection(lc)
rex2 = rex + 4
plt.xlim(tbm.xlo-rex2, tbm.xhi+rex2)
plt.ylim(tbm.ylo-rex2, tbm.yhi+rex2)
plt.show()
class F:
def __init__(self, V):
self.V = V
class G(F):
def __init__(self, q):
super().__init__(88)
self.q = q
g = G(9)
g.__dict__
```
| true |
code
| 0.515559 | null | null | null | null |
|
# Data Preparation for 2D Medical Imaging
## Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 1
This tutorial is part of a series on how to train, optimize, quantize and show live inference on a medical segmentation model. The goal is to accelerate inference on a kidney segmentation model. The [UNet](https://arxiv.org/abs/1505.04597) model is trained from scratch; the data is from [Kits19](https://github.com/neheller/kits19).
The Kits19 Nifty images are 3D files. Kidney segmentation is a relatively simple problem for neural networks - it is expected that a 2D neural network should work quite well. 2D networks are smaller, and easier to work with than 3D networks, and image data is easier to work with than Nifty files.
This first tutorial in the series shows how to:
- Load Nifty images and get the data as array
- Apply windowing to a CT scan to increase contrast
- Convert Nifty data to 8-bit images
> Note: This will not result in the best kidney segmentation model. Optimizing the kidney segmentation model is outside the scope of this tutorial. The goal is to have a small model that works reasonably well, as a starting point.
All notebooks in this series:
- Data Preparation for 2D Segmentation of 3D Medical Data (this notebook)
- Train a 2D-UNet Medical Imaging Model with PyTorch Lightning (will be published soon)
- [Convert and Quantize a UNet Model and Show Live Inference](../110-ct-segmentation-quantize/110-ct-segmentation-quantize.ipynb)
- [Live Inference and Benchmark CT-scan data](../210-ct-scan-live-inference/210-ct-scan-live-inference.ipynb)
## Instructions
To install the requirements for running this notebook, please follow the instructions in the README.
Before running this notebook, you must download the Kits19 dataset, with code from https://github.com/neheller/kits19.
**This code will take a long time to run. The downloaded data takes up around 21GB of space, and the converted images around 3.5GB**. Downloading the full dataset is only required if you want to train the model yourself. To show quantization on a downloadable subset of the dataset, see the [Convert and Quantize a UNet Model and Show Live Inference](../110-ct-segmentation-quantize/110-ct-segmentation-quantize.ipynb) tutorial.
To do this, first clone the repository and install the requirements. It is recommend to install the requirements in the `openvino_env` virtual environment. In short:
```
1. git clone https://github.com/neheller/kits19
2. cd kits19
3. pip install -r requirements.txt
4. python -m starter_code.get_imaging
```
If you installed the Kits19 requirements in the `openvino_env` environment, you will have installed [nibabel](https://nipy.org/nibabel/). If you get an importerror, you can install nibabel in the current environment by uncommenting and running the first cell.
## Imports
```
# Uncomment this cell to install nibabel if it is not yet installed
# %pip install nibabel
import os
import time
from pathlib import Path
from typing import Optional, Tuple
import cv2
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
```
## Settings
Set `NIFTI_PATH` to the root directory of the Nifty files. This is the directory that contains subdirectories `case_00000` to `case_00299` containing _.nii.gz_ data. FRAMES_DIR should point to the directory to save the frames.
```
# Adjust NIFTI_PATH to directory that contains case_00000 to case_00299 files with .nii.gz data
NIFTI_PATH = Path("~/kits19/data").expanduser()
FRAMES_DIR = "kits19_frames"
# This assert checks that the directory exists, but not that the data in it is correct
assert NIFTI_PATH.exists(), f"NIFTI_PATH {NIFTI_PATH} does not exist"
```
## Show One CT-scan
Let's load one CT-scan and visualize the scan and the label
```
mask_path = NIFTI_PATH / "case_00002/segmentation.nii.gz"
image_path = mask_path.with_name("imaging.nii.gz")
nii_mask = nib.load(mask_path)
nii_image = nib.load(image_path)
mask_data = nii_mask.get_fdata()
image_data = nii_image.get_fdata()
print(image_data.shape)
```
A CT-scan is a 3D image. To visualize this in 2D, we can create slices, or frames. This can be done in three [anatomical planes](https://en.wikipedia.org/wiki/Anatomical_plane): from the front (coronal) , from the side (sagittal), or from the top (axial).
Since a kidney is relatively small, most pixels do not contain kidney data. For an indication, let's check the fraction of pixels that contain kidney data, by dividing the number of non-zero pixels by the total number of pixels in the scan.
```
np.count_nonzero(mask_data) / np.size(mask_data)
```
This number shows that in this particular scan, less than one percent of all pixels in the scan belongs to a kidney.
We find frames with pixels that are annotated as kidney, and show the kidney from all three sides
```
z = np.argmax([np.count_nonzero(item) for item in mask_data])
x = np.argmax([np.count_nonzero(item) for item in np.transpose(mask_data, (1, 2, 0))])
y = np.argmax([np.count_nonzero(item) for item in np.transpose(mask_data, (2, 1, 0))])
print(z, x, y)
def show_slices(z: int, x: int, y: int):
fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(12, 6))
ax[0, 0].imshow(image_data[z], cmap="gray")
ax[1, 0].imshow(mask_data[z], cmap="gray", vmin=0, vmax=2)
ax[0, 1].imshow(image_data[:, x, :], cmap="gray")
ax[1, 1].imshow(mask_data[:, x, :], cmap="gray", vmin=0, vmax=2)
ax[0, 2].imshow(image_data[:, :, y], cmap="gray")
ax[1, 2].imshow(mask_data[:, :, y], cmap="gray", vmin=0, vmax=2);
show_slices(z, x, y)
```
The image above shows three slices, from three different perspectives, in different places in the body. The middle slices shows two colors, indicating a kidney and a tumor were annotated in this slice.
## Apply Window-Level to Increase Contrast
CT-scan data can contain a large range of pixel values. This means that the contrast in the slices shown above is low. We show histograms to visualize the distribution of the pixel values. We then apply a soft tissue window level to increase the contrast for soft tissue in the visualization. See [Radiopaedia](https://radiopaedia.org/articles/windowing-ct) for information on windowing CT-scan data.
```
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(15, 4))
axs[0].hist(image_data[z, ::])
axs[1].hist(image_data[:, x, :])
axs[2].hist(image_data[:, :, y]);
# (-125,225) is a suitable level for visualizing soft tissue
window_start = -125
window_end = 225
image_data[image_data < window_start] = window_start
image_data[image_data > window_end] = window_end
show_slices(z, x, y)
```
## Extract Slices from Nifty Data
The `save_kits19_frames` function has the mask_path of one nii.gz segmentation mask as argument, and converts the mask and corresponding image to a series of images that are saved as jpg (for images) and png (for masks).
```
def save_kits19_frames(
mask_path: Path,
root_dir: os.PathLike,
window_level: Optional[Tuple] = None,
make_binary: bool = True,
):
"""
Save Kits19 CT-scans to image files, optionally applying a window level.
Images and masks are saved in a subdirectory of root_dir: case_XXXXX.
Images are saved in imaging_frames, masks in segmentation frames, which are
both subdirectories of the case directory.
Frames are taken in the axial direction.
:param mask_path: Path to segmentation.nii.gz file. The corresponding imaging.nii.gz
file should be in the same directory.
:param root_dir: Root directory to save the generated image files. Will be generated
if it does not exist
:param window_level: Window level top apply to the data before saving
:param make_binary: If true, create a binary mask where all non-zero pixels are
considered to be "foreground" pixels and get pixel value 1.
"""
start_time = time.time()
Path(root_dir).mkdir(exist_ok=True)
image_path = mask_path.with_name("imaging.nii.gz")
assert mask_path.exists(), f"mask_path {mask_path} does not exist!"
assert image_path.exists(), f"image_path {image_path} does not exist!"
nii_mask = nib.load(mask_path)
nii_image = nib.load(image_path)
mask_data = nii_mask.get_fdata()
image_data = nii_image.get_fdata()
assert mask_data.shape == image_data.shape, f"Mask and image shape of {mask_path} are not equal"
if make_binary:
mask_data[mask_data > 0] = 1
if window_level is not None:
window_start, window_end = window_level
image_data[image_data < window_start] = window_start
image_data[image_data > window_end] = window_end
image_directory = Path(root_dir) / mask_path.parent.name / "imaging_frames"
mask_directory = Path(root_dir) / mask_path.parent.name / "segmentation_frames"
image_directory.parent.mkdir(exist_ok=True)
image_directory.mkdir(exist_ok=True)
mask_directory.mkdir(exist_ok=True)
for i, (mask_frame, image_frame) in enumerate(zip(mask_data, image_data)):
image_frame = (image_frame - image_frame.min()) / (image_frame.max() - image_frame.min())
image_frame = image_frame * 255
image_frame = image_frame.astype(np.uint8)
new_image_path = str(image_directory / f"{mask_path.parent.name}_{i:04d}.jpg")
new_mask_path = str(mask_directory / f"{mask_path.parent.name}_{i:04d}.png")
cv2.imwrite(new_image_path, image_frame)
cv2.imwrite(new_mask_path, mask_frame)
end_time = time.time()
print(
f"Saved {mask_path.parent.name} with {mask_data.shape[0]} frames "
f"in {end_time-start_time:.2f} seconds"
)
```
Running the next cell will convert all Nifty files in NIFTI_PATH to images that are saved in FRAMES_DIR. A soft tissue window level of (-125,225) is appplied and the segmentation labels are converted to binary kidney segmentations.
Running this cell will take quite a long time.
```
mask_paths = sorted(NIFTI_PATH.glob("case_*/segmentation.nii.gz"))
for mask_path in mask_paths:
save_kits19_frames(
mask_path=mask_path, root_dir=FRAMES_DIR, window_level=(-125, 225), make_binary=True
)
```
## References
- [Kits19 Challenge Homepage](https://kits19.grand-challenge.org/)
- [Kits19 Github Repository](https://github.com/neheller/kits19)
- [The KiTS19 Challenge Data: 300 Kidney Tumor Cases with Clinical Context, CT Semantic Segmentations, and Surgical Outcomes](https://arxiv.org/abs/1904.00445)
- [The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge](https://www.sciencedirect.com/science/article/pii/S1361841520301857)
| true |
code
| 0.859162 | null | null | null | null |
|
# Inheritance with the Gaussian Class
To give another example of inheritance, take a look at the code in this Jupyter notebook. The Gaussian distribution code is refactored into a generic Distribution class and a Gaussian distribution class. Read through the code in this Jupyter notebook to see how the code works.
The Distribution class takes care of the initialization and the read_data_file method. Then the rest of the Gaussian code is in the Gaussian class. You'll later use this Distribution class in an exercise at the end of the lesson.
Run the code in each cell of this Jupyter notebook. This is a code demonstration, so you do not need to write any code.
```
class Distribution:
def __init__(self, mu=0, sigma=1):
""" Generic distribution class for calculating and
visualizing a probability distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
self.mean = mu
self.stdev = sigma
self.data = []
def read_data_file(self, file_name):
"""Function to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
Args:
file_name (string): name of a file to read from
Returns:
None
"""
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(int(line))
line = file.readline()
file.close()
self.data = data_list
import math
import matplotlib.pyplot as plt
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[0].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
# initialize two gaussian distributions
gaussian_one = Gaussian(25, 3)
gaussian_two = Gaussian(30, 2)
# initialize a third gaussian distribution reading in a data efile
gaussian_three = Gaussian()
gaussian_three.read_data_file('numbers.txt')
gaussian_three.calculate_mean()
gaussian_three.calculate_stdev()
# print out the mean and standard deviations
print(gaussian_one.mean)
print(gaussian_two.mean)
print(gaussian_one.stdev)
print(gaussian_two.stdev)
print(gaussian_three.mean)
print(gaussian_three.stdev)
# plot histogram of gaussian three
gaussian_three.plot_histogram_pdf()
# add gaussian_one and gaussian_two together
gaussian_one + gaussian_two
```
| true |
code
| 0.891173 | null | null | null | null |
|
```
import heapq
import random
from PIL import Image
import numpy
import nltk
from IPython.display import display, Image as Img
```
# Minimum Spanning Trees:
This tutorial will teach you the basics of Minimum spanning trees, Algorithms on how to make them, and some applications of minimum spanning trees.
# Task Zero: Graph Representations
For this part, we need to decide on a graph representation that we will use for the rest of the problems in this task.
For implementing Minimum spanning trees, it is useful to represent graphs as a list of vertices (from 0 to n-1), and a list of edges in the form (u, v, c). As you will soon see, most of our algorithms involve sorting edges and finding connected components, which we can do quickly via these representations.
There are many other representation that are more suited to different algorithms, for example if we were trying to compute a search algorithm, it would be very convienient to represents graphs as a dictionary of outgoing edges, since we only care about the local neighborhood of any given vertex, rather than the position of every edge relative to the other edges. In paticular, we notice that for running prim's algorithms, it is really uncomfortable to use our graph representation, so our Prim's algorithm does not run in the advertised time.
Note: we will only use directed graphs for this tutorial, but you can make them directed by duplicating every edge and reversing the direction (which we will often do anyways).
# Task One: The Basics
We will first cover two sequential algorithms for finding minimum spanning trees. These are both sequential, and you can verify that they take O(mlogm) work and O(m) span, where m is the number of edges.
1) Prim's Algorithm: First discovered in 1930 by Vojtech Jarnik, the algorithm goes as follows:
a) pick one vertex to visit first, in our case, we'll just pick vertex 0.
b) find the lightest outgoing edge from the set of visited vertices, if it leads to a new edge, then add the edge to the edges in the MST, and add the vertex to the visited vertices, otherwise pick the next lightest edge until a suitable one is found
c) Repeat step 2 until all vertices are visited, or we have completed a component. We can repeat this for every component, but for now we'll just assume that our graph is connected.
2) Kruskals Algorithm: First appearing in 1956, written by Joseph Kruskal, the algorithm goes as follows:
a) Create a list of all edges in the graph, and initialize a forrest of individual vertices.
b) Add the minimum weight edges to the MST Edges if and only if it doesn't form a cycle. The vertex must connect two forrests, so combine/contract the two forrests into one. (*)
c) Repeat the algorithm until there is only one connected component, or no edges remain.
Aside: A good way to perform contraction is to have representatives for each vertex. When we want to contract an edge, we pick one of the vertices to be the representative of the other, and to find the top, we follow the chain up until one vertex is its own representative. This takes O(n) to find the the component a vertex is in, worst case.
```
def outgoing(vertex, edges):
return map(lambda (x, y, c) : (c, (x, y)), filter(lambda (x, y, c) : x == vertex, edges))
def find_set(components, x):
if(components[x] == x):
return x
else:
return find_set(components, components[x])
def prims(graph):
visited = [graph[0][0]]
edges = []
heap = outgoing(visited[0], graph[1])
heapq.heapify(map(lambda x : reversed(x), heap))
while(visited != graph[0]):
(c, (x, y)) = heapq.heappop(heap)
if(y in visited):
continue
else:
edges += [(x, y, c)]
visited += [y]
for (c, (a, b)) in outgoing(y, graph[1]):
heapq.heappush(heap, (c, (a, b)))
return edges
def kruskals(graph):
edges = list(reversed(sorted(graph[1], key=lambda x : x[2])))
components = sorted(graph[0])
numcomponents = len(graph[0])
mstedges = []
while(numcomponents != 1 and edges != []):
(x, y, c) = edges.pop()
if(find_set(components, x) == find_set(components, y)):
continue
else:
components[y] = x
mstedges += [(x, y, c)]
numcomponents = numcomponents - 1
return mstedges
```
# Task Two: Parallelism to the Rescue
The previous algorithms were nice, but seemed awkward to implement with our graph representation. Now we will see how we can parallelize the process.
Definition: We define the cut_G(S) = {(u, v) in E | u in S and v in V\S}
Lemma (Light edge property): Assume that G has unique edge weights. For all S subset E, if we define x to be the minimum weight edge in cut_G(S), then x is in the MST Edges.
Proof: Let T be the MST of G, and fix a cut S, and let x be the minimum edge in the cut. If x is in T, then we are done.
Otherwise assume for contradiction that x is not in the cut. Since T is an MST, S and V\S must be in the same connected component with respect to T, so there must be another edge, y, in the cut that is in T. Since our edge weights are unique, the weight of x must be strictly less than the weight of y. Consider the tree formed by removing y and connecting S and V\S with x, this tree must be of less weight than T, which contradicts the fact that T was the MST.
Therefore x must be in the MST for any set S.
Equipped with this knowledge, we realize that by defining S = {v}, we can find an edge in the MST for every vertex in the graph, and this motivates some parallel algorithms...
# Boruvka's Algorithm
Published in 1926 by Otakar Boruvka, the algorithm goes as follows:
1) Initialize a forrest where each tree is a single vertex
2) While the forrest has more than one connected component:
a) For each component, add the cheapest outgoing edge to the potential MST edges. Since the cut {v} and V/{v} is valid, the lightest of the edges spanning this cut (ie outing edges of v) must be in the MST.
b) Contract along the potential MST edges, add the edges to the MST.
3) Repeat step 2 (called the Boruvka step) until there is only one connected component, which means we have found the minimum spanning tree.
# Contraction
Part of Boruvka's algorithm is contracting the edges to make a new edgeset. Before, when we contracted only one edge at a time, this was easy, but now we may need to contract multiple edges into a single vertex. There are many ways to do this, but in this example we will use Star Contraction.
1) Label vertices as stars or satellites by flipping a coin n times
2) For every edge (u, v), if u is a satellite and v is a star, then contract u into v. We represent the contraction by setting the representative of u as v, and replacing u by v in all of the edges. (and then removing all edges going from v -> v). Our contraction function just returns the representatives, BoruvkaStep takes care of pruning the edges.
At every round of Boruvka's algorithm we do this, and you can verify that we expect to get O(n) many edges that need to be contracted, and at 1/4 of them will be contracted in expectation, which means we expect O(logn) many rounds to occur, each taking O(m) time. So the work of Boruvka's is O(mlogn).
```
# Requires: E is the edge set, reverse sorted by weights (so low weights appear at the end of the list)
# Returns: Minimum edge comming out of each vertex in range(n)
def minEdges(E, n):
minE = [(-1, -1, (-1, -1, -1)) for i in range(n)]
for (u, v, w, l) in E:
minE[u] = (v, w, l)
return minE
# To do star contraction, we first decide which ones are stars and which are sattelites.
# If v is a head, then we contract u to v (and add it to the mst), otherwise we leave u alone.
def findSatellites((u, (v, w, l)), flips):
if(v == -1):
return (u, -1, -1, (-1, -1, -1))
else:
if(flips[u] == 0 and flips[v] == 1):
return (u, v, w, l)
else:
return (u, -1, -1, (-1, -1, -1))
# First we flip heads, and then get the minimum outgoing edge of each of the vertices
# Then we run star contraction on it to get the components, as well as mst edges
def starContract(E, n):
flips = [random.randint(0, 1) for i in range(n)]
minE = minEdges(E, n)
contracted = map(lambda x : findSatellites(x, flips), enumerate(minE))
return contracted
def BoruvkaStep(labeled, T, n):
# Get the components of the new graph
contract = starContract(labeled, n)
# Find the representatives (if we contracted the edge, then v will not be -1, so u will be contracted into v, otherwise u stays as u)
reps = map(lambda (u, v, w, l) : u if v == -1 else v, contract)
# Now we remove the edges that were invalid to get the mst edges
contract = filter(lambda (u, v, w, l) : v != -1, contract)
# l represents the original edge, so we add it to the mst
T = T + [l for (u, v, w, l) in contract]
# Now we apply the contraction and filter out any edges that were destroyed as a result.
labeled = filter(lambda (u, v, w, l) : reps[u] != reps[v], labeled)
labeled = map (lambda (u, v, w, l) : (reps[u], reps[v], w, l), labeled)
return labeled, T
def Boruvka(graph):
n = len(graph[0])
sort = (sorted(graph[1], lambda x, y : -x[2] + y[2]))
# We want to contract, so we need to carry the original list with us
labeled = map(lambda (u, v, w) : (u, v, w, (u, v, w)), sort)
# Initialize a new MST
T = []
while(len(labeled) != 0):
labeled, T = BoruvkaStep(labeled, T, n)
return T
## Here is a small example to make sure that the code works.
V = range(4)
E = [(0, 1, 4), (1, 0, 4), (1, 2, 3), (2, 1, 3), (2, 3, 4), (3, 2, 4), (0, 2, 7), (2, 0, 7)]
print(Boruvka((V, E)))
print(prims((V, E)))
print(kruskals((V, E)))
# We can verify that all 3 algorithms agree on the MST for this small case, which means we didn't do anything too wrong.
```
# Task Three: Randomize
Now that we have the Boruvka step, we can actually one up boruvka's algorithm by doing work in between rounds of Boruvka Steps.
In paticular we define F-heavy edges in the following manner: Let F be a Minimum spanning forrest of a graph G, then an edge (u,v, c) in E is a F-heavy edge if c is greater than the heaviest edge in the path connecting u and v in F. (if no path connects u and v in F, then (u, v, c) is not F-heavy).
We claim that for any minimum spanning forrest F, all of the F-heavy edges in G are not in the MST, and we can compute them in linear time (with respect to the number of edges in the graph). One way we can do this is by running kruskal's algorithm, traversing the edges in order of weight. When we run kruskals and encounter an edge that would make a cycle, we can remove it since it is larger than the heaviest edge currently in the mst connecting it's endpoints.
So, here's the algorithm:
1) Run some fixed number of boruvka steps to get some preliminary mst edges.
2) After that, create a new graph by including edges in G with one half probability, which we will call F.
3) Find the F-heavy edges of G by running modified kruskals algorithm.
4) Remove the F-heavy edges we found, and do it again until we are done finding the MST.
If it is done right, this will solve the MST problem in expected time O(m), and worst case time O(mlogn), the same bound as Boruvka's algorithm.
```
# As we describe above, what findheavy needs to do is run kruskals algorithm
# and remove any cycle edges, as they can't be F-light.
def findheavy(graph, edges):
edges = sorted(edges, lambda x : x[2])
components = sorted(graph[1])
numcomponents = len(graph[0])
mstedges = []
fheavy = []
while(numcomponents != 1 or edges != []):
(x, y, c, l) = edges.pop()
if(find_set(components, x) == find_set(components, y)):
fheavy += [(x, y, c, l)]
continue
else:
if((x, y, c, l) not in graph[1]):
continue
components[y] = x
mstedges += [(x, y)]
numcomponents = numcomponents - 1
return fheavy, mstedges
def Tarjan(graph):
n = len(graph[0])
sort = (sorted(graph[1], lambda x, y : -x[2] + y[2]))
# We want to contract, so we need to carry the original list with us
labeled = map(lambda (u, v, w) : (u, v, w, (u, v, w)), sort)
# Initialize a new MST
T = []
while(len(labeled) != 0):
labeled, T = BoruvkaStep(labeled, T, n)
labeled, T = BoruvkaStep(labeled, T, n)
flips = [random.randint(0, 1) for i in range(len(labeled))]
# There's Some False Advertising here, as this could be done much faster by removing edges from E as we find them.
# We also note that the theoretical bound of O(m) is very difficult to achieve without using Fibbonacci heaps or Brodal set,
# neither of which are fun to implement or quick
newE = [x for x in labeled if flips[x] == 1]
fheavy, msf = findheavy(graph[0], newE, labeled)
labeled = filter(lambda x : x not in fheavy, labeled)
return T
```
# Task Four: Applications
Now we will look at some uses of minimum spanning tree. In paticular we will look at creating a dependency tree from a sentence, and performing clustering with a modified boruvka's algorithm.
# Dependency Parsers
Following the outline set in http://www.seas.upenn.edu/~strctlrn/bib/PDF/nonprojectiveHLT-EMNLP2005.pdf, we can use Maximum spanning trees (which we can get by negating a graph and computing it's Minimum spanning tree), to determine dependencies among words in a sentence (assuming we can extract features from the sentence). We won't go into detail about what kinds of features are best to use, but we will use some cursory features to get a feel for how the algorithm works.
Features for this parser can get very complex (http://ufal.mff.cuni.cz/~zabokrtsky/publications/papers/featureengin07.pdf), and often require machine learning to determine weighting, but we will use the following as features with some made up weightings:
1) The direction of the dependency (-1 or 1 depending on which of the two words appears first in the sentence).
2) The POS tags of the dependency
3) Distance between two words
The basic idea of the algorithm is that if we have a metric between words that represents how dependent words are, we can find a maximum spanning tree of the dense graph, and this will find us the subtree that has the highest dependency. Of course, finding the best way to determine dependency is really difficult.
There isn't enough room in this tutorial for me to explain how to perform the machine learning to determine the proper correlation weighting for two words in a sentence, but I highly recommend reading the papers above to learn more about how it was really done.
```
# These are fake functions to give you a sense of what this would do
def tagscore(tag1, tag2):
if(tag1 == "NN" and tag2 == "NN"):
return 1
else:
return 2
def f(distance, direction, POSs):
return distance + direction + POSs
def extractGraph(sentence):
edges = []
for i in range(len(sentence)):
for j in range(len(sentence)):
if(i == j):
continue
else:
distance = abs(i-j)
direction = 1 if i < j else -1
POSs = tagscore(sentence[i][1], sentence[j][1])
edges += (i, j, -f(distance, direction, POSs))
return edges
# This will work, but not as well as we expect it to do. To really do it, read the papers listed above.
def parsedep(sentence):
G = (range(len(sentence.split(" "))), extractGraph(nltk.pos_tag(sentence.split(" "))))
edges = Boruvka(G)
return map(lambda (x, y, c) : (sentence.split(" ")[x], sentence.split(" ")[y]), edges)
# Here is a sentence that we can parse, and below it is the answer as provided by the Stanford parser
parsedep("Bills on ports and immigration were submitted by Republican Senator Brownback of Kansas")
Img(filename='deptree.png')
```
# Cluster Detection and Image Segmentation
Now, we will modify our implementation of Boruvka's to make it find a minimum spanning forrest, removing key edges from the original graph if they are too "difficult" to contract.
In paticular we will make it cost "currency" to contract edges, and when two edges try to contract, we subtract the weight of the edge from the minimum credits of the two contracted vertices. At the beginning of every round we filter out any edges that can not be contracted because of our cost requirement. We continue this until there are no contractable edges or we are finished contracting, and unlike the previous MST algorithms, here we are interested in the connected components of the MSF rather than the edges, as these represent the clusters in the data.
We notice that rather than making a minimum spanning tree, this will create a forrest, and each forrest will be determined by some notion of proximity of vertices in the components. For a picture, we determine the proximity by the absolute difference in the colors of the two, so that similar colors will get merged more often than different colors. For clustering, we could define it to be any metric we want, the L2 norm, or manhattan distance.
The function below will look really similar to Boruvka's algorithm, with some new credit variables.
```
def Segment(graph, initialcredits):
# The first part is copying over your code from boruvka's
n = len(graph[0])
credits = [initialcredits for i in range(n)]
sort = sorted(graph[1], lambda x, y : int(-x[2] + y[2]))
# We are going to record the connect components of the graph the same way as before.
colors = range(n)
# We want to contract, so we need to carry the original list with us
labeled = map(lambda (u, v, w) : (u, v, w, (u, v, w)), sort)
# Initialize a new MST
T = []
# We will repeat the body of the loop until there are not more edges to run on
while(len(labeled) != 0):
# First filter out the edges that can't be contracted along
labeled = filter(lambda (u, v, w, l) : min(credits[u], credits[v]) > w, labeled)
# Get the components of the new graph
contract = starContract(labeled, n)
# Find the representatives (if we contracted the edge, then v will not be -1, so u will be contracted into v, otherwise u stays as u)
reps = map(lambda (u, v, w, l) : u if v == -1 else v, contract)
for i in range(n):
if(contract[i][1] != -1):
colors[contract[i][0]] = contract[i][1]
# Now we remove the edges that were invalid to get the mst edges
contract = filter(lambda (u, v, w, l) : v != -1, contract)
# l represents the original edge, so we add it to the mst
T = T + [l for (u, v, w, l) in contract]
# Now we apply the contraction and filter out any edges that were destroyed as a result.
labeled = filter(lambda (u, v, w, l) : reps[u] != reps[v], labeled)
labeled = map (lambda (u, v, w, l) : (reps[u], reps[v], w, l), labeled)
## I might consider making this a seperate chunk, but the logic is very simple, and
## I don't see a reason to modularize this part.
contractedcredit = map(lambda (u, v, w, l) : (v, credits[u]), contract)
creditdict = dict()
for (u, c) in contractedcredit:
if(u in creditdict):
creditdict[u] += [c]
else:
creditdict[u] = [c]
def findMin(v, dic):
return (v, reduce(lambda x, y : min(x, y), dic[v]))
for i in creditdict:
creditdict[i] = (findMin(i, creditdict))
weights = map(lambda (u, v, w, l) : (v, w), contract)
weightdict = dict()
for (u, w) in weightdict:
if(u in weightdict):
weightdict[u] += [w]
else:
weightdict[u] = [w]
for i in weightdict:
weightdict[i] = (reduce(lambda x, y : x+y, (i, weightdict)))
for i in weightdict:
credits[i] = creditdict[i]-weightdict[i]
return (T, colors)
```
The function below is just a helper function that creates all of our edges from the picture array by taking every pair of adjacent vertices and weighting the edge as the magnitude of the distance vector between their colors. It's long because there are a lot of edge cases and I wanted to make sure that I was transparent with my algorithms.
```
def process(filename):
img = numpy.asarray(Image.open(filename))
edges = []
vertices = []
for i in range(len(img)):
for j in range(len(img[0])):
vertices += [i + j*len(img)]
if(i == 0):
if(j == 0):
edges += [(i + j*len(img), i + j*len(img) + 1, abs(img[i][j] - img[i+1][j])), (i + j*len(img), i + (j+1)*len(img), abs(img[i][j] - img[i][j+1]))]
elif(j == len(img[0]) - 1):
edges += [(i + j*len(img), i + j*len(img) + 1, abs(img[i][j] - img[i+1][j])), (i + j*len(img), i + (j-1)*len(img), abs(img[i][j] - img[i][j-1]))]
else:
edges += [(i + j*len(img), i + j*len(img) + 1, abs(img[i][j] - img[i+1][j])), (i + j*len(img), i + (j-1)*len(img), abs(img[i][j] - img[i][j-1])), (i + j*len(img), i + (j+1)*len(img), abs(img[i][j] - img[i][j+1]))]
elif(i == len(img) - 1):
if(j == 0):
edges += [(i + j*len(img), i + j*len(img) - 1, abs(img[i][j] - img[i-1][j])), (i + j*len(img), i + (j+1)*len(img), abs(img[i][j] - img[i][j+1]))]
elif(j == len(img[0])-1):
edges += [(i + j*len(img), i + j*len(img) - 1, abs(img[i][j] - img[i-1][j])), (i + j*len(img), i + (j-1)*len(img), abs(img[i][j] - img[i][j-1]))]
else:
edges += [(i + j*len(img), i + j*len(img) - 1, abs(img[i][j] - img[i-1][j])), (i + j*len(img), i + (j-1)*len(img), abs(img[i][j] - img[i][j-1])), (i + j*len(img), i + (j+1)*len(img), abs(img[i][j] - img[i][j+1]))]
elif(j == 0):
edges += [(i + j*len(img), i + j*len(img) + 1, abs(img[i][j] - img[i+1][j])), (i + j*len(img), i + j*len(img) - 1, abs(img[i][j] - img[i-1][j])), (i + j*len(img), i + (j+1)*len(img), abs(img[i][j] - img[i][j+1]))]
elif(j == len(img[0])-1):
edges += [(i + j*len(img), i + j*len(img) + 1, abs(img[i][j] - img[i+1][j])), (i + j*len(img), i + j*len(img) - 1, abs(img[i][j] - img[i-1][j])), (i + j*len(img), i + (j-1)*len(img), abs(img[i][j] - img[i][j-1]))]
else:
edges += [(i + j*len(img), i + j*len(img) - 1, abs(img[i][j] - img[i-1][j])), (i + j*len(img), i + j*len(img) + 1, abs(img[i][j] - img[i+1][j])), (i + j*len(img), i + (j-1)*len(img), abs(img[i][j] - img[i][j-1])), (i + j*len(img), i + (j+1)*len(img), abs(img[i][j] - img[i][j+1]))]
return (sorted(vertices), map(lambda (a, b, w) : (a, b, numpy.linalg.norm(w)), edges))
```
Below is a little example of how to run the program, incase you wanted to try it out. Basically we find the MST components and turn each pixel's color into it's representatives color. (This takes a few seconds to run).
```
(V, E) = process("sunset.jpg")
MST, Components = Segment((V, E), 10)
newpic = numpy.zeros(numpy.asarray(Image.open("sunset.jpg")).shape, dtype = numpy.uint8)
oldpic = numpy.asarray(Image.open('sunset.jpg'))
(x, y, z) = newpic.shape
for i in V:
component = find_set(Components, i)
color = numpy.asarray(Image.open("sunset.jpg"))[component%x, (int(component/x))]
newpic[i%x, (int(i/x))] = color
image = Image.fromarray(newpic, 'RGB')
image.save("sunset-10.png")
Img(filename = 'sunset-10.png')
```
Below are some more pretty pictures, observe what happens when we increase the initial credits given to each vertex.
```
Img(filename='dog.png')
Img(filename='dog-1000.png')
Img(filename='skittles.png')
Img(filename='skittles-100.png')
Img(filename='skittles-1000.png')
```
| true |
code
| 0.415521 | null | null | null | null |
|
<center>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
<h1>Extracting and Visualizing Stock Data</h1>
<h2>Description</h2>
Extracting essential data from a dataset and displaying it is a necessary part of data science; therefore individuals can make correct decisions based on the data. In this assignment, you will extract some stock data, you will then display this data in a graph.
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>Define a Function that Makes a Graph</li>
<li>Question 1: Use yfinance to Extract Stock Data</li>
<li>Question 2: Use Webscraping to Extract Tesla Revenue Data</li>
<li>Question 3: Use yfinance to Extract Stock Data</li>
<li>Question 4: Use Webscraping to Extract GME Revenue Data</li>
<li>Question 5: Plot Tesla Stock Graph</li>
<li>Question 6: Plot GameStop Stock Graph</li>
</ul>
<p>
Estimated Time Needed: <strong>30 min</strong></p>
</div>
<hr>
```
!pip install yfinance
#!pip install pandas
#!pip install requests
!pip install bs4
#!pip install plotly
import yfinance as yf
import pandas as pd
import requests
from bs4 import BeautifulSoup
import plotly.graph_objects as go
from plotly.subplots import make_subplots
```
## Define Graphing Function
In this section, we define the function `make_graph`. You don't have to know how the function works, you should only care about the inputs. It takes a dataframe with stock data (dataframe must contain Date and Close columns), a dataframe with revenue data (dataframe must contain Date and Revenue columns), and the name of the stock.
```
def make_graph(stock_data, revenue_data, stock):
fig = make_subplots(rows=2, cols=1, shared_xaxes=True, subplot_titles=("Historical Share Price", "Historical Revenue"), vertical_spacing = .3)
fig.add_trace(go.Scatter(x=pd.to_datetime(stock_data.Date, infer_datetime_format=True), y=stock_data.Close.astype("float"), name="Share Price"), row=1, col=1)
fig.add_trace(go.Scatter(x=pd.to_datetime(revenue_data.Date, infer_datetime_format=True), y=revenue_data.Revenue.astype("float"), name="Revenue"), row=2, col=1)
fig.update_xaxes(title_text="Date", row=1, col=1)
fig.update_xaxes(title_text="Date", row=2, col=1)
fig.update_yaxes(title_text="Price ($US)", row=1, col=1)
fig.update_yaxes(title_text="Revenue ($US Millions)", row=2, col=1)
fig.update_layout(showlegend=False,
height=900,
title=stock,
xaxis_rangeslider_visible=True)
fig.show()
```
## Question 1: Use yfinance to Extract Stock Data
Using the `Ticker` function enter the ticker symbol of the stock we want to extract data on to create a ticker object. The stock is Tesla and its ticker symbol is `TSLA`.
```
tesla = yf.Ticker("TSLA")
```
Using the ticker object and the function `history` extract stock information and save it in a dataframe named `tesla_data`. Set the `period` parameter to `max` so we get information for the maximum amount of time.
```
tesla_data = tesla.history(period="max")
```
**Reset the index** using the `reset_index(inplace=True)` function on the tesla_data DataFrame and display the first five rows of the `tesla_data` dataframe using the `head` function. Take a screenshot of the results and code from the beginning of Question 1 to the results below.
```
tesla_data.reset_index(inplace=True)
tesla_data.head()
```
## Question 2: Use Webscraping to Extract Tesla Revenue Data
Use the `requests` library to download the webpage [https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue](https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). Save the text of the response as a variable named `html_data`.
```
tesla_url = "https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue"
tesla_html_data = requests.get(tesla_url).text
```
Parse the html data using `beautiful_soup`.
```
tesla_soup = BeautifulSoup(tesla_html_data, "html5lib")
```
Using beautiful soup extract the table with `Tesla Quarterly Revenue` and store it into a dataframe named `tesla_revenue`. The dataframe should have columns `Date` and `Revenue`. Make sure the comma and dollar sign is removed from the `Revenue` column.
```
tesla_tables = tesla_soup.find_all('table')
for index,table in enumerate(tesla_tables):
if ("Tesla Quarterly Revenue" in str(table)):
tesla_table_index = index
tesla_revenue = pd.DataFrame(columns=["Date", "Revenue"])
for row in tesla_tables[tesla_table_index].tbody.find_all("tr"):
col = row.find_all("td")
if (col !=[]):
date = col[0].text
revenue = col[1].text.replace("$", "").replace(",", "")
tesla_revenue = tesla_revenue.append({"Date" : date, "Revenue" : revenue}, ignore_index=True)
```
<details><summary>Click here if you need help removing the dollar sign and comma</summary>
```
If you parsed the HTML table by row and column you can use the replace function on the string
revenue = col[1].text.replace("$", "").replace(",", "")
If you use the read_html function you can use the replace function on the string representation of the column
tesla_revenue["Revenue"] = tesla_revenue["Revenue"].str.replace("$", "").str.replace(",", "")
```
</details>
Remove the rows in the dataframe that are empty strings or are NaN in the Revenue column. Print the entire `tesla_revenue` DataFrame to see if you have any.
```
tesla_revenue = tesla_revenue[tesla_revenue['Revenue'] != ""]
tesla_revenue
```
<details><summary>Click here if you need help removing the Nan or empty strings</summary>
```
If you have NaN in the Revenue column
tesla_revenue.dropna(inplace=True)
If you have emtpty string in the Revenue column
tesla_revenue = tesla_revenue[tesla_revenue['Revenue'] != ""]
```
</details>
Display the last 5 row of the `tesla_revenue` dataframe using the `tail` function. Take a screenshot of the results.
```
tesla_revenue.tail()
```
## Question 3: Use yfinance to Extract Stock Data
Using the `Ticker` function enter the ticker symbol of the stock we want to extract data on to create a ticker object. The stock is GameStop and its ticker symbol is `GME`.
```
gamestop = yf.Ticker("GME")
```
Using the ticker object and the function `history` extract stock information and save it in a dataframe named `gme_data`. Set the `period` parameter to `max` so we get information for the maximum amount of time.
```
gme_data = gamestop.history(period="max")
```
**Reset the index** using the `reset_index(inplace=True)` function on the gme_data DataFrame and display the first five rows of the `gme_data` dataframe using the `head` function. Take a screenshot of the results and code from the beginning of Question 3 to the results below.
```
gme_data.reset_index(inplace=True)
gme_data.head()
```
## Question 4: Use Webscraping to Extract GME Revenue Data
Use the `requests` library to download the webpage [https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue](https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork-23455606&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). Save the text of the response as a variable named `html_data`.
```
gme_url = "https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue"
gme_html_data = requests.get(gme_url).text
```
Parse the html data using `beautiful_soup`.
```
gme_soup = BeautifulSoup(gme_html_data, "html5lib")
```
Using beautiful soup extract the table with `GameStop Quarterly Revenue` and store it into a dataframe named `gme_revenue`. The dataframe should have columns `Date` and `Revenue`. Make sure the comma and dollar sign is removed from the `Revenue` column using a method similar to what you did in Question 2.
```
gme_tables = gme_soup.find_all('table')
for index,table in enumerate(gme_tables):
if ("GameStop Quarterly Revenue" in str(table)):
gme_table_index = index
gme_revenue = pd.DataFrame(columns=["Date", "Revenue"])
for row in gme_tables[gme_table_index].tbody.find_all("tr"):
col = row.find_all("td")
if (col !=[]):
date = col[0].text
revenue = col[1].text.replace("$", "").replace(",", "")
gme_revenue = gme_revenue.append({"Date" : date, "Revenue" : revenue}, ignore_index=True)
```
Display the last five rows of the `gme_revenue` dataframe using the `tail` function. Take a screenshot of the results.
```
gme_revenue.tail()
```
## Question 5: Plot Tesla Stock Graph
Use the `make_graph` function to graph the Tesla Stock Data, also provide a title for the graph. The structure to call the `make_graph` function is `make_graph(tesla_data, tesla_revenue, 'Tesla')`
```
make_graph(tesla_data, tesla_revenue, 'Tesla')
```
## Question 6: Plot GameStop Stock Graph
Use the `make_graph` function to graph the GameStop Stock Data, also provide a title for the graph. The structure to call the `make_graph` function is `make_graph(gme_data, gme_revenue, 'GameStop')`.
```
make_graph(gme_data, gme_revenue, 'GameStop')
```
<h2>About the Authors:</h2>
<a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
Azim Hirjani
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ------------- | ------------------------- |
| 2020-11-10 | 1.1 | Malika Singla | Deleted the Optional part |
| 2020-08-27 | 1.0 | Malika Singla | Added lab to GitLab |
<hr>
## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/>
<p>
| true |
code
| 0.462291 | null | null | null | null |
|
# Source layouts schematics
```
from IPython.display import display # noqa: F401 # ignore used but not imported
from pathlib import Path
import numpy as np
import pandas as pd
import verde as vd
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import boost_and_layouts
from boost_and_layouts import save_to_json
```
## Define parameters for building the source distributions
```
# Define results directory to read synthetic ground survey
results_dir = Path("..") / "results"
ground_results_dir = results_dir / "ground_survey"
```
## Read synthetic ground survey
Get coordinates of observation points from a synthetic ground survey
```
survey = pd.read_csv(ground_results_dir / "survey.csv")
inside = np.logical_and(
np.logical_and(
survey.easting > 0,
survey.easting < 40e3,
),
np.logical_and(
survey.northing > -60e3,
survey.northing < -20e3,
),
)
survey = survey.loc[inside]
survey
fig, ax = plt.subplots(figsize=(6, 6))
tmp = ax.scatter(survey.easting, survey.northing)
ax.set_aspect("equal")
ax.set_title("Height of ground survey points")
plt.show()
coordinates = (survey.easting, survey.northing, survey.height)
```
### Generate the source distributions
```
block_spacing = 3000
grid_spacing = 2000
layouts = ["source_below_data", "grid_sources", "block_averaged_sources"]
depth_type = "constant_depth"
parameters = {}
layout = "source_below_data"
parameters[layout] = dict(
depth_type=depth_type,
depth=500,
)
layout = "grid_sources"
parameters[layout] = dict(depth_type=depth_type, depth=500, spacing=grid_spacing)
layout = "block_averaged_sources"
parameters[layout] = dict(depth_type=depth_type, depth=500, spacing=block_spacing)
source_distributions = {}
for layout in parameters:
source_distributions[layout] = getattr(boost_and_layouts, layout)(
coordinates, **parameters[layout]
)
```
Create lines for plotting the boundaries of the blocks
```
region = vd.get_region(coordinates)
grid_nodes = vd.grid_coordinates(region, spacing=block_spacing)
grid_lines = (np.unique(grid_nodes[0]), np.unique(grid_nodes[1]))
for nodes in grid_lines:
nodes.sort()
```
## Plot observation points and source layouts
```
# Load matplotlib configuration
plt.style.use(Path(".") / "matplotlib.rc")
titles = {
"source_below_data": "Sources below data",
"block_averaged_sources": "Block-averaged sources",
"grid_sources": "Regular grid",
}
fig, axes = plt.subplots(nrows=1, ncols=4, sharey=True, figsize=(7, 1.7), dpi=300)
size = 3
labels = "a b c d".split()
for ax, label in zip(axes, labels):
ax.set_aspect("equal")
ax.annotate(
label,
xy=(0.02, 0.95),
xycoords="axes fraction",
bbox=dict(boxstyle="circle", fc="white", lw=0.2),
)
ax.axis("off")
# Plot observation points
ax = axes[0]
ax.scatter(survey.easting, survey.northing, s=size, c="C0", marker="^")
ax.set_title("Observation points")
# Plot location of sources for each source layout
for ax, layout in zip(axes[1:], layouts):
ax.scatter(*source_distributions[layout][:2], s=size, c="C1")
ax.set_title(titles[layout])
# Add blocks boundaries to Block Averaged Sources plot
ax = axes[3]
grid_style = dict(color="grey", linewidth=0.5, linestyle="--")
xmin, xmax, ymin, ymax = region[:]
for x in grid_lines[0]:
ax.plot((x, x), (ymin, ymax), **grid_style)
for y in grid_lines[1]:
ax.plot((xmin, xmax), (y, y), **grid_style)
plt.tight_layout(w_pad=0)
plt.savefig(
Path("..") / "manuscript" / "figs" / "source-layouts-schematics.pdf",
dpi=300,
bbox_inches="tight",
)
plt.show()
```
## Dump number of observation points and sources to JSON file
```
variables = {
"source_layouts_schematics_observations": survey.easting.size,
}
for layout in layouts:
variables["source_layouts_schematics_{}".format(layout)] = source_distributions[
layout
][0].size
json_file = results_dir / "source-layouts-schematics.json"
save_to_json(variables, json_file)
```
# Gradient boosting schematics
```
sources = source_distributions["source_below_data"]
region = vd.get_region(sources)
overlapping = 0.5
window_size = 18e3
spacing = window_size * (1 - overlapping)
centers, indices = vd.rolling_window(sources, size=window_size, spacing=spacing)
spacing_easting = centers[0][0, 1] - centers[0][0, 0]
spacing_northing = centers[1][1, 0] - centers[1][0, 0]
print("Desired spacing:", spacing)
print("Actual spacing:", (spacing_easting, spacing_northing))
indices = [i[0] for i in indices.ravel()]
centers = [i.ravel() for i in centers]
n_windows = centers[0].size
print("Number of windows:", n_windows)
ncols = 10
figsize = (1.7 * ncols, 1.7)
size = 3
fig, axes = plt.subplots(
ncols=ncols, nrows=1, figsize=figsize, dpi=300, sharex=True, sharey=True
)
for ax in axes:
ax.set_aspect("equal")
ax.axis("off")
# Observation points
axes[0].scatter(survey.easting, survey.northing, s=size, c="C0", marker="^")
# Sources
axes[1].scatter(*sources[:2], s=size, c="C1")
# First fit
# ---------
window_i = 0
window = indices[window_i]
not_window = [i for i in np.arange(sources[0].size) if i not in window]
w_center_easting, w_center_northing = centers[0][window_i], centers[1][window_i]
rectangle_kwargs = dict(
xy=(w_center_easting - window_size / 2, w_center_northing - window_size / 2),
width=window_size,
height=window_size,
fill=False,
linewidth=0.5,
linestyle="--",
color="#444444",
)
# Observation points
axes[2].scatter(
survey.easting.values[window],
survey.northing.values[window],
s=size,
c="C0",
marker="^",
)
axes[2].scatter(
survey.easting.values[not_window],
survey.northing.values[not_window],
s=size,
c="C7",
marker="^",
)
rectangle = Rectangle(**rectangle_kwargs)
axes[2].add_patch(rectangle)
# Sources
axes[3].scatter(sources[0][window], sources[1][window], s=size, c="C1")
axes[3].scatter(sources[0][not_window], sources[1][not_window], s=size, c="C7")
rectangle = Rectangle(**rectangle_kwargs)
axes[3].add_patch(rectangle)
# First Prediction
# ----------------
axes[4].scatter(survey.easting, survey.northing, s=size, c="C3", marker="v")
axes[5].scatter(sources[0][window], sources[1][window], s=size, c="C1")
rectangle = Rectangle(**rectangle_kwargs)
axes[5].add_patch(rectangle)
# Second fit
# ----------
window_i = 3
window = indices[window_i]
not_window = [i for i in np.arange(sources[0].size) if i not in window]
w_center_easting, w_center_northing = centers[0][window_i], centers[1][window_i]
rectangle_kwargs = dict(
xy=(w_center_easting - window_size / 2, w_center_northing - window_size / 2),
width=window_size,
height=window_size,
fill=False,
linewidth=0.5,
linestyle="--",
color="#444444",
)
# Residue
axes[6].scatter(
survey.easting.values[window],
survey.northing.values[window],
s=size,
c="C3",
marker="v",
)
axes[6].scatter(
survey.easting.values[not_window],
survey.northing.values[not_window],
s=size,
c="C7",
marker="^",
)
rectangle = Rectangle(**rectangle_kwargs)
axes[6].add_patch(rectangle)
# Sources
axes[7].scatter(sources[0][window], sources[1][window], s=size, c="C1")
axes[7].scatter(sources[0][not_window], sources[1][not_window], s=size, c="C7")
rectangle = Rectangle(**rectangle_kwargs)
axes[7].add_patch(rectangle)
# Second Prediction
# -----------------
axes[8].scatter(survey.easting, survey.northing, s=size, c="C3", marker="v")
axes[9].scatter(sources[0][window], sources[1][window], s=size, c="C1")
rectangle = Rectangle(**rectangle_kwargs)
axes[9].add_patch(rectangle)
plt.savefig(Path("..") / "manuscript" / "figs" / "svg" / "gradient-boosting-raw.svg")
plt.show()
```
| true |
code
| 0.685713 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/GitMarco27/TMML/blob/main/Notebooks/009_Custom_Loss.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 3 Minutes Machine Learning
## Episode 9: Custom Loss
#### Marco Sanguineti, 2021
---
Welcome to 3 minutes Machine Learning!
Reference: https://archive.ics.uci.edu/ml/datasets/Airfoil+Self-Noise
```
import tensorflow as tf
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
print(tf.__version__)
import os
def loadThumb(path):
# Let's import this video thumbnail!
if os.path.exists(path):
myThumb = plt.imread(path)
fig, ax = plt.subplots(figsize=(15, 10))
plt.axis('off')
ax.imshow(myThumb)
plt.show()
loadThumb('/tmp/yt_thumb_009.png')
```
#### Video Topics
> 1. Load the dataset from UCI.edu
> 2. Create a model with the keras API with a custom layer and custon loss
> 3. Train the model and check the results
> 4. See you on next video!
# Load the dataset
___
```
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00291/airfoil_self_noise.dat"
cols = ['Frequency',
'Angle of Attack',
'Chord length',
'Free-stream velocity',
'Suction side displacement thickness',
'Sound Pressure']
dataset = pd.read_table(URL, names=cols, dtype='float32')
dataset
dataset.describe().T
# sns.pairplot(dataset)
# plt.show()
```
# Create the model
___
```
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Layer
# Let's create a custom quadratic layer
class myDenseLayer(Layer):
def __init__(self, units=32, activation=None):
super(myDenseLayer, self).__init__()
self.units = units
self.activation = tf.keras.activations.get(activation)
def build(self, input_shape):
a_init = tf.random_normal_initializer()
self.a = tf.Variable(name='a',
initial_value=a_init(shape=(input_shape[-1], self.units)), dtype='float32',
trainable=True)
self.b = tf.Variable(name='b',
initial_value=a_init(shape=(input_shape[-1], self.units)), dtype='float32',
trainable=True)
c_init = tf.zeros_initializer()
self.c = tf.Variable(name='c',
initial_value=c_init(shape=(self.units)), dtype='float32',
trainable=True)
def call(self, inputs):
return self.activation(tf.matmul(tf.math.square(inputs), self.a)+tf.matmul(inputs, self.b) + self.c)
myLayer = myDenseLayer(units=64, activation='relu')
myLayer_2 = myDenseLayer(units=64, activation='relu')
#My Custom Regressor Accuracy
import keras.backend as K
import sklearn
class CustomAccuracy(tf.keras.losses.Loss):
def __init__(self):
super().__init__()
def call(self, y_true, y_pred):
mse = tf.reduce_mean(tf.square(y_pred-y_true))
rmse = tf.math.sqrt(mse)
return rmse / tf.reduce_mean(tf.square(y_true)) - 1
import numpy as np
loss = CustomAccuracy()
a = tf.random.uniform(shape=(32, 5))
b = tf.random.uniform(shape=(32, 5))
loss(a, b)
input_data = Input(shape=(5), name='Input')
customDense = myLayer(input_data)
customDense_2 = myLayer_2(customDense)
output = Dense(1, name='output')(customDense_2)
model = Model(input_data, output)
model.compile(optimizer=Adam(learning_rate=0.001), loss=CustomAccuracy(), metrics=['mae', 'mse'])
model.summary()
tf.keras.utils.plot_model(
model, to_file='model.png', show_shapes=True, show_dtype=True,
show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96
)
def separate(df):
return df[['Sound Pressure']].to_numpy(), df.drop(df[['Sound Pressure']], axis=1).to_numpy()
min_max_scaler = preprocessing.MinMaxScaler()
df_normed = pd.DataFrame(min_max_scaler.fit_transform(dataset))
df_normed.columns = list(dataset.columns)
train_set, test_set = train_test_split(df_normed)
train_labels, train_features = separate(train_set)
test_labels, test_features = separate(test_set)
```
# Train and check the results
___
```
myLayer.variables
history = model.fit(
train_features,
train_labels,
batch_size = 128,
epochs=500,
validation_data=(test_features,
test_labels)
)
print(f'My final score on testo set {- model.evaluate(test_features, test_labels)[0]}')
myLayer.variables
loss = history.history['loss']
val_loss = history.history['val_loss']
fig, ax = plt.subplots(figsize=(8, 6))
plt.plot(loss)
plt.plot(val_loss)
plt.grid('both')
plt.xlabel('Epochs')
plt.ylabel('Loss Function')
plt.title('Loss Function trend')
plt.show()
fig, ax = plt.subplots(1, 2, figsize=(12, 6), sharey=True)
ax[0].axis('equal')
ax[0].scatter(train_labels[:, 0], model.predict(train_features)[:, 0], marker='^',
color='r', edgecolor='k')
ax[0].plot([0, 1], [0, 1], c='k')
ax[0].plot([0, 1], [0.2, 1.2],'--', c='orange')
ax[0].plot([0, 1], [-0.2, 0.8],'--', c='orange')
ax[0].plot([0, 1], [0.1, 1.1],'--', c='pink')
ax[0].plot([0, 1], [-0.1, 0.9],'--', c='pink')
ax[0].set_title('Training Set - Y1')
ax[0].set_ylim(0, 1)
ax[0].grid(which='both', alpha=0.8, c='white')
ax[0].set_facecolor('#eaeaf2')
ax[0].spines['bottom'].set_color('white')
ax[0].spines['top'].set_color('white')
ax[0].spines['right'].set_color('white')
ax[0].spines['left'].set_color('white')
ax[1].axis('equal')
ax[1].scatter(test_labels[:, 0], model.predict(test_features)[:, 0], marker='^',
color='g', edgecolor='k')
ax[1].plot([0, 1], [0, 1], c='k')
ax[1].plot([0, 1], [0.2, 1.2],'--', c='orange')
ax[1].plot([0, 1], [-0.2, 0.8],'--', c='orange')
ax[1].plot([0, 1], [0.1, 1.1],'--', c='pink')
ax[1].plot([0, 1], [-0.1, 0.9],'--', c='pink')
ax[1].set_title('Validation Set - Y1')
ax[1].set_ylim(0, 1)
ax[1].grid(which='both', alpha=0.8, c='white')
ax[1].set_facecolor('#eaeaf2')
ax[1].spines['bottom'].set_color('white')
ax[1].spines['top'].set_color('white')
ax[1].spines['right'].set_color('white')
ax[1].spines['left'].set_color('white')
import numpy as np
from sklearn.metrics import r2_score
from scipy.stats import pearsonr
for i in range(np.shape(train_labels)[1]):
metrics= {
'mae-train': np.mean(np.abs(train_labels[:, i] - model.predict(train_features)[:, i])),
'mse-train': np.mean(np.square(train_labels[:, i] - model.predict(train_features)[:, i])),
'r2-train': r2_score(train_labels[:, i], model.predict(train_features)[:, i]),
'pearson-train': pearsonr(train_labels[:, i], model.predict(train_features)[:, i])[0],
'mae-test': np.mean(np.abs(test_labels[:, i] - model.predict(test_features)[:, i])),
'mse-test': np.mean(np.square(test_labels[:, i] - model.predict(test_features)[:, i])),
'r2-test': r2_score(test_labels[:, i] ,model.predict(test_features)[:, i]),
'pearson-test': pearsonr(test_labels[:, i], model.predict(test_features)[:, i])[0]
}
blue = lambda x: '\033[94m' + x + '\033[0m'
yellow = lambda x: '\033[93m' + x + '\033[0m'
for key in metrics:
if 'train' in key:
print(f'Y{i} - {blue(key)} - {str(metrics[key])[:7]}')
else:
print(f'Y{i} - {yellow(key)} - {str(metrics[key])[:7]}')
```
# Greetings
---
```
!pip install art
from art import tprint, aprint
tprint('See you on next videos!')
def subscribe():
"""
Attractive subscription form
"""
aprint("giveme", number=5)
print(f'\n\tLike and subscribe to support this work!\n')
aprint("giveme", number=5)
subscribe()
```
| true |
code
| 0.69168 | null | null | null | null |
|
# How to contribute to jupyter notebooks
```
from fastai.gen_doc.nbdoc import *
from fastai.gen_doc.gen_notebooks import *
from fastai.gen_doc import *
```
The documentation is built from notebooks in `docs_src/`. Follow the steps below to build documentation. For more information about generating and authoring notebooks, see [`fastai.gen_doc.gen_notebooks`](/gen_doc.gen_notebooks.html#gen_doc.gen_notebooks).
## Modules
### [`fastai.gen_doc.gen_notebooks`](/gen_doc.gen_notebooks.html#gen_doc.gen_notebooks)
Generate and update notebook skeletons automatically from modules. Includes an overview of the whole authoring process.
### [`fastai.gen_doc.convert2html`](/gen_doc.convert2html.html#gen_doc.convert2html)
Create HTML (jekyll) docs from notebooks.
### [`fastai.gen_doc.nbdoc`](/gen_doc.nbdoc.html#gen_doc.nbdoc)
Underlying documentation functions; most important is [`show_doc`](/gen_doc.nbdoc.html#show_doc).
## Process for contributing to the docs
If you want to help us and contribute to the docs, you just have to make modifications to the source notebooks, our scripts will then automatically convert them to HTML. There is just one script to run after cloning the fastai repo, to ensure that everything works properly. The rest of this page goes more in depth about all the functionalities this module offers, but if you just want to add a sentence or correct a typo, make a PR with the notebook changed and we'll take care of the rest.
### Thing to run after git clone
Make sure you follow this recipe:
git clone https://github.com/fastai/fastai
cd fastai
tools/run-after-git-clone
This will take care of everything that is explained in the following two sections. We'll tell you what they do, but you need to execute just this one script.
Note: windows users, not using bash emulation, will need to invoke the command as:
python tools\run-after-git-clone
If you're on windows, you also need to convert the Unix symlink between `docs_src\imgs` and `docs\imgs`. You will need to (1) remove `docs_src\imgs`, (2) execute `cmd.exe` as administrator, and (3) finally, in the `docs_src` folder, execute:
mklink /d imgs ..\docs\imgs
#### after-git-clone #1: a mandatory notebook strip out
Currently we only store `source` code cells under git (and a few extra fields for documentation notebooks). If you would like to commit or submit a PR, you need to confirm to that standard.
This is done automatically during `diff`/`commit` git operations, but you need to configure your local repository once to activate that instrumentation.
Therefore, your developing process will always start with:
tools/trust-origin-git-config
The last command tells git to invoke configuration stored in `fastai/.gitconfig`, so your `git diff` and `git commit` invocations for this particular repository will now go via `tools/fastai-nbstripout` which will do all the work for you.
You don't need to run it if you run:
tools/run-after-git-clone
If you skip this configuration your commit/PR involving notebooks will not be accepted, since it'll carry in it many JSON bits which we don't want in the git repository. Those unwanted bits create collisions and lead to unnecessarily complicated and time wasting merge activities. So please do not skip this step.
Note: we can't make this happen automatically, since git will ignore a repository-stored `.gitconfig` for security reasons, unless a user will tell git to use it (and thus trust it).
If you'd like to check whether you already trusted git with using `fastai/.gitconfig` please look inside `fastai/.git/config`, which should have this entry:
[include]
path = ../.gitconfig
or alternatively run:
tools/trust-origin-git-config -t
#### after-git-clone #2: automatically updating doc notebooks to be trusted on git pull
We want the doc notebooks to be already trusted when you load them in `jupyter notebook`, so this script which should be run once upon `git clone`, will install a `git` `post-merge` hook into your local check out.
The installed hook will be executed by git automatically at the end of `git pull` only if it triggered an actual merge event and that the latter was successful.
To trust run:
tools/trust-doc-nbs-install-hook
You don't need to run it if you run:
tools/run-after-git-clone
To distrust run:
rm .git/hooks/post-merge
### Validate any notebooks you're contributing to
If you were using a text editor to make changes, when you are done working on a notebook improvement, please, make sure to validate that notebook's format, by simply loading it in the jupyter notebook.
Alternatively, you could use a CLI JSON validation tool, e.g. [jsonlint](https://jsonlint.com/):
jsonlint-php example.ipynb
but it's second best, since you may have a valid JSON, but invalid notebook format, as the latter has extra requirements on which fields are valid and which are not.
## Building the documentation website
The https://docs.fast.ai website is comprised from documentation notebooks converted to `.html`, `.md` files, jekyll metadata, jekyll templates (including the sidebar).
* `.md` files are automatically converted by github pages (requires no extra action)
* the sidebar and other jekyll templates under `docs/_data/` are automatically deployed by github pages (requires no extra action)
* changes in jekyll metadata require a rebuild of the affected notebooks
* changes in `.ipynb` nbs require a rebuild of the affected notebooks
### Updating sidebar
1. edit `docs_src/sidebar/sidebar_data.py`
2. `python tools/make_sidebar.py`
3. check `docs/_data/sidebars/home_sidebar.yml`
4. `git commit docs_src/sidebar/sidebar_data.py docs/_data/sidebars/home_sidebar.yml`
[jekyll sidebar documentation](https://idratherbewriting.com/documentation-theme-jekyll/#configure-the-sidebar).
### Updating notebook metadata
In order to pass the right settings to the website version of the `docs`, each notebook has a custom entry which if you look at the source code, looks like:
```
"metadata": {
"jekyll": {
"keywords": "fastai",
"toc": "false",
"title": "Welcome to fastai"
},
[...]
```
Do not edit this entry manually, or your changes will be overwritten in the next metadata update.
The only correct way to change any notebook's metadata is by opening `docs_src/jekyll_metadata.ipynb`, finding the notebook you want to change the metadata for, changing it, and running the notebook, then saving and committing it and the resulting changes.
### Updating notebooks
Use this section only when you have added a new function that you want to document, or modified an existing function.
Here is how to build/update the documentation notebooks to reflect changes in the library.
To update all modified notebooks under `docs_src` run:
```bash
python tools/build-docs
```
To update specific `*ipynb` nbs:
```bash
python tools/build-docs docs_src/notebook1.ipynb docs_src/notebook2.ipynb ...
```
To force a rebuild of all notebooks and not just the modified ones, use the `-f` option.
```bash
python tools/build-docs -f
```
To scan a module and add any new module functions to documentation notebook:
```bash
python tools/build-docs --document-new-fns
```
To automatically append new fastai methods to their corresponding documentation notebook:
```bash
python tools/build-docs --update-nb-links
```
Use the `-h` for more options.
Alternatively, [`update_notebooks`](/gen_doc.gen_notebooks.html#update_notebooks) can be run from the notebook.
To update all notebooks under `docs_src` run:
```python
update_notebooks('.')
```
To update specific python file only:
```python
update_notebooks('gen_doc.gen_notebooks.ipynb', update_nb=True)
```
`update_nb=True` inserts newly added module methods into the docs that haven't already been documented.
Alternatively, you can update a specific module:
```python
update_notebooks('fastai.gen_doc.gen_notebooks', dest_path='fastai/docs_src')
```
### Updating html only
If you are not syncronizing the code base with its documentation, but made some manual changes to the documentation notebooks, then you don't need to update the notebooks, but just convert them to `.html`:
To convert `docs_src/*ipynb` to `docs/*html`:
* only the modified `*ipynb`:
```bash
python tools/build-docs -l
```
* specific `*ipynb`s:
```bash
python tools/build-docs -l docs_src/notebook1.ipynb docs_src/notebook2.ipynb ...
```
* force to rebuild all `*ipynb`s:
```bash
python tools/build-docs -fl
```
## Links and anchors
### Validate links and anchors
After you commit doc changes please validate that all the links and `#anchors` are correct.
If it's the first time you are about to run the link checker, install the [prerequisites](https://github.com/fastai/fastai/blob/master/tools/checklink/README.md) first.
After committing the new changes, first, wait a few minutes for github pages to sync, otherwise you'll be testing an outdated live site.
Then, do:
```
cd tools/checklink
./checklink-docs.sh
```
The script will be silent and only report problems as it finds them.
Remember, that it's testing the live website, so if you detect problems and make any changes, remember to first commit the changes and wait a few minutes before re-testing.
You can also test the site locally before committing your changes, please see: [README](https://github.com/fastai/fastai/blob/master/tools/checklink/README.md).
To test the course-v3.fast.ai site, do:
```
./checklink-course-v3.sh
```
## Working with Markdown
### Preview
If you work on markdown (.md) files it helps to be able to validate your changes so that the resulting layout is not broken. [grip](https://github.com/joeyespo/grip) seems to work quite well for this purpose (`pip install grip`). For example:
```
grip -b docs/dev/release.md
```
will open a browser with the rendered markdown as html - it uses github API, so this is exacly how it'll look on github once you commit it. And here is a handy alias:
```
alias grip='grip -b'
```
so you don't need to remember the flag.
### Markdown Tips
* If you use numbered items and their number goes beyond 9 you must switch to 4-whitespace chars indentation for the paragraphs belonging to each item. Under 9 or with \* you need 3-whitespace chars as a leading indentation.
* When building tables make sure to use `--|--` and not `--+--` to separate the headers - github will not render it properly otherwise.
## Testing site locally
Install prerequisites:
```
sudo apt install ruby-bundler
```
When running this one it will ask for your user's password (basically running a sudo operation):
```
bundle install jekyll
```
Start the website:
```
cd docs
bundle exec jekyll serve
```
it will tell you which localhost url to go to to see the site.
| true |
code
| 0.678034 | null | null | null | null |
|
# SchNet S2EF training example
The purpose of this notebook is to demonstrate some of the basics of the Open Catalyst Project's (OCP) codebase and data. In this example, we will train a schnet model for predicting the energy and forces of a given structure (S2EF task). First, ensure you have installed the OCP ocp repo and all the dependencies according to the [README](https://github.com/Open-Catalyst-Project/ocp/blob/master/README.md).
Disclaimer: This notebook is for tutorial purposes, it is unlikely it will be practical to train baseline models on our larger datasets using this format. As a next step, we recommend trying the command line examples.
## Imports
```
import torch
import os
from ocpmodels.trainers import ForcesTrainer
from ocpmodels import models
from ocpmodels.common import logger
from ocpmodels.common.utils import setup_logging
setup_logging()
# a simple sanity check that a GPU is available
if torch.cuda.is_available():
print("True")
else:
print("False")
```
## The essential steps for training an OCP model
1) Download data
2) Preprocess data (if necessary)
3) Define or load a configuration (config), which includes the following
- task
- model
- optimizer
- dataset
- trainer
4) Train
5) Depending on the model/task there might be intermediate relaxation step
6) Predict
## Dataset
This examples uses the LMDB generated from the following [tutorial](http://laikapack.cheme.cmu.edu/notebook/open-catalyst-project/mshuaibi/notebooks/projects/ocp/docs/source/tutorials/lmdb_dataset_creation.ipynb). Please run that notebook before moving on. Alternatively, if you have other LMDBs available you may specify that instead.
```
# set the path to your local lmdb directory
train_src = "s2ef"
```
## Define config
For this example, we will explicitly define the config; however, a set of default config files exists in the config folder of this repository. Default config yaml files can easily be loaded with the `build_config` util (found in `ocp/ocpmodels/common/utils.py`). Loading a yaml config is preferrable when launching jobs from the command line. We have included our best models' config files [here](https://github.com/Open-Catalyst-Project/ocp/tree/master/configs/s2ef).
**Task**
```
task = {
'dataset': 'trajectory_lmdb', # dataset used for the S2EF task
'description': 'Regressing to energies and forces for DFT trajectories from OCP',
'type': 'regression',
'metric': 'mae',
'labels': ['potential energy'],
'grad_input': 'atomic forces',
'train_on_free_atoms': True,
'eval_on_free_atoms': True
}
```
**Model** - SchNet for this example
```
model = {
'name': 'schnet',
'hidden_channels': 1024, # if training is too slow for example purposes reduce the number of hidden channels
'num_filters': 256,
'num_interactions': 3,
'num_gaussians': 200,
'cutoff': 6.0
}
```
**Optimizer**
```
optimizer = {
'batch_size': 16, # if hitting GPU memory issues, lower this
'eval_batch_size': 8,
'num_workers': 8,
'lr_initial': 0.0001,
'scheduler': "ReduceLROnPlateau",
'mode': "min",
'factor': 0.8,
'patience': 3,
'max_epochs': 80,
'max_epochs': 1, # used for demonstration purposes
'force_coefficient': 100,
}
```
**Dataset**
For simplicity, `train_src` is used for all the train/val/test sets. Feel free to update with the actual S2EF val and test sets, but it does require additional downloads and preprocessing. If you desire to normalize your targets, `normalize_labels` must be set to `True` and corresponding `mean` and `stds` need to be specified. These values have been precomputed for you and can be found in any of the [`base.yml`](https://github.com/Open-Catalyst-Project/ocp/blob/master/configs/s2ef/20M/base.yml#L5-L9) config files.
```
dataset = [
{'src': train_src, 'normalize_labels': False}, # train set
{'src': train_src}, # val set (optional)
{'src': train_src} # test set (optional - writes predictions to disk)
]
```
**Trainer**
Use the `ForcesTrainer` for the S2EF and IS2RS tasks, and the `EnergyTrainer` for the IS2RE task
```
trainer = ForcesTrainer(
task=task,
model=model,
dataset=dataset,
optimizer=optimizer,
identifier="SchNet-example",
run_dir="./", # directory to save results if is_debug=False. Prediction files are saved here so be careful not to override!
is_debug=False, # if True, do not save checkpoint, logs, or results
is_vis=False,
print_every=5,
seed=0, # random seed to use
logger="tensorboard", # logger of choice (tensorboard and wandb supported)
local_rank=0,
amp=False, # use PyTorch Automatic Mixed Precision (faster training and less memory usage)
)
```
## Check the model
```
print(trainer.model)
```
## Train
```
trainer.train()
```
### Load Checkpoint
Once training has completed a `Trainer` class, by default, is loaded with the best checkpoint as determined by training or validation (if available) metrics. To load a `Trainer` class directly with a pretrained model, specify the `checkpoint_path` as defined by your previously trained model (`checkpoint_dir`):
```
checkpoint_path = os.path.join(trainer.config["cmd"]["checkpoint_dir"], "checkpoint.pt")
checkpoint_path
model = {
'name': 'schnet',
'hidden_channels': 1024, # if training is too slow for example purposes reduce the number of hidden channels
'num_filters': 256,
'num_interactions': 3,
'num_gaussians': 200,
'cutoff': 6.0
}
pretrained_trainer = ForcesTrainer(
task=task,
model=model,
dataset=dataset,
optimizer=optimizer,
identifier="SchNet-example",
run_dir="./", # directory to save results if is_debug=False. Prediction files are saved here so be careful not to override!
is_debug=False, # if True, do not save checkpoint, logs, or results
is_vis=False,
print_every=10,
seed=0, # random seed to use
logger="tensorboard", # logger of choice (tensorboard and wandb supported)
local_rank=0,
amp=False, # use PyTorch Automatic Mixed Precision (faster training and less memory usage)
)
pretrained_trainer.load_checkpoint(checkpoint_path=checkpoint_path)
```
## Predict
If a test has been provided in your config, predictions are generated and written to disk automatically upon training completion. Otherwise, to make predictions on unseen data a `torch.utils.data` DataLoader object must be constructed. Here we reference our test set to make predictions on. Predictions are saved in `{results_file}.npz` in your `results_dir`.
```
# make predictions on the existing test_loader
predictions = pretrained_trainer.predict(pretrained_trainer.test_loader, results_file="s2ef_results", disable_tqdm=False)
energies = predictions["energy"]
forces = predictions["forces"]
```
| true |
code
| 0.638046 | null | null | null | null |
|
# Image classification - training from scratch demo
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prequisites-and-Preprocessing)
3. [Fine-tuning the Image classification model](#Fine-tuning-the-Image-classification-model)
4. [Set up hosting for the model](#Set-up-hosting-for-the-model)
1. [Import model into hosting](#Import-model-into-hosting)
2. [Create endpoint configuration](#Create-endpoint-configuration)
3. [Create endpoint](#Create-endpoint)
5. [Perform Inference](#Perform-Inference)
## Introduction
Welcome to our end-to-end example of distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon sagemaker image classification algorithm to learn to classify the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Prequisites and Preprocessing
### Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are three parts to this:
* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
* The S3 bucket that you want to use for training and model data
* The Amazon sagemaker image classification docker image which need not be changed
```
%%time
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
bucket='jsimon-sagemaker-us' # customize to your bucket
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
training_image = containers[boto3.Session().region_name]
print(training_image)
```
## Training the Image classification model
The CIFAR-10 dataset consist of images from 10 categories and has 50,000 images with 5,000 images per category.
The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) and the other is a [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.mxnet.io/data/cifar10/. In this example, we will use the recordio format for training and use the training/validation split.
```
import os
import urllib.request
import boto3
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
def upload_to_s3(channel, file):
s3 = boto3.resource('s3')
data = open(file, "rb")
key = channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
# CIFAR-10
download('http://data.mxnet.io/data/cifar10/cifar10_train.rec')
download('http://data.mxnet.io/data/cifar10/cifar10_val.rec')
upload_to_s3('validation/cifar10', 'cifar10_val.rec')
upload_to_s3('train/cifar10', 'cifar10_train.rec')
```
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to setup the training parameters. The next section will explain the parameters in detail.
## Training parameters
There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include:
* **Input specification**: These are the training and validation channels that specify the path where training data is present. These are specified in the "InputDataConfig" section. The main parameters that need to be set is the "ContentType" which can be set to "application/x-recordio" or "application/x-image" based on the input data format and the S3Uri which specifies the bucket and the folder where the data is present.
* **Output specification**: This is specified in the "OutputDataConfig" section. We just need to specify the path where the output can be stored after training
* **Resource config**: This section specifies the type of instance on which to run the training and the number of hosts used for training. If "InstanceCount" is more than 1, then training can be run in a distributed manner.
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
* **num_layers**: The number of layers (depth) for the network. We use 44 in this sample but other values can be used.
* **num_training_samples**: This is the total number of training samples. It is set to 50000 for CIFAR-10 dataset with the current split
* **num_classes**: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For CIFAR-10, we use 10.
* **epochs**: Number of training epochs
* **learning_rate**: Learning rate for training
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run
After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 10 to 12 minutes per epoch on a p2.xlarge machine. The network typically converges after 10 epochs.
```
# The algorithm supports multiple network depth (number of layers). They are 18, 34, 50, 101, 152 and 200
# For this training, we will use 50 layers
num_layers = 44
# we need to specify the input image shape for the training data
image_shape = "3,28,28"
# we also need to specify the number of training samples in the training set
# for CIFAR-10 it is 50000
num_training_samples = 50000
# specify the number of output classes
num_classes = 10
# batch size for training
mini_batch_size = 128
# number of epochs
epochs = 100
# optimizer
optimizer='adam'
# Since we are using transfer learning, we set use_pretrained_model to 1 so that weights can be
# initialized with pre-trained weights
use_pretrained_model = 0
```
# Training
Run the training using Amazon sagemaker CreateTrainingJob API
```
%%time
import time
import boto3
from time import gmtime, strftime
s3 = boto3.client('s3')
# create unique job name
job_name_prefix = 'sagemaker-imageclassification-cifar10'
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
job_name = job_name_prefix + timestamp
training_params = \
{
# specify the training docker image
"AlgorithmSpecification": {
"TrainingImage": training_image,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": 's3://{}/{}/output'.format(bucket, job_name_prefix)
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.p2.xlarge",
"VolumeSizeInGB": 50
},
"TrainingJobName": job_name,
"HyperParameters": {
"image_shape": image_shape,
"num_layers": str(num_layers),
"num_training_samples": str(num_training_samples),
"num_classes": str(num_classes),
"mini_batch_size": str(mini_batch_size),
"epochs": str(epochs),
"learning_rate": str(learning_rate),
"use_pretrained_model": str(use_pretrained_model)
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 360000
},
#Training data should be inside a subdirectory called "train"
#Validation data should be inside a subdirectory called "validation"
#The algorithm currently only supports fullyreplicated model (where data is copied onto each machine)
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": 's3://{}/train/cifar10'.format(bucket),
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "application/x-recordio",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": 's3://{}/validation/cifar10'.format(bucket),
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "application/x-recordio",
"CompressionType": "None"
}
]
}
print('Training job name: {}'.format(job_name))
print('\nInput Data Location: {}'.format(training_params['InputDataConfig'][0]['DataSource']['S3DataSource']))
# create the Amazon SageMaker training job
sagemaker = boto3.client(service_name='sagemaker')
sagemaker.create_training_job(**training_params)
# confirm that the training job has started
status = sagemaker.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print('Training job current status: {}'.format(status))
try:
# wait for the job to finish and report the ending status
sagemaker.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info['TrainingJobStatus']
print("Training job ended with status: " + status)
except:
print('Training failed to start')
# if exception is raised, that means it has failed
message = sagemaker.describe_training_job(TrainingJobName=job_name)['FailureReason']
print('Training failed with the following error: {}'.format(message))
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info['TrainingJobStatus']
print("Training job ended with status: " + status)
```
If you see the message,
> `Training job ended with status: Completed`
then that means training sucessfully completed and the output model was stored in the output path specified by `training_params['OutputDataConfig']`.
You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab.
## Plot training and validation accuracies
```
import boto3
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
client = boto3.client('logs')
lgn='/aws/sagemaker/TrainingJobs'
lsn='sagemaker-imageclassification-cifar10-2018-01-16-10-31-05/algo-1-1516099203'
log=client.get_log_events(logGroupName=lgn, logStreamName=lsn)
trn_accs=[]
val_accs=[]
for e in log['events']:
msg=e['message']
if 'Validation-accuracy' in msg:
val = msg.split("=")
val = val[1]
val_accs.append(float(val))
if 'Train-accuracy' in msg:
trn = msg.split("=")
trn = trn[1]
trn_accs.append(float(trn))
print("Maximum validation accuracy: %f " % max(val_accs))
fig, ax = plt.subplots()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
trn_plot, = ax.plot(range(epochs), trn_accs, label="Training accuracy")
val_plot, = ax.plot(range(epochs), val_accs, label="Validation accuracy")
plt.legend(handles=[trn_plot,val_plot])
ax.yaxis.set_ticks(np.arange(0.4, 1.05, 0.05))
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.2f'))
plt.show()
```
# Inference
***
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the topic mixture representing a given document.
This section involves several steps,
1. [Create Model](#CreateModel) - Create model for the training output
1. [Create Endpoint Configuration](#CreateEndpointConfiguration) - Create a configuration defining an endpoint.
1. [Create Endpoint](#CreateEndpoint) - Use the configuration to create an inference endpoint.
1. [Perform Inference](#Perform Inference) - Perform inference on some input data using the endpoint.
## Create Model
We now create a SageMaker Model from the training output. Using the model we can create an Endpoint Configuration.
```
%%time
import boto3
from time import gmtime, strftime
sage = boto3.Session().client(service_name='sagemaker')
model_name="test-image-classification-model-cifar-10epochs"
print(model_name)
info = sage.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
hosting_image = containers[boto3.Session().region_name]
primary_container = {
'Image': hosting_image,
'ModelDataUrl': model_data,
}
create_model_response = sage.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
```
### Create Endpoint Configuration
At launch, we will support configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way.
In addition, the endpoint configuration describes the instance type required for model deployment, and at launch will describe the autoscaling configuration.
```
from time import gmtime, strftime
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_config_name = job_name_prefix + '-epc-' + timestamp
endpoint_config_response = sage.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print('Endpoint configuration name: {}'.format(endpoint_config_name))
print('Endpoint configuration arn: {}'.format(endpoint_config_response['EndpointConfigArn']))
```
### Create Endpoint
Lastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
```
%%time
import time
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_name = job_name_prefix + '-ep-' + timestamp
print('Endpoint name: {}'.format(endpoint_name))
endpoint_params = {
'EndpointName': endpoint_name,
'EndpointConfigName': endpoint_config_name,
}
endpoint_response = sagemaker.create_endpoint(**endpoint_params)
print('EndpointArn = {}'.format(endpoint_response['EndpointArn']))
```
Finally, now the endpoint can be created. It may take sometime to create the endpoint...
```
# get the status of the endpoint
response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = response['EndpointStatus']
print('EndpointStatus = {}'.format(status))
# wait until the status has changed
sagemaker.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
# print the status of the endpoint
endpoint_response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
print('Endpoint creation ended with EndpointStatus = {}'.format(status))
if status != 'InService':
raise Exception('Endpoint creation failed.')
```
If you see the message,
> `Endpoint creation ended with EndpointStatus = InService`
then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console.
We will finally create a runtime object from which we can invoke the endpoint.
## Perform Inference
Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
```
import boto3
runtime = boto3.Session().client(service_name='runtime.sagemaker')
```
### Download test image
```
# Bird
#!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2015/12/19/10/54/bird-1099639_960_720.jpg
# Horse
#!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2016/02/15/13/26/horse-1201143_960_720.jpg
# Dog
!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2016/02/19/15/46/dog-1210559_960_720.jpg
# Truck
# Truck
#!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2015/09/29/10/14/truck-truck-963637_960_720.jpg
file_name = '/tmp/test.jpg'
# test image
from IPython.display import Image
Image(file_name)
import json
import numpy as np
with open(file_name, 'rb') as f:
payload = f.read()
payload = bytearray(payload)
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/x-image',
Body=payload)
result = response['Body'].read()
# result will be in json format and convert it to ndarray
result = json.loads(result)
print(result)
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
```
### Clean up
When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
```
sage.delete_endpoint(EndpointName=endpoint_name)
```
| true |
code
| 0.598606 | null | null | null | null |
|
Проект команды **paranormal** в рамках домашнего задания Летней Школы **МТС.Тета**, направление "Машинное обучение"
#### Загрузка и настройка необходимых библиотек
```
import pickle
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from scipy.stats import pointbiserialr
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
sns.set_theme(style='whitegrid', palette='deep')
warnings.filterwarnings('ignore')
```
## 1. Анализ данных
### 1.1. Предобработка датасета
```
# загрузка данных
data = pd.read_csv('data/diabetes.csv')
# убираем дубликаты
data = data.drop_duplicates()
# провидим в однообразное написание название переменных
data.columns = [c.replace(' ', '_').lower() for c in data.columns]
# заменяем значения 'Female', 'No' и 'Negative' на 0, 'Male', 'Yes' и 'Positive' - на 1
data = data.replace(["Yes", 'No', 'Male', 'Female', 'Positive', 'Negative'], [1, 0, 1, 0, 1, 0])
# сохраняем загруженные данные в отдельный датасет
df_diabetes = data.copy()
df_diabetes.head(5)
```
#### Переменные:
- polyuria - полиурия (увеличенное образование мочи)
- polydipsia - полидипсия (неутолимая жажда)
- sudden weight loss - внезапная потеря веса
- weakness - слабость
- polyphagia - полифагия (повышенный аппетит)
- genital thrush - генитальная молочница
- visual blurring - расплывчатость зрения
- itching - зуд
- irritability - раздражительность
- delayed healing - медленное заживление ран
- partial paresis - частичный парез (потеря мышечной силы)
- muscle stiffness - жесткость мышц
- alopecia - алопеция (выпадение волос)
- obesity - ожирение
### 2.2. Разведочный анализ данных
```
print('Уникальные значения переменных')
for col in df_diabetes.columns:
print(col, df_diabetes[col].unique())
df_diabetes.info()
```
<div class="alert alert-block alert-info"><b>
Пропущенных значений нет, нет необходимости в обработке пропусков
</div>
```
df_diabetes.describe()
```
<div class="alert alert-block alert-info"><b>
<p>В датасете данные пациентов в возрасте от 16 до 90 лет, медиана - 48 лет, средний 48.9 лет. </p>
<p>Остальные переменные - бинарные. </p>
<p>Датасет по целевому классу достаточно сбалансирован: 69% на 31%.
</div>
```
sns.distplot(df_diabetes['age'], bins=20);
sns.displot(data=data, x='age', hue='class', kde = True);
sns.pairplot(df_diabetes, hue='class', corner=True);
sns.pairplot(df_diabetes[['gender', 'class']], hue='class');
round(df_diabetes[df_diabetes['class'] == 1].groupby(['gender'])['weakness'].count() / df_diabetes[df_diabetes['class'] == 1]['weakness'].count() * 100, 2)
fig, ax = plt.subplots(figsize=(15,12))
sns.heatmap(df_diabetes.corr(method='pearson'), center=0, square=False, annot=True, ax=ax);
pointbiserialr(df_diabetes.iloc[:, 1], df_diabetes.age)
```
<div class="alert alert-block alert-info"><b>
Основные выводы
</div>
<div class="alert alert-block alert-info"><b>
1) Диабет, особенно 2-го типа, наиболее распространен среди мужчин, чем среди женщин. https://www.news-medical.net/health/Diabetes-in-Men-versus-Women.aspx
2) Целевой класс сильно коррелирует с переменными полиурия и полидипсия. https://www.jdrf.org/t1d-resources/about/symptoms/frequent-urination/
3) Также целевой класс коррелирует с внезапной потерей веса. https://www.medicinenet.com/is_weight_loss_caused_by_diabetes_dangerous/ask.htm
4) Перечисленные в исходных данных признаки (полиурия, полидипсия, внезапная потеря веса, слабость, повышенный аппетит, ожирание, зуд и т.п.) являются симптомами сахарного диабета. Стоит отметить, что чем выше стадия сахарного диабета, тем заметнее проявление симптомов.
5) Указан признак полиурия, но помимо этого возможно также ночное недержание. Можно добавить и такие признаки, как онемение и покалывание в руках и ногах, повышеная потливость, быстрая утомляемость, нехватка энергии, сильная усталость и сухость во рту из-за чувства жажды.
6) На представленных данных можно построить модель. В будущем в данные можно будет добавить указанные выше симптомы, а также расширить географию сбора данных.
7) Признаки не противоречат друг другу, данные соответствуют гипотезе.
</div>
## 2. Моделирование
<div class="alert alert-block alert-info"><b>
Решая поставленную задачу, мы испробовали несколько методов машинного обучения, включая логистическую регрессию, градиентный бустинг и случайный лес. Лучший результат на наших данных показал случайный лес по метрике F1. Приводим код только финальной модели.</div>
```
X, y = df_diabetes.drop('class', 1), df_diabetes['class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, shuffle=True)
param_grid = {
'n_estimators': np.arange(5, 51, 15),
'max_depth': np.arange(5, 51, 15),
'min_samples_split': np.arange(2, 11, 4),
'min_samples_leaf': np.arange(1, 10, 4),
'max_samples': np.arange(0.1, 0.99, 0.23),
}
%%time
rf = RandomForestClassifier(n_jobs=-1, random_state=42)
cv = GridSearchCV(rf, param_grid, cv=3).fit(X_train, y_train)
cv.best_params_
cv.best_estimator_
y_pred = cv.best_estimator_.predict(X_test)
conf_mat = confusion_matrix(y_test, y_pred)
ax = plt.subplot()
sns.heatmap(conf_mat / np.sum(conf_mat), annot=True, fmt='.2%', cmap='Blues', ax=ax)
ax.set_title('Confusion Matrix')
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.xaxis.set_ticklabels(['healthy', 'sick'])
ax.yaxis.set_ticklabels(['healthy', 'sick'])
def print_metrics(y_true, y_pred):
print(f'f1_score: {f1_score(y_true, y_pred):.4f}')
print(f'recall_score: {recall_score(y_true, y_pred):.4f}')
print(f'precision_score: {precision_score(y_true, y_pred):.4f}')
print_metrics(y_test, y_pred)
```
<div class="alert alert-block alert-info"><b>
Полученное значение F1 меры соответствует ожидаемому качеству модели. </div>
## 3. Сохраняем модель в бинарный файл
```
model_path = 'random_forest_diabet.pkl'
with open(model_path, 'wb') as file:
pickle.dump(cv.best_estimator_, file)
```
## 4. Загружаем модель и проверяем метрики
```
with open(model_path, 'rb') as file:
loaded_model = pickle.load(file)
loaded_model
print_metrics(y_test, loaded_model.predict(X_test))
```
| true |
code
| 0.430866 | null | null | null | null |
|
# Testing HLS Module
The HLS module simply copies the input image to the output image (passthrough)
The project builds on the VDMA demo.
## Project sources can be found here
[HLS Passthrough Demo](https://github.com/CospanDesign/pynq-hdl/tree/master/Projects/Simple%20HLS%20VDMA)
```
import cv2
import numpy as np
def cvtcolor_rgb2yuv422(rgb):
yuv422 =np.zeros((rgb.shape[0], rgb.shape[1], 2)).astype(np.uint8)
yuv444 = cv2.cvtColor(rgb, cv2.COLOR_BGR2YUV);
# chroma subsampling: yuv444 -> yuv422;
for row in range(yuv444.shape[0]):
for col in range(0, yuv444.shape[1], 2):
p0_in = yuv444[row, col]
p1_in = yuv444[row, col + 1]
p0_out = [p0_in[0], p0_in[1]]
p1_out = [p1_in[0], p0_in[2]]
yuv422[row, col] = p0_out
yuv422[row, col + 1] = p1_out
return yuv422
```
# Open and Convert the Image to a usable format
Open the image and convert it to YUV422
Perform the conversion in a seperate cell than below because the conversion takes a long time.
```
# %matplotlib inline
from matplotlib import pyplot as plt
#Create a YUV422 Image So we don't need to keep regenerating it
IMAGE_FILE = "../data/test_1080p.bmp"
image_in = cv2.imread(IMAGE_FILE)
image_yuv = cvtcolor_rgb2yuv422(image_in)
#SHOW IMAGE
image_out = cv2.cvtColor(image_yuv, cv2.COLOR_YUV2BGR_YUYV)
plt.imshow(image_out)
plt.show()
```
# Perform the Image Processing
1. Program the FPGA.
2. Configure the Egress and Ingress Video DMA cores and configure them to take in images with the with and height the same as the image opened.
3. Configure the Image Processor.
4. Send down the image to the memory accessable by the FPGA.
5. Intitate the VDMA Transfer.
6. Wait for the transfer to finish.
7. Read back and display the image
```
# %matplotlib inline
from time import sleep
from pynq import Overlay
from pynq.drivers import VDMA
from image_processor import ImageProcessor
import cv2
from matplotlib import pyplot as plt
from IPython.display import Image
import numpy as np
#Constants
BITFILE_NAME = "hls_passthrough.bit"
EGRESS_VDMA_NAME = "SEG_axi_vdma_0_Reg"
INGRESS_VDMA_NAME = "SEG_axi_vdma_1_Reg"
HLS_NAME = "SEG_image_filter_0_Reg"
# Set Debug to true to enable debug messages from the VDMA core
DEBUG = False
#DEBUG = True
# Set Verbose to true to dump a lot of messages about
VERBOSE = False
#VERBOSE = True
#These can be set between 0 - 2, the VDMA can also be configured for up to 32 frames in 32-bit memspace and 16 in 64-bit memspace
EGRESS_FRAME_INDEX = 0
INGRESS_FRAME_INDEX = 0
IMAGE_WIDTH = image_yuv.shape[1]
IMAGE_HEIGHT = image_yuv.shape[0]
print ("Image Size: %dx%d" % (IMAGE_WIDTH, IMAGE_HEIGHT))
#Download Images
ol = Overlay(BITFILE_NAME)
ol.download()
vdma_egress = VDMA(name = EGRESS_VDMA_NAME, debug = DEBUG)
vdma_ingress = VDMA(name = INGRESS_VDMA_NAME, debug = DEBUG)
image_processor = ImageProcessor(HLS_NAME)
image_processor.set_image_width(IMAGE_WIDTH)
image_processor.set_image_height(IMAGE_HEIGHT)
image_processor.enable(True)
#print ("Image Processor Enabled? %s" % image_processor.is_enabled())
#Set the size of the image
vdma_egress.set_image_size(IMAGE_WIDTH, IMAGE_HEIGHT, color_depth = 2)
vdma_ingress.set_image_size(IMAGE_WIDTH, IMAGE_HEIGHT, color_depth = 2)
#The above functions created the video frames
#Populate the frame
frame = vdma_egress.get_frame(EGRESS_FRAME_INDEX)
frame.set_bytearray(bytearray(image_yuv.astype(np.int8).tobytes()))
print ("Frame width, height: %d, %d" % (frame.width, frame.height))
print ("")
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("")
print ("Enabling One of the Engine")
#Open Up the Ingress Side
vdma_ingress.start_ingress_engine( continuous = False,
num_frames = 1,
frame_index = INGRESS_FRAME_INDEX,
interrupt = False)
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
print ("")
print ("Enabling Both Engines")
#Quick Start
vdma_egress.start_egress_engine( continuous = False,
num_frames = 1,
frame_index = EGRESS_FRAME_INDEX,
interrupt = False)
print ("")
print ("Both of the engines should be halted after transferring one frame")
#XXX: I think this sleep isn't needed but the core erroniously reports an engine isn't finished even though it is.
#XXX: This sleep line can be commented out but the egress core may report it is not finished.
sleep(0.1)
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
print ("Egress WIP: %d" % vdma_egress.get_wip_egress_frame())
print ("Ingress WIP: %d" % vdma_ingress.get_wip_ingress_frame())
#Check to see if the egress frame point progressed
print ("")
print ("Disabling both engines")
#Disable both
vdma_egress.stop_egress_engine()
vdma_ingress.stop_ingress_engine()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Egress Error: 0x%08X" % vdma_egress.get_egress_error())
print ("Ingress Error: 0x%08X" % vdma_ingress.get_ingress_error())
frame = vdma_ingress.get_frame(INGRESS_FRAME_INDEX)
#frame.save_as_jpeg("./image.jpg")
image_yuv_out = np.ndarray( shape = (IMAGE_HEIGHT, IMAGE_WIDTH, 2),
dtype=np.uint8,
buffer = frame.get_bytearray())
image_rgb_out = cv2.cvtColor(image_yuv_out, cv2.COLOR_YUV2BGR_YUYV)
#SHOW IMAGE
plt.imshow(image_rgb_out)
plt.show()
```
| true |
code
| 0.356405 | null | null | null | null |
|
# Interactive Demo for Metrics
* command line executables: see README.md
* algorithm documentation: [metrics.py API & Algorithm Documentation](metrics.py_API_Documentation.ipynb)
* **make sure you enabled interactive widgets via: **
```
sudo jupyter nbextension enable --py --sys-prefix widgetsnbextension
```
* **make sure you use the correct Kernel** matching the your evo Python version (otherwise use the menu Kernel->Change..
...some modules and settings for this demo:
```
from __future__ import print_function
from evo.tools import log
log.configure_logging()
from evo.tools import plot
from evo.tools.plot import PlotMode
from evo.core.metrics import PoseRelation, Unit
from evo.tools.settings import SETTINGS
# temporarily override some package settings
SETTINGS.plot_figsize = [6, 6]
SETTINGS.plot_split = True
SETTINGS.plot_usetex = False
# magic plot configuration
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib notebook
# interactive widgets configuration
import ipywidgets
check_opts_ape = {"align": False, "correct_scale": False, "show_plot": True}
check_boxes_ape=[ipywidgets.Checkbox(description=desc, value=val) for desc, val in check_opts_ape.items()]
check_opts_rpe = {"align": False, "correct_scale": False, "all_pairs": False, "show_plot": True}
check_boxes_rpe=[ipywidgets.Checkbox(description=desc, value=val) for desc, val in check_opts_rpe.items()]
delta_input = ipywidgets.FloatText(value=1.0, description='delta', disabled=False, color='black')
du_selector=ipywidgets.Dropdown(
options={u.value: u for u in Unit},
value=Unit.frames, description='delta_unit'
)
pm_selector=ipywidgets.Dropdown(
options={p.value: p for p in PlotMode},
value=PlotMode.xy, description='plot_mode'
)
pr_selector=ipywidgets.Dropdown(
options={p.value: p for p in PoseRelation},
value=PoseRelation.translation_part, description='pose_relation'
)
```
---
## Load trajectories
```
from evo.tools import file_interface
from evo.core import sync
```
**Load KITTI files** with entries of the first three rows of $\mathrm{SE}(3)$ matrices per line (no timestamps):
```
traj_ref = file_interface.read_kitti_poses_file("../test/data/KITTI_00_gt.txt")
traj_est = file_interface.read_kitti_poses_file("../test/data/KITTI_00_ORB.txt")
```
**...or load a ROS bagfile** with `geometry_msgs/PoseStamped` topics:
```
try:
import rosbag
bag_handle = rosbag.Bag("../test/data/ROS_example.bag")
traj_ref = file_interface.read_bag_trajectory(bag_handle, "groundtruth")
traj_est = file_interface.read_bag_trajectory(bag_handle, "ORB-SLAM")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
except ImportError as e:
print(e) # ROS not found
```
**... or load TUM files with** 3D position and orientation quaternion per line ($x$ $y$ $z$ $q_x$ $q_y$ $q_z$ $q_w$):
```
traj_ref = file_interface.read_tum_trajectory_file("../test/data/fr2_desk_groundtruth.txt")
traj_est = file_interface.read_tum_trajectory_file("../test/data/fr2_desk_ORB_kf_mono.txt")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
print(traj_ref)
print(traj_est)
```
---
## APE
Algorithm and API explanation: [see here](metrics.py_API_Documentation.ipynb#ape_math)
### Interactive APE Demo
***Run the code below, configure the parameters in the GUI and press the update button.***
(uses the trajectories loaded above)
```
import evo.main_ape as main_ape
import evo.common_ape_rpe as common
count = 0
results = []
def callback_ape(pose_relation, align, correct_scale, plot_mode, show_plot):
global results, count
est_name="APE Test #{}".format(count)
result = main_ape.ape(traj_ref, traj_est, est_name=est_name,
pose_relation=pose_relation, align=align, correct_scale=correct_scale)
count += 1
results.append(result)
if show_plot:
fig = plt.figure()
ax = plot.prepare_axis(fig, plot_mode)
plot.traj(ax, plot_mode, traj_ref, style="--", alpha=0.5)
plot.traj_colormap(
ax, result.trajectories[est_name], result.np_arrays["error_array"], plot_mode,
min_map=result.stats["min"], max_map=result.stats["max"])
_ = ipywidgets.interact_manual(callback_ape, pose_relation=pr_selector, plot_mode=pm_selector,
**{c.description: c.value for c in check_boxes_ape})
```
---
## RPE
Algorithm and API explanation: [see here](metrics.py_API_Documentation.ipynb#rpe_math)
### Interactive RPE Demo
***Run the code below, configure the parameters in the GUI and press the update button.***
(uses the trajectories loaded above, alignment only useful for visualization here)
```
import evo.main_rpe as main_rpe
count = 0
results = []
def callback_rpe(pose_relation, delta, delta_unit, all_pairs, align, correct_scale, plot_mode, show_plot):
global results, count
est_name="RPE Test #{}".format(count)
result = main_rpe.rpe(traj_ref, traj_est, est_name=est_name,
pose_relation=pose_relation, delta=delta, delta_unit=delta_unit,
all_pairs=all_pairs, align=align, correct_scale=correct_scale,
support_loop=True)
count += 1
results.append(result)
if show_plot:
fig = plt.figure()
ax = plot.prepare_axis(fig, plot_mode)
plot.traj(ax, plot_mode, traj_ref, style="--", alpha=0.5)
plot.traj_colormap(
ax, result.trajectories[est_name], result.np_arrays["error_array"], plot_mode,
min_map=result.stats["min"], max_map=result.stats["max"])
_ = ipywidgets.interact_manual(callback_rpe, pose_relation=pr_selector, plot_mode=pm_selector,
delta=delta_input, delta_unit=du_selector,
**{c.description: c.value for c in check_boxes_rpe})
```
Do stuff with the result objects:
```
import pandas as pd
from evo.tools import pandas_bridge
df = pd.DataFrame()
for result in results:
df = pd.concat((df, pandas_bridge.result_to_df(result)), axis="columns")
df
df.loc["stats"]
```
| true |
code
| 0.544256 | null | null | null | null |
|
<a id='ar1'></a>
<div id="qe-notebook-header" align="right" style="text-align:right;">
<a href="https://quantecon.org/" title="quantecon.org">
<img style="width:250px;display:inline;" width="250px" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
</a>
</div>
# AR1 Processes
<a id='index-0'></a>
## Contents
- [AR1 Processes](#AR1-Processes)
- [Overview](#Overview)
- [The AR(1) Model](#The-AR(1)-Model)
- [Stationarity and Asymptotic Stability](#Stationarity-and-Asymptotic-Stability)
- [Ergodicity](#Ergodicity)
- [Exercises](#Exercises)
- [Solutions](#Solutions)
## Overview
In this lecture we are going to study a very simple class of stochastic
models called AR(1) processes.
These simple models are used again and again in economic research to represent the dynamics of series such as
- labor income
- dividends
- productivity, etc.
AR(1) processes can take negative values but are easily converted into positive processes when necessary by a transformation such as exponentiation.
We are going to study AR(1) processes partly because they are useful and
partly because they help us understand important concepts.
Let’s start with some imports:
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## The AR(1) Model
The **AR(1) model** (autoregressive model of order 1) takes the form
<a id='equation-can-ar1'></a>
$$
X_{t+1} = a X_t + b + c W_{t+1} \tag{1}
$$
where $ a, b, c $ are scalar-valued parameters.
This law of motion generates a time series $ \{ X_t\} $ as soon as we
specify an initial condition $ X_0 $.
This is called the **state process** and the state space is $ \mathbb R $.
To make things even simpler, we will assume that
- the process $ \{ W_t \} $ is IID and standard normal,
- the initial condition $ X_0 $ is drawn from the normal distribution $ N(\mu_0, v_0) $ and
- the initial condition $ X_0 $ is independent of $ \{ W_t \} $.
### Moving Average Representation
Iterating backwards from time $ t $, we obtain
$$
X_t = a X_{t-1} + b + c W_t
= a^2 X_{t-2} + a b + a c W_{t-1} + b + c W_t
= \cdots
$$
If we work all the way back to time zero, we get
<a id='equation-ar1-ma'></a>
$$
X_t = a^t X_0 + b \sum_{j=0}^{t-1} a^j +
c \sum_{j=0}^{t-1} a^j W_{t-j} \tag{2}
$$
Equation [(2)](#equation-ar1-ma) shows that $ X_t $ is a well defined random variable, the value of which depends on
- the parameters,
- the initial condition $ X_0 $ and
- the shocks $ W_1, \ldots W_t $ from time $ t=1 $ to the present.
Throughout, the symbol $ \psi_t $ will be used to refer to the
density of this random variable $ X_t $.
### Distribution Dynamics
One of the nice things about this model is that it’s so easy to trace out the sequence of distributions $ \{ \psi_t \} $ corresponding to the time
series $ \{ X_t\} $.
To see this, we first note that $ X_t $ is normally distributed for each $ t $.
This is immediate form [(2)](#equation-ar1-ma), since linear combinations of independent
normal random variables are normal.
Given that $ X_t $ is normally distributed, we will know the full distribution
$ \psi_t $ if we can pin down its first two moments.
Let $ \mu_t $ and $ v_t $ denote the mean and variance
of $ X_t $ respectively.
We can pin down these values from [(2)](#equation-ar1-ma) or we can use the following
recursive expressions:
<a id='equation-dyn-tm'></a>
$$
\mu_{t+1} = a \mu_t + b
\quad \text{and} \quad
v_{t+1} = a^2 v_t + c^2 \tag{3}
$$
These expressions are obtained from [(1)](#equation-can-ar1) by taking, respectively, the expectation and variance of both sides of the equality.
In calculating the second expression, we are using the fact that $ X_t $
and $ W_{t+1} $ are independent.
(This follows from our assumptions and [(2)](#equation-ar1-ma).)
Given the dynamics in [(2)](#equation-ar1-ma) and initial conditions $ \mu_0,
v_0 $, we obtain $ \mu_t, v_t $ and hence
$$
\psi_t = N(\mu_t, v_t)
$$
The following code uses these facts to track the sequence of marginal
distributions $ \{ \psi_t \} $.
The parameters are
```
a, b, c = 0.9, 0.1, 0.5
mu, v = -3.0, 0.6 # initial conditions mu_0, v_0
```
Here’s the sequence of distributions:
```
from scipy.stats import norm
sim_length = 10
grid = np.linspace(-5, 7, 120)
fig, ax = plt.subplots()
for t in range(sim_length):
mu = a * mu + b
v = a**2 * v + c**2
ax.plot(grid, norm.pdf(grid, loc=mu, scale=np.sqrt(v)),
label=f"$\psi_{t}$",
alpha=0.7)
ax.legend(bbox_to_anchor=[1.05,1],loc=2,borderaxespad=1)
plt.show()
```
## Stationarity and Asymptotic Stability
Notice that, in the figure above, the sequence $ \{ \psi_t \} $ seems to be converging to a limiting distribution.
This is even clearer if we project forward further into the future:
```
def plot_density_seq(ax, mu_0=-3.0, v_0=0.6, sim_length=60):
mu, v = mu_0, v_0
for t in range(sim_length):
mu = a * mu + b
v = a**2 * v + c**2
ax.plot(grid,
norm.pdf(grid, loc=mu, scale=np.sqrt(v)),
alpha=0.5)
fig, ax = plt.subplots()
plot_density_seq(ax)
plt.show()
```
Moreover, the limit does not depend on the initial condition.
For example, this alternative density sequence also converges to the same limit.
```
fig, ax = plt.subplots()
plot_density_seq(ax, mu_0=3.0)
plt.show()
```
In fact it’s easy to show that such convergence will occur, regardless of the initial condition, whenever $ |a| < 1 $.
To see this, we just have to look at the dynamics of the first two moments, as
given in [(3)](#equation-dyn-tm).
When $ |a| < 1 $, these sequence converge to the respective limits
<a id='equation-mu-sig-star'></a>
$$
\mu^* := \frac{b}{1-a}
\quad \text{and} \quad
v^* = \frac{c^2}{1 - a^2} \tag{4}
$$
(See our [lecture on one dimensional dynamics](scalar_dynam.ipynb) for background on deterministic convergence.)
Hence
<a id='equation-ar1-psi-star'></a>
$$
\psi_t \to \psi^* = N(\mu^*, v^*)
\quad \text{as }
t \to \infty \tag{5}
$$
We can confirm this is valid for the sequence above using the following code.
```
fig, ax = plt.subplots()
plot_density_seq(ax, mu_0=3.0)
mu_star = b / (1 - a)
std_star = np.sqrt(c**2 / (1 - a**2)) # square root of v_star
psi_star = norm.pdf(grid, loc=mu_star, scale=std_star)
ax.plot(grid, psi_star, 'k-', lw=2, label="$\psi^*$")
ax.legend()
plt.show()
```
As claimed, the sequence $ \{ \psi_t \} $ converges to $ \psi^* $.
### Stationary Distributions
A stationary distribution is a distribution that is a fixed
point of the update rule for distributions.
In other words, if $ \psi_t $ is stationary, then $ \psi_{t+j} =
\psi_t $ for all $ j $ in $ \mathbb N $.
A different way to put this, specialized to the current setting, is as follows: a
density $ \psi $ on $ \mathbb R $ is **stationary** for the AR(1) process if
$$
X_t \sim \psi
\quad \implies \quad
a X_t + b + c W_{t+1} \sim \psi
$$
The distribution $ \psi^* $ in [(5)](#equation-ar1-psi-star) has this property —
checking this is an exercise.
(Of course, we are assuming that $ |a| < 1 $ so that $ \psi^* $ is
well defined.)
In fact, it can be shown that no other distribution on $ \mathbb R $ has this property.
Thus, when $ |a| < 1 $, the AR(1) model has exactly one stationary density and that density is given by $ \psi^* $.
## Ergodicity
The concept of ergodicity is used in different ways by different authors.
One way to understand it in the present setting is that a version of the Law
of Large Numbers is valid for $ \{X_t\} $, even though it is not IID.
In particular, averages over time series converge to expectations under the
stationary distribution.
Indeed, it can be proved that, whenever $ |a| < 1 $, we have
<a id='equation-ar1-ergo'></a>
$$
\frac{1}{m} \sum_{t = 1}^m h(X_t) \to
\int h(x) \psi^*(x) dx
\quad \text{as } m \to \infty \tag{6}
$$
whenever the integral on the right hand side is finite and well defined.
Notes:
- In [(6)](#equation-ar1-ergo), convergence holds with probability one.
- The textbook by [[MT09]](zreferences.ipynb#meyntweedie2009) is a classic reference on ergodicity.
For example, if we consider the identity function $ h(x) = x $, we get
$$
\frac{1}{m} \sum_{t = 1}^m X_t \to
\int x \psi^*(x) dx
\quad \text{as } m \to \infty
$$
In other words, the time series sample mean converges to the mean of the
stationary distribution.
As will become clear over the next few lectures, ergodicity is a very
important concept for statistics and simulation.
## Exercises
### Exercise 1
Let $ k $ be a natural number.
The $ k $-th central moment of a random variable is defined as
$$
M_k := \mathbb E [ (X - \mathbb E X )^k ]
$$
When that random variable is $ N(\mu, \sigma^2) $, it is know that
$$
M_k =
\begin{cases}
0 & \text{ if } k \text{ is odd} \\
\sigma^k (k-1)!! & \text{ if } k \text{ is even}
\end{cases}
$$
Here $ n!! $ is the double factorial.
According to [(6)](#equation-ar1-ergo), we should have, for any $ k \in \mathbb N $,
$$
\frac{1}{m} \sum_{t = 1}^m
(X_t - \mu^* )^k
\approx M_k
$$
when $ m $ is large.
Confirm this by simulation at a range of $ k $ using the default parameters from the lecture.
### Exercise 2
Write your own version of a one dimensional [kernel density
estimator](https://en.wikipedia.org/wiki/Kernel_density_estimation),
which estimates a density from a sample.
Write it as a class that takes the data $ X $ and bandwidth
$ h $ when initialized and provides a method $ f $ such that
$$
f(x) = \frac{1}{hn} \sum_{i=1}^n
K \left( \frac{x-X_i}{h} \right)
$$
For $ K $ use the Gaussian kernel ($ K $ is the standard normal
density).
Write the class so that the bandwidth defaults to Silverman’s rule (see
the “rule of thumb” discussion on [this
page](https://en.wikipedia.org/wiki/Kernel_density_estimation)). Test
the class you have written by going through the steps
1. simulate data $ X_1, \ldots, X_n $ from distribution $ \phi $
1. plot the kernel density estimate over a suitable range
1. plot the density of $ \phi $ on the same figure
for distributions $ \phi $ of the following types
- [beta
distribution](https://en.wikipedia.org/wiki/Beta_distribution)
with $ \alpha = \beta = 2 $
- [beta
distribution](https://en.wikipedia.org/wiki/Beta_distribution)
with $ \alpha = 2 $ and $ \beta = 5 $
- [beta
distribution](https://en.wikipedia.org/wiki/Beta_distribution)
with $ \alpha = \beta = 0.5 $
Use $ n=500 $.
Make a comment on your results. (Do you think this is a good estimator
of these distributions?)
### Exercise 3
In the lecture we discussed the following fact: For the $ AR(1) $ process
$$
X_{t+1} = a X_t + b + c W_{t+1}
$$
with $ \{ W_t \} $ iid and standard normal,
$$
\psi_t = N(\mu, s^2) \implies \psi_{t+1}
= N(a \mu + b, a^2 s^2 + c^2)
$$
Confirm this, at least approximately, by simulation. Let
- $ a = 0.9 $
- $ b = 0.0 $
- $ c = 0.1 $
- $ \mu = -3 $
- $ s = 0.2 $
First, plot $ \psi_t $ and $ \psi_{t+1} $ using the true
distributions described above.
Second, plot $ \psi_{t+1} $ on the same figure (in a different
color) as follows:
1. Generate $ n $ draws of $ X_t $ from the $ N(\mu, s^2) $
distribution
1. Update them all using the rule
$ X_{t+1} = a X_t + b + c W_{t+1} $
1. Use the resulting sample of $ X_{t+1} $ values to produce a
density estimate via kernel density estimation.
Try this for $ n=2000 $ and confirm that the
simulation based estimate of $ \psi_{t+1} $ does converge to the
theoretical distribution.
| true |
code
| 0.689815 | null | null | null | null |
|
<font size=4>**Create Plots**</font>
**Plot with Symbolic Plotting Functions**
MATLAB® provides many techniques for plotting numerical data. Graphical capabilities of MATLAB include plotting tools, standard plotting functions, graphic manipulation and data exploration tools, and tools for printing and exporting graphics to standard formats. Symbolic Math Toolbox™ expands these graphical capabilities and lets you plot symbolic functions using:
- <font color=blue>fplot</font> to create 2-D plots of symbolic expressions, equations, or functions in Cartesian coordinates.
- <font color=blue>fplot3</font> to create 3-D parametric plots.
- <font color=blue>ezpolar</font> to create plots in polar coordinates.
- <font color=blue>fsurf</font> to create surface plots.
- <font color=blue>fcontour</font> to create contour plots.
- <font color=blue>fmesh</font> to create mesh plots.
Plot the symbolic expression $sin(6x)$ by using **fplot**. By default, **fplot** uses the range $−5<x<5$.
```
from sympy import *
x = symbols('x')
plot(sin(6*x),(x,-5,5))
```
Plot a symbolic expression or function in polar coordinates $r$ (radius) and $\theta$ (polar angle) by using **ezpolar**. By default, **ezpolar** plots a symbolic expression or function over the interval $0<\theta<2\pi$.
Plot the symbolic expression $sin(6t)$ in polar coordinates.
```
#syms t
#ezpolar(sin(6*t))
import matplotlib.pyplot as plt
import numpy as np
t = symbols('t')
eqf = lambdify(t,sin(6*t))
angle = np.arange(0,2*np.pi,1/100)
plt.polar(angle,np.abs(eqf(angle)))
plt.title('$r=sin(6t)$')
```
**Plot Functions Numerically**
As an alternative to plotting expressions symbolically, you can substitute symbolic variables with numeric values by using **subs**. Then, you can use these numeric values with plotting functions in MATLAB™.
In the following expressions **u** and **v**, substitute the symbolic variables **x** and **y** with the numeric values defined by **meshgrid**.
```
x,y = symbols('x y')
u = sin(x**2+y**2)
v = cos(x*y)
```
Now, you can plot **U** and **V** by using standard MATLAB plotting functions.
Create a plot of the vector field defined by the functions $U(X,Y)$ and $V(X,Y)$ by using the MATLAB **quiver** function.
```
eqfU = lambdify((x,y),u)
eqfV = lambdify((x,y),v)
X,Y = np.meshgrid(np.arange(-1,1,0.1),np.arange(-1,1,0.1))
plt.quiver(X,Y,eqfU(X,Y),eqfV(X,Y))
```
**Plot Multiple Symbolic Functions in One Graph**
Plot several functions on one graph by adding the functions sequentially. After plotting the first function, add successive functions by using the **hold** on command. The **hold on** command keeps the existing plots. Without the **hold on** command, each new plot replaces any existing plot. After the **hold on** command, each new plot appears on top of existing plots. Switch back to the default behavior of replacing plots by using the **hold off** command.
Plot $f=e^x sin(20x)$ using **fplot**. Show the bounds of **f** by superimposing plots of $e^x$ and $-e^x$ as dashed red lines. Set the title by using the **DisplayName** property of the object returned by **fplot**.
```
x,y = symbols('x y')
f = exp(x)*sin(20*x)
```
$f=sin(20x)e^x$
```
p1 = plot(f,exp(x),-exp(x),(x,0,3))
```
**Plot Multiple Symbolic Functions in One Figure**
Display several functions side-by-side in one figure by dividing the figure window into several subplots using **subplot**. The command **subplot(m,n,p)** divides the figure into a **m** by **n** matrix of subplots and selects the subplot **p**. Display multiple plots in separate subplots by selecting the subplot and using plotting commands. Plotting into multiple subplots is useful for side-by-side comparisons of plots.
Compare plots of $sin\left(\left(x^2+y^2\right)/a\right)$ for $a=10,20,50,100$ by using subplot to create side-by-side subplots.
```
import mpl_toolkits.mplot3d
x,y,a = symbols('x y a')
eqf3 = lambdify((x,y,a),sin((x**2+y**2)/a))
X,Y = np.meshgrid(np.arange(-5,5,0.1),np.arange(-5,5,0.1))
fig = plt.figure(constrained_layout=True)
ax0 = fig.add_subplot(2,2,1,projection='3d')
ax0.plot_surface(X,Y,eqf3(X,Y,10),cmap=plt.cm.viridis) #使用viridis色谱
ax0.set_title('$a=10$',loc='left')
ax1 = fig.add_subplot(2,2,2,projection='3d')
ax1.plot_surface(X,Y,eqf3(X,Y,20),cmap=plt.cm.viridis) #使用viridis色谱
ax1.set_title('$a=20$',loc='left')
ax2 = fig.add_subplot(2,2,3,projection='3d')
ax2.plot_surface(X,Y,eqf3(X,Y,50),cmap=plt.cm.viridis) #使用viridis色谱
ax2.set_title('$a=50$',loc='left')
ax3 = fig.add_subplot(2,2,4,projection='3d')
ax3.plot_surface(X,Y,eqf3(X,Y,100),cmap=plt.cm.viridis) #使用viridis色谱
ax3.set_title('$a=100$',loc='left')
```
**Combine Symbolic Function Plots and Numeric Data Plots**
Plot numeric and symbolic data on the same graph by using MATLAB and Symbolic Math Toolbox functions together.
For numeric values of **x** between $[−5,5]$, return a noisy sine curve by finding $y=sin(x)$ and adding random values to **y**. View the noisy sine curve by using **scatter** to plot the points $(x1,y1),(x2,y2),⋯$.
```
x = np.arange(-5,5,1/10)
y = np.sin(x)+((-1)*np.random.randint(10,size=100)*np.random.rand(100))/8
fig,ax = plt.subplots()
ax.scatter(x,y,c='w',edgecolors='#1f77b4')
```
Show the underlying structure in the points by superimposing a plot of the sine function. First, use **hold on** to retain the **scatter** plot. Then, use **fplot** to plot the sine function.
```
#hold on
#syms t
#fplot(sin(t))
#hold off
t = symbols('t')
eqft = lambdify(t,sin(t))
fig,ax = plt.subplots()
ax.scatter(x,y,c='w',edgecolors='#1f77b4')
ax.plot(x,eqft(x))
```
**Combine Numeric and Symbolic Plots in 3-D**
Combine symbolic and numeric plots in 3-D by using MATLAB and Symbolic Math Toolbox plotting functions. Symbolic Math Toolbox provides these 3-D plotting functions:
- <font color=blue>fplot3</font> creates 3-D parameterized line plots.
- <font color=blue>fsurf</font> creates 3-D surface plots.
- <font color=blue>fmesh</font> creates 3-D mesh plots.
Create a spiral plot by using **fplot3** to plot the parametric line
$$ x=(1-t)sin(100t)$$
$$ y=(1-t)cos(100t)$$
$$ z=\sqrt{1-x^2-y^2}$$
```
t = symbols('t')
x = (1-t)*sin(100*t)
y = (1-t)*cos(100*t)
z = sqrt(1-x**2-y**2)
eqfx = lambdify(t,x)
eqfy = lambdify(t,y)
eqfz = lambdify(t,z)
X = eqfx(np.arange(0,1,1/1000))
Y = eqfy(np.arange(0,1,1/1000))
Z = eqfz(np.arange(0,1,1/1000))
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(X,Y,Z,linewidth=0.6)
ax.set_title('Symbolic 3-D Parametric Line')
```
Superimpose a plot of a sphere with radius 1 and center at $(0, 0, 0)$. Find points on the sphere numerically by using **sphere**. Plot the sphere by using **mesh**. The resulting plot shows the symbolic parametric line wrapped around the top hemisphere.
```
#hold on
#[X,Y,Z] = sphere;
#mesh(X, Y, Z)
#colormap(gray)
#title('Symbolic Parametric Plot and a Sphere')
#hold off
theta,phi = np.meshgrid(np.linspace(0,2*np.pi,30),np.linspace(0,np.pi,30))
X_sphere = np.sin(phi)*np.cos(theta)
Y_sphere = np.sin(phi)*np.sin(theta)
Z_sphere = np.cos(phi)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_wireframe(X_sphere,Y_sphere,Z_sphere,linewidth=0.2,color='black')
ax.plot(X,Y,Z)
```
| true |
code
| 0.747485 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/seopbo/nlp_tutorials/blob/main/single_text_classification_(nsmc)_LoRa.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Single text classification - LoRa
[LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685)를 GPT에 적용합니다.
- pre-trained language model로는 `skt/kogpt2-base-v2`를 사용합니다.
- https://huggingface.co/skt/kogpt2-base-v2
- single text classification task를 수행하는 예시 데이터셋으로는 `nsmc`를 사용합니다.
- https://huggingface.co/datasets/nsmc
## Setup
어떠한 GPU가 할당되었는 지 아래의 코드 셀을 실행함으로써 확인할 수 있습니다.
```
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
```
아래의 코드 셀을 실행함으로써 본 노트북을 실행하기위한 library를 install하고 load합니다.
```
!pip install torch
!pip install transformers
!pip install datasets
!pip install -U scikit-learn
import torch
import transformers
import datasets
```
## Data preprocessing
1. `skt/kogpt2-base-v2`가 사용한 subword tokenizer를 load합니다.
2. `datasets` library를 이용하여 `nsmc`를 load합니다.
3. 1의 subword tokenizer를 이용 `nsmc`의 data를 single text classification을 수행할 수 있는 형태, train example로 transform합니다.
- `<s> tok 1 ... tok N </s>`로 만들고, 이를 list_of_integers로 transform합니다.

`nsmc`를 load하고, `train_ds`, `valid_ds`, `test_ds`를 생성합니다
```
from datasets import load_dataset
cs = load_dataset("nsmc", split="train")
cs = cs.train_test_split(0.1)
test_cs = load_dataset("nsmc", split="test")
train_cs = cs["train"]
valid_cs = cs["test"]
```
transform을 위한 함수를 정의하고 적용합니다. 먼저 `skt/kogpt2-base-v2`가 사용하는 subword tokenizer의 special tokens들을 확인합니다.
```
from transformers import GPT2TokenizerFast, GPT2Config
test_tokenizer = GPT2TokenizerFast.from_pretrained("skt/kogpt2-base-v2")
print(test_tokenizer.convert_ids_to_tokens(0))
print(test_tokenizer.convert_ids_to_tokens(1))
print(test_tokenizer.convert_ids_to_tokens(2))
print(test_tokenizer.convert_ids_to_tokens(3))
print(test_tokenizer.convert_ids_to_tokens(4))
print(test_tokenizer.convert_ids_to_tokens(5))
```
Figure 1의 classification 유형과 동일하게 처리하기위해서, `build_inputs_with_special_tokens` method를 overriding합니다. `build_inputs_with_special_tokens`를 overriding하면 `prepare_for_model` method 사용 시 그 변경사항이 반영됩니다.
```
from transformers import GPT2TokenizerFast, GPT2Config
class CustomGPT2TokenizerFast(GPT2TokenizerFast):
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A GPT sequence has the following format:
- single sequence: ``<s> X </s>``
- pair of sequences: ``<s> A </s> B </s>``
Args:
token_ids_0 (:obj:`List[int]`):
List of IDs to which the special tokens will be added.
token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
:obj:`List[int]`: List of `input IDs <../glossary.html#input-ids>`__ with the appropriate special tokens.
"""
output = [tokenizer.bos_token_id] + token_ids_0 + [tokenizer.eos_token_id]
if token_ids_1:
output += token_ids_1 + [tokenizer.eos_token_id]
return output
tokenizer = CustomGPT2TokenizerFast.from_pretrained("skt/kogpt2-base-v2")
tokenizer.pad_token = "<pad>"
tokenizer.unk_token = "<unk>"
tokenizer.bos_token = "<s>"
tokenizer.eos_token = "</s>"
config = GPT2Config.from_pretrained("skt/kogpt2-base-v2")
print(tokenizer.__class__)
print(config.__class__)
```
`__call__` method를 사용하지않고 단계적으로 `tokenize`, `convert_tokens_to_ids`, `prepare_for_model` method를 이용하여, `transform` function을 구현합니다.
```
from typing import Union, List, Dict
def transform(sentences: Union[str, List[str]], tokenizer) -> Dict[str, List[List[int]]]:
if not isinstance(sentences, list):
sentences = [sentences]
dicf_of_training_examples: Dict[str, List[List[int]]] = {}
for sentence in sentences:
list_of_tokens = tokenizer.tokenize(sentence)
list_of_ids = tokenizer.convert_tokens_to_ids(list_of_tokens)
training_example = tokenizer.prepare_for_model(list_of_ids, add_special_tokens=True, padding=False, truncation=False)
for key in training_example.keys():
if key not in dicf_of_training_examples:
dicf_of_training_examples.setdefault(key, [])
dicf_of_training_examples[key].append(training_example[key])
return dicf_of_training_examples
samples = train_cs[:2]
transformed_samples = transform(samples["document"], tokenizer)
print(samples)
print(transformed_samples)
train_ds = train_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
valid_ds = valid_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
test_ds = test_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
```
## Prepare model
single text classification을 수행하기위해서 `skt/kogpt2-base-v2`를 load해야합니다. 단 LoRa를 위한 weight를 정의하기위해서 custom class를 작성하는 것이 필요합니다.
### 1. `GPT2Attention`을 subclassing하여 `GPT2AttentionWithLoRa`를 구현
`GPT2Attention`을 상속하여 `__init__`과 `forward` method를 overriding합니다.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2Attention
from transformers.modeling_utils import Conv1D
class GPT2AttentionWithLoRa(GPT2Attention):
def __init__(self, config, is_cross_attention=False, layer_idx=None):
super().__init__(config, is_cross_attention=False, layer_idx=None)
self.c_attn_lora = nn.Sequential(
Conv1D(4, self.embed_dim),
Conv1D(3 * self.embed_dim, 4),
)
self.c_proj_lora = nn.Sequential(
Conv1D(4, self.embed_dim),
Conv1D(self.embed_dim, 4),
)
def forward(
self,
hidden_states,
layer_past=None,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
use_cache=False,
output_attentions=False,
):
if encoder_hidden_states is not None:
if not hasattr(self, "q_attn"):
raise ValueError(
"If class is used as cross attention, the weights `q_attn` have to be defined. "
"Please make sure to instantiate class with `GPT2Attention(..., is_cross_attention=True)`."
)
query = self.q_attn(hidden_states)
key, value = self.c_attn(encoder_hidden_states).split(self.split_size, dim=2)
attention_mask = encoder_attention_mask
else:
query_orig, key_orig, value_orig = self.c_attn(hidden_states).split(self.split_size, dim=2)
# Added codes
query_adpt, key_adpt, value_adpt = self.c_attn_lora(hidden_states).split(self.split_size, dim=2)
query = query_orig + query_adpt
key = key_orig + key_adpt
value = value_orig + value_adpt
query = self._split_heads(query, self.num_heads, self.head_dim)
key = self._split_heads(key, self.num_heads, self.head_dim)
value = self._split_heads(value, self.num_heads, self.head_dim)
if layer_past is not None:
past_key, past_value = layer_past
key = torch.cat((past_key, key), dim=-2)
value = torch.cat((past_value, value), dim=-2)
if use_cache is True:
present = (key, value)
else:
present = None
if self.reorder_and_upcast_attn:
attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask)
else:
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
attn_output_raw = self._merge_heads(attn_output, self.num_heads, self.head_dim)
attn_output_orig = self.c_proj(attn_output_raw)
# Added codes
attn_output_adpt = self.c_proj_lora(attn_output_raw)
attn_output = attn_output_orig + attn_output_adpt
attn_output = self.resid_dropout(attn_output)
outputs = (attn_output, present)
if output_attentions:
outputs += (attn_weights,)
return outputs # a, present, (attentions)
```
### 2. `GPT2Attention`을 subclassing하여 `GPT2BlockWithLoRa`를 구현
`GPT2Block`을 상속하여 `__init__`을 overriding합니다.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2Block, GPT2MLP
from transformers.modeling_utils import Conv1D
class GPT2BlockWithLoRa(GPT2Block):
def __init__(self, config, layer_idx=None):
super().__init__(config, layer_idx)
self.attn = GPT2AttentionWithLoRa(config, layer_idx=layer_idx)
```
### 3. `GPT2Model`을 subclassing하여 `GPT2ModelkWithLoRa`를 구현
`GPT2Model`을 상속하여 `__init__`을 overriding합니다.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2Model
class GPT2ModelWithLoRa(GPT2Model):
def __init__(self, config):
super().__init__(config)
self.h = nn.ModuleList([GPT2BlockWithLoRa(config, layer_idx=i) for i in range(config.num_hidden_layers)])
```
### 4, `GPT2ForSequenceClassification`을 subclassing하여 `GPT2ForSequenceClassificationWithLoRa`를 구현
`GPT2ForSequenceClassification`을 상속하여 `__init__`을 overriding합니다.
```
from torch import nn
from transformers.models.gpt2.modeling_gpt2 import GPT2ForSequenceClassification
from transformers.modeling_utils import Conv1D
class GPT2ForSequenceClassificationWithLoRa(GPT2ForSequenceClassification):
def __init__(self, config):
super().__init__(config)
self.transformer = GPT2ModelWithLoRa(config)
```
### lora 관련 weight만 train 되도록 설정
```
model = GPT2ForSequenceClassificationWithLoRa.from_pretrained("skt/kogpt2-base-v2", num_labels=2)
for named_parameter in model.named_parameters():
if "lora" in named_parameter[0]:
continue
named_parameter[-1].requires_grad_(False)
```
## Training model
`Trainer` class를 이용하여 train합니다.
- https://huggingface.co/transformers/custom_datasets.html?highlight=trainer#fine-tuning-with-trainer
```
import numpy as np
from transformers.data.data_collator import DataCollatorWithPadding
from sklearn.metrics import accuracy_score
def compute_metrics(p):
pred, labels = p
pred = np.argmax(pred, axis=1)
accuracy = accuracy_score(y_true=labels, y_pred=pred)
return {"accuracy": accuracy}
batchify = DataCollatorWithPadding(
tokenizer = tokenizer,
padding = "longest",
)
# mini-batch 구성확인
batchify(train_ds[:2])
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir='./results',
evaluation_strategy="steps",
eval_steps=1000,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
learning_rate=1e-4,
weight_decay=0.01,
adam_beta1=.9,
adam_beta2=.95,
adam_epsilon=1e-8,
max_grad_norm=1.,
num_train_epochs=2,
lr_scheduler_type="linear",
warmup_steps=100,
logging_dir='./logs',
logging_strategy="steps",
logging_first_step=True,
logging_steps=100,
save_strategy="epoch",
seed=42,
dataloader_drop_last=False,
dataloader_num_workers=2
)
trainer = Trainer(
args=training_args,
data_collator=batchify,
model=model,
train_dataset=train_ds,
eval_dataset=valid_ds,
compute_metrics=compute_metrics
)
trainer.train()
trainer.evaluate(test_ds)
```
| true |
code
| 0.690442 | null | null | null | null |
|
### 94. Binary Tree Inorder Traversal
#### Content
<p>Given the <code>root</code> of a binary tree, return <em>the inorder traversal of its nodes' values</em>.</p>
<p> </p>
<p><strong>Example 1:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/09/15/inorder_1.jpg" style="width: 202px; height: 324px;" />
<pre>
<strong>Input:</strong> root = [1,null,2,3]
<strong>Output:</strong> [1,3,2]
</pre>
<p><strong>Example 2:</strong></p>
<pre>
<strong>Input:</strong> root = []
<strong>Output:</strong> []
</pre>
<p><strong>Example 3:</strong></p>
<pre>
<strong>Input:</strong> root = [1]
<strong>Output:</strong> [1]
</pre>
<p><strong>Example 4:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/09/15/inorder_5.jpg" style="width: 202px; height: 202px;" />
<pre>
<strong>Input:</strong> root = [1,2]
<strong>Output:</strong> [2,1]
</pre>
<p><strong>Example 5:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/09/15/inorder_4.jpg" style="width: 202px; height: 202px;" />
<pre>
<strong>Input:</strong> root = [1,null,2]
<strong>Output:</strong> [1,2]
</pre>
<p> </p>
<p><strong>Constraints:</strong></p>
<ul>
<li>The number of nodes in the tree is in the range <code>[0, 100]</code>.</li>
<li><code>-100 <= Node.val <= 100</code></li>
</ul>
<p> </p>
<strong>Follow up:</strong> Recursive solution is trivial, could you do it iteratively?
#### Difficulty: Easy, AC rate: 68.0%
#### Question Tags:
- Stack
- Tree
- Depth-First Search
- Binary Tree
#### Links:
🎁 [Question Detail](https://leetcode.com/problems/binary-tree-inorder-traversal/description/) | 🎉 [Question Solution](https://leetcode.com/problems/binary-tree-inorder-traversal/solution/) | 💬 [Question Discussion](https://leetcode.com/problems/binary-tree-inorder-traversal/discuss/?orderBy=most_votes)
#### Hints:
#### Sample Test Case
[1,null,2,3]
---
What's your idea?
递归
---
```
from typing import Optional, List
# Definition for a binary tree node.
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
class Solution:
def inorderTraversal(self, root: Optional[TreeNode]) -> List[int]:
if root is None:
return []
return self.inorderTraversal(root.left) + [root.val] + self.inorderTraversal(root.right)
s = Solution()
n3 = TreeNode(3)
n2 = TreeNode(2, n3, None)
n1 = TreeNode(1, None, n2)
s.inorderTraversal(n1)
s.inorderTraversal(None)
n2 = TreeNode(2)
n1 = TreeNode(1, n2, None)
s.inorderTraversal(n1)
n2 = TreeNode(2)
n1 = TreeNode(1, None, n2)
s.inorderTraversal(n1)
import sys, os; sys.path.append(os.path.abspath('..'))
from submitter import submit
submit(94)
```
| true |
code
| 0.875148 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/JoshuaShunk/NSDropout/blob/main/mnist_numbers_implementation_of_Dropout.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# MNIST Numbers Implementation of Old Dropout
```
import matplotlib.pyplot as plt
import numpy as np
import random
import keras
from keras.datasets import mnist
import tensorflow as tf
import pandas as pd
np.set_printoptions(threshold=np.inf)
np.random.seed(seed=22) #Random seed used for comparison between old dropout
print(np.random.random(size=3)) #Check that seeds line up
#@title Load Layers (Credit to Harrison Kinsley & Daniel Kukiela for raw python implementation)
# Dense layer
class Layer_Dense:
# Layer initialization
def __init__(self, n_inputs, n_neurons,
weight_regularizer_l1=0, weight_regularizer_l2=0,
bias_regularizer_l1=0, bias_regularizer_l2=0):
# Initialize weights and biases
self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
self.biases = np.zeros((1, n_neurons))
# Set regularization strength
self.weight_regularizer_l1 = weight_regularizer_l1
self.weight_regularizer_l2 = weight_regularizer_l2
self.bias_regularizer_l1 = bias_regularizer_l1
self.bias_regularizer_l2 = bias_regularizer_l2
# Forward pass
def forward(self, inputs):
# Remember input values
self.inputs = inputs
# Calculate output values from inputs, weights and biases
self.output = np.dot(inputs, self.weights) + self.biases
# Backward pass
def backward(self, dvalues):
# Gradients on parameters
self.dweights = np.dot(self.inputs.T, dvalues)
self.dbiases = np.sum(dvalues, axis=0, keepdims=True)
# Gradients on regularization
# L1 on weights
if self.weight_regularizer_l1 > 0:
dL1 = np.ones_like(self.weights)
dL1[self.weights < 0] = -1
self.dweights += self.weight_regularizer_l1 * dL1
# L2 on weights
if self.weight_regularizer_l2 > 0:
self.dweights += 2 * self.weight_regularizer_l2 * \
self.weights
# L1 on biases
if self.bias_regularizer_l1 > 0:
dL1 = np.ones_like(self.biases)
dL1[self.biases < 0] = -1
self.dbiases += self.bias_regularizer_l1 * dL1
# L2 on biases
if self.bias_regularizer_l2 > 0:
self.dbiases += 2 * self.bias_regularizer_l2 * \
self.biases
# Gradient on values
self.dinputs = np.dot(dvalues, self.weights.T)
# ReLU activation
class Activation_ReLU:
# Forward pass
def forward(self, inputs):
# Remember input values
self.inputs = inputs
# Calculate output values from inputs
self.output = np.maximum(0, inputs)
# Backward pass
def backward(self, dvalues):
# Since we need to modify original variable,
# let's make a copy of values first
self.dinputs = dvalues.copy()
# Zero gradient where input values were negative
self.dinputs[self.inputs <= 0] = 0
# Softmax activation
class Activation_Softmax:
# Forward pass
def forward(self, inputs):
# Remember input values
self.inputs = inputs
# Get unnormalized probabilities
exp_values = np.exp(inputs - np.max(inputs, axis=1,
keepdims=True))
# Normalize them for each sample
probabilities = exp_values / np.sum(exp_values, axis=1,
keepdims=True)
self.output = probabilities
# Backward pass
def backward(self, dvalues):
# Create uninitialized array
self.dinputs = np.empty_like(dvalues)
# Enumerate outputs and gradients
for index, (single_output, single_dvalues) in \
enumerate(zip(self.output, dvalues)):
# Flatten output array
single_output = single_output.reshape(-1, 1)
# Calculate Jacobian matrix of the output
jacobian_matrix = np.diagflat(single_output) - \
np.dot(single_output, single_output.T)
# Calculate sample-wise gradient
# and add it to the array of sample gradients
self.dinputs[index] = np.dot(jacobian_matrix,
single_dvalues)
def predictions(self, outputs):
return np.argmax(outputs, axis=1)
# Sigmoid activation
class Activation_Sigmoid:
# Forward pass
def forward(self, inputs):
# Save input and calculate/save output
# of the sigmoid function
self.inputs = inputs
self.output = 1 / (1 + np.exp(-inputs))
# Backward pass
def backward(self, dvalues):
# Derivative - calculates from output of the sigmoid function
self.dinputs = dvalues * (1 - self.output) * self.output
# SGD optimizer
class Optimizer_SGD:
# Initialize optimizer - set settings,
# learning rate of 1. is default for this optimizer
def __init__(self, learning_rate=1., decay=0., momentum=0.):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.momentum = momentum
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If we use momentum
if self.momentum:
# If layer does not contain momentum arrays, create them
# filled with zeros
if not hasattr(layer, 'weight_momentums'):
layer.weight_momentums = np.zeros_like(layer.weights)
# If there is no momentum array for weights
# The array doesn't exist for biases yet either.
layer.bias_momentums = np.zeros_like(layer.biases)
# Build weight updates with momentum - take previous
# updates multiplied by retain factor and update with
# current gradients
weight_updates = \
self.momentum * layer.weight_momentums - \
self.current_learning_rate * layer.dweights
layer.weight_momentums = weight_updates
# Build bias updates
bias_updates = \
self.momentum * layer.bias_momentums - \
self.current_learning_rate * layer.dbiases
layer.bias_momentums = bias_updates
# Vanilla SGD updates (as before momentum update)
else:
weight_updates = -self.current_learning_rate * \
layer.dweights
bias_updates = -self.current_learning_rate * \
layer.dbiases
# Update weights and biases using either
# vanilla or momentum updates
layer.weights += weight_updates
layer.biases += bias_updates
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# Adagrad optimizer
class Optimizer_Adagrad:
# Initialize optimizer - set settings
def __init__(self, learning_rate=1., decay=0., epsilon=1e-7):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.epsilon = epsilon
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If layer does not contain cache arrays,
# create them filled with zeros
if not hasattr(layer, 'weight_cache'):
layer.weight_cache = np.zeros_like(layer.weights)
layer.bias_cache = np.zeros_like(layer.biases)
# Update cache with squared current gradients
layer.weight_cache += layer.dweights ** 2
layer.bias_cache += layer.dbiases ** 2
# Vanilla SGD parameter update + normalization
# with square rooted cache
layer.weights += -self.current_learning_rate * \
layer.dweights / \
(np.sqrt(layer.weight_cache) + self.epsilon)
layer.biases += -self.current_learning_rate * \
layer.dbiases / \
(np.sqrt(layer.bias_cache) + self.epsilon)
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# RMSprop optimizer
class Optimizer_RMSprop:
# Initialize optimizer - set settings
def __init__(self, learning_rate=0.001, decay=0., epsilon=1e-7,
rho=0.9):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.epsilon = epsilon
self.rho = rho
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If layer does not contain cache arrays,
# create them filled with zeros
if not hasattr(layer, 'weight_cache'):
layer.weight_cache = np.zeros_like(layer.weights)
layer.bias_cache = np.zeros_like(layer.biases)
# Update cache with squared current gradients
layer.weight_cache = self.rho * layer.weight_cache + \
(1 - self.rho) * layer.dweights ** 2
layer.bias_cache = self.rho * layer.bias_cache + \
(1 - self.rho) * layer.dbiases ** 2
# Vanilla SGD parameter update + normalization
# with square rooted cache
layer.weights += -self.current_learning_rate * \
layer.dweights / \
(np.sqrt(layer.weight_cache) + self.epsilon)
layer.biases += -self.current_learning_rate * \
layer.dbiases / \
(np.sqrt(layer.bias_cache) + self.epsilon)
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# Adam optimizer
class Optimizer_Adam:
# Initialize optimizer - set settings
def __init__(self, learning_rate=0.02, decay=0., epsilon=1e-7,
beta_1=0.9, beta_2=0.999):
self.learning_rate = learning_rate
self.current_learning_rate = learning_rate
self.decay = decay
self.iterations = 0
self.epsilon = epsilon
self.beta_1 = beta_1
self.beta_2 = beta_2
# Call once before any parameter updates
def pre_update_params(self):
if self.decay:
self.current_learning_rate = self.learning_rate * \
(1. / (1. + self.decay * self.iterations))
# Update parameters
def update_params(self, layer):
# If layer does not contain cache arrays,
# create them filled with zeros
if not hasattr(layer, 'weight_cache'):
layer.weight_momentums = np.zeros_like(layer.weights)
layer.weight_cache = np.zeros_like(layer.weights)
layer.bias_momentums = np.zeros_like(layer.biases)
layer.bias_cache = np.zeros_like(layer.biases)
# Update momentum with current gradients
layer.weight_momentums = self.beta_1 * \
layer.weight_momentums + \
(1 - self.beta_1) * layer.dweights
layer.bias_momentums = self.beta_1 * \
layer.bias_momentums + \
(1 - self.beta_1) * layer.dbiases
# Get corrected momentum
# self.iteration is 0 at first pass
# and we need to start with 1 here
weight_momentums_corrected = layer.weight_momentums / \
(1 - self.beta_1 ** (self.iterations + 1))
bias_momentums_corrected = layer.bias_momentums / \
(1 - self.beta_1 ** (self.iterations + 1))
# Update cache with squared current gradients
layer.weight_cache = self.beta_2 * layer.weight_cache + \
(1 - self.beta_2) * layer.dweights ** 2
layer.bias_cache = self.beta_2 * layer.bias_cache + \
(1 - self.beta_2) * layer.dbiases ** 2
# Get corrected cache
weight_cache_corrected = layer.weight_cache / \
(1 - self.beta_2 ** (self.iterations + 1))
bias_cache_corrected = layer.bias_cache / \
(1 - self.beta_2 ** (self.iterations + 1))
# Vanilla SGD parameter update + normalization
# with square rooted cache
layer.weights += -self.current_learning_rate * \
weight_momentums_corrected / \
(np.sqrt(weight_cache_corrected) +
self.epsilon)
layer.biases += -self.current_learning_rate * \
bias_momentums_corrected / \
(np.sqrt(bias_cache_corrected) +
self.epsilon)
# Call once after any parameter updates
def post_update_params(self):
self.iterations += 1
# Common loss class
class Loss:
# Regularization loss calculation
def regularization_loss(self, layer):
# 0 by default
regularization_loss = 0
# L1 regularization - weights
# calculate only when factor greater than 0
if layer.weight_regularizer_l1 > 0:
regularization_loss += layer.weight_regularizer_l1 * \
np.sum(np.abs(layer.weights))
# L2 regularization - weights
if layer.weight_regularizer_l2 > 0:
regularization_loss += layer.weight_regularizer_l2 * \
np.sum(layer.weights *
layer.weights)
# L1 regularization - biases
# calculate only when factor greater than 0
if layer.bias_regularizer_l1 > 0:
regularization_loss += layer.bias_regularizer_l1 * \
np.sum(np.abs(layer.biases))
# L2 regularization - biases
if layer.bias_regularizer_l2 > 0:
regularization_loss += layer.bias_regularizer_l2 * \
np.sum(layer.biases *
layer.biases)
return regularization_loss
# Set/remember trainable layers
def remember_trainable_layers(self, trainable_layers):
self.trainable_layers = trainable_layers
# Calculates the data and regularization losses
# given model output and ground truth values
def calculate(self, output, y, *, include_regularization=False):
# Calculate sample losses
sample_losses = self.forward(output, y)
# Calculate mean loss
data_loss = np.mean(sample_losses)
# Return loss
return data_loss
# Calculates accumulated loss
def calculate_accumulated(self, *, include_regularization=False):
# Calculate mean loss
data_loss = self.accumulated_sum / self.accumulated_count
# If just data loss - return it
if not include_regularization:
return data_loss
# Return the data and regularization losses
return data_loss, self.regularization_loss()
# Reset variables for accumulated loss
def new_pass(self):
self.accumulated_sum = 0
self.accumulated_count = 0
# Cross-entropy loss
class Loss_CategoricalCrossentropy(Loss):
# Forward pass
def forward(self, y_pred, y_true):
# Number of samples in a batch
samples = len(y_pred)
# Clip data to prevent division by 0
# Clip both sides to not drag mean towards any value
y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
# Probabilities for target values -
# only if categorical labels
if len(y_true.shape) == 1:
correct_confidences = y_pred_clipped[
range(samples),
y_true
]
# Mask values - only for one-hot encoded labels
elif len(y_true.shape) == 2:
correct_confidences = np.sum(
y_pred_clipped * y_true,
axis=1
)
# Losses
negative_log_likelihoods = -np.log(correct_confidences)
return negative_log_likelihoods
# Backward pass
def backward(self, dvalues, y_true):
# Number of samples
samples = len(dvalues)
# Number of labels in every sample
# We'll use the first sample to count them
labels = len(dvalues[0])
# If labels are sparse, turn them into one-hot vector
if len(y_true.shape) == 1:
y_true = np.eye(labels)[y_true]
# Calculate gradient
self.dinputs = -y_true / dvalues
# Normalize gradient
self.dinputs = self.dinputs / samples
# Softmax classifier - combined Softmax activation
# and cross-entropy loss for faster backward step
class Activation_Softmax_Loss_CategoricalCrossentropy():
# Creates activation and loss function objects
def __init__(self):
self.activation = Activation_Softmax()
self.loss = Loss_CategoricalCrossentropy()
# Forward pass
def forward(self, inputs, y_true):
# Output layer's activation function
self.activation.forward(inputs)
# Set the output
self.output = self.activation.output
# Calculate and return loss value
return self.loss.calculate(self.output, y_true)
# Backward pass
def backward(self, dvalues, y_true):
# Number of samples
samples = len(dvalues)
# If labels are one-hot encoded,
# turn them into discrete values
if len(y_true.shape) == 2:
y_true = np.argmax(y_true, axis=1)
# Copy so we can safely modify
self.dinputs = dvalues.copy()
# Calculate gradient
self.dinputs[range(samples), y_true] -= 1
# Normalize gradient
self.dinputs = self.dinputs / samples
# Binary cross-entropy loss
class Loss_BinaryCrossentropy(Loss):
# Forward pass
def forward(self, y_pred, y_true):
# Clip data to prevent division by 0
# Clip both sides to not drag mean towards any value
y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
# Calculate sample-wise loss
sample_losses = -(y_true * np.log(y_pred_clipped) +
(1 - y_true) * np.log(1 - y_pred_clipped))
sample_losses = np.mean(sample_losses, axis=-1)
# Return losses
return sample_losses
# Backward pass
def backward(self, dvalues, y_true):
# Number of samples
samples = len(dvalues)
# Number of outputs in every sample
# We'll use the first sample to count them
outputs = len(dvalues[0])
# Clip data to prevent division by 0
# Clip both sides to not drag mean towards any value
clipped_dvalues = np.clip(dvalues, 1e-7, 1 - 1e-7)
# Calculate gradient
self.dinputs = -(y_true / clipped_dvalues -
(1 - y_true) / (1 - clipped_dvalues)) / outputs
# Normalize gradient
self.dinputs = self.dinputs / samples
# Common accuracy class
class Accuracy:
# Calculates an accuracy
# given predictions and ground truth values
def calculate(self, predictions, y):
# Get comparison results
comparisons = self.compare(predictions, y)
# Calculate an accuracy
accuracy = np.mean(comparisons)
# Add accumulated sum of matching values and sample count
# Return accuracy
return accuracy
# Calculates accumulated accuracy
def calculate_accumulated(self):
# Calculate an accuracy
accuracy = self.accumulated_sum / self.accumulated_count
# Return the data and regularization losses
return accuracy
# Reset variables for accumulated accuracy
def new_pass(self):
self.accumulated_sum = 0
self.accumulated_count = 0
# Accuracy calculation for classification model
class Accuracy_Categorical(Accuracy):
def __init__(self, *, binary=False):
# Binary mode?
self.binary = binary
# No initialization is needed
def init(self, y):
pass
# Compares predictions to the ground truth values
def compare(self, predictions, y):
if not self.binary and len(y.shape) == 2:
y = np.argmax(y, axis=1)
return predictions == y
# Accuracy calculation for regression model
class Accuracy_Regression(Accuracy):
def __init__(self):
# Create precision property
self.precision = None
# Calculates precision value
# based on passed-in ground truth values
def init(self, y, reinit=False):
if self.precision is None or reinit:
self.precision = np.std(y) / 250
# Compares predictions to the ground truth values
def compare(self, predictions, y):
return np.absolute(predictions - y) < self.precision
class model:
def __init__(self):
pass
def predict(self, classes, samples):
self.classes = classes
self.samples = samples
self.X, self.y = spiral_data(samples=self.samples, classes=self.classes)
dense1.forward(self.X)
activation1.forward(dense1.output)
dense2.forward(activation1.output)
activation2.forward(dense2.output)
# Calculate the data loss
self.loss = loss_function.calculate(activation2.output, self.y)
self.predictions = (activation2.output > 0.5) * 1
self.accuracy = np.mean(self.predictions == self.y)
print(f'Accuracy: {self.accuracy}')
```
# Old Dropout Layer
```
class Layer_Dropout:
# Init
def __init__(self, rate):
# Store rate, we invert it as for example for dropout
# of 0.1 we need success rate of 0.9
self.rate = 1 - rate
# Forward pass
def forward(self, inputs):
# Save input values
self.inputs = inputs
# Generate and save scaled mask
self.binary_mask = np.random.binomial(1, self.rate,
size=inputs.shape) / self.rate
# Apply mask to output values
self.output = inputs * self.binary_mask
# Backward pass
def backward(self, dvalues):
# Gradient on values
self.dinputs = dvalues * self.binary_mask
#print(self.dinputs.shape)
```
Initializing Caches
```
loss_cache = []
val_loss_cache = []
acc_cache = []
val_acc_cache = []
lr_cache = []
epoch_cache = []
test_acc_cache = []
test_loss_cache = []
max_val_accuracyint = 0
```
Initializing Summary List
```
summary = []
```
# Loading Data
Vizulizing Data
```
#(X, y), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# load dataset
(X, y), (X_test, y_test) = mnist.load_data()
# Label index to label name relation
number_mnist_labels = {
0: '0',
1: '1',
2: '2',
3: '3',
4: '4',
5: '5',
6: '6',
7: '7',
8: '8',
9: '9'
}
# Shuffle the training dataset
keys = np.array(range(X.shape[0]))
np.random.shuffle(keys)
X = X[keys]
y = y[keys]
X = X[:8000,:,:]
X_test = X_test[:1600,:,:]
y = y[:8000]
y_test = y_test[:1600]
# Scale and reshape samples
X = (X.reshape(X.shape[0], -1).astype(np.float32) - 127.5) / 127.5
X_test = (X_test.reshape(X_test.shape[0], -1).astype(np.float32) - 127.5) / 127.5
print(X.shape)
print(y.shape)
print(X_test.shape)
print(y_test.shape)
```
Sorting Training Data
```
idx = np.argsort(y)
X_sorted = X[idx]
y_sorted = y[idx]
sorted_x = {}
sorted_y = {}
for classes in range(len(set(y))):
sorted_x["X_{0}".format(classes)] = X[y == classes]
sorted_y["y_{0}".format(classes)] = y[y == classes]
for sorted_lists in sorted_x:
print(f'Number of Samples for {sorted_lists}: {sorted_x[sorted_lists].shape[0]}')
```
Sorting Testing Data
```
idx = np.argsort(y_test)
X_test_sorted = X_test[idx]
y_test_sorted = y_test[idx]
class_list = []
sorted_x_test = {}
sorted_y_test = {}
for classes in range(len(set(y))):
sorted_x_test["X_test_{0}".format(classes)] = X_test[y_test == classes]
sorted_y_test["y_test_{0}".format(classes)] = y_test[y_test == classes]
for sorted_lists in sorted_x_test:
print(f'Number of Samples for {sorted_lists}: {sorted_x_test[sorted_lists].shape[0]}')
class_list.append(sorted_x_test[sorted_lists].shape[0])
print(f'Found {X.shape[0]} images belonging to {len(set(y))} unique classes')
```
# Initializing Layers
```
# Create Dense layer with 2 input features and 64 output values
dense1 = Layer_Dense(X.shape[1], 128, weight_regularizer_l2=5e-4,
bias_regularizer_l2=5e-4)
activation1 = Activation_ReLU()
dropout1 = Layer_Dropout(0.2)
dense2 = Layer_Dense(128, 128)
activation2 = Activation_ReLU()
dense3 = Layer_Dense(128,128)
activation3 = Activation_ReLU()
dense4 = Layer_Dense(128,len(set(y)))
activation4 = Activation_Softmax()
loss_function = Loss_CategoricalCrossentropy()
softmax_classifier_output = \
Activation_Softmax_Loss_CategoricalCrossentropy()
# Create optimizer
optimizer = Optimizer_Adam(decay=5e-7,learning_rate=0.005)
#optimizer = Optimizer_SGD(learning_rate=0.01)
accuracy = Accuracy_Categorical()
accuracy.init(y)
```
# Training Loop
```
epochs = 178
for epoch in range(epochs + 1):
dense1.forward(X)
activation1.forward(dense1.output)
dropout1.forward(activation1.output)
dense2.forward(dropout1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
data_loss = loss_function.calculate(activation4.output, y)
regularization_loss = \
loss_function.regularization_loss(dense1) + \
loss_function.regularization_loss(dense2) + \
loss_function.regularization_loss(dense3) + \
loss_function.regularization_loss(dense4)
loss = data_loss + regularization_loss
#Accuracy
predictions = activation4.predictions(activation4.output)
train_accuracy = accuracy.calculate(predictions, y)
# Backward pass
softmax_classifier_output.backward(activation4.output, y)
activation4.backward(softmax_classifier_output.dinputs)
dense4.backward(activation4.dinputs)
activation3.backward(dense4.dinputs)
dense3.backward(activation3.dinputs)
activation2.backward(dense3.dinputs)
dense2.backward(activation2.dinputs)
dropout1.backward(dense2.dinputs)
activation1.backward(dropout1.dinputs)
dense1.backward(activation1.dinputs)
# Update weights and biases
optimizer.pre_update_params()
optimizer.update_params(dense1)
optimizer.update_params(dense2)
optimizer.update_params(dense3)
optimizer.update_params(dense4)
optimizer.post_update_params()
# Validation
dense1.forward(X_test)
activation1.forward(dense1.output)
dense2.forward(activation1.output)
dense1_outputs = dense1.output
meanarray = np.mean(dense1.output, axis=0)
cached_val_inputs = activation1.output
trainout = meanarray
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
valloss = loss_function.calculate(activation4.output, y_test)
predictions = activation4.predictions(activation4.output)
valaccuracy = accuracy.calculate(predictions, y_test)
#Updating List
loss_cache.append(loss)
val_loss_cache.append(valloss)
acc_cache.append(train_accuracy)
val_acc_cache.append(valaccuracy)
lr_cache.append(optimizer.current_learning_rate)
epoch_cache.append(epoch)
#Summary Items
if valaccuracy >= .8 and len(summary) == 0:
nintypercent = f'Model hit 80% validation accuracy in {epoch} epochs'
summary.append(nintypercent)
if valaccuracy >= .85 and len(summary) == 1:
nintypercent = f'Model hit 85% validation accuracy in {epoch} epochs'
summary.append(nintypercent)
if valaccuracy >= .9 and len(summary) == 2:
nintypercent = f'Model hit 90% validation accuracy in {epoch} epochs'
summary.append(nintypercent)
if valaccuracy >= .95 and len(summary) == 3:
nintypercent = f'Model hit 95% validation accuracy in {epoch} epochs'
summary.append(nintypercent)
if valaccuracy >= .975 and len(summary) == 4:
nintypercent = f'Model hit 97.5% validation accuracy in {epoch} epochs'
summary.append(nintypercent)
if valaccuracy >= 1 and len(summary) == 5:
nintypercent = f'Model hit 100% validation accuracy in {epoch} epochs'
summary.append(nintypercent)
if epoch == epochs:
if valaccuracy > max_val_accuracyint:
max_val_accuracyint = valaccuracy
max_val_accuracy = f'Max accuracy was {valaccuracy * 100}% at epoch {epoch}.'
summary.append(max_val_accuracy)
else:
summary.append(max_val_accuracy)
else:
if valaccuracy > max_val_accuracyint:
max_val_accuracyint = valaccuracy
max_val_accuracy = f'Max accuracy was {valaccuracy * 100}% at epoch {epoch}.'
if not epoch % 1:
print(f'epoch: {epoch}, ' +
f'acc: {train_accuracy:.3f}, ' +
f'loss: {loss:.3f} (' +
f'data_loss: {data_loss:.3f}, ' +
f'reg_loss: {regularization_loss:.3f}), ' +
f'lr: {optimizer.current_learning_rate:.9f} ' +
f'validation, acc: {valaccuracy:.3f}, loss: {valloss:.3f} ')
```
# Summary
```
print(np.mean(acc_cache))
for milestone in summary:
print(milestone)
```
# Testing
```
accuracy = Accuracy_Categorical()
accuracy.init(y_test)
dense1.forward(X_test)
activation1.forward(dense1.output)
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, y_test)
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, y_test)
print(f'Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')
training_diff = []
testing_diff = []
combined_diff = []
```
Individual Training Classes
```
accuracy = Accuracy_Categorical()
for classes, (X_sorted_lists, y_sorted_lists) in enumerate(zip(sorted_x, sorted_y)):
accuracy = Accuracy_Categorical()
y = sorted_y[y_sorted_lists]
X = sorted_x[X_sorted_lists]
accuracy.init(y)
dense1.forward(X)
activation1.forward(dense1.output)
train_train_mean = activation1.output
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, y)
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, y)
print(f'{number_mnist_labels[classes]} Train Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')
accuracy = Accuracy_Categorical()
for classes, (X_sorted_lists, y_sorted_lists) in enumerate(zip(sorted_x_test, sorted_y_test)):
accuracy.init(y_sorted_lists)
#print(sorted_y[y_sorted_lists].shape)
#print(sorted_x[X_sorted_lists].shape)
dense1.forward(sorted_x_test[X_sorted_lists])
activation1.forward(dense1.output)
testmean = np.mean(activation1.output, axis=0)
testing_diff.append(testmean)
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, sorted_y_test[y_sorted_lists])
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, sorted_y_test[y_sorted_lists])
print(f'{number_mnist_labels[classes]} Test Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')
```
# Full mnist test
Training data
```
(input, label), (X_val, y_val) = mnist.load_data()
# Label index to label name relation
number_mnist_labels = {
0: '0',
1: '1',
2: '2',
3: '3',
4: '4',
5: '5',
6: '6',
7: '7',
8: '8',
9: '9'
}
# Shuffle the training dataset
keys = np.array(range(input.shape[0]))
np.random.shuffle(keys)
input = input[keys]
label = label[keys]
# Scale and reshape samples
input = (input.reshape(input.shape[0], -1).astype(np.float32) - 127.5) / 127.5
X_val = (X_val.reshape(X_val.shape[0], -1).astype(np.float32) -
127.5) / 127.5
accuracy = Accuracy_Categorical()
accuracy.init(label)
dense1.forward(input)
activation1.forward(dense1.output)
train_train_mean = activation1.output
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, label)
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, label)
print(f'Full Training Accuracy: {testaccuracy:.3f}, loss: {loss:.3f}')
```
Testing data
```
accuracy = Accuracy_Categorical()
accuracy.init(y_val)
dense1.forward(X_val)
activation1.forward(dense1.output)
train_train_mean = activation1.output
dense2.forward(activation1.output)
activation2.forward(dense2.output)
dense3.forward(activation2.output)
activation3.forward(dense3.output)
dense4.forward(activation3.output)
activation4.forward(dense4.output)
# Calculate the data loss
loss = loss_function.calculate(activation4.output, y_val)
predictions = activation4.predictions(activation4.output)
testaccuracy = accuracy.calculate(predictions, y_val)
print(f'Full Testing Accuracy: {testaccuracy:.5f}, loss: {loss:.3f}')
predicted_list = []
true_list = []
for sample in range(len(X_val)):
predicted_list.append(np.where(activation4.output[sample] == np.amax(activation4.output[sample]))[0][0])
true_list.append(y_val[sample])
from sklearn import metrics
import seaborn as sn
import pandas as pd
array = metrics.confusion_matrix(true_list, predicted_list, labels=[0,1,2,3,4,5,6,7,8,9])
df_cm = pd.DataFrame(array, range(len(set(true_list))), range(len(set(true_list))))
df_cm.round(9)
plt.figure(figsize=(10,7))
sn.set(font_scale=1.2) # for label size
sn.heatmap(df_cm, annot=True, annot_kws={"size": 12}, fmt='g') # font size
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
# Printing the precision and recall, among other metrics
print(metrics.classification_report(true_list, predicted_list, labels=[0,1,2,3,4,5,6,7,8,9]))
```
Change idex to get confidence of different samples of testing data. Index values 0-1600 were refrenced in training. Anything past was never seen during training. Lowest confidence is at index 5046 when trained with 178 epochs and numpy seed set to 22.
```
index = 5046
print(f'{(activation4.output[index][np.where(activation4.output[index] == np.amax(activation4.output[index]))][0]*100):.3f}% Confident True is {number_mnist_labels[np.where(activation4.output[index] == np.amax(activation4.output[index]))[0][0]]}. True is actually {number_mnist_labels[y_val[index]]}')
X_val.resize(X_val.shape[0],28,28)
image = X_val[index]
fig = plt.figure
plt.grid(False)
plt.title(f'{number_mnist_labels[y_val[index]]}')
plt.imshow(image, cmap='gray')
plt.show()
confidence_list = []
for index in range(10000):
confidence_list.append(activation4.output[index][np.where(activation4.output[index] == np.amax(activation4.output[index]))][0])
print(confidence_list.index(min(confidence_list)))
```
Plotting Graphs
```
plt.rcParams['axes.grid'] = False
plt.plot(epoch_cache, val_loss_cache, label='Validation Loss')
plt.plot(epoch_cache, loss_cache, label='Training Loss')
plt.title('Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc = "upper right")
plt.show()
plt.plot(epoch_cache, val_acc_cache, label='Validation Accuracy')
plt.plot(epoch_cache, acc_cache, label='Training Accuracy')
plt.title('Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc = "upper right")
plt.show()
plt.plot(epoch_cache, lr_cache, label='LR')
plt.title('Learning Rate')
plt.xlabel('Epoch')
plt.ylabel('Learning Rate')
plt.show()
```
| true |
code
| 0.758259 | null | null | null | null |
|
# Custom Interactivity
```
import param
import numpy as np
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
```
In previous notebooks we discovered how the ``DynamicMap`` class allows us to declare objects in a lazy way to enable exploratory analysis of large parameter spaces. In the [Responding to Events](./11-Responding_to_Events.ipynb) guide we learned how to interactively push updates to existing plots by declaring Streams on a DynamicMap. In this user guide we will extend the idea to so called *linked* Streams, which allows complex interactions to be declared by specifying which events should be exposed when a plot is interacted with. By passing information about live interactions to a simple Python based callback, you will be able to build richer, even more interactive visualizations that enable seemless data exploration.
Some of the possibilities this opens up include:
* Dynamically aggregating datasets of billions of datapoints depending on the plot axis ranges using the [datashader](./14-Large_Data.ipynb) library.
* Responding to ``Tap`` and ``DoubleTap`` events to reveal more information in subplots.
* Computing statistics in response to selections applied with box- and lasso-select tools.
Currently only the bokeh backend for HoloViews supports the linked streams system but the principles used should extend to any backend that can define callbacks that fire when a user zooms or pans or interacts with a plot.
<center><div class="alert alert-info" role="alert">To use and visualize <b>DynamicMap</b> or <b>Stream</b> objects you need to be running a live Jupyter server.<br>This user guide assumes that it will be run in a live notebook environment.<br>
When viewed statically, DynamicMaps will only show the first available Element.<br></div></center>
## Available Linked Streams
There are a huge number of ways one might want to interact with a plot. The HoloViews streams module aims to expose many of the most common interactions you might want want to employ, while also supporting extensibility via custom linked Streams.
Here is the full list of linked Stream that are all descendants of the ``LinkedStream`` baseclass:
```
from holoviews import streams
listing = ', '.join(sorted([str(s.name) for s in param.descendents(streams.LinkedStream)]))
print('The linked stream classes supported by HoloViews are:\n\n{listing}'.format(listing=listing))
```
```
The linked stream classes supported by HoloViews are:
Bounds, BoundsX, BoundsY, DoubleTap, Draw, LinkedStream, MouseEnter, MouseLeave, PlotSize, PointerX, PointerXY, PointerY, PositionX, PositionXY, PositionY, RangeX, RangeXY, RangeY, Selection1D, SingleTap, Tap
```
As you can see, most of these events are about specific interactions with a plot such as the current axis ranges (the ``RangeX``, ``RangeY`` and ``RangeXY`` streams), the mouse pointer position (the ``PointerX``, ``PointerY`` and ``PointerXY`` streams), click or tap positions (``Tap``, ``DoubleTap``). Additionally there a streams to access plotting selections made using box- and lasso-select tools (``Selection1D``), the plot size (``PlotSize``) and the ``Bounds`` of a selection.
Each of these linked Stream types has a corresponding backend specific ``Callback``, which defines which plot attributes or events to link the stream to and triggers events on the ``Stream`` in response to changes on the plot. Defining custom ``Stream`` and ``Callback`` types will be covered in future guides.
## Linking streams to plots
At the end of the [Responding to Events](./11-Responding_to_Events.ipynb) guide we discovered that streams have ``subscribers``, which allow defining user defined callbacks on events, but also allow HoloViews to install subscribers that let plots respond to Stream updates. Linked streams add another concept on top of ``subscribers``, namely the Stream ``source``.
The source of a linked stream defines which plot element to receive events from. Any plot containing the ``source`` object will be attached to the corresponding linked stream and will send event values in response to the appropriate interactions.
Let's start with a simple example. We will declare one of the linked Streams from above, the ``PointerXY`` stream. This stream sends the current mouse position in plot axes coordinates, which may be continuous or categorical. The first thing to note is that we haven't specified a ``source`` which means it uses the default value of ``None``.
```
pointer = streams.PointerXY()
print(pointer.source)
```
```
None
```
Before continuing, we can check the stream parameters that are made available to user callbacks from a given stream instance by looking at its contents:
```
print('The %s stream has contents %r' % (pointer, pointer.contents))
```
```
The PointerXY(x=None,y=None) stream has contents {'y': None, 'x': None}
```
#### Automatic linking
A stream instance is automatically linked to the first ``DynamicMap`` we pass it to, which we can confirm by inspecting the stream's ``source`` attribute after supplying it to a ``DynamicMap``:
```
pointer_dmap = hv.DynamicMap(lambda x, y: hv.Points([(x, y)]), streams=[pointer])
print(pointer.source is pointer_dmap)
```
```
True
```
The ``DynamicMap`` we defined above simply defines returns a ``Points`` object composed of a single point that marks the current ``x`` and ``y`` position supplied by our ``PointerXY`` stream. The stream is linked whenever this ``DynamicMap`` object is displayed as it is the stream source:
```
pointer_dmap(style={"Points": dict(size=10)})
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/point_hover.gif" width=300></center>
If you hover over the plot canvas above you can see that the point tracks the current mouse position. We can also inspect the last cursor position by examining the stream contents:
```
pointer.contents
```
```
{'x': 0.40575409375411886, 'y': 0.6441381051588625}
```
In the [Responding to Events](11-Responding_to_Events.ipynb) user guide, we introduced an integration example that would work more intuitively with linked streams. Here it is again with the ``limit`` value controlled by the ``PointerX`` linked stream:
```
%%opts Area (color='#fff8dc' line_width=2) Curve (color='black') VLine (color='red')
xs = np.linspace(-3, 3, 400)
def function(xs, time):
"Some time varying function"
return np.exp(np.sin(xs+np.pi/time))
def integral(limit, time):
limit = -3 if limit is None else np.clip(limit,-3,3)
curve = hv.Curve((xs, function(xs, time)))[limit:]
area = hv.Area ((xs, function(xs, time)))[:limit]
summed = area.dimension_values('y').sum() * 0.015 # Numeric approximation
return (area * curve * hv.VLine(limit) * hv.Text(limit + 0.8, 2.0, '%.2f' % summed))
hv.DynamicMap(integral, streams=[streams.Stream.define('Time', time=1.0)(),
streams.PointerX().rename(x='limit')])
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/area_hover.gif" width=300></center>
We only needed to import and use the ``PointerX`` stream and rename the ``x`` parameter that tracks the cursor position to 'limit' so that it maps to the corresponding argument. Otherwise, the example only required bokeh specific style options to match the matplotlib example as closely as possible.
#### Explicit linking
In the example above, we took advantage of the fact that a ``DynamicMap`` automatically becomes the stream source if a source isn't explicitly specified. If we want to link the stream instance to a different object we can specify our source explicitly. Here we will create a 2D ``Image`` of sine gratings, and then declare that this image is the ``source`` of the ``PointerXY`` stream. This pointer stream is then used to generate a single point that tracks the cursor when hovering over the image:
```
xvals = np.linspace(0,4,202)
ys,xs = np.meshgrid(xvals, -xvals[::-1])
img = hv.Image(np.sin(((ys)**3)*xs))
pointer = streams.PointerXY(x=0,y=0, source=img)
pointer_dmap = hv.DynamicMap(lambda x, y: hv.Points([(x, y)]), streams=[pointer])
```
Now if we display a ``Layout`` consisting of the ``Image`` acting as the source together with the ``DynamicMap``, the point shown on the right tracks the cursor position when hovering over the image on the left:
```
img + pointer_dmap(style={"Points": dict(size=10)})
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/raster_hover.gif" width=600></center>
This will even work across different cells. If we use this particular stream instance in another ``DynamicMap`` and display it, this new visualization will also be supplied with the cursor position when hovering over the image.
To illustrate this, we will now use the pointer ``x`` and ``y`` position to generate cross-sections of the image at the cursor position on the ``Image``, making use of the ``Image.sample`` method. Note the use of ``np.clip`` to make sure the cross-section is well defined when the cusor goes out of bounds:
```
%%opts Curve {+framewise}
hv.DynamicMap(lambda x, y: img.sample(y=np.clip(y,-.49,.49)), streams=[pointer]) +\
hv.DynamicMap(lambda x, y: img.sample(x=np.clip(x,-.49,.49)), streams=[pointer])
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/cross_section_hover.gif" width=600></center>
Now when you hover over the ``Image`` above, you will see the cross-sections update while the point position to the right of the ``Image`` simultaneously updates.
#### Unlinking objects
Sometimes we just want to display an object designated as a source without linking it to the stream. If the object is not a ``DynamicMap``, like the ``Image`` we designated as a ``source`` above, we can make a copy of the object using the ``clone`` method. We can do the same with ``DynamicMap`` though we just need to supply ``link_inputs=False`` as an extra argument.
Here we will create a ``DynamicMap`` that draws a cross-hair at the cursor position:
```
pointer = streams.PointerXY(x=0, y=0)
cross_dmap = hv.DynamicMap(lambda x, y: (hv.VLine(x) * hv.HLine(y)), streams=[pointer])
```
Now we will add two copies of the ``cross_dmap`` into a Layout but the subplot on the right will not be linking the inputs. Try hovering over the two subplots and observe what happens:
```
cross_dmap + cross_dmap.clone(link_inputs=False)
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/unlink.gif" width=600></center>
Notice how hovering over the left plot updates the crosshair position on both subplots, while hovering over the right subplot has no effect.
## Transient linked streams
In the basic [Responding to Events](11-Responding_to_Events.ipynb) user guide we saw that stream parameters can be updated and those values are then passed to the callback. This model works well for many different types of streams that have well-defined values at all times.
This approach is not suitable for certain events which only have a well defined value at a particular point in time. For instance, when you hover your mouse over a plot, the hover position always has a well-defined value but the click position is only defined when a click occurs (if it occurs).
This latter case is an example of what are called 'transient' streams. These streams are supplied new values only when they occur and fall back to a default value at all other times. This default value is typically ``None`` to indicate that the event is not occuring and therefore has no data.
Transient streams are particularly useful when you are subscribed to multiple streams, some of which are only occasionally triggered. A good example are the ``Tap`` and ``DoubleTap`` streams; while you sometimes just want to know the last tapped position, we can only tell the two events apart if their values are ``None`` when not active.
We'll start by declaring a ``SingleTap`` and a ``DoubleTap`` stream as ``transient``. Since both streams supply 'x' and 'y' parameters, we will rename the ``DoubleTap`` parameters to 'x2' and 'y2'.
```
tap = streams.SingleTap(transient=True)
double_tap = streams.DoubleTap(rename={'x': 'x2', 'y': 'y2'}, transient=True)
```
Next we define a list of taps we can append to, and a function that accumulates the tap and double tap coordinates along with the number of taps, returning a ``Points`` Element of the tap positions.
```
taps = []
def record_taps(x, y, x2, y2):
if None not in [x,y]:
taps.append((x, y, 1))
elif None not in [x2, y2]:
taps.append((x2, y2, 2))
return hv.Points(taps, vdims='Taps')
```
Finally we can create a ``DynamicMap`` from our callback and attach the streams. We also apply some styling so the points are colored depending on the number of taps.
```
%%opts Points [color_index='Taps' tools=['hover']] (size=10 cmap='Set1')
hv.DynamicMap(record_taps, streams=[tap, double_tap])
```
<center><img src="http://assets.holoviews.org/gifs/guides/user_guide/Custom_Interactivity/tap_record.gif" width=300></center>
Now try single- and double-tapping within the plot area, each time you tap a new point is appended to the list and displayed. Single taps show up in red and double taps show up in grey. We can also inspect the list of taps directly:
```
taps
```
```
[(0.4395821339578692, 0.6807756323806448, 1),
(0.3583948374688684, 0.6073731430597871, 2),
(0.7327584823903722, 0.48095774478497655, 1),
(0.20053064985136673, 0.17103612320802172, 1),
(0.8590498324843735, 0.7337885413345976, 1),
(0.3358428106663682, 0.358620262583547, 2)]
```
| true |
code
| 0.33691 | null | null | null | null |
|
# Introduction to Functions
- [Download the lecture notes](https://philchodrow.github.io/PIC16A/content/functions/functions_1.ipynb).
**Functions** are one of the most important constructs in computer programming. A function is a single command which, when executed, performs some operations and may return a value. You've already encountered functions in PIC10A, where they may have looked something like this:
```cpp
// Filename: boldy.cpp
#include <iostream>
int main() {
std::cout << "To boldly go";
return 0;
}
```
You'll notice the *type declaration* (`int`), the function name (`main`), the parameter declaration (`()`, i.e. no parameters in this case), and the *return value* (`0`). Python functions have a similar syntax. Instead of a type declaration, one uses the `def` keyword to denote function definition. One does not use `{}` braces, but one does use a `:` colon to initiate the body of the function and whitespace to indent the body.
Since Python is interpreted rather than compiled, functions are ready to use as soon as they are defined.
```
def boldly_print(): # colon ends declaration and begins definition
print("To boldly go")
# return values are optional
boldly_print()
# ---
```
## Parameters
Just as in C++, in Python we can pass *arguments* (or *parameters*) to functions in order to modify their behavior.
```
def boldly_print_2(k):
for i in range(k):
print("To boldly go")
boldly_print_2(3)
# ---
```
These arguments can be given *default* values, so that it is not necessary to specify each argument in each function call.
```
def boldly_print_3(k, verb="go"):
for i in range(k):
print("To boldly " + verb)
boldly_print_3(2)
# ---
```
It is often desirable to use *keyword arguments* so that your code clearly indicates which argument is being supplied which value:
```
boldly_print_3(3, "sing") # fine
# ---
boldly_print_3(k=3, verb="sing") # same as above, easier to read
# ---
```
All keyword arguments must be supplied after all positional arguments:
```
boldly_print_3(k = 3, "sing")
# ---
```
## Scope
The **global scope** is the set of all variables available for usage outside of any function.
```
x = 3 # available in global scope
x
```
Functions create a **local scope**. This means:
- Variables in the global scope are available within the function.
- Variables created within the function are **not** available within the global scope.
```
# variables within the global scope are available within the function
def print_x():
print(x)
print_x()
# ---
def print_y():
y = 2
print(y)
print_y()
# ---
y
# ---
```
Immutable variables in the global scope cannot be modified by functions, even if you use the same variable name.
```
def new_x():
x = 7
print(x)
new_x()
# ---
print(x)
# ---
```
On the other hand, *mutable* variables in global scope can be modified by functions. **This is usually a bad idea**, for reasons we'll discuss in another set of notes.
```
# this works, but it's a bad idea.
captains = ["Kirk", "Picard", "Janeway", "Sisko"]
def reverse_names():
for i in range(4):
captains[i] = captains[i][::-1]
reverse_names()
captains
```
## Return values
So far, we've seen examples of functions that print but do not *return* anything. Usually, you will want your function to have one or more return values. These allow the output of a function to be used in future computations.
```
def boldly_return(k = 1, verb = "go"):
return(["to boldly " + verb for i in range(k)])
x = boldly_return(k = 2, verb = "dance")
x
```
Your function can return multiple values:
```
def double_your_number(j):
return(j, 2*j)
x, y = double_your_number(10)
```
The `return` statement *immediately* terminates the function's local scope, usually returning to global scope. So, for example, a `return` statement can be used to terminate a `while` loop, similar to a `break` statement.
```
def largest_power_below(a, upper_bound):
i = 1
while True:
i *= a
if a*i >= upper_bound:
return(i)
largest_power_below(3, 10000)
```
| true |
code
| 0.439386 | null | null | null | null |
|
```
import tensorflow as tf
# Import MNIST data (Numpy format)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Parameters
learning_rate = 0.01
num_steps = 1000
batch_size = 128
display_step = 100
# Network Parameters
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
sess = tf.Session()
# Create a dataset tensor from the images and the labels
dataset = tf.data.Dataset.from_tensor_slices(
(mnist.train.images, mnist.train.labels))
# Automatically refill the data queue when empty
dataset = dataset.repeat()
# Create batches of data
dataset = dataset.batch(batch_size)
# Prefetch data for faster consumption
dataset = dataset.prefetch(batch_size)
# Create an iterator over the dataset
iterator = dataset.make_initializable_iterator()
# Initialize the iterator
sess.run(iterator.initializer)
# Neural Net Input (images, labels)
X, Y = iterator.get_next()
# -----------------------------------------------
# THIS IS A CLASSIC CNN (see examples, section 3)
# -----------------------------------------------
# Note that a few elements have changed (usage of sess run).
# Create model
def conv_net(x, n_classes, dropout, reuse, is_training):
# Define a scope for reusing the variables
with tf.variable_scope('ConvNet', reuse=reuse):
# MNIST data input is a 1-D vector of 784 features (28*28 pixels)
# Reshape to match picture format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, 28, 28, 1])
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Convolution Layer with 32 filters and a kernel size of 5
conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Flatten the data to a 1-D vector for the fully connected layer
fc1 = tf.contrib.layers.flatten(conv2)
# Fully connected layer (in contrib folder for now)
fc1 = tf.layers.dense(fc1, 1024)
# Apply Dropout (if is_training is False, dropout is not applied)
fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
# Output layer, class prediction
out = tf.layers.dense(fc1, n_classes)
# Because 'softmax_cross_entropy_with_logits' already apply softmax,
# we only apply softmax to testing network
out = tf.nn.softmax(out) if not is_training else out
return out
# Because Dropout have different behavior at training and prediction time, we
# need to create 2 distinct computation graphs that share the same weights.
# Create a graph for training
logits_train = conv_net(X, n_classes, dropout, reuse=False, is_training=True)
# Create another graph for testing that reuse the same weights, but has
# different behavior for 'dropout' (not applied).
logits_test = conv_net(X, n_classes, dropout, reuse=True, is_training=False)
# Define loss and optimizer (with train logits, for dropout to take effect)
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits_train, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(logits_test, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Run the initializer
sess.run(init)
# Training cycle
for step in range(1, num_steps + 1):
# Run optimization
sess.run(train_op)
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
# (note that this consume a new batch of data)
loss, acc = sess.run([loss_op, accuracy])
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
```
| true |
code
| 0.814864 | null | null | null | null |
|
## Assignment:
Beat the performance of my Lasso regression by **using different feature engineering steps ONLY!!**.
The performance of my current model, as shown in this notebook is:
- test rmse: 44798.497576784845
- test r2: 0.7079639526659389
To beat my model you will need a test r2 bigger than 0.71 and a rmse smaller than 44798.
### Conditions:
- You MUST NOT change the hyperparameters of the Lasso.
- You MUST use the same seeds in Lasso and train_test_split as I show in this notebook (random_state)
- You MUST use all the features of the dataset (except Id) - you MUST NOT select features
### If you beat my model:
Make a pull request with your notebook to this github repo:
https://github.com/solegalli/udemy-feml-challenge
Remember that you need to fork this repo first, upload your winning notebook to your repo, and then make a PR (pull request) to my repo. I will then revise and accept the PR, which will appear in my repo and be available to all the students in the course. This way, other students can learn from your creativity when transforming the variables in your dataset.
## House Prices dataset
```
from math import sqrt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error, r2_score
# for feature engineering
from sklearn.preprocessing import StandardScaler
from feature_engine import missing_data_imputers as mdi
from feature_engine import discretisers as dsc
from feature_engine import categorical_encoders as ce
```
### Load Datasets
```
# load dataset
data = pd.read_csv('../houseprice.csv')
# make lists of variable types
categorical = [var for var in data.columns if data[var].dtype == 'O']
year_vars = [var for var in data.columns if 'Yr' in var or 'Year' in var]
discrete = [
var for var in data.columns if data[var].dtype != 'O'
and len(data[var].unique()) < 20 and var not in year_vars
]
numerical = [
var for var in data.columns if data[var].dtype != 'O'
if var not in discrete and var not in ['Id', 'SalePrice']
and var not in year_vars
]
print('There are {} continuous variables'.format(len(numerical)))
print('There are {} discrete variables'.format(len(discrete)))
print('There are {} temporal variables'.format(len(year_vars)))
print('There are {} categorical variables'.format(len(categorical)))
```
### Separate train and test set
```
# IMPORTANT: keep the random_state to zero for reproducibility
# Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(data.drop(
['Id', 'SalePrice'], axis=1),
data['SalePrice'],
test_size=0.1,
random_state=0)
# calculate elapsed time
def elapsed_years(df, var):
# capture difference between year variable and
# year the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# drop YrSold
X_train.drop('YrSold', axis=1, inplace=True)
X_test.drop('YrSold', axis=1, inplace=True)
# Join number of bathrooms in total
X_train['FullBath'] = X_train['FullBath'] + X_train['BsmtFullBath']
X_train['HalfBath'] = X_train['HalfBath'] + X_train['BsmtHalfBath']
X_test['FullBath'] = X_test['FullBath'] + X_test['BsmtFullBath']
X_test['HalfBath'] = X_test['HalfBath'] + X_test['BsmtHalfBath']
X_train.drop(['BsmtFullBath', 'BsmtHalfBath'], axis=1, inplace=True)
X_test.drop(['BsmtFullBath', 'BsmtHalfBath'], axis=1, inplace=True)
discrete.remove('BsmtFullBath')
discrete.remove('BsmtHalfBath')
# capture the column names for use later in the notebook
final_columns = X_train.columns
```
## Feature Engineering Pipeline
```
# I will treat discrete variables as if they were categorical
# to treat discrete as categorical using Feature-engine
# we need to re-cast them as object
X_train[discrete] = X_train[discrete].astype('O')
X_test[discrete] = X_test[discrete].astype('O')
data[np.append(year_vars, numerical)].isnull().mean().sort_values(ascending=False)
#mean of number of categories
vals = []
for i in categorical:
vals.append(len(data[i].unique()))
np.ceil(np.mean(vals))
house_pipe = Pipeline([
# missing data imputation - section 4
('missing_ind_1',
mdi.ArbitraryNumberImputer(
arbitrary_number=0, variables=['MasVnrArea'])),
('missing_ind_2',
mdi.AddNaNBinaryImputer(
variables=['LotFrontage', 'GarageYrBlt'])),
('imputer_num',
mdi.MeanMedianImputer(
imputation_method='median',
variables=['LotFrontage', 'GarageYrBlt'])),
('imputer_cat', mdi.CategoricalVariableImputer(variables=categorical)),
# categorical encoding - section 6
('rare_label_enc',
ce.RareLabelCategoricalEncoder(tol=0.03,
n_categories=7,
variables=categorical)),
('categorical_enc',
ce.OneHotCategoricalEncoder(top_categories=10,
variables=categorical)),
# feature Scaling - section 10
('scaler', StandardScaler()),
# regression
('lasso', Lasso(random_state=0))
])
# let's fit the pipeline
house_pipe.fit(X_train, y_train)
# let's get the predictions
X_train_preds = house_pipe.predict(X_train)
X_test_preds = house_pipe.predict(X_test)
# check model performance:
print('train mse: {}'.format(mean_squared_error(y_train, X_train_preds)))
print('train rmse: {}'.format(sqrt(mean_squared_error(y_train, X_train_preds))))
print('train r2: {}'.format(r2_score(y_train, X_train_preds)))
print()
print('test mse: {}'.format(mean_squared_error(y_test, X_test_preds)))
print('test rmse: {}'.format(sqrt(mean_squared_error(y_test, X_test_preds))))
print('test r2: {}'.format(r2_score(y_test, X_test_preds)))
# plot predictions vs real value
plt.scatter(y_test,X_test_preds)
plt.xlabel('True Price')
plt.ylabel('Predicted Price')
plt.show()
```
| true |
code
| 0.6137 | null | null | null | null |
|
```
# Imágenes: Copyright a autores respectivos.
# Gráficos: Tomados de http://matplotlib.org/gallery.html y modificados.
```
# MAT281
## Aplicaciones de la Matemática en la Ingeniería
## ¿Porqué aprenderemos sobre visualización?
* Porque un resultado no sirve si no puede comunicarse correctamente.
* Porque una buena visualización dista de ser una tarea trivial.
* Porque un ingenierio necesita producir excelentes gráficos (pero nadie enseña cómo).
Seguramente está exagerando...
## No, no exagero...
<img src="images/Fox1.png" alt="" width="800" align="middle"/>
## No, no exagero...
<img src="images/Fox2.png" alt="" width="800" align="middle"/>
## No, no exagero...
<img src="images/Fox3.png" alt="" width="800" align="middle"/>
## Primeras visualizaciones
Campaña de Napoleón a Moscú (Charles Minard, 1889).
<img src="images/Napoleon.png" alt="" width="800" align="middle"/>
## Primeras visualizaciones
Mapa del cólera (John Snow, 1855).
<img src="images/Colera.png" alt="" width="800" align="middle"/>
## ¿Y en primer lugar, porqué utilizamos gráficos?
¿Porqué utilizamos gráficos para presentar datos?
* El 70 % de los receptores sensoriales del cuerpo humano está dedicado a la visión.
* Cerebro ha sido entrenado evolutivamente para interpretar la información visual de manera masiva.
“The eye and the visual cortex of the brain form a massively
parallel processor that provides the highest bandwidth channel
into human cognitive centers”
— Colin Ware, Information Visualization, 2004.
## Ejemplo clásico: Cuarteto de ANSCOMBE
Considere los siguientes 4 conjuntos de datos.
¿Qué puede decir de los datos?
```
import pandas as pd
import os
filepath = os.path.join("data","anscombe.csv")
df = pd.read_csv(filepath)
df
```
## Ejemplo clásico: Cuarteto de ANSCOMBE
Consideremos las estadísticas de los datos, versión `numpy` puro:
```
import numpy as np
data = np.loadtxt("data/anscombe.csv", delimiter=",", skiprows=1)
for i in range(4):
x = data[:,2*i]
y = data[:,2*i+1]
slope, intercept = np.polyfit(x, y, 1)
print("Grupo %d:" %(i+1))
print("\tTiene pendiente m=%.2f e intercepto b=%.2f" %(slope, intercept))
```
Ahora utilizando `pandas`.
```
import pandas as pd
import os
filepath = os.path.join("data","anscombe.csv")
df = pd.read_csv(filepath)
df[sorted(df.columns)].describe(include="all")
```
## Ejemplo clásico: Cuarteto de ANSCOMBE
Grafiquemos los datos, con `numpy`:
```
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
def my_plot():
data = np.loadtxt("data/anscombe.csv", delimiter=",", skiprows=1)
fig = plt.figure(figsize=(16,8))
for i in range(4):
x = data[:,2*i]
y = data[:,2*i+1]
plt.subplot(2, 2, i+1)
plt.plot(x,y,'o')
plt.xlim([2,20])
plt.ylim([2,20])
plt.title("Grupo %d" %(i+1))
m, b = np.polyfit(x, y, 1)
x_aux = np.linspace(2,16,20)
plt.plot(x_aux, m*x_aux + b, 'r', lw=2.0)
plt.suptitle("Cuarteto de Anscombe")
plt.show()
my_plot()
```
Grafiquemos con `pandas`:
```
import pandas as pd
import os
# Formateo de la información
filepath = os.path.join("data","anscombe.csv")
df = pd.read_csv(filepath)
long_format_data = []
for i in range(1,5):
old_cols = ["x{}".format(i), "y{}".format(i)]
new_cols = ["x", "y"]
df_aux = df[old_cols].rename(columns = dict(zip(old_cols, new_cols)))
df_aux["set"] = "{}".format(i)
long_format_data.append(df_aux)
df_new = pd.concat(long_format_data)
df_new
df_new.plot(x="x", y="y", kind="scatter", subplots=True, figsize=(16,8))
```
Uuuffff. Más dificil de lo pensado.
En realidad, siempre conviene usar la mejor herramienta a mano (y conocer varias herramientas).
```
pd.plotting.scatter_matrix(df, figsize=(10,10))#;
```
## Sistema visual humano
#### Buenas noticias
* Gráficos entregan información que la estadística podría no revelar.
* Despliegue visual es esencial para comprensión.
#### Malas noticias
* La atención es selectiva y puede ser fácilmente engañada.
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO1a.png" alt="" width="400" align="middle"/>
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO1b.png" alt="" width="400" align="middle"/>
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO2a.png" alt="" width="400" align="middle"/>
#### La atención es selectiva y puede ser fácilmente engañada.
<img src="images/IO2b.png" alt="" width="400" align="middle"/>
## Consejos generales
Noah Illinsky, en su charla "Cuatro pilatres de la visualización" ([es](https://www.youtube.com/watch?v=nC92wIzpQFE), [en](https://www.youtube.com/watch?v=3eZ15VplE3o)), presenta buenos consejos sobre cómo realizar una correcta visualización:
* Propósito
* Información/Contenido
* Codificación/Estructura
* Formato
Es altamente aconsejable ver el video, pero en resumen:
* **Propósito** o público tiene que ver con para quién se está preparando la viz y que utilidad se le dará. Es muy diferente preparar un gráfico orientado a información y toma de decisiones.
* **Información/Contenido** se refiere a contar con la información que se desea mostrar, en el formato necesario para su procesamiento.
* **Codificación/Estructura** tiene que ver con la selección correcta de la codificación y estructura de la información.
* **Formato** tiene que ver con la elección de fuentes, colores, tamaños relativos, etc.
Lo anterior indica que una visualización no es el resultado de unos datos. Una visualización se diseña, se piensa, y luego se buscan fuentes de información apropiadas.
## Elementos para la creación de una buena visualización
1. ***Honestidad***: representaciones visuales no deben engañar al observador.
2. ***Priorización***: dato más importante debe utilizar elemento de mejor percepción.
3. ***Expresividad***: datos deben utilizar elementos con atribuciones adecuadas.
4. ***Consistencia***: codificación visual debe permitir reproducir datos.
El principio básico a respetar es que a partir del gráfico uno debe poder reobtener fácilmente los datos originales.
## 1. Honestidad
El ojo humano no tiene la misma precisión al estimar distintas atribuciones:
* **Largo**: Bien estimado y sin sesgo, con un factor multiplicativo de 0.9 a 1.1.
* **Área**: Subestimado y con sesgo, con un factor multiplicativo de 0.6 a 0.9.
* **Volumen**: Muy subestimado y con sesgo, con un factor multiplicativo de 0.5 a 0.8.
#### 1. Honestidad
Resulta inadecuado realizar gráficos de datos utilizando áreas o volúmenes buscando inducir a errores.
<img src="images/Honestidad1.png" alt="" width="800" align="middle"/>
#### 1. Honestidad
Resulta inadecuado realizar gráficos de datos utilizando áreas o volúmenes si no queda claro la atribución utilizada.
<img src="images/Honestidad2.png" alt="" width="800" align="middle"/>
#### 1. Honestidad
Una pseudo-excepción la constituyen los "pie-chart" o gráficos circulares,
porque el ojo humano distingue bien ángulos y segmentos de círculo,
y porque es posible indicar los porcentajes respectivos.
```
from matplotlib import pyplot as plt
def my_plot():
# make a square figure and axes
plt.figure(figsize=(6,6))
ax = plt.axes([0.1, 0.1, 0.8, 0.8])
# The slices will be ordered and plotted counter-clockwise.
my_labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'
my_fracs = [15, 30, 45, 10]
my_explode=(0, 0.10, 0.10, 0)
#plt.pie(my_fracs, labels=my_labels)
plt.pie(my_fracs, explode=my_explode, labels=my_labels, autopct='%1.1f%%', shadow=True, startangle=90)
plt.title('Raining Hogs and Dogs', bbox={'facecolor':'0.8', 'pad':5})
plt.show()
my_plot()
```
## 2. Priorización
Dato más importante debe utilizar elemento de mejor percepción.
```
import numpy as np
from matplotlib import pyplot as plt
def my_plot():
N = 31
x = np.arange(N)
y1 = 80 + 20*x/N + 5*np.random.rand(N)
y2 = 75 + 25*x/N + 5*np.random.rand(N)
fig = plt.figure(figsize=(16,8))
plt.subplot(2, 2, 1)
plt.plot(x, y1, 'ok')
plt.plot(x, y2, 'sk')
plt.subplot(2, 2, 2)
plt.plot(x, y1,'ob')
plt.plot(x, y2,'or')
plt.subplot(2, 2, 3)
plt.plot(x, y1,'ob')
plt.plot(x, y2,'*r')
plt.subplot(2, 2, 4)
plt.plot(x, y1,'sr')
plt.plot(x, y2,'ob')
plt.show()
my_plot()
```
#### 2. Priorización
## Elementos de mejor percepción
No todos los elementos tienen la misma percepción a nivel del sistema visual.
En particular, el color y la forma son elementos preatentivos: un color distinto o una forma distinta se reconocen de manera no conciente.
Ejemplos de elementos preatentivos.
<img src="images/preatentivo1.png" alt="" width="600" align="middle"/>
<img src="images/preatentivo2.png" alt="" width="600" align="middle"/>
#### 2. Priorización
## Elementos de mejor percepción
¿En que orden creen que el sistema visual humano puede estimar los siguientes atributos visuales:
* Color
* Pendiente
* Largo
* Ángulo
* Posición
* Área
* Volumen
#### 2. Priorización
## Elementos de mejor percepción
El sistema visual humano puede estimar con precisión siguientes atributos visuales:
1. Posición
2. Largo
3. Pendiente
4. Ángulo
5. Área
6. Volumen
7. Color
Utilice el atributo que se estima con mayor precisión cuando sea posible.
#### 2. Priorización
## Colormaps
Puesto que la percepción del color tiene muy baja precisión, resulta ***inadecuado*** tratar de representar un valor numérico con colores.
* ¿Qué diferencia numérica existe entre el verde y el rojo?
* ¿Que asociación preexistente posee el color rojo, el amarillo y el verde?
* ¿Con cuánta precisión podemos distinguir valores en una escala de grises?
#### 2. Priorización
## Colormaps
<img src="images/colormap.png" alt="" width="400" align="middle"/>
#### 2. Priorización
## Colormaps
Algunos ejemplos de colormaps
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
def my_plot():
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure(figsize=(16,8))
# First plot
plt.subplot(2,2,1)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.rainbow, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Second plot
plt.subplot(2,2,2)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.autumn, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Third plot
plt.subplot(2,2,3)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.coolwarm, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Fourth plot
plt.subplot(2,2,4)
im = plt.imshow(Z, interpolation='bilinear', origin='lower',cmap=cm.gray, extent=(-3, 3, -2, 2))
plt.colorbar(im, shrink=0.8)
# Show
plt.show()
my_plot()
```
#### 2. Priorización
## Colormaps
Consejo: evite mientras pueda los colormaps. Por ejemplo, utilizando contour plots.
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
def my_plot():
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure(figsize=(16,8))
# First plot
plt.subplot(2,2,1)
CS = plt.contour(X, Y, Z, 9, cmap=cm.rainbow)
# Second plot
matplotlib.rcParams['contour.negative_linestyle'] = 'solid'
plt.subplot(2,2,2)
CS = plt.contour(X, Y, Z, 9, cmap=cm.rainbow)
plt.clabel(CS, fontsize=9, inline=1)
# Third plot
matplotlib.rcParams['contour.negative_linestyle'] = 'solid'
plt.subplot(2,2,3)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
# Fourth plot
matplotlib.rcParams['contour.negative_linestyle'] = 'dashed'
plt.subplot(2,2,4)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
plt.grid('on');
# Show
plt.show()
my_plot()
```
## 3. Sobre la Expresividad
Mostrar los datos y sólo los datos.
Los datos deben utilizar elementos con atribuciones adecuadas: Not all data is born equal.
#### 3. Sobre la Expresividad
Clasificación de datos:
* ***Datos Cuantitativos***: Cuantificación absoluta.
* Cantidad de azúcar en fruta: 50 [gr/kg]
* Operaciones =, $\neq$, <, >, +, −, * , /
* ***Datos Posicionales***: Cuantificación relativa.
* Fecha de cosecha: 1 Agosto 2014, 2 Agosto 2014.
* Operaciones =, $\neq$, <, >, +, −
* ***Datos Ordinales***: Orden sin cuantificación.
* Calidad de la Fruta: baja, media, alta, exportación.
* Operaciones =, $\neq$, <, >
* ***Datos Nominales***: Nombres o clasificaciones
* Frutas: manzana, pera, kiwi, ...
* Operaciones $=$, $\neq$
#### 3. Sobre la Expresividad
Ejemplo: Terremotos. ¿Que tipos de datos tenemos?
* Ciudad más próxima
* Año
* Magnitud en escala Richter
* Magnitud en escala Mercalli
* Latitud
* Longitud
#### 3. Sobre la Expresividad
Contraejemplo: Compañías de computadores.
| Companía | Procedencia |
|----------|-------------|
| MSI | Taiwan |
| Asus | Taiwan |
| Acer | Taiwan |
| HP | EEUU |
| Dell | EEUU |
| Apple | EEUU |
| Sony | Japon |
| Toshiba | Japon |
| Lenovo | Hong Kong |
| Samsung | Corea del Sur |
#### 3. Sobre la Expresividad
Contraejemplo: Compañias de computadores.
```
import matplotlib.pyplot as plt
import numpy as np
def my_plot():
brands = {"MSI":"Taiwan", "Asus":"Taiwan", "Acer":"Taiwan",
"HP":"EEUU", "Dell":"EEUU", "Apple":"EEUU",
"Sony":"Japon", "Toshiba":"Japon",
"Lenovo":"Hong Kong",
"Samsung":"Corea del Sur"}
C2N = {"Taiwan":1,"EEUU":2,"Japon":3,"Hong Kong":4,"Corea del Sur":7}
x = np.arange(len(brands.keys()))
y = np.array([C2N[val] for key,val in brands.items()])
width = 0.35 # the width of the bars
fig, ax = plt.subplots(figsize=(16,8))
rects1 = ax.bar(x, y, width, color='r')
# add some text for labels, title and axes ticks
ax.set_xticks(x + 0.5*width)
ax.set_xticklabels(brands.keys(), rotation="90")
ax.set_yticks(list(C2N.values()))
ax.set_yticklabels(C2N.keys())
plt.xlim([-1,len(x)+1])
plt.ylim([-1,y.max()+1])
plt.show()
my_plot()
```
#### 3. Sobre la Expresividad
Clasificación de datos:
* ***Datos Cuantitativos***: Cuantificación absoluta.
* Cantidad de azúcar en fruta: 50 [gr/kg]
* Operaciones =, $\neq$, <, >, +, −, * , /
* **Utilizar posición, largo, pendiente o ángulo**
* ***Datos Posicionales***: Cuantificación relativa.
* Fecha de cosecha: 1 Agosto 2014, 2 Agosto 2014.
* Operaciones =, $\neq$, <, >, +, −
* **Utilizar posición, largo, pendiente o ángulo**
* ***Datos Ordinales***: Orden sin cuantificación.
* Calidad de la Fruta: baja, media, alta, exportación.
* Operaciones =, $\neq$, <, >
* **Utilizar marcadores diferenciados en forma o tamaño, o mapa de colores apropiado**
* ***Datos Nominales***: Nombres o clasificaciones
* Frutas: manzana, pera, kiwi, ...
* Operaciones $=$, $\neq$
* **Utilizar forma o color**
## 4. Consistencia
La codificación visual debe permitir reproducir datos. Para ello debemos:
* Graficar datos que sean comparables.
* Utilizar ejes escalados adecuadamente.
* Utilizar la misma codificación visual entre gráficos similares.
#### 4. Consistencia
## Utilizar ejes escalados adecuadamente.
```
import numpy as np
from matplotlib import pyplot as plt
def my_plot():
# Datos
x = range(1,13)
y = 80 + 20*np.random.rand(12)
x_ticks = ["E","F","M","A","M","J","J","A","S","O","N","D"]
fig = plt.figure(figsize=(16,8))
plt.subplot(1, 2, 1)
plt.plot(x, y,'o-')
plt.xticks(x, x_ticks)
plt.xlim([-1,13])
plt.subplot(1, 2, 2)
plt.plot(x, y,'o-')
plt.xticks(x, x_ticks)
plt.xlim([-1,13])
plt.ylim([0,100])
plt.show()
my_plot()
```
#### 4. Consistencia
## Utilizar la misma codificación visual entre gráficos similares
```
import numpy as np
from matplotlib import pyplot as plt
def my_plot():
x = np.linspace(0, 1, 50)
f1 = x**2+.2*np.random.rand(50)
g1 = x+.2*np.random.rand(50)
f2 = 0.5-0.2*x+.2*np.random.rand(50)
g2 =x**3+.2*np.random.rand(50)
fig = plt.figure(figsize=(16,8))
plt.subplot(2, 1, 1)
plt.title("Antes de MAT281")
plt.plot(x, f1, 'b', label='Chile', lw=2.0)
plt.plot(x, g1, 'g:', label='OECD', lw=2.0)
plt.legend(loc="upper left")
plt.subplot(2, 1, 2)
plt.title("Despues de MAT281")
plt.plot(x, f2, 'g:', label='Chile', lw=2.0)
plt.plot(x, g2, 'b', label='OECD', lw=2.0)
plt.legend()
plt.show()
my_plot()
```
## Resumen
Elementos para la creación de una buena visualización
* ***Honestidad***: representaciones visuales no deben engañar al observador.
* ***Priorización***: dato más importante debe utilizar elemento de mejor percepción.
* ***Expresividad***: datos deben utilizar elementos con atribuciones adecuadas.
* ***Consistencia***: codificación visual debe permitir reproducir datos.
El principio básico a respetar es que a partir del gráfico uno debe poder reobtener fácilmente los datos originales.
#### Gráfico a gráfico
## ¿Cuándo utilizar gráfico de barras?
```
from matplotlib import pyplot as plt
import numpy as np
def my_plot():
people = ('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
y_pos = np.arange(len(people))
performance = 3 + 10 * np.random.rand(len(people))
error = np.random.rand(len(people))
fig = plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.barh(y_pos, performance, xerr=error, align='center', color="g", alpha=0.4)
plt.yticks(y_pos, people)
plt.xlabel('Performance')
plt.subplot(1,2,2)
plt.bar(y_pos, performance, yerr=error, align='center', color="g", alpha=0.6)
plt.xticks(y_pos, people)
plt.xlabel('People')
plt.ylabel('Performance')
plt.show()
my_plot()
```
### ¿Cuándo utilizar gráfico de barras?
* x: Debe ser datos del tipo nominal o ordinal.
* y: Debe ser datos de tipo ordinal, posicional o cuantitativo.
Evitar: gráfico de nominal vs nominal.
#### Gráfico a gráfico
## ¿Cuándo utilizar campos de vectores?
¿Porqué se llama quiver al campo de vectores en inglés?
```
import matplotlib.pyplot as plt
import numpy as np
from numpy import ma
def my_plot():
X, Y = np.meshgrid(np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2))
U = np.cos(X)
V = np.sin(Y)
fig = plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
Q = plt.quiver(U, V)
qk = plt.quiverkey(Q, 0.5, 0.92, 2, r'$2 \frac{m}{s}$', labelpos='W',
fontproperties={'weight': 'bold'})
l, r, b, t = plt.axis()
dx, dy = r - l, t - b
plt.axis([l - 0.05*dx, r + 0.05*dx, b - 0.05*dy, t + 0.05*dy])
plt.subplot(1,2,2)
Q = plt.quiver(X[::3, ::3], Y[::3, ::3], U[::3, ::3], V[::3, ::3],
pivot='mid', color='r', units='inches')
qk = plt.quiverkey(Q, 0.5, 0.03, 1, r'$1 \frac{m}{s}$',
fontproperties={'weight': 'bold'})
plt.plot(X[::3, ::3], Y[::3, ::3], 'k.')
plt.axis([-1, 7, -1, 7])
plt.title("pivot='mid'; every third arrow; units='inches'")
plt.show()
my_plot()
```
### ¿Cuándo utilizar campos de vectores?
* x: Debe ser datos del tipo posicional o cuantitativo.
* y: Debe ser datos de tipo posicional o cuantitativo.
* z: Pendiente debe ser dato de tipo posicional o cuantitativo.
Evitar: gráfico de campo de vectores si no es posible la interpretación correspondiente.
#### Gráfico a gráfico
## ¿Cuándo utilizar contour plot?
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
def my_plot():
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure(figsize=(16,8))
matplotlib.rcParams['contour.negative_linestyle'] = 'solid'
plt.subplot(1,2,1)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
matplotlib.rcParams['contour.negative_linestyle'] = 'dashed'
plt.subplot(1,2,2)
CS = plt.contour(X, Y, Z, 9, colors='k')
plt.clabel(CS, fontsize=9, inline=1)
plt.grid('on')
# Show
plt.show()
my_plot()
```
* x: Dato del tipo posicional o cuantitativo.
* y: Dato de tipo posicional o cuantitativo.
* z: Dato de tipo posicional o cuantitativo.
***OBSERVACION***: Se debe tener suficiente densidad/regularidad de puntos como para poder obtener superficies de nivel.
#### Gráfico a gráfico
## ¿Cuándo utilizar scatter plot?
```
import matplotlib.pyplot as plt
import numpy as np
def my_plot():
N = 100
r0 = 0.6
x = 0.9*np.random.rand(N)
y = 0.9*np.random.rand(N)
area = np.pi*(10 * np.random.rand(N))**2 # 0 to 10 point radiuses
c = np.sqrt(area)
r = np.sqrt(x*x + y*y)
cm1 = plt.cm.get_cmap('RdYlBu')
cm2 = plt.cm.get_cmap('Greys')
plt.figure(figsize=(16,8))
area1 = np.ma.masked_where(r < r0, area)
area2 = np.ma.masked_where(r >= r0, area)
sc1 = plt.scatter(x, y, s=area1, marker='^', c=c, cmap=cm1)
plt.colorbar(sc1)
sc2 = plt.scatter(x, y, s=area2, marker='o', c=c, cmap=cm2)
plt.colorbar(sc2)
# Show the boundary between the regions:
theta = np.arange(0, np.pi/2, 0.01)
plt.plot(r0*np.cos(theta), r0*np.sin(theta), "k:", lw=2.0)
plt.show()
my_plot()
```
### ¿Cuándo utilizar scatter plot?
* x: Dato del tipo posicional o cuantitativo.
* y: Dato del tipo posicional o cuantitativo.
* z: Dato del tipo nominal u ordinal (opcional)
***OBSERVACION***: Si hay pocos puntos, también puede usarse para z datos de tipo posicional o cuantitativo.
#### Gráfico a gráfico
## ¿Cuándo utilizar gráfico de barra de error?
```
import numpy as np
import matplotlib.pyplot as plt
def my_plot():
x = np.arange(0.1, 4, 0.5)
y = np.exp(-x)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
x_error = 0.1 + 0.2*np.random.rand(len(x))
plt.errorbar(x, y, xerr=x_error)
plt.subplot(1,2,2)
y_error = 0.1 + 0.2*np.random.rand(len(x))
plt.errorbar(x, y, yerr=y_error)
plt.show()
my_plot()
```
### ¿Cuándo utilizar gráfico de barra de error?
* x: Dato del tipo posicional o cuantitativo.
* y: Dato del tipo posicional o cuantitativo.
* z: Dato del tipo posicional o cuantitativo.
Los valores de z tienen que tener las mismas unidades y.
## Para hacer buenas visualizaciones
* Aprender a reconocer buenos ejemplos y malos ejemplos. Vitrinee.
* Para graficos 2d y 3d simples:
* Libreria clásica: matplotlib (ver ejemplos en http://matplotlib.org/gallery.html)
* Otras librerías: seaborn, gnuplot, ...
* Para gráficos 3d:
* Librería clásica: gmsh
* Otras librerías: mayavi, paraview, ...
* Para gráficos interactivos:
* altair, bokeh, d3js
* PowerBI, Tableau, etc.
| true |
code
| 0.514278 | null | null | null | null |
|
## Structure solving as meta-optimization (demo)
This is going to be so cool!
In the work of Senior et al. (2019), Yang et al. (2020), and others, static optimization constraints are predicted then provided to a static, general purpose optimization algorithm (with some amount of manual tuning of optimization parameters to the specific task).
Fascinatingly, there is a broad modern literature on the use of neural networks to learn to optimize. For example, Andrychowicz et al. (2016) demonstrate the learning of a domain-specific optimization algorithm that subsequently was shown to out-perform all of the best in class optimizers available for that problem (that had been a legacy of painstaking effort over more than a decade).
This is amazing because there's the potential to learn better and better optimizers from data which can in turn save time and money for future work - but it's also quite interesting to think of how an optimizer might learn to become specialized to individual optimization problems (such as navigating the energy landscape of a protein structure).
<img src="https://upload.wikimedia.org/wikipedia/commons/9/91/Folding_funnel_schematic.svg" alt="Folding funnel schematic.svg" height="480" width="463">
(Image [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0) / [Thomas Splettstoesser](commons.wikimedia.org/wiki/User:Splette); [original](https://commons.wikimedia.org/wiki/File:Folding_funnel_schematic.svg#/media/File:Folding_funnel_schematic.svg))
### Work in progress
The plan is to modify the [GraphNetEncoder](https://github.com/google/jax-md/blob/master/jax_md/nn.py#L650) and [EnergyGraphNet](https://github.com/google/jax-md/blob/master/jax_md/energy.py#L944) from jax-md to also accept as input evolutionary data and not to predict a single energy value but to predict several things including:
1. A future conformation,
2. A distance matrix,
3. Bond angles, and
4. Compound interaction strengths
The simplest way to include (1) in a loss seems to be to have one of the model outputs be a coordinate for each node that are passed to a conventional jax-md energy function which is then used to incentivized input conformations being mapped to output conformations with lower energy.
It looks like (2) and (3) would be straightforward if the model returned edge representation in some form. It's possible to for now also accomplish (4) in this way.
The philosophy regarding (4) is that when folding a new protein you could obtain its iteraction profile fairly easily and if your model was previously trained to use interaction profiles as a guide (in the same way as using evolutionary data as a guide) might then be able to solve the structure more easily. Succeeding with that means architecting the model in a way consistent with that use case.
This might be done in a variety of ways. In the spirit of our learned optimizer, we might wish to learn an optimizer that not only minimizes energy but predicts conformations that are more and more consistent with interaction profiles with a set of compounds. To do this it seems we may need to run a simulator of those structure/compound interactions (which would be computationally expensive but not impossible, especially for important structures). The tendency of the learned energy minimizer to minimize energy could be fine-tuned based on the interactions of produced structures with compounds.
Or, we might consider the compound interactions as simply a guide to better learning how to extract information from evolutionary data and ignore their predictions at structure inference time.
Alternatively, we might consider compound-polymer interaction strengths as a type of input, like evolutionary data, that need to be correctly encoded but need not be predicted by the network - it simply is yet another kind of input information that can help the model learn to predict low-energy structures.
It's possible we might want to synergize with the energy-predicting approach of jax-md given that the task of learning to predict structures of lower energy seems closely related to that of computing energies - so training node functions to compute partial energies might be nice pre-training for their learning to perform position updates that reduce energy.
### Setup
Ensure the most recent version of Flatland is installed.
```
!pip install git+git://github.com/cayley-group/flatland.git --quiet
```
### Loading examples
Here we use a [Tensorflow Datasets](https://github.com/tensorflow/datasets) definition of a dataset generated using the Flatland environment. This provides a simplified interface to returning a [tf.data](https://www.tensorflow.org/guide/data) Dataset which has a variety of convenient methods for handling the input example stream (e.g. for batching, shuffling, caching, and pre-fetching).
Let's load an example from the "flatland_mock" dataset to see what the structure and data type of examples will be.
```
from absl import logging
logging.set_verbosity(logging.INFO)
import tensorflow as tf
import tensorflow_datasets as tfds
import flatland.dataset
ds = tfds.load('flatland_mock', split="train")
assert isinstance(ds, tf.data.Dataset)
ds = ds.cache().repeat()
for example in tfds.as_numpy(ds):
break
example
```
## Train demo solver
Here we have a wrapper to train the demo solver that currently only trains an energy predicting model but subsequently will transfer-learn this to predicting lower-energy structures.
```
from flatland.train import train_demo_solver
from absl import logging
logging.set_verbosity(logging.INFO)
params = train_demo_solver(num_training_steps=1,
training_log_every=1,
batch_size=16)
from flatland.train import demo_example_stream, graph_network_neighbor_list
from flatland.train import OrigamiNet
from jax_md import space
from functools import partial
box_size = 10.862
batch_size = 16
iter_examples = demo_example_stream(
batch_size=batch_size, split="train")
positions, energies, forces = next(iter_examples)
_, polymer_length, polymer_dimensions = positions.shape
displacement, shift = space.periodic(box_size)
neighbor_fn, init_fn, apply_fn = graph_network_neighbor_list(
network=OrigamiNet,
displacement_fn=displacement,
box_size=box_size,
polymer_length=polymer_length,
polymer_dimensions=polymer_dimensions,
r_cutoff=3.0,
dr_threshold=0.0)
neighbor = neighbor_fn(positions[0], extra_capacity=6)
structure_fn = partial(apply_fn, params)
structure = structure_fn(positions[0], neighbor)[1:]
structure
# A polymer of length 10 and dimension 2
structure.shape
%timeit structure_fn(next(iter_examples)[0][0], neighbor)
```
## Long auto-regressive search
Here we will provide some minimal experimentation with using the model to actually optimize a structure by simply repeatedly applying the structure minimizer. We'll characterize what happens to the energy - e.g. does it consistently go down over time or does it diverge after a certain length of such a "rollout"?
```
# WIP
```
## Genetic + short auto-regressive
Presuming the previous won't be stable under long-rollouts, we'll use the previous method only over somewhat short rollouts (for the horizon over which these are stable) in conjunction with an evolutionary optimization approach to progressively determining better and better optimization starting points.
```
# WIP
```
| true |
code
| 0.761027 | null | null | null | null |
|
# Crossentropy method
This notebook will teach you to solve reinforcement learning problems with crossentropy method. We'll follow-up by scaling everything up and using neural network policy.
```
# In google collab, uncomment this:
# !wget https://bit.ly/2FMJP5K -O setup.py && bash setup.py
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
%env DISPLAY = : 1
import gym
import numpy as np
import pandas as pd
env = gym.make("Taxi-v2")
env.reset()
env.render()
n_states = env.observation_space.n
n_actions = env.action_space.n
print("n_states=%i, n_actions=%i" % (n_states, n_actions))
```
# Create stochastic policy
This time our policy should be a probability distribution.
```policy[s,a] = P(take action a | in state s)```
Since we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.
Please initialize policy __uniformly__, that is, probabililities of all actions should be equal.
```
policy = np.ndarray((500, 6))
policy.fill(1. / n_actions)
assert type(policy) in (np.ndarray, np.matrix)
assert np.allclose(policy, 1./n_actions)
assert np.allclose(np.sum(policy, axis=1), 1)
```
# Play the game
Just like before, but we also record all states and actions we took.
```
def generate_session(policy, t_max=10**4):
"""
Play game until end or for t_max ticks.
:param policy: an array of shape [n_states,n_actions] with action probabilities
:returns: list of states, list of actions and sum of rewards
"""
states, actions = [], []
total_reward = 0.
s = env.reset()
for t in range(t_max):
a = np.random.choice([0, 1, 2, 3, 4, 5], p=policy[s])
new_s, r, done, info = env.step(a)
# Record state, action and add up reward to states,actions and total_reward accordingly.
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
s, a, r = generate_session(policy)
assert type(s) == type(a) == list
assert len(s) == len(a)
assert type(r) in [float, np.float]
# let's see the initial reward distribution
import matplotlib.pyplot as plt
%matplotlib inline
sample_rewards = [generate_session(policy, t_max=1000)[-1] for _ in range(200)]
plt.hist(sample_rewards, bins=20)
plt.vlines([np.percentile(sample_rewards, 50)], [0], [
100], label="50'th percentile", color='green')
plt.vlines([np.percentile(sample_rewards, 90)], [0], [
100], label="90'th percentile", color='red')
plt.legend()
```
### Crossentropy method steps (2pts)
```
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you're confused, see examples below. Please don't assume that states are integers (they'll get different later).
"""
reward_threshold = np.percentile(rewards_batch, percentile)
elite_states = []
elite_actions = []
for session_i, reward in enumerate(rewards_batch):
if reward >= reward_threshold:
elite_states.extend(states_batch[session_i])
elite_actions.extend(actions_batch[session_i])
return elite_states, elite_actions
states_batch = [
[1, 2, 3], # game1
[4, 2, 0, 2], # game2
[3, 1] # game3
]
actions_batch = [
[0, 2, 4], # game1
[3, 2, 0, 1], # game2
[3, 3] # game3
]
rewards_batch = [
3, # game1
4, # game2
5, # game3
]
test_result_0 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=0)
test_result_40 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=30)
test_result_90 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=90)
test_result_100 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=100)
assert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \
and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]),\
"For percentile 0 you should return all states and actions in chronological order"
assert np.all(test_result_40[0] == [4, 2, 0, 2, 3, 1]) and \
np.all(test_result_40[1] == [3, 2, 0, 1, 3, 3]),\
"For percentile 30 you should only select states/actions from two first"
assert np.all(test_result_90[0] == [3, 1]) and \
np.all(test_result_90[1] == [3, 3]),\
"For percentile 90 you should only select states/actions from one game"
assert np.all(test_result_100[0] == [3, 1]) and\
np.all(test_result_100[1] == [3, 3]),\
"Please make sure you use >=, not >. Also double-check how you compute percentile."
print("Ok!")
def update_policy(elite_states, elite_actions):
"""
Given old policy and a list of elite states/actions from select_elites,
return new updated policy where each action probability is proportional to
policy[s_i,a_i] ~ #[occurences of si and ai in elite states/actions]
Don't forget to normalize policy to get valid probabilities and handle 0/0 case.
In case you never visited a state, set probabilities for all actions to 1./n_actions
:param elite_states: 1D list of states from elite sessions
:param elite_actions: 1D list of actions from elite sessions
"""
new_policy = np.zeros([n_states, n_actions])
for state, action in zip(elite_states, elite_actions):
new_policy[state, action] += 1
for row in new_policy:
total = sum(row)
if total:
row /= total
else:
row.fill(1/n_actions)
return new_policy
elite_states, elite_actions = ([1, 2, 3, 4, 2, 0, 2, 3, 1], [
0, 2, 4, 3, 2, 0, 1, 3, 3])
new_policy = update_policy(elite_states, elite_actions)
assert np.isfinite(new_policy).all(
), "Your new policy contains NaNs or +-inf. Make sure you don't divide by zero."
assert np.all(
new_policy >= 0), "Your new policy can't have negative action probabilities"
assert np.allclose(new_policy.sum(
axis=-1), 1), "Your new policy should be a valid probability distribution over actions"
reference_answer = np.array([
[1., 0., 0., 0., 0.],
[0.5, 0., 0., 0.5, 0.],
[0., 0.33333333, 0.66666667, 0., 0.],
[0., 0., 0., 0.5, 0.5]])
assert np.allclose(new_policy[:4, :5], reference_answer)
print("Ok!")
```
# Training loop
Generate sessions, select N best and fit to those.
```
from IPython.display import clear_output
def show_progress(rewards_batch, log, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
# reset policy just in case
policy = np.ones([n_states, n_actions])/n_actions
n_sessions = 250 # sample this many sessions
percentile = 20 # take this percent of session with highest rewards
learning_rate = 0.5 # add this thing to all counts for stability
log = []
for i in range(100):
%time sessions = [generate_session(policy) for _ in range(n_sessions)] #< generate a list of n_sessions new sessions > ]
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = select_elites( states_batch, actions_batch, rewards_batch, percentile)
new_policy = update_policy(elite_states, elite_actions)
policy = learning_rate*new_policy + (1-learning_rate)*policy
# display results on chart
show_progress(rewards_batch, log)
```
# Digging deeper: approximate crossentropy with neural nets

In this section we will train a neural network policy for continuous state space game
```
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
plt.imshow(env.render("rgb_array"))
# create agent
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(hidden_layer_sizes=(20, 20),
activation='tanh',
warm_start=True, # keep progress between .fit(...) calls
max_iter=1 # make only 1 iteration on each .fit(...)
)
# initialize agent to the dimension of state an amount of actions
agent.fit([env.reset()]*n_actions, range(n_actions))
def generate_session(t_max=1000):
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# predict array of action probabilities
probs = agent.predict_proba([s])[0]
a = np.random.choice([0, 1], p=probs)#<sample action with such probabilities >
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
sessions = [generate_session() for _ in range(n_sessions)]#< generate a list of n_sessions new sessions > ]
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
elite_states, elite_actions = select_elites( states_batch, actions_batch, rewards_batch, percentile)
agent.fit(elite_states, elite_actions)
show_progress(rewards_batch, log, reward_range=[0, np.max(rewards_batch)])
if np.mean(rewards_batch) > 190:
print("You Win! You may stop training now via KeyboardInterrupt.")
```
# Results
```
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be _last_ video. Try other indices
```
# Homework part I
### Tabular crossentropy method
You may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
### Tasks
- __1.1__ (1 pts) Find out how the algorithm performance changes if you change different percentile and different n_sessions.
- __1.2__ (2 pts) Tune the algorithm to end up with positive average score.
It's okay to modify the existing code.
```<Describe what you did here. Preferably with plot/report to support it.>```
# Homework part II
### Deep crossentropy method
By this moment you should have got enough score on [CartPole-v0](https://gym.openai.com/envs/CartPole-v0) to consider it solved (see the link). It's time to try something harder.
* if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help.
### Tasks
* __2.1__ (3 pts) Pick one of environments: MountainCar-v0 or LunarLander-v2.
* For MountainCar, get average reward of __at least -150__
* For LunarLander, get average reward of __at least +50__
See the tips section below, it's kinda important.
__Note:__ If your agent is below the target score, you'll still get most of the points depending on the result, so don't be afraid to submit it.
* __2.2__ (bonus: 4++ pt) Devise a way to speed up training at least 2x against the default version
* Obvious improvement: use [joblib](https://www.google.com/search?client=ubuntu&channel=fs&q=joblib&ie=utf-8&oe=utf-8)
* Try re-using samples from 3-5 last iterations when computing threshold and training
* Experiment with amount of training iterations and learning rate of the neural network (see params)
* __Please list what you did in anytask submission form__
### Tips
* Gym page: [mountaincar](https://gym.openai.com/envs/MountainCar-v0), [lunarlander](https://gym.openai.com/envs/LunarLander-v2)
* Sessions for MountainCar may last for 10k+ ticks. Make sure ```t_max``` param is at least 10k.
* Also it may be a good idea to cut rewards via ">" and not ">=". If 90% of your sessions get reward of -10k and 20% are better, than if you use percentile 20% as threshold, R >= threshold __fails cut off bad sessions__ whule R > threshold works alright.
* _issue with gym_: Some versions of gym limit game time by 200 ticks. This will prevent cem training in most cases. Make sure your agent is able to play for the specified __t_max__, and if it isn't, try `env = gym.make("MountainCar-v0").env` or otherwise get rid of TimeLimit wrapper.
* If you use old _swig_ lib for LunarLander-v2, you may get an error. See this [issue](https://github.com/openai/gym/issues/100) for solution.
* If it won't train it's a good idea to plot reward distribution and record sessions: they may give you some clue. If they don't, call course staff :)
* 20-neuron network is probably not enough, feel free to experiment.
### Bonus tasks
* __2.3 bonus__ Try to find a network architecture and training params that solve __both__ environments above (_Points depend on implementation. If you attempted this task, please mention it in anytask submission._)
* __2.4 bonus__ Solve continuous action space task with `MLPRegressor` or similar.
* Start with ["Pendulum-v0"](https://github.com/openai/gym/wiki/Pendulum-v0).
* Since your agent only predicts the "expected" action, you will have to add noise to ensure exploration.
* [MountainCarContinuous-v0](https://gym.openai.com/envs/MountainCarContinuous-v0), [LunarLanderContinuous-v2](https://gym.openai.com/envs/LunarLanderContinuous-v2)
* 4 points for solving. Slightly less for getting some results below solution threshold. Note that discrete and continuous environments may have slightly different rules aside from action spaces.
If you're still feeling unchallenged, consider the project (see other notebook in this folder).
| true |
code
| 0.388908 | null | null | null | null |
|
# Introduction
This notebook was used in order to create the **"Naive Early-fusion" row in TABLE II**.
Note that a lot of code is copy-pasted across notebooks, so you may find some functionality implemented here that is not used, for instance the network is implemented in a way to support late-fusion, which is not used.
```
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# Font which got unicode math stuff.
import matplotlib as mpl
mpl.rcParams['font.family'] = 'DejaVu Sans'
# Much more readable plots
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# Much better than plt.subplots()
from mpl_toolkits.axes_grid1 import ImageGrid
# https://github.com/ipython/ipython/issues/7270#issuecomment-355276432
mpl.interactive(False)
import wheelchAI.utils as u
import lbtoolbox.util as lbu
from ipywidgets import interact, IntSlider, FloatSlider
import ipywidgets
```
# Data loading
```
from os.path import join as pjoin
from glob import glob
```
**CAREFUL**: `scan` goes right-to-left, i.e. first array value corresponds to "rightmost" laser point. Positive angle is left, negative angle right.
```
LABELDIR = DATADIR = "/fastwork/data/DROW-data/"
train_names = [f[:-4] for f in glob(pjoin(DATADIR, 'train', '*.csv'))]
val_names = [f[:-4] for f in glob(pjoin(DATADIR, 'val', '*.csv'))]
te_names = [f[:-4] for f in glob(pjoin(DATADIR, 'test', '*.csv'))]
tr = u.Dataset(train_names, DATADIR, LABELDIR)
va = u.Dataset(val_names, DATADIR, LABELDIR)
WIN_KW = dict(ntime=5, nsamp=48, odom=False, repeat_before=True, center_time='each')
%timeit u.get_batch(tr, bs=1024, **WIN_KW)
batcher = u.BackgroundFunction(u.get_batch, 5, data=tr, bs=1024, **WIN_KW)
```
# Model definition
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import lbtoolbox.pytorch as lbt
torch.backends.cudnn.benchmark = True # Run benchmark to select fastest implementation of ops.
GPU=1 # This is the GPU index, use `False` for CPU-only.
class DROWNet3EF(nn.Module):
def __init__(self, snip_len, dropout=0.5, *a, **kw):
""" thin_fact should be 8 for 5 time-win. """
super(DROWNet3EF, self).__init__(*a, **kw)
# >>> m = weight_norm(nn.Linear(20, 40), name='weight', dim=???)
self.dropout = dropout
self.conv1a = nn.Conv1d(snip_len, 64, kernel_size=3, padding=1)
self.bn1a = nn.BatchNorm1d( 64)
self.conv1b = nn.Conv1d( 64, 64, kernel_size=3, padding=1)
self.bn1b = nn.BatchNorm1d( 64)
self.conv1c = nn.Conv1d( 64, 128, kernel_size=3, padding=1)
self.bn1c = nn.BatchNorm1d(128)
self.conv2a = nn.Conv1d(128, 128, kernel_size=3, padding=1)
self.bn2a = nn.BatchNorm1d(128)
self.conv2b = nn.Conv1d(128, 128, kernel_size=3, padding=1)
self.bn2b = nn.BatchNorm1d(128)
self.conv2c = nn.Conv1d(128, 256, kernel_size=3, padding=1)
self.bn2c = nn.BatchNorm1d(256)
self.conv3a = nn.Conv1d(256, 256, kernel_size=3, padding=1)
self.bn3a = nn.BatchNorm1d(256)
self.conv3b = nn.Conv1d(256, 256, kernel_size=3, padding=1)
self.bn3b = nn.BatchNorm1d(256)
self.conv3c = nn.Conv1d(256, 512, kernel_size=3, padding=1)
self.bn3c = nn.BatchNorm1d(512)
self.conv4a = nn.Conv1d(512, 256, kernel_size=3, padding=1)
self.bn4a = nn.BatchNorm1d(256)
self.conv4b = nn.Conv1d(256, 128, kernel_size=3, padding=1)
self.bn4b = nn.BatchNorm1d(128)
self.conv4p = nn.Conv1d(128, 4, kernel_size=1) # probs
self.conv4v = nn.Conv1d(128, 2, kernel_size=1) # vote
self.reset_parameters()
def forward(self, x):
x = F.leaky_relu(self.bn1a(self.conv1a(x)), 0.1)
x = F.leaky_relu(self.bn1b(self.conv1b(x)), 0.1)
x = F.leaky_relu(self.bn1c(self.conv1c(x)), 0.1)
x = F.max_pool1d(x, 2) # 24
x = F.dropout(x, p=self.dropout, training=self.training)
x = F.leaky_relu(self.bn2a(self.conv2a(x)), 0.1)
x = F.leaky_relu(self.bn2b(self.conv2b(x)), 0.1)
x = F.leaky_relu(self.bn2c(self.conv2c(x)), 0.1)
x = F.max_pool1d(x, 2) # 12
x = F.dropout(x, p=self.dropout, training=self.training)
x = F.leaky_relu(self.bn3a(self.conv3a(x)), 0.1)
x = F.leaky_relu(self.bn3b(self.conv3b(x)), 0.1)
x = F.leaky_relu(self.bn3c(self.conv3c(x)), 0.1)
x = F.max_pool1d(x, 2) # 6
x = F.dropout(x, p=self.dropout, training=self.training)
x = F.leaky_relu(self.bn4a(self.conv4a(x)), 0.1)
x = F.leaky_relu(self.bn4b(self.conv4b(x)), 0.1)
x = F.avg_pool1d(x, 6)
logits = self.conv4p(x)
votes = self.conv4v(x)
return logits[:,:,0], votes[:,:,0] # Due to the arch, output has spatial size 1, so we [0] it.
def reset_parameters(self):
lbt.init(self.conv1a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv1b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv1c, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv2a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv2b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv2c, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv3a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv3b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv3c, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv4a, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv4b, lambda t: nn.init.kaiming_normal(t, a=0.1), 0)
lbt.init(self.conv4p, lambda t: nn.init.constant(t, 0), 0)
lbt.init(self.conv4v, lambda t: nn.init.constant(t, 0), 0)
nn.init.constant(self.bn1a.weight, 1)
nn.init.constant(self.bn1b.weight, 1)
nn.init.constant(self.bn1c.weight, 1)
nn.init.constant(self.bn2a.weight, 1)
nn.init.constant(self.bn2b.weight, 1)
nn.init.constant(self.bn2c.weight, 1)
nn.init.constant(self.bn3a.weight, 1)
nn.init.constant(self.bn3b.weight, 1)
nn.init.constant(self.bn3c.weight, 1)
nn.init.constant(self.bn4a.weight, 1)
nn.init.constant(self.bn4b.weight, 1)
net = lbt.maybe_cuda(DROWNet3EF(WIN_KW['ntime']), GPU)
lbt.count_parameters(net)
with torch.no_grad():
logits, votes = net(Variable(lbt.maybe_cuda(torch.from_numpy(batcher()[0]), GPU)))
logits.data.shape, votes.data.shape
_dummy_X, _, _ = u.get_batch(tr, 450, **WIN_KW)
def _fwd(net, GPU):
with torch.no_grad():
logits, votes = net(Variable(lbt.maybe_cuda(torch.from_numpy(_dummy_X), GPU), requires_grad=False))
return logits.data.cpu(), votes.data.cpu()
net.eval();
%timeit _fwd(net, GPU)
```
# Training
```
import lbtoolbox.plotting as lbplt
def plottrain_loss(ax_xent, ax_votes):
ax_xent.plot(np.array(xent_avg_losses).flatten())
ax_xent.plot(7500*(0.5 + np.arange(len(xent_avg_losses))), np.mean(xent_avg_losses, axis=-1))
ax_xent.set_yscale('log')
ax_xent.set_ylim(top=2e-1)
ax_votes.plot(np.array(offs_avg_losses).flatten())
ax_votes.plot(7500*(0.5 + np.arange(len(xent_avg_losses))), np.mean(offs_avg_losses, axis=-1))
ax_votes.set_yscale('log')
ax_votes.set_ylim(top=2e-1)
def plottrain1():
fig, axs = plt.subplots(1, 2, figsize=(15,5))
plottrain_loss(*axs)
return fig
```
## Actual start
```
opt = optim.Adam(net.parameters(), amsgrad=True)
xent_losses = []
offs_losses = []
xent_avg_losses = []
offs_avg_losses = []
e, name = 50, "final-WNet3xEF-T5-odom=False-center=each"
net.reset_parameters()
with lbu.Uninterrupt() as un:
net.train()
for e in range(e, 50):
torch.save({'model': net.state_dict(), 'optim': opt.state_dict()},
'/fastwork/beyer/dumps/DROW/{}-{:.0f}ep.pth.tar'.format(name, e))
if un.interrupted:
break
for i in range(7500):
Xb, yb_conf, yb_offs = batcher()
# Apply target noise
tgt_noise = np.exp(np.random.randn(*yb_offs.shape).astype(np.float32)/20)
yb_offs = yb_offs*tgt_noise
# Random left-right flip. Of whole batch for convenience, but should be the same as individuals.
if np.random.rand() < 0.5:
Xb = np.array(Xb[:,:,::-1]) # PyTorch doesn't currently support negative strides.
yb_offs = np.c_[-yb_offs[:,0], yb_offs[:,1]] # Sure to get a copy, batched could give us a view!
v_X = Variable(lbt.maybe_cuda(torch.from_numpy(Xb), GPU))
v_y_conf = Variable(lbt.maybe_cuda(torch.from_numpy(yb_conf), GPU), requires_grad=False)
v_y_offs = Variable(lbt.maybe_cuda(torch.from_numpy(yb_offs), GPU), requires_grad=False)
opt.zero_grad()
logits, votes = net(v_X)
xent = F.cross_entropy(logits, v_y_conf, reduce=True)
xent_losses.append(xent.data.cpu().numpy())
loss = xent.mean()
# Need to special-case batches without any vote labels, because mean of empty is nan.
if np.sum(yb_conf) > 0:
offs = F.mse_loss(votes, v_y_offs, reduce=False) # This is really just (a - b)²
offs = torch.sqrt(torch.masked_select(torch.sum(offs, 1), v_y_conf.ne(0)))
offs_losses.append(offs.data.cpu().numpy())
loss += offs.mean()
else:
offs_losses.append(np.array([]))
loss.backward()
# Total number of iterations/updates
for group in opt.param_groups:
group['lr'] = lbu.expdec(e+i/7500, 40, 1e-3, 50, 1e-6)
opt.step()
if i > 0 and i % 25 == 0:
print('\r[{:.2f} ({}/{})]: Loss: xent={:.4f} offs={:.4f} | Q-fill={:.1%} '.format(
e+i/7500, i, 7500,
np.mean(xent_losses[-100:]), np.nanmean(list(map(np.mean, offs_losses[-100:]))),
batcher.fill_status(normalize=True),
), end='', flush=True)
# To avoid OOM errors on long runs
xent_avg_losses.append(np.array([np.mean(x) for x in xent_losses]))
offs_avg_losses.append(np.array([np.mean(o) for o in offs_losses]))
xent_losses.clear()
offs_losses.clear()
lbplt.liveplot(plottrain1)
torch.save({'model': net.state_dict(), 'optim': opt.state_dict()},
'/fastwork/beyer/dumps/DROW/{}-{:.0f}ep.pth.tar'.format(name, e+1))
```
```
load = torch.load('/fastwork/beyer/dumps/DROW/{}-{:.0f}ep.pth.tar'.format(name, e))
net.load_state_dict(load['model'])
opt.load_state_dict(load['optim'])
```
# Evaluation
```
import pickle
def get_scan(va, iseq, iscan, ntime, nsamp, repeat_before, **cutout_kw):
scan = va.scans[iseq][iscan]
Xb = np.empty((len(scan), ntime, nsamp), np.float32)
assert repeat_before, "Don't know what to do if not repeat before?!"
# Prepend the exact same scan/odom for the first few where there's no history.
if iscan-ntime+1 < 0:
scans = np.array([va.scans[iseq][0]]*abs(iscan-ntime+1) + [va.scans[iseq][i] for i in range(iscan+1)])
odoms = np.array([va.odoms[iseq][0]]*abs(iscan-ntime+1) + [va.odoms[iseq][i] for i in range(iscan+1)])
else:
scans = va.scans[iseq][iscan-ntime+1:iscan+1]
odoms = va.odoms[iseq][iscan-ntime+1:iscan+1]
for ipt in range(len(scan)):
u.cutout(scans, odoms, ipt, out=Xb[ipt], nsamp=nsamp, **cutout_kw)
return Xb
def forward(net, xb):
net.eval()
with torch.no_grad():
logits, votes = net(Variable(lbt.maybe_cuda(torch.from_numpy(xb), GPU)))
return F.softmax(logits, dim=-1).data.cpu().numpy(), votes.data.cpu().numpy()
def forward_all(net, va, **get_scan_kw):
all_confs, all_votes = [], []
nseq = len(va.detsns)
for iseq in range(nseq):
ndet = len(va.detsns[iseq])
for idet in range(ndet):
print('\r[{}/{} | {}/{}] '.format(1+iseq, nseq, 1+idet, ndet), flush=True, end='')
confs, votes = forward(net, get_scan(va, iseq, va.idet2iscan[iseq][idet], **get_scan_kw))
all_confs.append(confs)
all_votes.append(votes)
return np.array(all_confs), np.array(all_votes)
```
## On val
```
pred_yva_conf, pred_yva_offs = forward_all(net, va, **WIN_KW)
```
Compute and dump the predictions on the validation set in order to use them in our hyperparameter tuning setup (which is not published because very specific to our lab)
```
_seqs, _scans, _wcs, _was, _wps = u.linearize(va.scansns, va.scans, va.detsns, va.wcdets, va.wadets, va.wpdets)
_scans = np.array(_scans)
x, y = u._prepare_prec_rec_softmax(_scans, pred_yva_offs)
pickle.dump([x, y, pred_yva_conf, _wcs, _was, _wps], open('/fastwork/beyer/dumps/DROW/' + name + ".pkl", "wb"))
'/fastwork/beyer/dumps/DROW/' + name + ".pkl"
results = u.comp_prec_rec_softmax(_scans, _wcs, _was, _wps, pred_yva_conf, pred_yva_offs,
blur_win=5, blur_sigma=1, weighted_avg=False)
fig, ax = u.plot_prec_rec(*results, title=name + " VoteAvg")
plt.close(fig)
fig
```
## On Test
```
te = u.Dataset(te_names, DATADIR, LABELDIR)
_seqs_te, _scans_te, _wcs_te, _was_te, _wps_te = u.linearize(te.scansns, te.scans, te.detsns, te.wcdets, te.wadets, te.wpdets)
_scans_te = np.array(_scans_te)
pred_yte_conf, pred_yte_offs = forward_all(net, te, **WIN_KW)
```
### TABLE II, row "Naive Early-fusion"
```
import json
from os.path import join as pjoin
with open(pjoin('/home/hermans/drow_votes', name + '.json')) as f:
_kw = json.loads(f.read())
results_te = u.comp_prec_rec_softmax(_scans_te, _wcs_te, _was_te, _wps_te, pred_yte_conf, pred_yte_offs, **_kw)
plt.close()
fig, ax = u.plot_prec_rec(*results_te, title=name + " Hype (TEST)")
plt.show(fig)
print(_kw)
for i, cls in enumerate(['wd', 'wc', 'wa', 'wp']):
u.dump_paper_pr_curves(
'/home/beyer/academic/drower9k/iros18_laser_people_detection/data/pr_curves/' + name + '_' + cls,
results_te[i][1], results_te[i][0])
```
| true |
code
| 0.7036 | null | null | null | null |
|
# LSTM
* We will implement it with tensorflow library together with LSTM tool for sentiment analysis in tweets.
* Unlike the LSTM (Long short-term memory) method, it is a deep learning method.
* Data preprocessing steps are similar to Naive Bayes vs Logistic Regression methods, but the classification of tweets is different.
```
# mounting the drive
from google.colab import drive
drive.mount('/content/drive')
# Importing libararies and modules
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# colab da işe yaramıyor jupyter de kullanmak için
#from nltk.corpus import twitter_samples
# reading the data in a dataframe df
# for use in colab
df_pos = pd.read_json(r'/content/drive/MyDrive/Bitirme_P/positive_tweets.json', lines = True, encoding='utf-8')
df_neg = pd.read_json(r'/content/drive/MyDrive/Bitirme_P/negative_tweets.json', lines = True, encoding='utf-8')
# for use in Jupyter notebook
#df_pos = pd.read_json("positive_tweets.json", lines = True, encoding= 'UTF-8')
#df_neg = pd.read_json("negative_tweets.json", lines = True, encoding= 'UTF-8')
print(df_pos.shape)
print(df_neg.shape)
# Tagging positive tweets
positive_sentiment = []
for i in range(0,5000):
positive_sentiment.append(1)
print(len(positive_sentiment))
df_pos['sentiment'] = positive_sentiment
# creating an 'id' column for tweets
dataframe_id = []
for i in range(0,5000):
i+=1
dataframe_id.append(i)
print(len(dataframe_id))
df_pos['df_id'] = dataframe_id
# Tagging positive tweets
negative_sentiment = []
for i in range(0,5000):
negative_sentiment.append(0)
df_neg['sentiment'] = negative_sentiment
# creating an 'id' column for tweets
dataframe_id = []
for i in range(5000,10000):
i+=1
dataframe_id.append(i)
df_neg['df_id'] = dataframe_id
# Two json files merged on dataframe by adding lines
df_tam = pd.concat([df_pos, df_neg])
print(df_tam.shape)
df_tam.tail()
# printing the dataframe and assign
df = df_tam[['df_id','sentiment','text']]
df.head(10)
# verifying the sentimnet values
# 1 is positive sentimnet and 0 is negative sentiment
df['sentiment'].value_counts()
# pre-processing the data
# define a function to remove the @mentions and other useless text from the tweets
import re
def text_cleaning(tweet):
tweet = re.sub(r'@[A-Za-z0-9]+', '', tweet) # removing @mentions
tweet = re.sub(r'@[A-Za-zA-Z0-9]+', '', tweet) # removing @mentions
tweet = re.sub(r'@[A-Za-z]+', '', tweet) # removing @mentions
tweet = re.sub(r'@[-)]+', '', tweet) # removing @mentions
tweet = re.sub(r'#', '', tweet) # removing '#' sign
tweet = re.sub(r'RT[\s]+', '', tweet) # removing RT
tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet) # removing hyper link
tweet = re.sub(r'&[a-z;]+', '', tweet) # removing '>
return tweet
df['text'] = df['text'].apply(text_cleaning)
df.head()
# splitting the data into training and testing data
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(df['text'].values, df['sentiment'].values, test_size=0.2)
# chechking the data split
print('Text: ', x_train[0])
print('Sentiment: ', y_train[0])
# converting the strings into integers using Tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
#from nltk.tokenize import TweetTokenizer
# instantiating the tokenizer
max_vocab = 20000000
tokenizer = Tokenizer(num_words=max_vocab)
tokenizer.fit_on_texts(x_train)
# checking the word index and find out the vocabulary of the dataset
wordidx = tokenizer.word_index
V = len(wordidx)
print('The size of dataset vocab is: ', V)
# converting train and test sentences into sequences
train_seq = tokenizer.texts_to_sequences(x_train)
test_seq = tokenizer.texts_to_sequences(x_test)
print('Training sequence: ', train_seq[0])
print('Testing sequence: ', test_seq[0])
# padding the sequence to get equal length sequence because its convertional to use same size sequences
# padding the training sequence
pad_train = pad_sequences(train_seq)
T = pad_train.shape[1]
print('The length of training sequence is: ', T)
# padding the test sequence
pad_test = pad_sequences(test_seq, maxlen=T)
print('The length of testing sequence is: ', pad_test.shape[1])
# building the model
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, GlobalMaxPooling1D
from tensorflow.keras.models import Model
D = 20
M = 15
i = Input(shape=(T, ))
x = Embedding(V+1, D)(i)
x = LSTM(M, return_sequences=True)(x)
x = GlobalMaxPooling1D()(x)
x = Dense(32, activation='relu')(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(i,x)
# compiling the model
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
# training the model
r = model.fit(pad_train, y_train, validation_data=(pad_test, y_test), epochs=2, verbose=1, shuffle=True)
# Evaluating the model
# plotting the loss and validation loss of the model
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# plotting the accuracy and validation accuracy of the model
plt.plot(r.history['accuracy'], label='accuracy')
plt.plot(r.history['val_accuracy'], label='val_accuracy')
plt.legend()
# Predicting the sentiment of any text
def predict_sentiment(text):
# preprocessing the given text
text_seq = tokenizer.texts_to_sequences(text)
text_pad = pad_sequences(text_seq, maxlen=T)
# predicting the class
predicted_sentiment = model.predict(text_pad).round()
if predicted_sentiment == 1.0:
return (print('It is a positive sentiment'))
else:
return (print('It is a negative sentiment'))
text = ['I love #data #datascience ']
predict_sentiment(text)
```
| true |
code
| 0.531088 | null | null | null | null |
|
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-59152712-8');
</script>
# Common Functions for `GiRaFFEfood` Initial Data for `GiRaFFE`
### NRPy+ Source Code for this module: [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Common_Functions.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Common_Functions.py)
**Notebook Status:** <font color='red'><b> In Progress </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module through the main initial data modules that depend on it.
## Introduction:
We will need to "feed" our giraffe with initial data to evolve. There are several different choices of initial data we can use here; while each represents different physical systems, they all have some steps in common with each other. To avoid code duplication, we will first write several functions that we will use for all of them.
<a id='toc'></a>
# Table of Contents:
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#initializenrpy): Import core NRPy+ modules and set NRPy+ parameters
1. [Step 2](#vectorpotential): Set the vector potential from input functions
1. [Step 3](#velocity): Compute $v^i_{(n)}$ from $E^i$ and $B^i$
1. [Step 4](#setall): Generate specified initial data
<a id='initializenrpy'></a>
# Step 1: Import core NRPy+ modules and set NRPy+ parameters \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Here, we will import the NRPy+ core modules, set the reference metric to Cartesian, and set commonly used NRPy+ parameters. We will also set up a parameter to determine what initial data is set up, although it won't do much yet.
```
# Step 0: Import the NRPy+ core modules and set the reference metric to Cartesian
import NRPy_param_funcs as par
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import reference_metric as rfm
# Use the Jacobian matrix to transform the vectors to Cartesian coordinates.
# Construct Jacobian & Inverse Jacobians:
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
rfm.reference_metric()
Jac_dUCart_dDrfmUD,Jac_dUrfm_dDCartUD = rfm.compute_Jacobian_and_inverseJacobian_tofrom_Cartesian()
# Transform the coordinates of the Jacobian matrix from spherical to Cartesian:
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
tmpa,tmpb,tmpc = sp.symbols("tmpa,tmpb,tmpc")
for i in range(3):
for j in range(3):
Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])
Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])
# Step 1a: Set commonly used parameters.
thismodule = "GiRaFFEfood_NRPy"
```
<a id='vectorpotential'></a>
# Step 2: Set the vector potential from input functions \[Back to [top](#toc)\]
$$\label{vectorpotential}$$
First, we will write a function to generate the vector potential from input functions for each component. This function will also apply the correct coordinate staggering if the input is set as such. That is, in the staggered prescription, $A_x$ is sampled at $(i,j+1/2,k+1/2)$, $A_y$ at $(i+1/2,j,k+1/2)$, and $A_z$ at $(i+1/2,j+1/2,k)$.
We will first do this for initial data that are given with Cartesian vector components.
```
# Generic function for all 1D tests: Compute Ax,Ay,Az
def Axyz_func_Cartesian(Ax_func,Ay_func,Az_func, stagger_enable, **params):
x = rfm.xx_to_Cart[0]
y = rfm.xx_to_Cart[1]
z = rfm.xx_to_Cart[2]
AD = ixp.zerorank1()
# First Ax
if stagger_enable:
y += sp.Rational(1,2)*gri.dxx[1]
z += sp.Rational(1,2)*gri.dxx[2]
AD[0] = Ax_func(x,y,z, **params)
# Then Ay
if stagger_enable:
x += sp.Rational(1,2)*gri.dxx[0]
y -= sp.Rational(1,2)*gri.dxx[1]
z += sp.Rational(1,2)*gri.dxx[2]
AD[1] = Ay_func(x,y,z, **params)
# Finally Az
if stagger_enable:
x += sp.Rational(1,2)*gri.dxx[0]
y += sp.Rational(1,2)*gri.dxx[1]
z -= sp.Rational(1,2)*gri.dxx[2]
AD[2] = Az_func(x,y,z, **params)
return AD
# Generic function for all 1D tests: Compute Ax,Ay,Az
def Axyz_func_spherical(Ar_func,At_func,Ap_func, stagger_enable, **params):
if "KerrSchild_radial_shift" in params:
KerrSchild_radial_shift = params["KerrSchild_radial_shift"]
r = rfm.xxSph[0] + KerrSchild_radial_shift # We are setting the data up in Shifted Kerr-Schild coordinates
else:
r = rfm.xxSph[0] # Some other coordinate system
theta = rfm.xxSph[1]
phi = rfm.xxSph[2]
AsphD = ixp.zerorank1()
# First Ax
if stagger_enable:
y += sp.Rational(1,2)*gri.dxx[1]
z += sp.Rational(1,2)*gri.dxx[2]
AsphD[0] = Ar_func(r,theta,phi, **params)
# Then Ay
if stagger_enable:
x += sp.Rational(1,2)*gri.dxx[0]
y -= sp.Rational(1,2)*gri.dxx[1]
z += sp.Rational(1,2)*gri.dxx[2]
AsphD[1] = At_func(r,theta,phi, **params)
# Finally Az
if stagger_enable:
x += sp.Rational(1,2)*gri.dxx[0]
y += sp.Rational(1,2)*gri.dxx[1]
z -= sp.Rational(1,2)*gri.dxx[2]
AsphD[2] = Ap_func(r,theta,phi, **params)
# Use the Jacobian matrix to transform the vectors to Cartesian coordinates.
AD = change_basis_spherical_to_Cartesian(AsphD)
return AD
```
<a id='velocity'></a>
# Step 3: Compute $v^i_{(n)}$ from $E^i$ and $B^i$ \[Back to [top](#toc)\]
$$\label{velocity}$$
This function computes the Valenciea 3-velocity from input electric and magnetic fields. It can also take the three-metric $\gamma_{ij}$ as an optional input; if this is not set, the function defaults to flat spacetime.
```
# Generic function for all 1D tests: Valencia 3-velocity from ED and BU
def compute_ValenciavU_from_ED_and_BU(ED, BU, gammaDD=None):
# Now, we calculate v^i = ([ijk] E_j B_k) / B^2,
# where [ijk] is the Levi-Civita symbol and B^2 = \gamma_{ij} B^i B^j$ is a trivial dot product in flat space.
LeviCivitaSymbolDDD = ixp.LeviCivitaSymbol_dim3_rank3()
B2 = sp.sympify(0)
# In flat spacetime, use the Minkowski metric; otherwise, use the input metric.
if gammaDD is None:
gammaDD = ixp.zerorank2()
for i in range(3):
gammaDD[i][i] = sp.sympify(1)
for i in range(3):
for j in range(3):
B2 += gammaDD[i][j] * BU[i] * BU[j]
BD = ixp.zerorank1()
for i in range(3):
for j in range(3):
BD[i] = gammaDD[i][j]*BU[j]
ValenciavU = ixp.zerorank1()
for i in range(3):
for j in range(3):
for k in range(3):
ValenciavU[i] += LeviCivitaSymbolDDD[i][j][k] * ED[j] * BD[k] / B2
return ValenciavU
```
<a id='setall'></a>
# Step 4: Generate specified initial data \[Back to [top](#toc)\]
$$\label{setall}$$
This is the main function that users can call to generate the initial data by passing the name of the initial data as a string and specifying if they want to enable staggering.
```
def GiRaFFEfood_NRPy_generate_initial_data(ID_type = "DegenAlfvenWave", stagger_enable = False,**params):
global AD, ValenciavU
if ID_type == "ExactWald":
AD = gfcf.Axyz_func_spherical(gfew.Ar_EW,gfew.Ath_EW,gfew.Aph_EW,stagger_enable,**params)
ValenciavU = gfew.ValenciavU_func_EW(**params)
elif ID_type == "MagnetosphericWald":
AD = gfcf.Axyz_func_spherical(gfmw.Ar_MW,gfmw.Ath_MW,gfmw.Aph_MW,stagger_enable,**params)
ValenciavU = gfmw.ValenciavU_func_MW(**params)
elif ID_type == "SplitMonopole":
AD = gfcf.Axyz_func_spherical(gfsm.Ar_SM,gfsm.Ath_SM,gfsm.Aph_SM,stagger_enable,**params)
ValenciavU = gfsm.ValenciavU_func_SM(**params)
elif ID_type == "AlfvenWave":
AD = gfcf.Axyz_func_Cartesian(gfaw.Ax_AW,gfaw.Ay_AW,gfaw.Az_AW, stagger_enable, **params)
ValenciavU = gfaw.ValenciavU_func_AW(**params)
elif ID_type == "FastWave":
AD = gfcf.Axyz_func_Cartesian(gffw.Ax_FW,gffw.Ay_FW,gffw.Az_FW, stagger_enable, **params)
ValenciavU = gffw.ValenciavU_func_FW(**params)
elif ID_type == "DegenAlfvenWave":
AD = gfcf.Axyz_func_Cartesian(gfdaw.Ax_DAW,gfdaw.Ay_DAW,gfdaw.Az_DAW, stagger_enable, **params)
ValenciavU = gfdaw.ValenciavU_func_DAW(**params)
elif ID_type == "ThreeWaves":
AD = gfcf.Axyz_func_Cartesian(gftw.Ax_TW,gftw.Ay_TW,gftw.Az_TW, stagger_enable, **params)
ValenciavU = gftw.ValenciavU_func_TW(**params)
elif ID_type == "FFE_Breakdown":
AD = gfcf.Axyz_func_Cartesian(gffb.Ax_FB,gffb.Ay_FB,gffb.Az_FB, stagger_enable, **params)
ValenciavU = gffb.ValenciavU_func_FB(**params)
elif ID_type == "AlignedRotator":
AD = gfcf.Axyz_func_spherical(gfar.Ar_AR,gfar.Ath_AR,gfar.Aph_AR, stagger_enable, **params)
ValenciavU = gfar.ValenciavU_func_AR(**params)
```
| true |
code
| 0.599309 | null | null | null | null |
|
## Example 3: Sensitivity analysis for a NetLogo model with SALib and Multiprocessing
This is a short demo similar to example two but using the multiprocessing [Pool](https://docs.python.org/3.6/library/multiprocessing.html#module-multiprocessing.pool)
All files used in the example are available from the pyNetLogo repository at https://github.com/quaquel/pyNetLogo.
This code requires python3.
For in depth discussion, please see example 2.
### Running the experiments in parallel using a Process Pool
There are multiple libraries available in the python ecosystem for performing tasks in parallel. One of the default libraries that ships with Python is [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html#module-concurrent.futures). This is in fact a high level interface around several other libraries. See the documentation for details. One of the libraries wrapped by concurrent.futures is multiprocessing. Below we use multiprocessing, anyone on python3.7 can use the either code below or use the ProcessPoolExecuturor from concurrent.futures (recommended).
Here we are going to use the ProcessPoolExecutor, which uses the multiprocessing library. Parallelization is an advanced topic and the exact way in which it is to be done depends at least in part on the operating system one is using. It is recommended to carefully read the documentation provided by both concurrent.futures and mulitprocessing. This example is ran on a mac, linux is expected to be similar but Windows is likely to be slightly different
```
from multiprocessing import Pool
import os
import pandas as pd
import pyNetLogo
from SALib.sample import saltelli
def initializer(modelfile):
'''initialize a subprocess
Parameters
----------
modelfile : str
'''
# we need to set the instantiated netlogo
# link as a global so run_simulation can
# use it
global netlogo
netlogo = pyNetLogo.NetLogoLink(gui=False)
netlogo.load_model(modelfile)
def run_simulation(experiment):
'''run a netlogo model
Parameters
----------
experiments : dict
'''
#Set the input parameters
for key, value in experiment.items():
if key == 'random-seed':
#The NetLogo random seed requires a different syntax
netlogo.command('random-seed {}'.format(value))
else:
#Otherwise, assume the input parameters are global variables
netlogo.command('set {0} {1}'.format(key, value))
netlogo.command('setup')
# Run for 100 ticks and return the number of sheep and
# wolf agents at each time step
counts = netlogo.repeat_report(['count sheep','count wolves'], 100)
results = pd.Series([counts['count sheep'].values.mean(),
counts['count wolves'].values.mean()],
index=['Avg. sheep', 'Avg. wolves'])
return results
if __name__ == '__main__':
modelfile = os.path.abspath('./models/Wolf Sheep Predation_v6.nlogo')
problem = {
'num_vars': 6,
'names': ['random-seed',
'grass-regrowth-time',
'sheep-gain-from-food',
'wolf-gain-from-food',
'sheep-reproduce',
'wolf-reproduce'],
'bounds': [[1, 100000],
[20., 40.],
[2., 8.],
[16., 32.],
[2., 8.],
[2., 8.]]
}
n = 1000
param_values = saltelli.sample(problem, n,
calc_second_order=True)
# cast the param_values to a dataframe to
# include the column labels
experiments = pd.DataFrame(param_values,
columns=problem['names'])
with Pool(4, initializer=initializer, initargs=(modelfile,)) as executor:
results = []
for entry in executor.map(run_simulation, experiments.to_dict('records')):
results.append(entry)
results = pd.DataFrame(results)
```
| true |
code
| 0.534066 | null | null | null | null |
|
# Detecting Spam
*Curtis Miller*
Now, having seen how to load and prepare our e-mail collection, we can start training a classifier.
## Loading And Splitting E-Mails
Our first task is to load in the data. We will split the data into training and test data. The training data will be used to train a classifier while the test data will be used for evaluating how well our classifier performs.
```
import re
import pandas as pd
import email
from bs4 import BeautifulSoup
import nltk
from nltk.stem import SnowballStemmer
from nltk.tokenize import wordpunct_tokenize
import string
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
with open("SPAMTrain.label") as f:
spamfiles = f.read()
filedata = pd.DataFrame([f.split(" ") for f in spamfiles.split("\n")[:-1]], columns=["ham", "file"]) # 1 for ham
filedata.ham = filedata.ham.astype('int8')
filedata
```
Here we perform the split.
```
train_emails, test_emails = train_test_split(filedata)
train_emails
```
Now let's load in our training data, storing it in a pandas `DataFrame`.
```
basedir = "RTRAINING/"
train_email_str = list()
for filename in train_emails.file:
with open(basedir + filename, encoding="latin1") as f:
filestr = f.read()
bsobj = BeautifulSoup(filestr, "lxml")
train_email_str.append(bsobj.get_text())
train_email_str[0]
train_emails = train_emails.assign(text=pd.Series(train_email_str, index=train_emails.index))
train_emails
```
## Choosing Features
There are lots of words in our e-mails even after stopwords are removed. Our feature space will be how frequently commonly seen words appear in an e-mail. We will combine all the spam and all the ham e-mails together and choose 1000 most-frequently-seen words for each of those classes, and count how often those words are seen in individual e-mails.
```
def email_clean(email_string):
"""A function for taking an email contained in a string and returning a clean string representing the email"""
stemmer = SnowballStemmer("english")
email_string = email_string.lower()
email_string = re.sub("\s+", " ", email_string)
email_words = wordpunct_tokenize(email_string)
goodchars = "abcdefghijklmnopqrstuvwxyz" # No punctuation or numbers; not interesting for my purpose
email_words = [''.join([c for c in w if c in goodchars]) for w in email_words if w not in ["spam"]]
email_words = [w for w in email_words if w not in nltk.corpus.stopwords.words("english") and w is not '']
return " ".join(email_words)
cleantext = pd.Series(train_emails.text.map(email_clean), index=train_emails.index)
train_emails = train_emails.assign(cleantext=cleantext)
train_emails
train_emails[train_emails.ham == 0].cleantext
```
Here we combine the e-mails to find common words in both spam and ham e-mails.
```
mass_spam = " ".join(train_emails.loc[train_emails.ham == 0].cleantext)
mass_spam
mass_ham = " ".join(train_emails.loc[train_emails.ham == 1].cleantext)
mass_ham
spam_freq = nltk.FreqDist([w for w in mass_spam.split(" ")])
M = 1000
spam_freq.most_common(M)
ham_freq = nltk.FreqDist([w for w in mass_ham.split(" ")])
M = 1000
ham_freq.most_common(M)
```
We now can find the words that will be in our feature space.
```
words = [t[0] for t in ham_freq.most_common(M)] + [t[0] for t in spam_freq.most_common(M)]
words = set(words)
words
len(words)
```
The final step in generating the features for the e-mails is to count how often the words of interest appear in e-mails in the training set.
```
feature_dict = dict()
for i, s in train_emails.iterrows():
wordcounts = dict()
for w in words:
wordcounts[w] = s["cleantext"].count(w)
feature_dict[i] = pd.Series(wordcounts)
pd.DataFrame(feature_dict).T
train_emails = train_emails.join(pd.DataFrame(feature_dict).T, lsuffix='0')
train_emails
```
## Training a Classifier
Now we can train a classifier. In this case we're training a Gaussian naive Bayes classifier.
```
spampred = GaussianNB()
spampred = spampred.fit(train_emails.loc[:, words], train_emails.ham)
ham_predicted = spampred.predict(train_emails.loc[:, words])
ham_predicted
print(classification_report(train_emails.ham, ham_predicted))
```
The classifier does very well in the training data. How well does it do on unseen test data?
## Evaluating Performance
The final step is to evaluate our classifier on test data to see how well we can expect it to perform on future, unseen data. The steps below prepare the test data like we did the training data, loading and cleaning the e-mails and counting how often the words of interest appear in them.
```
test_email_str = list()
for filename in test_emails.file:
with open(basedir + filename, encoding="latin1") as f:
filestr = f.read()
bsobj = BeautifulSoup(filestr, "lxml")
test_email_str.append(bsobj.get_text())
cleantext_test = pd.Series([email_clean(s) for s in test_email_str], index=test_emails.index)
test_emails = test_emails.assign(cleantext=cleantext_test)
feature_dict_test = dict()
for i, s in test_emails.iterrows():
wordcounts = dict()
for w in words:
wordcounts[w] = s["cleantext"].count(w)
feature_dict_test[i] = pd.Series(wordcounts)
test_emails = test_emails.join(pd.DataFrame(feature_dict_test).T, lsuffix='0')
```
Now let's see how the classifier performed.
```
ham_predicted_test = spampred.predict(test_emails.loc[:, words])
print(classification_report(test_emails.ham, ham_predicted_test))
```
It did very well, just like on the training data! It seems we don't have much (if any) overfitting or underfitting. We could have a classifier ready to deploy.
(Of course, our classifier is only as good as the data it was trained on. Perhaps e-mails seen in different contexts or at a different period in time have different characteristics, including both the spam and ham e-mails. In that case the classifier trained here won't be any good since it was trained on the wrong data.)
| true |
code
| 0.250294 | null | null | null | null |
|
# Evaluation of SBMV for structured references
Dominika Tkaczyk
5.05.2019
This analysis contains the evaluation of the search-based matching algorithms for structured references.
## Methodology
The test dataset is composed of 2,000 randomly chosen structured references. Three algorithms are compared:
* the legacy approach (OpenURL)
* Search-Based Matching
* Search-Based Matching with Validation
## Results
```
import sys
sys.path.append('../..')
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import re
import utils.data_format_keys as dfk
from dataset.dataset_utils import get_target_test_doi, get_target_gt_doi
from evaluation.link_metrics import LinkMetricsResults
from scipy.stats import chi2_contingency
from utils.utils import read_json
from utils.cr_utils import generate_unstructured
DATA_DIR = 'data/'
```
Read the datasets:
```
dataset_ou = read_json(DATA_DIR + 'dataset_ou.json')[dfk.DATASET_DATASET]
dataset_sbm = read_json(DATA_DIR + 'dataset_sbm.json')[dfk.DATASET_DATASET]
dataset_sbmv = read_json(DATA_DIR + 'dataset_sbmv.json')[dfk.DATASET_DATASET]
print('Dataset size: {}'.format(len(dataset_sbm)))
```
This function modifies the dataset according to the threshold:
```
def modify_validation_threshold(dataset, threshold):
for item in dataset:
if item[dfk.DATASET_SCORE] is not None and item[dfk.DATASET_SCORE] < threshold:
item[dfk.DATASET_TARGET_TEST][dfk.CR_ITEM_DOI] = None
return dataset
def modify_relevance_threshold(dataset, threshold):
for item in dataset:
if item[dfk.DATASET_SCORE] is not None \
and item[dfk.DATASET_SCORE]/len(generate_unstructured(item[dfk.DATASET_REFERENCE])) < threshold:
item[dfk.DATASET_TARGET_TEST][dfk.CR_ITEM_DOI] = None
return dataset
```
Let's calculate SBM's and SBMV's results for different thresholds:
```
dataset_sbm = modify_relevance_threshold(dataset_sbm, 0.47)
dataset_sbmv = modify_validation_threshold(dataset_sbmv, 0.78)
```
The results of OpenURL:
```
def print_summary(dataset, name):
link_results = LinkMetricsResults(dataset)
print('{} precision: {:.4f} (CI at 95% {:.4f}-{:.4f})'
.format(name, link_results.get(dfk.EVAL_PREC),
link_results.get(dfk.EVAL_CI_PREC)[0], link_results.get(dfk.EVAL_CI_PREC)[1]))
print('{} recall: {:.4f} (CI at 95% {:.4f}-{:.4f})'
.format(name, link_results.get(dfk.EVAL_REC),
link_results.get(dfk.EVAL_CI_REC)[0], link_results.get(dfk.EVAL_CI_REC)[1]))
print('{} F1: {:.4f}'.format(name, link_results.get(dfk.EVAL_F1)))
print_summary(dataset_ou, 'OpenURL')
```
The results of SBM:
```
print_summary(dataset_sbm, 'SBM')
```
The results of SBMV:
```
print_summary(dataset_sbmv, 'SBMV')
```
Let's use a statistical test to check whether the differences in precision and recall between the legacy approach and SBMV are statistically significant:
```
for metric in [dfk.EVAL_PREC, dfk.EVAL_REC]:
fun = get_target_test_doi if metric == dfk.EVAL_PREC else get_target_gt_doi
ou_results = LinkMetricsResults(dataset_ou)
ou_precision = ou_results.get(metric)
ou_test_count = len([d for d in dataset_ou if fun(d) is not None])
ou_precision_success = int(ou_precision * ou_test_count)
sbmv_results = LinkMetricsResults(dataset_sbmv)
sbmv_precision = sbmv_results.get(metric)
sbmv_test_count = len([d for d in dataset_sbmv if fun(d) is not None])
sbmv_precision_success = int(sbmv_precision * sbmv_test_count)
_, p, _, _ = chi2_contingency(np.array([[ou_precision_success,
ou_test_count-ou_precision_success],
[sbmv_precision_success,
sbmv_test_count-sbmv_precision_success]]),
correction=True)
c = 'this is statistically significant' if p < 0.05 \
else 'this is not statistically significant'
print('{} p-value: {:.4f} ({})'.format(metric, p, c))
```
Let's compare the algorithms in one plot:
```
def get_means(dataset):
results = LinkMetricsResults(dataset)
return [results.get(m) for m in [dfk.EVAL_PREC, dfk.EVAL_REC, dfk.EVAL_F1]]
def get_ci(dataset):
results = LinkMetricsResults(dataset)
ms = [results.get(m) for m in [dfk.EVAL_PREC, dfk.EVAL_REC]]
return [[a-results.get(m)[0] for m, a in zip([dfk.EVAL_CI_PREC, dfk.EVAL_CI_REC], ms)] + [0],
[results.get(m)[1]-a for m, a in zip([dfk.EVAL_CI_PREC, dfk.EVAL_CI_REC], ms)] + [0]]
def autolabel(ax, rects):
plt.rcParams.update({'font.size': 14})
for rect in rects:
height = rect.get_height()
text = '{:.2f}'.format(height)
text = re.sub('\.00$', '', text)
ax.text(rect.get_x() + rect.get_width()/2., 1.04*height, text, ha='center', va='bottom')
ind = np.arange(3)
width = 0.25
plt.rcParams.update({'font.size': 16, 'legend.fontsize': 14})
fig, ax = plt.subplots(figsize=(12, 9))
rects1 = ax.bar(ind - 0.5 * width, get_means(dataset_ou), yerr=get_ci(dataset_ou), width=width,
color='#d8d2c4')
rects2 = ax.bar(ind + 0.5 * width, get_means(dataset_sbm), yerr=get_ci(dataset_sbm),
width=width, color='#4f5858')
rects3 = ax.bar(ind + 1.5 * width, get_means(dataset_sbmv), yerr=get_ci(dataset_sbmv),
width=width, color='#3eb1c8')
ax.set_ylabel('fraction')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(('precision', 'recall', 'F1'))
plt.ylim(0, 1.25)
plt.yticks([0, 0.2, 0.4, 0.6, 0.8, 1.0])
ax.legend((rects1[0], rects2[0], rects3[0]), ('OpenURL', 'SBM', 'SBMV'))
autolabel(ax, rects1)
autolabel(ax, rects2)
autolabel(ax, rects3)
plt.show()
```
| true |
code
| 0.552359 | null | null | null | null |
|
# Доверительные интервалы для двух долей
```
import numpy as np
import pandas as pd
import scipy
from statsmodels.stats.weightstats import *
from statsmodels.stats.proportion import proportion_confint
```
## Загрузка данных
```
data = pd.read_csv('banner_click_stat.txt', header = None, sep = '\t')
data.columns = ['banner_a', 'banner_b']
data.head()
data.describe()
```
## Интервальные оценки долей
$$\frac1{ 1 + \frac{z^2}{n} } \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{ \frac{ \hat{p}\left(1-\hat{p}\right)}{n} + \frac{z^2}{4n^2} } \right), \;\; z \equiv z_{1-\frac{\alpha}{2}}$$
```
conf_interval_banner_a = proportion_confint(sum(data.banner_a),
data.shape[0],
method = 'wilson')
conf_interval_banner_b = proportion_confint(sum(data.banner_b),
data.shape[0],
method = 'wilson')
print 'interval for banner a [%f, %f]' % conf_interval_banner_a
print 'interval for banner b [%f, %f]' % conf_interval_banner_b
```
### Как их сравнить?
## Доверительный интервал для разности долей (независимые выборки)
| $X_1$ | $X_2$
------------- | -------------|
1 | a | b
0 | c | d
$\sum$ | $n_1$| $n_2$
$$ \hat{p}_1 = \frac{a}{n_1}$$
$$ \hat{p}_2 = \frac{b}{n_2}$$
$$\text{Доверительный интервал для }p_1 - p_2\colon \;\; \hat{p}_1 - \hat{p}_2 \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}$$
```
def proportions_confint_diff_ind(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
p1 = float(sum(sample1)) / len(sample1)
p2 = float(sum(sample2)) / len(sample2)
left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
return (left_boundary, right_boundary)
print "confidence interval: [%f, %f]" % proportions_confint_diff_ind(data.banner_a, data.banner_b)
```
## Доверительный интервал для разности долей (связанные выборки)
$X_1$ \ $X_2$ | 1| 0 | $\sum$
------------- | -------------|
1 | e | f | e + f
0 | g | h | g + h
$\sum$ | e + g| f + h | n
$$ \hat{p}_1 = \frac{e + f}{n}$$
$$ \hat{p}_2 = \frac{e + g}{n}$$
$$ \hat{p}_1 - \hat{p}_2 = \frac{f - g}{n}$$
$$\text{Доверительный интервал для }p_1 - p_2\colon \;\; \frac{f - g}{n} \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{f + g}{n^2} - \frac{(f - g)^2}{n^3}}$$
```
def proportions_confint_diff_rel(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
sample = zip(sample1, sample2)
n = len(sample)
f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
left_boundary = float(f - g) / n - z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
right_boundary = float(f - g) / n + z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
return (left_boundary, right_boundary)
print "confidence interval: [%f, %f]" % proportions_confint_diff_rel(data.banner_a, data.banner_b)
```
| true |
code
| 0.505615 | null | null | null | null |
|
# AI2S Deep Learning Day - Beginners notebook
<sub>Alessio Ansuini, AREA Research and Technology</sub>
<sub>Andrea Gasparin and Marco Zullich, Artificial Intelligence Student Society</sub>
## Pytorch
PyTorch is a Python library offering extensive support for the construction of deep Neural Networks (NNs).
One of the main characteristics of PyTorch is that it operates with **Tensors**, as they provide a significative speed up of the computations.
For the scope of this introduction we can simply think at Tensors as arrays, with all the relative operations preserved as we can see in the following example.
```
import torch
import numpy as np
tensor_A = torch.tensor([1,1,1])
array_A = np.array([1,1,1])
print(tensor_A)
print(array_A)
print( 2 * tensor_A )
print( 2 * array_A )
```
## The images representation
In our context, we will work with black and white images. They are represented as matrices containing numbers.
The numbers will go from 0 (white) to the max value (black) including all the grey scale spectrum.
```
central_vertical_line = torch.tensor([[ 0, 4, 0],
[ 0, 8, 0],
[ 0, 10, 0]])
import matplotlib.pyplot as plt #plots and image viewer module
plt.imshow(central_vertical_line, cmap="Greys")
```
## Handwritten digit recognition (MNIST dataset)
In this notebook, we'll train a simple fully-connected NN for the classification of the MNIST dataset.
The MNIST (*modified National Institute of Standards and Technology database*) is a collection of 28x28 pixels black and white images containing handwritten digits. Let's see an example:
```
import torchvision #the module where is stored the dataset
#to improve training efficiency, data are first normalised. The "transform" method will do the job for us
transform = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.1307,), (0.3081,)),
])
trainset = torchvision.datasets.MNIST(root="./data", train=True, transform=transform, download=True)
testset = torchvision.datasets.MNIST(root="./data", train=False, transform=transform, download=True)
```
**trainset.data** contains the images, represented as 28x28 matrixes of float numbers
**trainset.target** contains the labels, so the numbers represented in the images
```
print("trainset.data[0] is the first image; its size is:", trainset.data[0].shape)
print("the digit represented is the number: ", trainset.targets[0])
# if we have a tensor composed of a single scalar, we can extract the scalar via tensor.item()
print("scalar representation: ", trainset.targets[0].item())
```
Let's see that the image actually shows the number 5
```
print(trainset.data[0][6])
plt.imshow(trainset.data[0], cmap='Greys')
```
### THE TRAINING
First we need to separate the images and the labels
```
train_imgs = trainset.data
train_labels = trainset.targets
test_imgs = testset.data
test_labels = testset.targets
```
### Flatten the image
To simplify the network flow, images are initially flattened, meaning that the corresponding matrix will be transformed in a single longer row array:
```
central_vertical_line_flattened = central_vertical_line.flatten()
print("initial matrix:\n",central_vertical_line)
print("\nmatrix flattened:\n",central_vertical_line_flattened)
print("\nmatrix shape:",central_vertical_line.shape, " flattened shape:", central_vertical_line_flattened.shape)
```
### Creating the NN
We create the NN as in the image below:
* the **input layer** has 784 neurons: this as the images have 28x28=784 numbers;
* there are three **hidden layers**: the first one has 16 neurons, the second one has 32, the first one has 16 again;
* the **output layer** has 10 neurons, one per class.
The NN can be easily created using the `torch.nn.Sequential` method, which allows for the construction of the NN by pipelining the building blocks in a list and passing it to the Sequential constructor.
We pass to Sequential the following elements:
* we start with a `Flatten()` module since we need to flatten the 2D 28x28 images into the 784 elements 1D array
* we alternate `Linear` layers (fully-connected layers) with `ReLU` modules (Rectified Linear Unit) activation functions
* we conclude with a `Linear` layer withoud activation function: this will output, for each image, an array of 10 scalars, each one indicating the "confidence" that the network has in assigning the input image to the corresponding class. We'll assign the image to the class having the highest confidence.
After this, the architecture of the NN is complete! We will then focus on telling Python how to train this NN.
```
from torch import nn
inputDimension = 784
outputDimension = 10 # the number of classes - 10 digits from 0 to 9
layersWidth = 16
network = nn.Sequential(
nn.Flatten(),
nn.Linear(inputDimension, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, layersWidth*2),
nn.ReLU(),
nn.Linear(layersWidth*2, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, outputDimension),
)
```
### NN training
We'll use vanilla mini-batch Stochastic Gradient Descent (SGD) with a learning rate of *learningRate* (you chose!!!) as the optimizer.
We'll create mini-batches of size *batchSize* (i.e., we'll have 60000/*batchSize*=600 mini-batches containing our data) for the training.
We'll train the NN for *epochs* epochs, each epoch indicating how many times the NN "sees" the whole dataset during training.
The loss function we'll use is the **categorical cross-entropy** (particularly useful for non-binary classification problems) and we'll also evaluate the network on its **accuracy** (i.e., images correctly classified divided by total images).
### *learningRate*, *batchSize*, and *epochs* are parameters you can play with, let's see haw you can improve the accuracy!!!
```
#hyper parameters
batchSize = 100
learningRate = 0.1
epochs = 3
```
In order to pass our data to the network, we'll make use of DataLoaders: they take care of subdividing the dataset into mini-batches, applying the requested transformations, and optionally re-shuffling them at the beginning of each new epoch.
```
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False)
```
We also provide a function to compute the accuracy of the nn given its outputs and the true values of the images they are trying to classify
```
def calculate_accuracy(nn_output, true_values):
class_prediction = nn_output.topk(1).indices.flatten()
match = (class_prediction == true_values)
correctly_classified = match.sum().item()
accuracy = correctly_classified / nn_output.size(0)
return accuracy
```
Let's check that it works for a fictitious batch of 4 images and 3 classes.
A NN output in this case will be a matrix of shape 4x3, each row holding the probability that the model assigns the corresponding image to the corresponding class.
We create a fake ground truth s.t. the NN assigns correctly the first 3 images: the corresponding accuracy should then be 3/4=0.75
### Here the actual traininig
```
lossValues = [] #to store the loss value trand during the training (we want it to DECREASE as much as possible)
accuracy = [] #to store the accuracy trand during the training (we want it to INCREASE as much as possible)
lossFunction = torch.nn.CrossEntropyLoss() #the error function the nn is trying to minimise
network.train() #this tells our nn that it is in training mode.
optimizer = torch.optim.SGD(network.parameters(), lr=learningRate) #the kind of optimiser we want of our nn to use
# MAIN LOOP: one iteration for each epoch
for e in range(epochs):
# INNER LOOP: one for each MINI-BATCH
for i, (imgs, ground_truth) in enumerate(trainloader): #range(num_of_batches):
optimizer.zero_grad() # VERY TECHNICAL needed in order NOT to accumulate gradients on top of the previous epochs
predictions = network(imgs)
loss = lossFunction(predictions, ground_truth)
loss.backward()
optimizer.step()
accuracy_batch = calculate_accuracy(predictions, ground_truth)
lossValues.append(loss.item())
accuracy.append(accuracy_batch)
# Every 200 iterations, we print the status of loss and accuracy
if (i+1)%200 == 0:
print(f"***Epoch {e+1} | Iteration {i+1} | Mini-batch loss {loss.item()} | Mini-batch accuracy {accuracy_batch}")
# Let us draw the charts for loss and accuracy for each training iteration
plt.plot(lossValues, label="loss")
plt.plot(accuracy, label="accuracy")
plt.legend()
```
# Check yourself
Here we provide a function to pick a few images from the test set and check if the network classifies them properly
```
def classify():
for i in range(5):
num = np.random.randint(0,test_imgs.shape[0])
network.eval()
plt.imshow(test_imgs[num])
plt.show()
print("Our network classifies this image as: ", network(test_imgs[num:num+1].float()).topk(1).indices.flatten().item())
print("The true value is: ", test_labels[num:num+1].item())
print("\n\n")
classify()
```
| true |
code
| 0.6025 | null | null | null | null |
|
# Foundations of Computational Economics #38
by Fedor Iskhakov, ANU
<img src="_static/img/dag3logo.png" style="width:256px;">
## Dynamic programming with continuous choice
<img src="_static/img/lecture.png" style="width:64px;">
<img src="_static/img/youtube.png" style="width:65px;">
[https://youtu.be/pAEm9cZd92Y](https://youtu.be/pAEm9cZd92Y)
Description: Optimization in Python. Consumption-savings model with continuous choice.
Goal: take continuous choice seriously and deal with it without discretization
- no discretization of choice variables
- need to employ numerical optimizer to find optimal continuous choice in Bellman equation
- optimization problem has to be solved for all points in the state space
Implement the continuous version of Bellman operator for the stochastic consumption-savings model
### Consumption-savings problem (Deaton model)
$$
V(M)=\max_{0 \le c \le M}\big\{u(c)+\beta \mathbb{E}_{y} V\big(\underset{=M'}{\underbrace{R(M-c)+\tilde{y}}}\big)\big\}
$$
- discrete time, infinite horizon
- one continuous choice of consumption $ 0 \le c \le M $
- state space: consumable resources in the beginning of the period $ M $, discretized
- income $ \tilde{y} $, follows log-normal distribution with $ \mu = 0 $ and $ \sigma $
$$
V(M)=\max_{0 \le c \le M}\big\{u(c)+\beta \mathbb{E}_{y} V\big(\underset{=M'}{\underbrace{R(M-c)+\tilde{y}}}\big)\big\}
$$
- preferences are given by time separable utility $ u(c) = \log(c) $
- discount factor $ \beta $
- gross return on savings $ R $, fixed
### Continuous (non-discretized) Bellman equation
Have to compute
$$
\max_{0 \le c \le M}\big\{u(c)+\beta \mathbb{E}_{y} V\big(R(M-c)+\tilde{y}\big)\big\} = \max_{0 \le c \le M} G(M,c)
$$
using numerical optimization algorithm
- constrained optimization (bounds on $ c $)
- have to interpolate value function $ V(\cdot) $ for every evaluation of objective $ G(c) $
- have to solve this optimization problem for **all possible values** $ M $
#### Numerical optimization in Python
Optimization can be approached
1. **directly**, or through the lenses of analytic
1. **first order conditions**, assuming the objective function is differentiable
- FOC approach is equation solving, see video 13, 22, 23
- here focus on optimization itself
The two approaches are equivalent in terms of computational complexity, end even numerically
### Newton method as optimizer
$$
\max_{x \in \mathbb{R}} f(x) = -x^4 + 2.5x^2 + x + 2
$$
Solve the first order condition:
$$
\begin{eqnarray}
f'(x)=-4x^3 + 5x +1 &=& 0 \\
-4x(x^2-1) + x+1 &=& 0 \\
(x+1)(-4x^2+4x+1) &=& 0 \\
\big(x+1\big)\big(x-\frac{1}{2}-\frac{1}{\sqrt{2}}\big)\big(x-\frac{1}{2}+\frac{1}{\sqrt{2}}\big) &=& 0
\end{eqnarray}
$$
### Taylor series expansion of the equation
Let $ x' $ be an approximate solution of the equation $ g(x)=f'(x)=0 $
$$
g(x') = g(x) + g'(x)(x'-x) + \dots = 0
$$
$$
x' = x - g(x)/g'(x)
$$
Newton step towards $ x' $ from an approximate solution $ x_i $ at iteration $ i $ is then
$$
x_{i+1} = x_i - g(x_i)/g'(x_i) = x_i - f'(x_i)/f''(x_i)
$$
### Or use repeated quadratic approximations
Given approximate solution $ x_i $ at iteration $ i $, approximate function $ f(x) $ using first three terms of Taylor series
$$
\hat{f}(x) = f(x_i) + f'(x_i) (x-x_i) + \tfrac{1}{2} f''(x_i) (x-x_i)^2
$$
The maximum/minimum of this quadratic approximation is given by
$$
{\hat{f}}'(x) = f'(x_i) + f''(x_i) (x-x_i) = 0
$$
Leading to the Newton step
$$
x = x_{i+1} = x_i - f'(x_i)/f''(x_i)
$$
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def newton(fun,grad,x0,tol=1e-6,maxiter=100,callback=None):
'''Newton method for solving equation f(x)=0
with given tolerance and number of iterations.
Callback function is invoked at each iteration if given.
'''
for i in range(maxiter):
x1 = x0 - fun(x0)/grad(x0)
err = abs(x1-x0)
if callback != None: callback(err=err,x0=x0,x1=x1,iter=i)
if err<tol: break
x0 = x1
else:
raise RuntimeError('Failed to converge in %d iterations'%maxiter)
return (x0+x1)/2
F = lambda x: -x**4+2.5*x**2+x+2 # main function
f = lambda x: -4*x**3+5*x+1 # FOC
g = lambda x: -12*x**2+5 # derivative of FOC
# make nice seriest of plots
a,b = -1.5,1.5 # upper and lower limits
xd = np.linspace(a,b,1000) # x grid
ylim1 = [min(np.amin(f(xd))-1,0),max(np.amax(f(xd))+1,0)]
ylim2 = [min(np.amin(F(xd))-1,0),max(np.amax(F(xd))+1,0)]
print(ylim1,ylim2)
def plot_step(x0,x1,iter,**kwargs):
plot_step.iter = iter+1
if iter<10:
fig1, (ax1,ax2) = plt.subplots(1,2,figsize=(16,6))
ax1.set_title('FOC equation solver')
ax1.plot(xd,f(xd),c='red') # plot the function
ax1.plot([a,b],[0,0],c='black') # plot zero line
ax1.plot([x0,x0],ylim1,c='grey') # plot x0
l = lambda z: g(x0)*(z - x1)
ax1.plot(xd,l(xd),c='green') # plot the function
ax1.set_ylim(bottom=ylim1[0],top=ylim1[1])
ax2.set_title('Optimizer')
ax2.plot(xd,F(xd),c='red') # plot the function
ax2.plot([x0,x0],ylim2,c='grey') # plot x0
l = lambda z: F(x0)+f(x0)*(z-x0)+(g(x0)*(z-x0)**2)/2
ax2.plot(xd,l(xd),c='green') # plot the function
ax2.plot([x1,x1],ylim2,c='grey') # plot x1
ax2.set_ylim(bottom=ylim2[0],top=ylim2[1])
ax1.set_ylabel('Iteration %d'%(iter+1))
plt.show()
newton(f,g,x0=-1.3,callback=plot_step) # 0.9, 0.42
print('Converged in %d iterations'%plot_step.iter)
```
### Multidimensional case
$$
\max_{x_1,\dots,x_n} F(x_1,\dots,x_n)
$$
- the Newton optimization method would work with multivariate function $ F(x_1,\dots,x_n) $, *gradient* vector $ \nabla F(x_1,\dots,x_n) $
composed of partial derivatives, and a *Hessian* matrix $ \nabla^2 F(x_1,\dots,x_n) $ composed of second order partial derivatives of $ F(x_1,\dots,x_n) $
- the FOC solver Newton method would work with vector-valued multivariate function $ G(x_1,\dots,x_n)=\nabla F(x_1,\dots,x_n) $,
and a *Jacobian* matrix of first order partial derivatives of all of the outputs of the function $ G(x_1,\dots,x_n) $ with respect to all arguments
### Newton step in multidimensional case
$$
x_{i+1} = x_i - \frac{F'(x_i)}{F''(x_i)} = x_i - \big( \nabla^2 F(x_i) \big)^{-1} \nabla F(x_i)
$$
- requires *inverting* the Hessian/Jacobian matrix
- when analytic Hessian/Jacobian is not available, numerical differentiation can be used (yet slow and imprecise)
### Quasi-Newton methods
**SciPy.optimize**
Main idea: replace Jacobian/Hessian with approximation. For example,
when costly to compute, and/or unavailable in analytic form.
- DFP (Davidon–Fletcher–Powell)
- BFGS (Broyden–Fletcher–Goldfarb–Shanno)
- SR1 (Symmetric rank-one)
- BHHH (Berndt–Hall–Hall–Hausman) $ \leftarrow $ for statistical application and estimation!
#### Broader view on the optimization methods
1. Line search methods
- Newton and Quasi-Newton
- Gradient descent
1. Trust region methods
- Approximation of function in question in a ball around the current point
1. Derivative free algorithms
- Nelder-Mead (simplex)
- Pattern search
1. Global solution algorithms
- Simulation based
- Genetic algorithms
1. **Poly-algorithms** Combinations of other algorithms
### Global convergence of Newton method
Newton step: $ x_{i+1} = x_i + s_i $ where $ s_i $ is the *direction* of the step
$$
s_i = - \frac{f'(x_i)}{f''(x_i)} = - \big( \nabla^2 f(x_i) \big)^{-1} \nabla f(x_i)
$$
Newton method becomes globally convergent with a subproblem of choosing step size $ \tau $, such that
$$
x_{i+1} = x_i + \tau s_i
$$
**Globally convergent to local optimum**: converges from any starting value, but is not guaranteed to find global optimum
### Gradient descent
$$
x_{i+1} = x_i - \tau \nabla f(x_i)
$$
- $ \nabla f(x_i) $ is direction of the fastest change in the function
value
- As a greedy algorithm, can be much slower that Newton.
- Finding optimal step size $ \tau $ is a separate one-dimensional optimization sub-problem
#### Derivative-free methods
**Methods of last resort!**
- Grid search (`brute` in SciPy)
- Nelder-Mead (“simplex”)
- Pattern search (generalization of grid search)
- Model specific (POUNDerS for min sum of squares)
### Nelder-Mead
1. Initialize a simplex
1. Update simplex based on function values
- Increase size of the simplex
- Reduce size of the simplex
- Reflect (flip) the simplex
1. Iterate until convergence
### Nelder-Mead
<img src="_static/img/nedlermead.png" style="">
### Trade-off with derivative free methods
Only local convergence. Anybody talking about global convergence with
derivative free methods is
- either assumes something about the problem (for example, concavity),
- or is prepared to wait forever
“An algorithm converges to the global minimum for any continuous
$ f $ if and only if the sequence of points visited by the algorithm
is dense in $ \Omega $.” Torn & Zilinskas book “Global Optimization”
### Global and simulation-based methods
Coincide with derivative-free methods $ \Rightarrow $ see above!
- Simulated annealing (`basinhopping, dual_annealing` in SciPy.optimize)
- Particle swarms
- Evolutionary algorithms
Better idea: Multi-start + poly-algorithms
### Constrained optimization
Optimization in presence of constraints on the variables of the problem.
**SciPy.optimize**
- Constrained optimization by linear approximation (COBYLA)
- Sequential Least SQuares Programming (SLSQP)
- Trust region with constraints
#### Solving for optimal consumption level in cake eating problem
<img src="_static/img/cake.png" style="width:128px;">
- Simple version of consumption-savings problem
- No returns on savings $ R=1 $
- No income $ y=0 $
- What is not eaten in period $ t $ is left for the future $ M_{t+1}=M_t-c_t $
### Bellman equation
$$
V(M_{t})=\max_{0 \le c_{t} \le M_t}\big\{u(c_{t})+\beta V(\underset{=M_{t}-c_{t}}{\underbrace{M_{t+1}}})\big\}
$$
Attack the optimization problem directly and run the optimizer to solve
$$
\max_{0 \le c \le M} \big\{u(c)+\beta V_{i-1}(M-c) \big \}
$$
### Thoughts on appropriate method
- For Newton we would need first and second derivatives of $ V_{i-1} $, which is
itself only approximated on a grid, so no go..
- The problem is bounded, so constrained optimization method is needed
- **Bisections** should be considered
- Other derivative free methods?
- Quasi-Newton method with bounds?
### Bounded optimization in Python
*Bounded optimization* is a kind of *constrained optimization* with simple
bounds on the variables
(like Robust Newton algorithm in video 25)
Will use **scipy.optimize.minimize_scalar(method=’bounded’)** which uses the
Brent method to find a local minimum.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
from scipy.optimize import minimize_scalar
%matplotlib inline
class cake_continuous():
'''Implementation of the cake eating problem with continuous choices.'''
def __init__(self,beta=.9, Wbar=10, ngrid=50, maxiter_bellman=100,tol_bellman=1e-8):
self.beta = beta # Discount factor
self.Wbar = Wbar # Upper bound on cake size
self.ngrid = ngrid # Number of grid points for the size of cake
self.epsilon = np.finfo(float).eps # smallest positive float number
self.grid_state = np.linspace(self.epsilon,Wbar,ngrid) # grid for state space
self.maxiter_bellman = maxiter_bellman # maximum iterations in Bellman solver
self.tol_bellman = tol_bellman # tolerance in Bellman solver
def bellman(self,V0):
#Bellman operator, V0 is one-dim vector of values on grid
def maximand(c,M,interf):
'''Maximand of the Bellman equation'''
Vnext = interf(M-c) # next period value at the size of cake in the next period
V1 = np.log(c) + self.beta*Vnext
return -V1 # negative because of minimization
def findC(M,maximand,interf):
'''Solves for optimal consumption for given cake size M and value function VF'''
opt = {'maxiter':self.maxiter_bellman, 'xatol':self.tol_bellman}
res = minimize_scalar(maximand,args=(M,interf),method='Bounded',bounds=[self.epsilon,M],options=opt)
if res.success:
return res.x # if converged successfully
else:
return M/2 # return some visibly wrong value
# interpolation function for the current approximation of the vaulue function
interfunc = interpolate.interp1d(self.grid_state,V0,kind='slinear',fill_value="extrapolate")
# allocate space for the policy function
c1=np.empty(self.ngrid,dtype='float')
c1[0] = self.grid_state[0]/2 # skip the zero/eps point
# loop over state space
for i in range(1,self.ngrid):
# find optimal consumption level for each point in the state space
c1[i] = findC(self.grid_state[i],maximand,interfunc)
# compute the value function corresponding to the computed policy
V1 = - maximand(c1,self.grid_state,interfunc) # don't forget the negation!
return V1, c1
def solve(self, maxiter=1000, tol=1e-4, callback=None):
'''Solves the model using successive approximations'''
V0=np.log(self.grid_state) # on first iteration assume consuming everything
for iter in range(maxiter):
V1,c1=self.bellman(V0)
if callback: callback(iter,self.grid_state,V1,c1) # callback for making plots
if np.all(abs(V1-V0) < tol):
break
V0=V1
else: # when i went up to maxiter
print('No convergence: maximum number of iterations achieved!')
return V1,c1
def solve_plot(self, maxiter=1000, tol=1e-4):
'''Illustrate solution'''
fig1, (ax1,ax2) = plt.subplots(1,2,figsize=(14,8))
ax1.grid(b=True, which='both', color='0.65', linestyle='-')
ax2.grid(b=True, which='both', color='0.65', linestyle='-')
ax1.set_title('Value function convergence with VFI')
ax2.set_title('Policy function convergence with VFI')
ax1.set_xlabel('Cake size, W')
ax2.set_xlabel('Cake size, W')
ax1.set_ylabel('Value function')
ax2.set_ylabel('Policy function')
print('Iterations:',end=' ')
def callback(iter,grid,v,c):
print(iter,end=' ') # print iteration number
ax1.plot(grid[1:],v[1:],color='k',alpha=0.25)
ax2.plot(grid,c,color='k',alpha=0.25)
V,c = self.solve(maxiter=maxiter,tol=tol,callback=callback)
# add solutions
ax1.plot(self.grid_state[1:],V[1:],color='r',linewidth=2.5)
ax2.plot(self.grid_state,c,color='r',linewidth=2.5)
plt.show()
return V,c
m3 = cake_continuous (beta=0.92,Wbar=10,ngrid=10,tol_bellman=1e-8)
V3,c3 = m3.solve_plot()
m3 = cake_continuous (beta=0.92,Wbar=10,ngrid=100,tol_bellman=1e-4)
V3,c3 = m3.solve_plot()
```
### Conclusion
Dealing with continuous choice directly using numerical optimization:
- is **slow**, consider using lower level language or just in time complication in Python
- more precise, but not ideal, requires additional technical parameters (tolerance and maxiter for within Bellman optimization)
(Will come back to full blown stochastic consumption-savings model in the next practical video.)
#### Further learning resources
- Overview of SciPy optimize
[https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)
- Docs [https://docs.scipy.org/doc/scipy/reference/optimize.html#module-scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html#module-scipy.optimize)
- Visualization of Nelder-Mead [https://www.youtube.com/watch?v=j2gcuRVbwR0](https://www.youtube.com/watch?v=j2gcuRVbwR0)
- Brent’s method explained [https://www.youtube.com/watch?v=-bLSRiokgFk](https://www.youtube.com/watch?v=-bLSRiokgFk)
- Many visualizations of Newton and other methods [https://www.youtube.com/user/oscarsveliz/videos](https://www.youtube.com/user/oscarsveliz/videos)
| true |
code
| 0.680428 | null | null | null | null |
|
This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing you'll use here.
This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
Here, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:
```
import tensorflow
from tensorflow.keras.datasets.fashion_mnist import load_data
#fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = load_data()
```
Loading the dataset returns four NumPy arrays:
* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.
* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.
The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:
```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
```
print(train_images.shape)
len(train_labels)
train_labels
test_images.shape
```
## Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
```
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
```
Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the *training set* and the *testing set* be preprocessed in the same way:
```
train_images = train_images / 255.0
test_images = test_images / 255.0
```
To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the *training set* and display the class name below each image.
```
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
```
## Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
### Set up the layers
The basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training.
```
from tensorflow import keras
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
```
**The** first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer returns a logits array with length of 10. Each node contains a score that indicates the current image belongs to one of the 10 classes.
### Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:
* *Loss function* —This measures how accurate the model is during training. You want to minimize this function to "steer" the model in the right direction.
* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.
* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.
```
import tensorflow as tf
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
## Train the model
Training the neural network model requires the following steps:
1. Feed the training data to the model. In this example, the training data is in the `train_images` and `train_labels` arrays.
2. The model learns to associate images and labels.
3. You ask the model to make predictions about a test set—in this example, the `test_images` array.
4. Verify that the predictions match the labels from the `test_labels` array.
### Feed the model
To start training, call the `model.fit` method—so called because it "fits" the model to the training data:
```
model.fit(train_images, train_labels, epochs=10)
```
### Evaluate accuracy
Next, compare how the model performs on the test dataset:
```
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
```
It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents *overfitting*. Overfitting happens when a machine learning model performs worse on new, previously unseen inputs than it does on the training data. An overfitted model "memorizes" the noise and details in the training dataset to a point where it negatively impacts the performance of the model on the new data. For more information, see the following:
* [Demonstrate overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#demonstrate_overfitting)
* [Strategies to prevent overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#strategies_to_prevent_overfitting)
##Add weight regularization
You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:
* ----L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).
* ----L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.
L1 regularization pushes weights towards exactly zero encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights. one reason why L2 is more common.
In tf.keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
### Applying L2 regularization
```
from tensorflow.keras import regularizers
model_l2 = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
keras.layers.Dense(10)
])
model_l2.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_l2.fit(train_images, train_labels, epochs=10)
test_loss_l2, test_acc_l2 = model_l2.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc_l2)
```
As can be seen above overfitting is removed to some extent from the model but at the cost of performance.
### Add dropouts
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.
The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.
Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].
The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.
In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.
Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
```
from tensorflow.keras import layers
model_dropout = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
layers.Dropout(0.3),
keras.layers.Dense(10)
])
model_dropout.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_dropout.fit(train_images, train_labels, epochs=10)
test_loss_dropout, test_acc_dropout = model_dropout.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc_dropout)
```
We applied dropouts on the single layer of 128 nodes by making the output of 30% nodes as zeroes, we were able to reduce overfitting to more extent than l2 regularization
### Combined L2 + dropout
```
from tensorflow.keras import regularizers
model_l2_dropout = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
layers.Dropout(0.5),
keras.layers.Dense(10)
])
model_l2_dropout.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_l2_dropout.fit(train_images, train_labels, epochs=10)
test_loss_l2_dropout, test_acc_l2_dropout = model_l2_dropout.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc_l2_dropout)
```
### Make predictions
With the model trained, you can use it to make predictions about some images.
The model's linear outputs, [logits](https://developers.google.com/machine-learning/glossary#logits). Attach a softmax layer to convert the logits to probabilities, which are easier to interpret.
```
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
```
Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
```
predictions[0]
test_labels[0]
```
Graph this to look at the full set of 10 class predictions.
```
import numpy as np
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
```
### Verify predictions
With the model trained, you can use it to make predictions about some images.
Let's look at the 0th image, predictions, and prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
```
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
```
Let's plot several images with their predictions. Note that the model can be wrong even when very confident.
```
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
print(class_names)
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
```
## Use the trained model
Finally, use the trained model to make a prediction about a single image.
## Use the trained model
Finally, use the trained model to make a prediction about a single image.
```
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
```
### I hope with this you can start your journey into the world of Deep Learning
| true |
code
| 0.760423 | null | null | null | null |
|
# Forecasting with sktime
In forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.
For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study.
In particular, you'll learn how to
* use statistical models to make forecasts,
* build composite machine learning models, including common techniques like reduction to regression, ensembling and pipelining.
## Preliminaries
```
import matplotlib.pyplot as plt
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.utils.plotting.forecasting import plot_ys
%matplotlib inline
```
## Data
For this tutorial, we will use the famous Box-Jenkins airline data set, which shows the number of international airline
passengers per month from 1949-1960.
As well as using the original time series (which is a classic example of a *multiplicative* time series), we will create an *additive* time series by performing a log-transform on the original data, so we may compare forecasters against both types of model.
```
y = load_airline()
fig, ax = plot_ys(y)
ax.set(xlabel="Time", ylabel="Number of airline passengers");
```
Next we will define a forecasting task.
* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use 36-step ahead forecasting horizon to evaluate forecasting performance.
* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.
We can split the data as follows:
```
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
```
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. We can specify the forecasting horizon as a simple numpy array of the steps ahead, relative to the end of the training series:
```
fh = np.arange(len(y_test)) + 1
fh
```
## Forecasting
Like in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.
sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecaster share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon.
### Naïve baselines
Let's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches.
1. We always predict the last value observed (in the training series),
2. We predict the last value observed in the same season.
```
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_last = forecaster.predict(fh)
smape_loss(y_last, y_test)
forecaster = NaiveForecaster(strategy="seasonal_last", sp=12)
forecaster.fit(y_train)
y_last_seasonal = forecaster.predict(fh)
smape_loss(y_last_seasonal, y_test)
plot_ys(y_train, y_test, y_last, y_last_seasonal,
labels=["y_train", "y_test", "last", "seasonal_last"]);
```
### Statistical forecasters
sktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.
Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
```
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. This since searches over a number of possible model parametrisations, it may take a bit longer.
```
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
## Compositite model building
sktime provides a modular API for composite model building for forecasting.
### Ensembling
Like scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
```
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
### Applying machine learning: reduction to regression
Forecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows apply any regression algorithm to the forecasting problem.
Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consist of the subsequent observation for each window.
sktime provides a meta-estimator for this approach, which is compatible with scikit-learn, so that we can use any scikit-learn regressor to solve our forecasting problem.
```
from sktime.forecasting.compose import ReducedRegressionForecaster
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=10, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
To better understand the prior data transformation, we can look at how we can split the training series into windows. Internally, sktime uses a temporal time series splitter, similar to the cross-validation splitter in scikit-learn. Here we show how this works for the first 20 observations of the training series:
```
from sktime.forecasting.model_selection import SlidingWindowSplitter
cv = SlidingWindowSplitter(window_length=10, start_with_window=True)
for input_window, output_window in cv.split(y_train.iloc[:20]):
print(input_window, output_window)
```
## Tuning
In the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
```
from sktime.forecasting.model_selection import ForecastingGridSearchCV
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
```
You could of course also try to tune the regressor inside `ReducedRegressionForecaster` using scikit-learn's `GridSearchCV`.
### Detrending
Note that so far the reduction approach above does not take any seasonal or trend into account, but we can easily specify a pipeline which first detrends the data.
sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
```
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformers.single_series.detrend import Detrender
# liner detrending
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_ys(y_train, y_pred, yt, labels=["y_train", "Fitted linear trend", "Residuals"]);
```
### Pipelining
Let's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
```
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformers.single_series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
```
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.
Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts.
## Dynamic forecasts
For model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have a `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.
Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
```
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_ys(y_train, y_test, y_pred);
```
For a single update, you can use the `update` method.
## Prediction intervals
So far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface support prediction intervals, but we haven't implemented them for all algorithms yet.
Here, we use the Theta forecasting algorithm:
```
from sktime.forecasting.theta import ThetaForecaster
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(y_pred.index, pred_ints["lower"], pred_ints["upper"], alpha=0.2, color="green", label=f"{1 - alpha}% prediction intervals")
plt.legend();
```
| true |
code
| 0.771731 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/sayakpaul/Handwriting-Recognizer-in-Keras/blob/main/Recognizer_KerasOCR.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## References:
* https://keras-ocr.readthedocs.io/en/latest/examples/fine_tuning_recognizer.html
## Initial setup
```
!pip install -U git+https://github.com/faustomorales/keras-ocr.git#egg=keras-ocr
!pip install -U opencv-python # We need the most recent version of OpenCV.
import matplotlib.pyplot as plt
import numpy as np
import keras_ocr
import imgaug
import os
import tensorflow as tf
print(tf.__version__)
tf.random.set_seed(42)
np.random.seed(42)
!nvidia-smi
```
## Dataset gathering
```
!wget -q https://github.com/sayakpaul/Handwriting-Recognizer-in-Keras/releases/download/v1.0.0/IAM_Words.zip
!unzip -qq IAM_Words.zip
!mkdir data
!mkdir data/words
!tar -C /content/data/words -xf IAM_Words/words.tgz
!mv IAM_Words/words.txt /content/data
!head -20 data/words.txt
```
## Create training and validation splits
```
words_list = []
words = open('/content/data/words.txt', 'r').readlines()
for line in words:
if line[0]=='#':
continue
if line.split(" ")[1]!="err": # We won't need to deal with errored entries
words_list.append(line)
len(words_list)
np.random.shuffle(words_list)
splitIdx = int(0.9 * len(words_list))
trainSamples = words_list[:splitIdx]
validationSamples = words_list[splitIdx:]
len(trainSamples), len(validationSamples)
def parse_path(file_line):
lineSplit = file_line.strip()
lineSplit = lineSplit.split(" ")
# part1/part1-part2/part1-part2-part3.png
imageName = lineSplit[0]
partI = imageName.split("-")[0]
partII = imageName.split("-")[1]
img_path = os.path.join("/content/data/words/", partI,
(partI + '-' + partII),
(imageName + ".png")
)
label = file_line.split(' ')[8:][0].strip()
if (os.path.getsize(img_path)!=0) & (label!=None):
return (img_path, None, label.lower())
train_labels = [parse_path(file_line) for file_line in trainSamples
if parse_path(file_line)!=None]
val_labels = [parse_path(file_line) for file_line in validationSamples
if parse_path(file_line)!=None]
len(train_labels), len(val_labels)
train_labels[:5]
```
## Create data generators
```
recognizer = keras_ocr.recognition.Recognizer()
recognizer.compile()
batch_size = 8
augmenter = imgaug.augmenters.Sequential([
imgaug.augmenters.GammaContrast(gamma=(0.25, 3.0)),
])
(training_image_gen, training_steps), (validation_image_gen, validation_steps) = [
(
keras_ocr.datasets.get_recognizer_image_generator(
labels=labels,
height=recognizer.model.input_shape[1],
width=recognizer.model.input_shape[2],
alphabet=recognizer.alphabet,
augmenter=augmenter
),
len(labels) // batch_size
) for labels, augmenter in [(train_labels, augmenter), (val_labels, None)]
]
training_gen, validation_gen = [
recognizer.get_batch_generator(
image_generator=image_generator,
batch_size=batch_size
)
for image_generator in [training_image_gen, validation_image_gen]
]
image, text = next(training_image_gen)
plt.imshow(image)
plt.title(text)
plt.show()
```
[Here's](https://keras-ocr.readthedocs.io/en/latest/examples/end_to_end_training.html#generating-synthetic-data) where you can know on what basis a character is termed as illegal in the framework.
## Model training and sample inference
```
callbacks = [
tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=10, restore_best_weights=True),
]
history = recognizer.training_model.fit_generator(
generator=training_gen,
steps_per_epoch=training_steps,
validation_steps=validation_steps,
validation_data=validation_gen,
callbacks=callbacks,
epochs=1000
)
plt.figure()
plt.plot(history.history["loss"], label="train_loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.title("Training and Validation Loss on Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss")
plt.legend(loc="lower left")
plt.show()
```
The training seems to be a bit unstable. This can likely be mitigated by using a lower learning rate.
```
image_filepath, _, actual = val_labels[1]
predicted = recognizer.recognize(image_filepath)
print(f'Predicted: {predicted}, Actual: {actual}')
_ = plt.imshow(keras_ocr.tools.read(image_filepath))
```
| true |
code
| 0.611759 | null | null | null | null |
|
# Retail Demo Store Experimentation Workshop - Interleaving Recommendation Exercise
In this exercise we will define, launch, and evaluate the results of an experiment using recommendation interleaving using the experimentation framework implemented in the Retail Demo Store project. If you have not already stepped through the **[3.1-Overview](./3.1-Overview.ipynb)** workshop notebook, please do so now as it provides the foundation built upon in this exercise. It is also recommended, but not required, to complete the **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)** workshop notebook.
Recommended Time: 30 minutes
## Prerequisites
Since this module uses the Retail Demo Store's Recommendation microservice to run experiments across variations that depend on the personalization features of the Retail Demo Store, it is assumed that you have either completed the [Personalization](../1-Personalization/1.1-Personalize.ipynb) workshop or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead.
## Exercise 2: Interleaving Recommendations Experiment
For the first exercise, **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)**, we demonstrated how to create and run an A/B experiment using two different variations for making product recommendations. We calculated the sample sizes of users needed to reach a statistically significant result comparing the two variations. Then we ran the experiment using a simulation until the sample sizes were reached for both variations. In real-life, depending on the baseline and minimum detectable effect rate combined with your site's user traffic, the amount of time necessary to complete an experiment can take several days to a few weeks. This can be expensive from both an opportunity cost perspective as well as negatively impacting the pace at which experiments and changes can be rolled out to your site.
In this exercise we will look at an alternative approach to evaluating product recommendation variations that requires a smaller sample size and shorter experiment durations. This technique is often used as a preliminary step before formal A/B testing to reduce a larger number of variations to just the top performers. Traditional A/B testing is then done against the best performing variations, significantly reducing the overall time necessary for experimentation.
We will use the same two variations as the last exercise. The first variation will represent our current implementation using the **Default Product Resolver** and the second variation will use the **Personalize Recommendation Resolver**. The scenario we are simulating is adding product recommendations powered by Amazon Personalize to the home page and measuring the impact/uplift in click-throughs for products as a result of deploying a personalization strategy. We will use the same hypothesis from our A/B test where the conversion rate of our existing approach is 15% and we expect a 25% lift in this rate by adding personalized recommendations.
### What is Interleaving Recommendation Testing?
The approach of interleaving recommendations is to take the recommendations from two or more variations and interleave, or blend, them into a single set of recommendations for *every user in the experiment*. Because each user in the sample is exposed to recommendations from all variations, we gain some key benefits. First, the sample size can be smaller since we don't need separate groups of users for each variation. This also results in a shorter experiment duration. Additionally, this approach is less susceptible to variances in user type and behavior that could throw off the results of an experiment. For example, it's not uncommon to have power users who shop/watch/listen/read much more than a typical user. With multiple sample groups, the behavior of these users can throw off results for their group, particularly with smaller sample sizes.
Care must be taken in how recommendations are interleaved, though, to account for position bias in the recommendations and to track variation attribution. There are two common methods to interleaving recommendations. First is a balanced approach where recommendations are taken from each variation in an alternating style where the starting variation is selected randomly. The other approach follows the team-draft analogy where team captains select their "best player" (recommendation) from the variations in random selection order. Both methods can result in different interleaving outputs.
Interleaving recommendations as an approach to experimenation got its start with information retrieval systems and search engines (Yahoo! & Bing) where different approaches to ranking results could be measured concurrently. More recently, [Netflix has adopted the interleaving technique](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55) to rapidly evaluate different approaches to making movie recommendations to its users. The image below depicts the recommendations from two different recommenders/variations (Ranker A and Ranker B) and examples of how they are interleaved.

### InterleavingExperiment Class
Before stepping through creating and executing our interleaving test, let's look at the relevant source code for the **InterleavingExperiment** class that implements this experiment type in the Retail Demo Store project.
As noted in the **[3.1-Overview](./3.1-Overview.ipynb)** notebook, all experiment types are subclasses of the abstract **Experiment** class. See **[3.1-Overview](./3.1-Overview.ipynb)** for more details on the experimentation framework.
The `InterleavingExperiment.get_items()` method is where item recommendations are retrieved for the experiment. This method will retrieve recommendations from the resolvers for all variations and then use the configured interleaving method (balanced or team-draft) to interleave the recommendations to produce the final result. Exposure tracking is also implemented to facilitate measuring the outcome of an experiment. The implementations for the balanced and team-draft interleaving methods are not included below but are available in the source code for the Recommendations service.
```python
# from src/recommendations/src/recommendations-service/experimentation/experiment_interleaving.py
class InterleavingExperiment(Experiment):
""" Implements interleaving technique described in research paper by
Chapelle et al http://olivier.chapelle.cc/pub/interleaving.pdf
"""
METHOD_BALANCED = 'balanced'
METHOD_TEAM_DRAFT = 'team-draft'
def __init__(self, table, **data):
super(InterleavingExperiment, self).__init__(table, **data)
self.method = data.get('method', InterleavingExperiment.METHOD_BALANCED)
def get_items(self, user_id, current_item_id = None, item_list = None, num_results = 10, tracker = None):
...
# Initialize array structure to hold item recommendations for each variation
variations_data = [[] for x in range(len(self.variations))]
# Get recomended items for each variation
for i in range(len(self.variations)):
resolve_params = {
'user_id': user_id,
'product_id': current_item_id,
'product_list': item_list,
'num_results': num_results * 3 # account for overlaps
}
variation = self.variations[i]
items = variation.resolver.get_items(**resolve_params)
variations_data[i] = items
# Interleave items to produce result
interleaved = []
if self.method == InterleavingExperiment.METHOD_TEAM_DRAFT:
interleaved = self._interleave_team_draft(user_id, variations_data, num_results)
else:
interleaved = self._interleave_balanced(user_id, variations_data, num_results)
# Increment exposure for each variation (can be optimized)
for i in range(len(self.variations)):
self._increment_exposure_count(i)
...
return interleaved
```
### Setup - Import Dependencies
Througout this workshop we will need access to some common libraries and clients for connecting to AWS services. Let's set those up now.
```
import boto3
import json
import uuid
import numpy as np
import requests
import pandas as pd
import random
import scipy.stats as scs
import time
import decimal
import matplotlib.pyplot as plt
from boto3.dynamodb.conditions import Key
from random import randint
# import custom scripts for plotting results
from src.plot import *
from src.stats import *
%matplotlib inline
plt.style.use('ggplot')
# We will be using a DynamoDB table to store configuration info for our experiments.
dynamodb = boto3.resource('dynamodb')
# Service discovery will allow us to dynamically discover Retail Demo Store resources
servicediscovery = boto3.client('servicediscovery')
# Retail Demo Store config parameters are stored in SSM
ssm = boto3.client('ssm')
# Utility class to convert types for printing as JSON.
class CompatEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, decimal.Decimal):
if obj % 1 > 0:
return float(obj)
else:
return int(obj)
else:
return super(CompatEncoder, self).default(obj)
```
### Experiment Strategy Datastore
Let's create an experiment using the interleaving technique.
A DynamoDB table was created by the Retail Demo Store CloudFormation template that we will use to store the configuration information for our experiments. The table name can be found in a system parameter.
```
response = ssm.get_parameter(Name='retaildemostore-experiment-strategy-table-name')
table_name = response['Parameter']['Value'] # Do Not Change
print('Experiments DDB table: ' + table_name)
table = dynamodb.Table(table_name)
```
Next we need to lookup the Amazon Personalize campaign ARN for product recommendations. This is the campaign that was created in the Personalization workshop.
```
response = ssm.get_parameter(Name = 'retaildemostore-product-recommendation-campaign-arn')
campaign_arn = response['Parameter']['Value'] # Do Not Change
print('Personalize product recommendations ARN: ' + campaign_arn)
```
### Create Interleaving Experiment
The Retail Demo Store supports running multiple experiments concurrently. For this workshop we will create a single interleaving test/experiment that will expose users of a single group to recommendations from the default behavior and recommendations from Amazon Personalize. The Recommendations microservice already has logic that supports interleaving experiments when an active experiment is detected.
Experiment configurations are stored in a DynamoDB table where each item in the table represents an experiment and has the following fields.
- **id** - Uniquely identified this experience (UUID).
- **feature** - Identifies the Retail Demo Store feature where the experiment should be applied. The name for the home page product recommendations feature is `home_product_recs`.
- **name** - The name of the experiment. Keep the name short but descriptive. It will be used in the UI for demo purposes and when logging events for experiment result tracking.
- **status** - The status of the experiment (`ACTIVE`, `EXPIRED`, or `PENDING`).
- **type** - The type of test (`ab` for an A/B test, `interleaving` for interleaved recommendations, or `mab` for multi-armed bandit test)
- **method** - The interleaving method (`balanced` or `team-draft`)
- **variations** - List of configurations representing variations for the experiment. For example, for interleaving tests of the `home_product_recs` feature, the `variations` can be two Amazon Personalize campaign ARNs (variation type `personalize-recommendations`) or a single Personalize campaign ARN and the default product behavior.
```
feature = 'home_product_recs'
experiment_name = 'home_personalize_interleaving'
# First, make sure there are no other active experiments so we can isolate
# this experiment for the exercise.
response = table.scan(
ProjectionExpression='#k',
ExpressionAttributeNames={'#k' : 'id'},
FilterExpression=Key('status').eq('ACTIVE')
)
for item in response['Items']:
response = table.update_item(
Key=item,
UpdateExpression='SET #s = :inactive',
ExpressionAttributeNames={
'#s' : 'status'
},
ExpressionAttributeValues={
':inactive' : 'INACTIVE'
}
)
# Query the experiment strategy table to see if our experiment already exists
response = table.query(
IndexName='feature-name-index',
KeyConditionExpression=Key('feature').eq(feature) & Key('name').eq(experiment_name),
FilterExpression=Key('status').eq('ACTIVE')
)
if response.get('Items') and len(response.get('Items')) > 0:
print('Experiment already exists')
home_page_experiment = response['Items'][0]
else:
print('Creating experiment')
# Default product resolver
variation_0 = {
'type': 'product'
}
# Amazon Personalize resolver
variation_1 = {
'type': 'personalize-recommendations',
'campaign_arn': campaign_arn
}
home_page_experiment = {
'id': uuid.uuid4().hex,
'feature': feature,
'name': experiment_name,
'status': 'ACTIVE',
'type': 'interleaving',
'method': 'team-draft',
'analytics': {},
'variations': [ variation_0, variation_1 ]
}
response = table.put_item(
Item=home_page_experiment
)
print(json.dumps(response, indent=4))
print(json.dumps(home_page_experiment, indent=4, cls=CompatEncoder))
```
## Load Users
For our experiment simulation, we will load all Retail Demo Store users and run the experiment until the sample size has been met.
First, let's discover the IP address for the Retail Demo Store's Users service.
```
response = servicediscovery.discover_instances(
NamespaceName='retaildemostore.local',
ServiceName='users',
MaxResults=1,
HealthStatus='HEALTHY'
)
users_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']
print('Users Service Instance IP: {}'.format(users_service_instance))
```
Next, let's load all users into a local data frame.
```
# Load all 5K users so we have enough to satisfy our sample size requirements.
response = requests.get('http://{}/users/all?count=5000'.format(users_service_instance))
users = response.json()
users_df = pd.DataFrame(users)
pd.set_option('display.max_rows', 5)
users_df
```
## Discover Recommendations Service
Next, let's discover the IP address for the Retail Demo Store's Recommendation service.
```
response = servicediscovery.discover_instances(
NamespaceName='retaildemostore.local',
ServiceName='recommendations',
MaxResults=1,
HealthStatus='HEALTHY'
)
recommendations_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']
print('Recommendation Service Instance IP: {}'.format(recommendations_service_instance))
```
## Simulate Experiment
Next we will simulate our interleaving recommendation experiment by making calls to the Recommendation service across the users we just loaded.
### Simulation Function
The following `simulate_experiment` function is supplied with the number of trials we want to run and the probability of conversion for each variation for our simulation. It runs the simulation long enough to satisfy the number of trials and calls the Recommendations service for each trial in the experiment.
```
def simulate_experiment(n_trials, probs):
"""Simulates experiment based on pre-determined probabilities
Example:
Parameters:
n_trials (int): number of trials to run for experiment
probs (array float): array of floats containing probability/conversion
rate for each variation
Returns:
df (df) - data frame of simulation data/results
"""
# will hold exposure/outcome data
data = []
print('Simulating experiment for {} users... this may take a few minutes'.format(n_trials))
for idx in range(n_trials):
if idx > 0 and idx % 500 == 0:
print('Simulated experiment for {} users so far'.format(idx))
row = {}
# Get random user
user = users[randint(0, len(users)-1)]
# Call Recommendations web service to get recommendations for the user
response = requests.get('http://{}/recommendations?userID={}&feature={}'.format(recommendations_service_instance, user['id'], feature))
recommendations = response.json()
recommendation = recommendations[randint(0, len(recommendations)-1)]
variation = recommendation['experiment']['variationIndex']
row['variation'] = variation
# Conversion based on probability of variation
row['converted'] = np.random.binomial(1, p=probs[variation])
if row['converted'] == 1:
# Update experiment with outcome/conversion
correlation_id = recommendation['experiment']['correlationId']
requests.post('http://{}/experiment/outcome'.format(recommendations_service_instance), data={'correlationId':correlation_id})
data.append(row)
# convert data into pandas dataframe
df = pd.DataFrame(data)
print('Done')
return df
```
### Run Simulation
Next we run the simulation by defining our simulation parameters for the number of trials and probabilities and then call `simulate_experiment`. This will take a few minutes to run.
```
%%time
# Number of trials to run
N = 2000
# bcr: baseline conversion rate
p_A = 0.15
# d_hat: difference in a metric between the two groups, sometimes referred to as minimal detectable effect or lift depending on the context
p_B = 0.1875
ab_data = simulate_experiment(N, [p_A, p_B])
ab_data
```
### Inspect Experiment Summary Statistics
Since the **Experiment** class updates statistics on the experiment in the experiment strategy table when a user is exposed to an experiment ("exposure") and when a user converts ("outcome"), we should see updated counts on our experiment. Let's reload our experiment and inspect the exposure and conversion counts for our simulation.
```
response = table.get_item(Key={'id': home_page_experiment['id']})
print(json.dumps(response['Item'], indent=4, cls=CompatEncoder))
```
Note the `conversions` and `exposures` counts for each variation above. These counts were incremented by the experiment class each time a trial was run (exposure) and a user converted in the `simulate_experiment` function above.
### Analyze Simulation Results
To wrap up, let's analyze some of the results from our simulated interleaving experiment by inspecting the actual conversion rate and verifying our target confidence interval and power.
First, let's take a closer look at the results of our simulation. We'll start by calculating some summary statistics.
```
ab_summary = ab_data.pivot_table(values='converted', index='variation', aggfunc=np.sum)
# add additional columns to the pivot table
ab_summary['total'] = ab_data.pivot_table(values='converted', index='variation', aggfunc=lambda x: len(x))
ab_summary['rate'] = ab_data.pivot_table(values='converted', index='variation')
ab_summary
```
Next let's isolate data for each variation.
```
A_group = ab_data[ab_data['variation'] == 0]
B_group = ab_data[ab_data['variation'] == 1]
A_converted, B_converted = A_group['converted'].sum(), B_group['converted'].sum()
A_converted, B_converted
```
Determine the actual sample size for each variation.
```
A_total, B_total = len(A_group), len(B_group)
A_total, B_total
```
Calculate the actual conversion rates and uplift from our simulation.
```
p_A, p_B = A_converted / A_total, B_converted / B_total
p_A, p_B
p_B - p_A
```
### Determining Statistical Significance
For simplicity we will use the same approach as our A/B test to determine statistical significance.
Let's plot the data from both groups as binomial distributions.
```
fig, ax = plt.subplots(figsize=(12,6))
xA = np.linspace(A_converted-49, A_converted+50, 100)
yA = scs.binom(A_total, p_A).pmf(xA)
ax.scatter(xA, yA, s=10)
xB = np.linspace(B_converted-49, B_converted+50, 100)
yB = scs.binom(B_total, p_B).pmf(xB)
ax.scatter(xB, yB, s=10)
plt.xlabel('converted')
plt.ylabel('probability')
```
Based the probabilities from our hypothesis, we should see that the test group in blue (B) converted more users than the control group in red (A). However, the plot above is not a plot of the null and alternate hypothesis. The null hypothesis is a plot of the difference between the probability of the two groups.
> Given the randomness of our user selection, group hashing, and probabilities, your simulation results should be different for each simulation run and therefore may or may not be statistically significant.
In order to calculate the difference between the two groups, we need to standardize the data. Because the number of samples can be different between the two groups, we should compare the probability of successes, p.
According to the central limit theorem, by calculating many sample means we can approximate the true mean of the population from which the data for the control group was taken. The distribution of the sample means will be normally distributed around the true mean with a standard deviation equal to the standard error of the mean.
```
SE_A = np.sqrt(p_A * (1-p_A)) / np.sqrt(A_total)
SE_B = np.sqrt(p_B * (1-p_B)) / np.sqrt(B_total)
SE_A, SE_B
fig, ax = plt.subplots(figsize=(12,6))
xA = np.linspace(0, .3, A_total)
yA = scs.norm(p_A, SE_A).pdf(xA)
ax.plot(xA, yA)
ax.axvline(x=p_A, c='red', alpha=0.5, linestyle='--')
xB = np.linspace(0, .3, B_total)
yB = scs.norm(p_B, SE_B).pdf(xB)
ax.plot(xB, yB)
ax.axvline(x=p_B, c='blue', alpha=0.5, linestyle='--')
plt.xlabel('Converted Proportion')
plt.ylabel('PDF')
```
## Next Steps
You have completed the exercise for implementing an A/B test using the experimentation framework in the Retail Demo Store. Close this notebook and open the notebook for the next exercise, **[3.4-Multi-Armed-Bandit-Experiment](./3.4-Multi-Armed-Bandit-Experiment.ipynb)**.
### References and Further Reading
- [Large Scale Validation and Analysis of Interleaved Search Evaluation](http://olivier.chapelle.cc/pub/interleaving.pdf), Chapelle et al
- [Innovating Faster on Personalization Algorithms at Netflix Using Interleaving](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55), Netflix Technology Blog
| true |
code
| 0.657181 | null | null | null | null |
|
# Deep Reinforcement Learning for the CartPole Environment
```
# Install packages
import gym
import copy
import torch
from torch.autograd import Variable
import random
import matplotlib.pyplot as plt
from PIL import Image
from IPython.display import clear_output
import math
import torchvision.transforms as T
import numpy as np
import time
```
## Environment
The CartPole environment consists of a pole which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. The state space is represented by four values: cart position, cart velocity, pole angle, and the velocity of the tip of the pole. The action space consists of two actions: moving left or moving right. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.
Source: [https://gym.openai.com/envs/CartPole-v1/](Open AI Gym).
The cell below plots a bunch of example frames from the environment.
```
env = gym.envs.make("CartPole-v1")
# Demonstration
env = gym.envs.make("CartPole-v1")
def get_screen():
''' Extract one step of the simulation.'''
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255.
return torch.from_numpy(screen)
# Speify the number of simulation steps
num_steps = 2
# Show several steps
for i in range(num_steps):
clear_output(wait=True)
env.reset()
plt.figure()
plt.imshow(get_screen().cpu().permute(1, 2, 0).numpy(),
interpolation='none')
plt.title('CartPole-v0 Environment')
plt.xticks([])
plt.yticks([])
plt.show()
```
## Plotting Function
This function will make it possible to analyze how the agent learns over time. The resulting plot consists of two subplots. The first one plots the total reward the agent accumulates over time, while the other plot shows a histogram of the agent's total rewards for the last 50 episodes.
```
def plot_res(values, title=''):
''' Plot the reward curve and histogram of results over time.'''
# Update the window after each episode
clear_output(wait=True)
# Define the figure
f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12,5))
f.suptitle(title)
ax[0].plot(values, label='score per run')
ax[0].axhline(195, c='red',ls='--', label='goal')
ax[0].set_xlabel('Episodes')
ax[0].set_ylabel('Reward')
x = range(len(values))
ax[0].legend()
# Calculate the trend
try:
z = np.polyfit(x, values, 1)
p = np.poly1d(z)
ax[0].plot(x,p(x),"--", label='trend')
except:
print('')
# Plot the histogram of results
ax[1].hist(values[-50:])
ax[1].axvline(195, c='red', label='goal')
ax[1].set_xlabel('Scores per Last 50 Episodes')
ax[1].set_ylabel('Frequency')
ax[1].legend()
plt.show()
```
## Random Search
Before implementing any deep learning approaches, I wrote a simple strategy where the action is sampled randomly from the action space. This approach will serve as a baseline for other strategies and will make it easier to understand how to work with the agent using the Open AI Gym environment.
```
def random_search(env, episodes,
title='Random Strategy'):
""" Random search strategy implementation."""
final = []
for episode in range(episodes):
state = env.reset()
done = False
total = 0
while not done:
# Sample random actions
action = env.action_space.sample()
# Take action and extract results
next_state, reward, done, _ = env.step(action)
# Update reward
total += reward
if done:
break
# Add to the final reward
final.append(total)
plot_res(final,title)
return final
# Get random search results
episodes = 30
random_s = random_search(env, episodes)
```
The plot above presents the random strategy. As expected, it's impossible to solve the environment using this approach. The agent is not learning from their experience. Despite being lucky sometimes (getting a reward of almost 75), their average performance is as low as 10 steps.
## Deep Q Learning
The main idea behind Q-learning is that we have a function $Q: State \times Action \rightarrow \mathbb{R}$, which can tell the agent what actions will result in what rewards. If we know the value of Q, it is possible to construct a policy that maximizes rewards:
\begin{align}\pi(s) = \arg\!\max_a \ Q(s, a)\end{align}
However, in the real world, we don't have access to full information, that's why we need to come up with ways of approximating Q. One traditional method is creating a lookup table where the values of Q are updated after each of the agent's actions. However, this approach is slow and does not scale to large action and state spaces. Since neural networks are universal function approximators, I will train a network that can approximate $Q$.
The DQL class implementation consists of a simple neural network implemented in PyTorch that has two main methods--predict and update. The network takes the agent's state as an input and returns the Q values for each of the actions. The maximum Q value is selected by the agent to perform the next action.
```
class DQN():
''' Deep Q Neural Network class. '''
def __init__(self, state_dim, action_dim, hidden_dim=64, lr=0.05):
self.criterion = torch.nn.MSELoss()
self.model = torch.nn.Sequential(
torch.nn.Linear(state_dim, hidden_dim),
torch.nn.LeakyReLU(),
torch.nn.Linear(hidden_dim, hidden_dim*2),
torch.nn.LeakyReLU(),
torch.nn.Linear(hidden_dim*2, action_dim)
)
self.optimizer = torch.optim.Adam(self.model.parameters(), lr)
def update(self, state, y):
"""Update the weights of the network given a training sample. """
y_pred = self.model(torch.Tensor(state))
loss = self.criterion(y_pred, Variable(torch.Tensor(y)))
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
def predict(self, state):
""" Compute Q values for all actions using the DQL. """
with torch.no_grad():
return self.model(torch.Tensor(state))
```
The q_learning function is the main loop for all the algorithms that follow.
It has many parameters, namely:
- Env represents the Open Ai Gym environment that we want to solve (CartPole.)
- Episodes stand for the number of games we want to play (from the beginning until the end.)
- Gamma is a discounting factor that is multiplied by future rewards to dampen these rewards' effect on the agent. It is designed to make future rewards worth less than immediate rewards.
- Epsilon represents the proportion of random actions relative to actions that are informed by existing "knowledge" that the agent accumulates during the episode. Before playing the game, the agent doesn't have any experience, so it is common to set epsilon to higher values and then gradually decrease its value.
- Eps_decay parameter indicates the speed at which the epsilon decreases as the agent learns. 0.99 comes from the original DQN paper.
I will explain other parameters later on when we will get to the corresponding agents.
The most straightforward agent updates its Q-values based on its most recent observation. It doesn't have any memory, but it learns by first exploring the environment and the gradually decreasing its epsilon value to make informed decisions:
```
def q_learning(env, model, episodes, gamma=0.9,
epsilon=0.3, eps_decay=0.99,
replay=False, replay_size=20,
title = 'DQL', double=False,
n_update=10, soft=False, verbose=True):
"""Deep Q Learning algorithm using the DQN. """
final = []
memory = []
episode_i=0
sum_total_replay_time=0
_max = [0.0, 0.0, 0.0, 0.0]
for episode in range(episodes):
episode_i+=1
if double and not soft:
# Update target network every n_update steps
if episode % n_update == 0:
model.target_update()
if double and soft:
model.target_update()
# Reset state
state = env.reset()
done = False
total = 0
while not done:
# Implement greedy search policy to explore the state space
if random.random() < epsilon:
action = env.action_space.sample()
else:
q_values = model.predict(state)
action = torch.argmax(q_values).item()
# Take action and add reward to total
next_state, reward, done, _ = env.step(action)
# Update total and memory
total += reward
memory.append((state, action, next_state, reward, done))
q_values = model.predict(state).tolist()
if done:
if not replay:
q_values[action] = reward
# Update network weights
model.update(state, q_values)
break
if replay:
t0=time.time()
# Update network weights using replay memory
model.replay(memory, replay_size, gamma)
t1=time.time()
sum_total_replay_time+=(t1-t0)
else:
# Update network weights using the last step only
q_values_next = model.predict(next_state)
q_values[action] = reward + gamma * torch.max(q_values_next).item()
model.update(state, q_values)
for _i in range(len(_max)):
if np.abs(state[i]) > _max[i]:
_max[i] = np.abs(state[i])
state = next_state
# Update epsilon
epsilon = max(epsilon * eps_decay, 0.01)
final.append(total)
plot_res(final, title)
if verbose:
print("episode: {}, total reward: {}".format(episode_i, total))
if replay:
print("Average replay time:", sum_total_replay_time/episode_i)
print(_max)
return final
```
### Parameters
```
# Number of states
n_state = env.observation_space.shape[0]
# Number of actions
n_action = env.action_space.n
# Number of episodes
episodes = 150
# Number of hidden nodes in the DQN
n_hidden = 50
# Learning rate
lr = 0.001
# Get DQN results
simple_dqn = DQN(n_state, n_action, n_hidden, lr)
simple = q_learning(env, simple_dqn, episodes, gamma=.9, epsilon=0.3)
```
The graph above shows that the performance of the agent has significantly improved. It got to 175 steps, which, as we've seen before, is impossible for a random agent. The trend line is also positive, and we can see that the performance increases over time. At the same time, the agent didn't manage to get above the goal line after 150 epochs, and its average performance is still around 15 steps, so there is definitely enough room for improvement.
## Replay
The approximation of Q using one sample at a time is not very effective. The graph above is a nice illustration of that. The network managed to achieve a much better performance compared to a random agent. However, it couldn't get to the threshold line of 195 steps. I implemented experience replay to improve network stability and make sure previous experiences are not discarded but used in training.
Experience replay stores the agent's experiences in memory. Batches of experiences are randomly sampled from memory and are used to train the neural network. Such learning consists of two phases--gaining experience and updating the model. The size of the replay controls the number of experiences that are used for the network update. Memory is an array that stores the agent's state, reward, and action, as well as whether the action finished the game and the next state.
```
# Expand DQL class with a replay function.
class DQN_replay(DQN):
#old replay function
#def replay(self, memory, size, gamma=0.9):
#""" Add experience replay to the DQN network class. """
# Make sure the memory is big enough
#if len(memory) >= size:
#states = []
#targets = []
# Sample a batch of experiences from the agent's memory
#batch = random.sample(memory, size)
# Extract information from the data
#for state, action, next_state, reward, done in batch:
#states.append(state)
# Predict q_values
#q_values = self.predict(state).tolist()
#if done:
#q_values[action] = reward
#else:
#q_values_next = self.predict(next_state)
#q_values[action] = reward + gamma * torch.max(q_values_next).item()
#targets.append(q_values)
#self.update(states, targets)
#new replay function
def replay(self, memory, size, gamma=0.9):
"""New replay function"""
#Try to improve replay speed
if len(memory)>=size:
batch = random.sample(memory,size)
batch_t = list(map(list, zip(*batch))) #Transpose batch list
states = batch_t[0]
actions = batch_t[1]
next_states = batch_t[2]
rewards = batch_t[3]
is_dones = batch_t[4]
states = torch.Tensor(states)
actions_tensor = torch.Tensor(actions)
next_states = torch.Tensor(next_states)
rewards = torch.Tensor(rewards)
is_dones_tensor = torch.Tensor(is_dones)
is_dones_indices = torch.where(is_dones_tensor==True)[0]
all_q_values = self.model(states) # predicted q_values of all states
all_q_values_next = self.model(next_states)
#Update q values
all_q_values[range(len(all_q_values)),actions]=rewards+gamma*torch.max(all_q_values_next, axis=1).values
all_q_values[is_dones_indices.tolist(), actions_tensor[is_dones].tolist()]=rewards[is_dones_indices.tolist()]
self.update(states.tolist(), all_q_values.tolist())
```
### replay using old replay function
```
# Get replay results
dqn_replay = DQN_replay(n_state, n_action, n_hidden, lr)
replay = q_learning(env, dqn_replay,
episodes, gamma=.9,
epsilon=0.2, replay=True,
title='DQL with Replay')
```
### replay using new replay function
```
# Get replay results
dqn_replay = DQN_replay(n_state, n_action, n_hidden, lr)
replay = q_learning(env, dqn_replay,
episodes, gamma=.9,
epsilon=0.2, replay=True,
title='DQL with Replay')
```
As expected, the neural network with the replay seems to be much more robust and smart compared to its counterpart that only remembers the last action. After approximately 60 episodes, the agent managed to achieve the winning threshold and remain at this level. I also managed to achieve the highest reward possible--500.
## Double Q Learning
Traditional Deep Q Learning tends to overestimate the reward, which leads to unstable training and lower quality policy. Let's consider the equation for the Q value:

The last part of the equation takes the estimate of the maximum value. This procedure results in systematic overestimation, which introduces a maximization bias. Since Q-learning involves learning estimates from estimates, such overestimation is especially worrying.
To avoid such a situation, I will define a new target network. The Q values will be taken from this new network, which is meant to reflect the state of the main DQN. However, it doesn't have identical weights because it's only updated after a certain number of episodes. This idea has been first introduced in Hasselt et al., 2015.
The addition of the target network might slow down the training since the target network is not continuously updated. However, it should have a more robust performance over time.
n_update parameter specifies the interval, after which the target network should be updated.
```
class DQN_double(DQN):
def __init__(self, state_dim, action_dim, hidden_dim, lr):
super().__init__(state_dim, action_dim, hidden_dim, lr)
self.target = copy.deepcopy(self.model)
def target_predict(self, s):
''' Use target network to make predicitons.'''
with torch.no_grad():
return self.target(torch.Tensor(s))
def target_update(self):
''' Update target network with the model weights.'''
self.target.load_state_dict(self.model.state_dict())
def replay(self, memory, size, gamma=1.0):
''' Add experience replay to the DQL network class.'''
if len(memory) >= size:
# Sample experiences from the agent's memory
data = random.sample(memory, size)
states = []
targets = []
# Extract datapoints from the data
for state, action, next_state, reward, done in data:
states.append(state)
q_values = self.predict(state).tolist()
if done:
q_values[action] = reward
else:
# The only difference between the simple replay is in this line
# It ensures that next q values are predicted with the target network.
q_values_next = self.target_predict(next_state)
q_values[action] = reward + gamma * torch.max(q_values_next).item()
targets.append(q_values)
self.update(states, targets)
# Get replay results
dqn_double = DQN_double(n_state, n_action, n_hidden, lr)
double = q_learning(env, dqn_double, episodes, gamma=.9,
epsilon=0.2, replay=True, double=True,
title='Double DQL with Replay', n_update=10)
```
Double DQL with replay has outperformed the previous version and has consistently performed above 300 steps. The performance also seems to be a bit more stable, thanks to the separation of action selection and evaluation. Finally, let's explore the last modification to the DQL agent.
## Soft Target Update
The method used to update the target network implemented above was introduced in the original DQN paper. In this section, we will explore another well-established method of updating the target network weights. Instead of updating weights after a certain number of steps, we will incrementally update the target network after every run using the following formula:
target_weights = target_weights * (1-TAU) + model_weights * TAU
where 0 < TAU < 1
This method of updating the target network is known as “soft target network updates” and was introduced in Lillicrap et al., 2016. Method implementation is shown below:
```
class DQN_double_soft(DQN_double):
def target_update(self, TAU=0.1):
''' Update the targer gradually. '''
# Extract parameters
model_params = self.model.named_parameters()
target_params = self.target.named_parameters()
updated_params = dict(target_params)
for model_name, model_param in model_params:
if model_name in target_params:
# Update parameter
updated_params[model_name].data.copy_((TAU)*model_param.data + (1-TAU)*target_params[model_param].data)
self.target.load_state_dict(updated_params)
dqn_double_soft = DQN_double_soft(n_state, n_action, n_hidden, lr)
double = q_learning(env, dqn_double_soft, episodes, gamma=.9,
epsilon=0.2, replay=True, double=True,
title='Double DQL with Replay', n_update=10, soft=True)
```
The network with soft target updates performed quite well. However, it doesn't seem to be better than hard weight updates after a certain number of steps.
## Conclusion
The implementation of the experience replay and the target network have significantly improved the performance of a Deep Q Learning agent in the Open AI CartPole environment. Some other modifications to the agent, such as Dueling Network Architectures (Wang et al., 2015), can be added to this implementation to improve the agent's performance. The algorithm is also generalizable to other environments. Thus, it's possible to test how well it performs on other tasks.
## References:
(1) Reinforcement Q-Learning from Scratch in Python with OpenAI Gym. (2019). Learndatasci.com. Retrieved 9 December 2019, from https://www.learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/
(2) Paszke, A., (2019). Reinforcement Learning (DQN) tutorial. Retrieved from: https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
(3) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., ... & Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
(4) Van Hasselt, H., Guez, A., & Silver, D. (2016, March). Deep reinforcement learning with double q-learning. In Thirtieth AAAI conference on artificial intelligence.
(5) Wang, Z., Schaul, T., Hessel, M., Van Hasselt, H., Lanctot, M., & De Freitas, N. (2015). Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581.
(6) Double DQN Implementation to Solve OpenAI Gym’s CartPole v-0. (2019). Medium. Retrieved 20 December 2019, from https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d
| true |
code
| 0.622402 | null | null | null | null |
|
```
# To enable plotting graphs in Jupyter notebook
%matplotlib inline
import pandas as pd
from sklearn.linear_model import LogisticRegression
# importing ploting libraries
import matplotlib.pyplot as plt
#importing seaborn for statistical plots
import seaborn as sns
#Let us break the X and y dataframes into training set and test set. For this we will use
#Sklearn package's data splitting function which is based on random function
from sklearn.model_selection import train_test_split
import numpy as np
# calculate accuracy measures and confusion matrix
from sklearn import metrics
# The data lies in the following URL.
#url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
# Since it is a data file with no header, we will supply the column names which have been obtained from the above URL
# Create a python list of column names called "names"
#colnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
#Load the file from local directory using pd.read_csv which is a special form of read_table
#while reading the data, supply the "colnames" list
#pima_df = pd.read_csv("pima-indians-diabetes-2 (1).csv", names= colnames)
pima_df = pd.read_csv("pima-indians-diabetes-2 (1).csv")
pima_df.head(50)
# Let us check whether any of the columns has any value other than numeric i.e. data is not corrupted such as a "?" instead of
# a number.
# we use np.isreal a numpy function which checks each column for each row and returns a bool array,
# where True if input element is real.
# applymap is pandas dataframe function that applies the np.isreal function columnwise
# Following line selects those rows which have some non-numeric value in any of the columns hence the ~ symbol
pima_df[~pima_df.applymap(np.isreal).all(1)]
# replace the missing values in pima_df with median value :Note, we do not need to specify the column names
# every column's missing value is replaced with that column's median respectively
#pima_df = pima_df.fillna(pima_df.median())
#pima_df
#Lets analysze the distribution of the various attributes
pima_df.describe().transpose()
# Let us look at the target column which is 'class' to understand how the data is distributed amongst the various values
pima_df.groupby(["class"]).count()
# Most are not diabetic. The ratio is almost 1:2 in favor or class 0. The model's ability to predict class 0 will
# be better than predicting class 1.
# Let us do a correlation analysis among the different dimensions and also each dimension with the dependent dimension
# This is done using scatter matrix function which creates a dashboard reflecting useful information about the dimensions
# The result can be stored as a .png file and opened in say, paint to get a larger view
#pima_df_attr = pima_df.iloc[:,0:9]
#axes = pd.plotting.scatter_matrix(pima_df_attr)
#plt.tight_layout()
#plt.savefig('d:\greatlakes\pima_pairpanel.png')
# Pairplot using sns
sns.pairplot(pima_df)
#data for all the attributes are skewed, especially for the variable "test"
#The mean for test is 80(rounded) while the median is 30.5 which clearly indicates an extreme long tail on the right
# Attributes which look normally distributed (plas, pres, skin, and mass).
# Some of the attributes look like they may have an exponential distribution (preg, test, pedi, age).
# Age should probably have a normal distribution, the constraints on the data collection may have skewed the distribution.
# There is no obvious relationship between age and onset of diabetes.
# There is no obvious relationship between pedi function and onset of diabetes.
array = pima_df.values
X = pima_df.iloc[:,0:8]
y = pima_df.iloc[:,8]
#X = array[:,0:8] # select all rows and first 8 columns which are the attributes
#Y = array[:,8] # select all rows and the 8th column which is the classification "Yes", "No" for diabeties
test_size = 0.30 # taking 70:30 training and test set
seed =1 # Random numbmer seeding for reapeatability of the code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
# Fit the model on 30%
model = LogisticRegression()
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
coef_df = pd.DataFrame(model.coef_)
coef_df['intercept'] = model.intercept_
print(coef_df)
model_score = model.score(X_test, y_test)
print(model_score)
print(metrics.confusion_matrix(y_test, y_predict))
# Improve the model -----------------------------Iteration 2 -----------------------------------------------
# To scale the dimensions we need scale function which is part of sckikit preprocessing libraries
from sklearn import preprocessing
# scale all the columns of the mpg_df. This will produce a numpy array
#pima_df_scaled = preprocessing.scale(pima_df[0:7])
X_train_scaled = preprocessing.scale(X_train)
X_test_scaled = preprocessing.scale(X_test)
# Fit the model on 30%
model = LogisticRegression()
model.fit(X_train_scaled, y_train)
y_predict = model.predict(X_test_scaled)
model_score = model.score(X_test_scaled, y_test)
print(model_score)
# IMPORTANT: first argument is true values, second argument is predicted values
# this produces a 2x2 numpy array (matrix)
print(metrics.confusion_matrix(y_test, y_predict))
```
Analyzing the confusion matrix
True Positives (TP): we correctly predicted that they do have diabetes 46
True Negatives (TN): we correctly predicted that they don't have diabetes 134
False Positives (FP): we incorrectly predicted that they do have diabetes (a "Type I error") 13
Falsely predict positive Type I error
False Negatives (FN): we incorrectly predicted that they don't have diabetes (a "Type II error") 38
Falsely predict negative Type II error
| true |
code
| 0.652324 | null | null | null | null |
|
# Spam Text Classification
In second week of inzva Applied AI program, we are going to create a spam text classifier using RNN's. Our data have 2 columns. The first column is the label and the second column is text message itself. We are going to create our model using following techniques
- Embeddings
- SimpleRNN
- GRU
- LSTM
- Ensemble Model
### SimpleRNN
Simple RNN layer. Nothing special. The reason it is 'Simple' because it is not GRU nor LSTM layer. You can read the documentation from https://keras.io/api/layers/recurrent_layers/simple_rnn/
### LSTM
https://keras.io/api/layers/recurrent_layers/lstm/
We will use tokenization and padding to preprocess our data. We are going to create 3 different models and compare them.
## Libraries
```
from keras.layers import SimpleRNN, Embedding, Dense, LSTM
from keras.models import Sequential
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns; sns.set()
```
## Dataset
```
data = pd.read_csv(".../datasets_2050_3494_SPAM text message 20170820 - Data.csv")
```
Let's see the first 20 rows of our data and read the messages. What do you think, are they really look like spam messages?
```
data.iloc[0:20,:]
```
Let's calculate spam and non-spam message counts.
```
texts = []
labels = []
for i, label in enumerate(data['Category']):
texts.append(data['Message'][i])
if label == 'ham':
labels.append(0)
else:
labels.append(1)
texts = np.asarray(texts)
labels = np.asarray(labels)
print("number of texts :" , len(texts))
print("number of labels: ", len(labels))
labels
sum(labels==0)
sum(labels==1)
```
### Data is imbalanced. Making it even more imbalanced by removing some of the spam messages and observing the model performance would be a good exercise to explore imbalanced dataset problem in Sequential Model context.
```
texts
```
## Data Preprocessing
Each sentence has different lengths. We need to have sentences of the same length. Besides, we need to represent them as integers.
As a concerete example, we have following sentences
- 'Go until jurong point crazy'
- 'any other suggestions'
First we will convert them to integers, this operation is known as Tokenizstion.
- [5, 10, 26, 67, 98]
- [7, 74, 107]
Now we have two integer vectors with different length. We need to make them have the same length.
### Post Padding
- [5, 10, 26, 67, 98]
- [7, 74, 107, 0, 0]
### Pre Padding
- [5, 10, 26, 67, 98]
- [0, 0, 7, 74, 107]
But you don't have to use padding in each task. For details please refer to this link https://github.com/keras-team/keras/issues/2375
```
from keras.layers import SimpleRNN, Embedding, Dense, LSTM
from keras.models import Sequential
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
# number of words in our vocabulary
max_features = 10000
# how many words from each document (max)?
maxlen = 500
```
## Train - Test Split
We will take a simple approach and create only train and test sets. Of course having train, test and validation sets is the best practise.
```
training_samples = int(len(labels)*0.8)
training_samples
validation_samples = int(5572 - training_samples)
assert len(labels) == (training_samples + validation_samples), "Not equal!"
print("The number of training {0}, validation {1} ".format(training_samples, validation_samples))
```
## Tokenization
```
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print("Found {0} unique words: ".format(len(word_index)))
#data = pad_sequences(sequences, maxlen=maxlen, padding='post')
data = pad_sequences(sequences, maxlen=maxlen)
print(data.shape)
data
np.random.seed(42)
# shuffle data
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
texts_train = data[:training_samples]
y_train = labels[:training_samples]
texts_test = data[training_samples:]
y_test = labels[training_samples:]
```
## Model Creation
We will create 3 different models and compare their performances. One model will use SimpleRNN layer, the other will use GRU layer and the last one will use LSTM layer. Architecture of each model is the same. We can create deeper models but we already get good results.
```
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
metrics=['acc'])
history_rnn = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
acc = history_rnn.history['acc']
val_acc = history_rnn.history['val_acc']
loss = history_rnn.history['loss']
val_loss = history_rnn.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, '-', color='orange', label='training acc')
plt.plot(epochs, val_acc, '-', color='blue', label='validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
plt.plot(epochs, loss, '-', color='orange', label='training acc')
plt.plot(epochs, val_loss, '-', color='blue', label='validation acc')
plt.title('Training and validation loss')
plt.legend()
plt.show()
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_rnn = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
sum(y_test==1)
```
## GRU
```
from keras.layers import GRU
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(GRU(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
metrics=['acc'])
history_rnn = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_gru = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
```
## LSTM
```
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history_lstm = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
acc = history_lstm.history['acc']
val_acc = history_lstm.history['val_acc']
loss = history_lstm.history['loss']
val_loss = history_lstm.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, '-', color='orange', label='training acc')
plt.plot(epochs, val_acc, '-', color='blue', label='validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
plt.plot(epochs, loss, '-', color='orange', label='training acc')
plt.plot(epochs, val_loss, '-', color='blue', label='validation acc')
plt.title('Training and validation loss')
plt.legend()
plt.show()
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_ltsm = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
```
## Ensemble Model
```
ensemble_proba = 0.25 * proba_rnn + 0.35 * proba_gru + 0.4 * proba_lstm
ensemble_proba[:5]
ensemble_class = np.array([1 if i >= 0.3 else 0 for i in ensemble_proba])
print(confusion_matrix(ensemble_class, y_test))
```
| true |
code
| 0.703308 | null | null | null | null |
|
# Demo of Ch2. Linear Classifier
----
This is the sample code of TU-ETP-AD1062 Machine Learning Fundamentals.
For more information, please refer to:
https://sites.google.com/view/tu-ad1062-mlfundamentals/
## Import packages
----
- `numpy`: Provide linear algebra related computation ability, with `norm` used to measure the l2-norm of matrices and vectors
- `sklearn`: Scikit-Learn, provides basic data analysis and machine learning methods functionality
- `mlfund`:
- `dataset`: Used to generate data in normal distribution
- `Plot2D`: Used to plot the figure, implemented by using `matplotlib`
```
import numpy as np
from numpy.linalg import norm
import sklearn.metrics
import sklearn.svm
import sklearn.linear_model
from mlfund.dataset import Gaussian, GaussianParam
from mlfund.plot import Plot2D
%matplotlib inline
```
## 2.2. Perceptron
### Demo 2.2.1. Implement Perceptron Algorithm by the Simplest Gradient Descent
----
#### Perceptron Algorithm Implementation
The demo here shows how to use the simplest gradient descent (which leverages fixed learning rate `self._mu`) to implement Perceptron algorithm. Here's the method details:
##### `__delta_x(self, X_augmented, y)`:
For each augmented sample $\mathbf{x}$ stored in `X_augmented`, denotes each wrongly classified sample $\mathbf{x}\in Y$ by $\delta_{\mathbf{x}}$ defined as following:
$$
\delta_{\mathbf{x}}=\left\{
\begin{array}{ll}
-1, \text{if } \mathbf{x} \in \omega_{1}, \\
+1, \text{if } \mathbf{x} \in \omega_{2}
\end{array}
\right.
$$
##### `__gradient_w(self, X_augmented, y)`:
For each augmented sample $\mathbf{x}$ stored in `X_augmented`, compute the gradient with respect to $\mathbf{w}$ (i.e., `self._w`) as belows:
$$
\nabla J\left(\mathbf{w}\right) = \sum_{\mathbf{x}\in Y}\delta_{\mathbf{x}}\mathbf{x}
$$
##### `decision_function(self, X)`:
For each sample $\mathbf{x}$ stored in `X`, compute the value of $\mathbf{w}^T\mathbf{x}$
##### `cost(self, X, y)`:
For each sample $\mathbf{x}$ stored in `X`, compute the value of the Perception cost function:
$$
J\left(\mathbf{w}\right)=\sum_{\mathbf{x}\in Y} \delta_{\mathbf{x}} \mathbf{w}^T\mathbf{x}
$$
##### `fit(self, X, y)`:
Training the Perceptron by the following steps:
> - $\mathbf{w}_0 =$ Random init()
> - while (iteration < max_iteration)
> - $\mathbf{w}_{t+1} = \mathbf{w}_{t} - \mu \cdot \nabla J\left(\mathbf{w}\right)$
> - if ( $||\nabla J\left(\mathbf{w}_{t+1}\right)||^2$ < tolerance value )
> - break
> - return $\mathbf{w}_{tlast}$
Notice:
1. For the purpose of the visualization, here we don't use the random initialized $\mathbf{w}_0$, we use a fixed vector `[1,-2,0]` instead.
2. Here we don't return $\mathbf{w}_{tlast}$ directly. Instead, we stored it into `self._w` for the object oriented purposes.
##### `predict(self, X)`:
For each sample $\mathbf{x}$ stored in `X`, predict the label to `-1` or `+1` by using the trained parameter `self._w`.
```
class HandCraftedBinaryPerceptron:
def __init__(self):
self._w = None
self._mu = 0.01
self._max_itr = 50
self._verbose_log = True
def __log(self, title, cost, X, y):
if self._verbose_log == True:
print('%s, w = %s, cost: %2.5f' % (title, self._w.__str__(), cost))
plot = Plot2D()
plot.scatter(X, y)
plot.classifierContour(X_train, y_train, self)
plot.show()
def __validate_data_type(self, X, y):
assert isinstance(X, np.ndarray)
assert isinstance(y, np.ndarray)
assert len(np.unique(y)) == 2, '`%s` allows binary classification only, whereas input labels `y` contains %d different labels.' % (HandCraftedBinaryPerceptron.__name__, len(np.unique(y)))
assert set(np.unique(y)) == set([1, -1]), 'Labels in `y` allows +1 and -1 only.'
def __delta_x(self, X_augmented, y):
err_indices = np.array(X_augmented.dot(self._w) * y < 0, dtype='int')
return -1 * np.multiply(err_indices, y)
def __gradient_w(self, X_augmented, y):
delta_x = self.__delta_x(X_augmented, y)
return np.sum(np.multiply(X_augmented, np.repeat(delta_x.reshape( (len(y), 1) ), X_augmented.shape[1], axis=1)), axis=0)
def decision_function(self, X):
X_augmented = np.hstack((X, np.ones( ( len(X), 1) )))
return X_augmented.dot(self._w.transpose())
def cost(self, X, y):
decision_values = self.decision_function(X)
err_indices = decision_values * y < 0
return np.sum (np.abs( decision_values[err_indices] ))
def fit(self, X, y):
self.__validate_data_type(X, y)
X_augmented = np.hstack((X, np.ones( ( len(X), 1) )))
self._w = np.array([-1, 2, 0])
_cost = self.cost(X, y)
self.__log('Initial', _cost, X, y)
for i in range(self._max_itr):
grad_w = self.__gradient_w(X_augmented, y)
self._w = self._w - self._mu * grad_w
_cost = self.cost(X, y)
self.__log('Iteration %d' % i, _cost, X, y)
if norm(grad_w, 2) < 1e-4:
print('Converged at iteration %d, with cost = %2.3f' % (i, _cost))
break
def predict(self, X):
assert isinstance(self._w, np.ndarray)
assert isinstance(X, np.ndarray)
decision_values = self.decision_function(X)
ret = np.zeros(len(decision_values), dtype='int')
ret[decision_values > 0.0] = 1
ret[decision_values <= 0.0] = -1
return ret
```
#### Demo of the Hand-crafted Perceptron Alogrithm
- Generate 2 group of data, which is in normal distribution
- Trained by the `HandCraftedBinaryPerceptron`
```
# Generate Training data and plot it
np.random.seed(0)
params_train = []
param = GaussianParam()
param.mean = [-1, 2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
param = GaussianParam()
param.mean = [1, -2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
X_train, y_train = Gaussian.generate(params_train, label_type='positive_negative')
clf = HandCraftedBinaryPerceptron()
clf.fit(X_train, y_train)
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.classifierContour(X_train, y_train, clf)
plot.show()
```
### Demo 2.2.2. Perceptron of Scikit-Learn
----
The demo here shows how to generate 2 normal distributed groups of data, then classified by Scikit-learn built-in Perceptron algorithm.
#### Data Generation
Here we generate data as belows:
1. Generate 200 training data `X_train`, with corresponded label `y_train`
2. Generate 100 testing data `X_test`, with corresponded label `y_test`
```
# Generate Training data
np.random.seed(0)
params_train = []
param = GaussianParam()
param.mean = [-1, 2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
param = GaussianParam()
param.mean = [1, -2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
X_train, y_train = Gaussian.generate(params_train)
plot = Plot2D()
plot.title('Training data')
plot.scatter(X_train, y_train)
plot.show()
# Generate testing data
params_test = []
param = GaussianParam()
param.mean = [-1, 2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
param = GaussianParam()
param.mean = [1, -2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
X_test, y_test = Gaussian.generate(params_test)
plot = Plot2D()
plot.title('Test data')
plot.scatter(X_test, y_test)
plot.show()
```
#### Training and Predicting
Training a Perceptron model (which is built-in with Scikit-learn) with `X_train`, then predict the labels of `X_test`, with MCE computed.
```
clfPLA = sklearn.linear_model.Perceptron()
clfPLA.fit(X_train, y_train)
y_test_predict = clfPLA.predict(X_test)
print("Training data:")
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.classifierContour(X_train, y_train, clfPLA)
plot.show()
print("Testing data:")
print("MCE = %2.3f" % sklearn.metrics.zero_one_loss(y_test, y_test_predict))
plot = Plot2D()
plot.scatter(X_test, y_test)
plot.classifierContour(X_test, y_test, clfPLA)
plot.show()
```
## 2.3. Support Vector Machine (SVM)
### Demo 2.3.1. c-Support Vector Machine (c-SVC)
----
The demo here trains the model by SVM with `X_train`, then predict the testing data by `X_test`.
Notice that:
1. The number of support vectors is output via the attribute of `clfSVC.support_vectors_`
2. The support vectors are drawn via the wrapped function `mlfund.scatterSV`
```
clfSVC = sklearn.svm.SVC(C=1, kernel='linear')
clfSVC.fit(X_train, y_train)
y_test_predict = clfSVC.predict(X_test)
print("Training data:")
print("#SV = %d" % len(clfSVC.support_vectors_))
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.scatterCSVC(clfSVC)
plot.classifierContour(X_train, y_train, clfSVC)
plot.show()
print("Testing data:")
print("MCE = %2.3f" % sklearn.metrics.zero_one_loss(y_test, y_test_predict))
plot = Plot2D()
plot.scatter(X_test, y_test)
plot.classifierContour(X_test, y_test, clfSVC)
plot.show()
```
### Demo 2.3.2. c-Support Vector Machine (c-SVC) - A More Crowded Case
----
The demo here use the same settings of the c-SVM model, but learning from a more crowded data. One could adjust the value of `C` to observe the support vectors being relaxed by slack variables
* The larger `C`, the less support vectors (due to the more penalty of $\xi_i$), but the smaller margin size
* The smaller `C`, the more support vectors (due to the less penalty of $\xi_i$), but the larger margin size
```
# Generate Training data and plot it
np.random.seed(0)
params_train = []
param = GaussianParam()
param.mean = [-0.3, 2]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
param = GaussianParam()
param.mean = [0.3, -2]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
X_train, y_train = Gaussian.generate(params_train)
# Generate testing data
params_test = []
param = GaussianParam()
param.mean = [-0.3, 2]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
param = GaussianParam()
param.mean = [0.3, -2]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
X_test, y_test = Gaussian.generate(params_test)
clfSVC = sklearn.svm.SVC(C=1000, kernel='linear')
clfSVC.fit(X_train, y_train)
y_test_predict = clfSVC.predict(X_test)
print("Training data:")
print("#SV = %d" % len(clfSVC.support_vectors_))
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.scatterCSVC(clfSVC)
plot.classifierContour(X_train, y_train, clfSVC)
plot.show()
print("Testing data:")
print("MCE = %2.3f" % sklearn.metrics.zero_one_loss(y_test, y_test_predict))
plot = Plot2D()
plot.scatter(X_test, y_test)
plot.classifierContour(X_test, y_test, clfSVC)
plot.show()
```
| true |
code
| 0.710296 | null | null | null | null |
|
# Logistic Regression
(before Multinomial logistic regression)
We want to predict the probability of an input belonging to one of two classes.
---
## Study case :
Classify the zero and one digits from MNist dataset
### a) Dataset !
- Input: Images of size 28*28 where a one or two is present
- Output: 0 if the input is a 0, 1 otherwise
Lets... Load it, check it, quick, rewrite it !
```
import keras
import utils
reload(utils)
from utils import *
%pylab inline
# Get the input data (All the zeros and ones in the dataset)
(x, y), (x_, y_) = keras.datasets.mnist.load_data()
X = x[np.where(y<=1)]
Y = y[np.where(y<=1)]
Y = np.array(Y, dtype='int')
# Reshape the images to vectors
X = X.reshape(X.shape[0], -1)
X = X / 255. # Normalize inputs
# Vizualize the digits
pylab.rcParams['figure.figsize'] = (15, 4)
for i in range(12):
plt.subplot(2, 6, i+1)
plt.imshow(X[i].reshape([28, 28]))
plt.show()
```
---
### b) Classifier
We use a logistic regression,
(You may want to read this : http://cs229.stanford.edu/notes/cs229-notes1.pdf) :
The Cost is a function of the true output $Y$ and the prediction $p$, which itself is a function of a linear activation $s(x)$
- linear unit : $ s = (W^t \cdot X + b) $
- prediction : $ p(s) = \frac{1}{1 + e^{-s}} $
- Cost : $ C(y, p) = - y \ln(p) - (1-y)(\ln(1-p)) $
To use gradient descent, we have to compute the gradient of the cost with respect to w :
$ \frac{dC}{dW} $
We take adventage of the chain rule :
$ \frac{dC}{dW} = \frac{dC}{dp} \cdot \frac{dp}{ds} \cdot \frac{ds}{dw} $
---
We derive each terms :
\begin{align}
\frac{dC}{dp} &= - \frac{y}{p} - (-1) \cdot \frac{1-y}{1-p} \\
&= - \frac{y}{p} + \frac{1-y}{1-p} \\
&= \frac{-y + y.p + p - y.p}{p \cdot (1-p)} \\
&= \frac{-y+p}{p \cdot (1-p)}
\end{align}
---
\begin{align}
\frac{dp}{ds} &= - \frac{-e^{-s}}{1 + e^{-s}} \\
&= \frac{-e^{-s}}{1 + e^{-s}} \\
&= \frac{e^{-s} + 1 - 1}{(1 + e^{-s})^2} \\
&= \frac{e^{-s} + 1}{(1 + e^{-s})^2} - \frac{1}{(1 + e^{-s})^2} \\
&= \frac{1}{1 + e^{-s}} - \left(\frac{1}{1 + e^{-s}}\right)^2 \\
&= p - p^2 \\
&= p \cdot (1-p)
\end{align}
---
\begin{align}
\frac{ds}{dw} = x
\end{align}
---
All together, we have :
\begin{align}
\frac{dC}{dW} &= \frac{dC}{dp} \cdot \frac{dp}{ds} \cdot \frac{ds}{dw} \\
&= \frac{-y+p}{p \cdot (1-p)} \cdot p \cdot (1-p) \cdot x \\
&= (-y+p) \cdot x \\
&= (p-y) \cdot x
\end{align}
```
# Set-up the weights
W = np.random.random((784,))-.5
# Train
for _ in range(2):
acc = []
losses = []
for x,y in zip(X, Y):
pred = linear(x, W)
pred = sigmoid(pred)
acc.append(round(pred)==y)
loss = nll(pred, y)
losses.append(loss)
update = (pred - y) * x
W = W - .02 * update
print sum(acc) / float(len(acc)), sum(losses)/len(losses)
gen = batch_generator(1)
valid_gen = batch_generator(100)
X_valid, Y_valid = valid_gen.next()
W = np.random.normal(size=IMG_SIZE * IMG_SIZE)
b = np.random.normal()
log = lambda x: np.log(x + 1e-8)
exp = lambda x: np.exp(x + 1e-8)
alph_ = 1.6732632423543772848170429916717
lambd_ = 1.0507009873554804934193349852946
linear = lambda x: np.dot(W.T, x) + b
sigm = lambda x: 1 / (1 + exp(-x))
elu = lambda x, alpha: np.maximum(x, alpha * (exp(x) - 1))
selu = lambda x: lambd_ * elu(x, alph_)
nll = lambda p, y: - y * log(p) - (1 - y) * log(1 - p)
def prob(X):
return sigm(linear(X))
def loss(X, y):
# loss = - y .ln( sigm(WT.X+b))
# -(1-y).ln(1-sigm(WT.X+b))
p = prob(X)
return nll(p, y)
def gradient_loss(X, y):
# d.loss / d.W = (p-y).X
p = prob(X)
return ((p - y) * X)
def evaluate():
probs = np.array(map(prob, X_valid))
loss = nll(probs, Y_valid)
loss = loss.mean()
probs = map(round, probs)
accuracy = sum(probs == Y_valid)
return accuracy, loss
losses = []
alpha = 0.001
for epoch in range(60):
_loss = 0
alpha *= 0.95
for _ in range(2000):
X, Y = gen.next()
X, Y = X[0], Y[0]
_loss += loss(X, Y)
W = W - alpha * gradient_loss(X, Y)
losses.append(_loss / 2000)
print epoch, losses[-1], evaluate(), alpha
plt.plot(losses)
plt.show()
def prob(X):
return sigm(selu(linear(X)))
def loss(X, y):
# loss = - y .ln( sigm(WT.X+b))
# -(1-y).ln(1-sigm(WT.X+b))
p = prob(X)
return nll(p, y)
def gradient_loss(X, y):
# d.loss / d.W = (p-y).X
p = prob(X)
if linear(X) <= 0:
return X * (p - y) * (p + lambd_ * lambd_)
else:
return X * (p - y) * lambd_
def evaluate():
probs = np.array(map(prob, X_valid))
loss = nll(probs, Y_valid)
loss = loss.mean()
probs = map(round, probs)
accuracy = sum(probs == Y_valid)
return accuracy, loss
losses = []
alpha = 0.001
for epoch in range(30):
_loss = 0
alpha *= 0.95
for _ in range(2000):
X, Y = gen.next()
X, Y = X[0], Y[0]
_loss += loss(X, Y)
W = W - alpha * gradient_loss(X, Y)
losses.append(_loss / 2000)
print epoch, losses[-1], evaluate(), alpha
plt.plot(losses)
plt.show()
```
| true |
code
| 0.661117 | null | null | null | null |
|
```
import numpy as np
```
**Module** is an abstract class which defines fundamental methods necessary for a training a neural network. You do not need to change anything here, just read the comments.
```
class Module(object):
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
"""
Basically, you can think of a module as of a something (black box)
which can process `input` data and produce `ouput` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
More, it should be able to differentiate it if is a part of chain (chain rule).
The latter implies there is a gradient from previous step of a chain rule.
gradInput = module.backward(input, gradOutput)
"""
def forward(self, input):
"""
Takes an input object, and computes the corresponding output of the module.
"""
return self.updateOutput(input)
def backward(self, input, gradOutput):
"""
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
"""
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
"""
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
"""
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
"""
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
"""
pass
def zeroGradParameters(self):
"""
Zeroes `gradParams` variable if the module has params.
"""
pass
def getParameters(self):
"""
Returns a list with its parameters.
If the module does not have parameters return empty list.
"""
return []
def getGradParameters(self):
"""
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
"""
return []
def training(self):
"""
Sets training mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
"""
self.training = True
def evaluate(self):
"""
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
"""
self.training = False
def __repr__(self):
"""
Pretty printing. Should be overrided in every module if you want
to have readable description.
"""
return "Module"
```
# Sequential container
**Define** a forward and backward pass procedures.
```
class Sequential(Module):
"""
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
"""
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
def add(self, module):
"""
Adds a module to the container.
"""
self.modules.append(module)
def updateOutput(self, input):
"""
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
"""
self.y = [input]
for i in range(len(self.modules)):
self.y.append(self.modules[i].forward(self.y[i]))
self.output = self.y[-1]
return self.output
def backward(self, input, gradOutput):
"""
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
!!!
To ech module you need to provide the input, module saw while forward pass,
it is used while computing gradients.
Make sure that the input for `i-th` layer the output of `module[i]` (just the same input as in forward pass)
and NOT `input` to this Sequential module.
!!!
"""
self.g = [gradOutput]
if not (input == self.y[0]).all():
self.updateOutput(input)
for i in range(len(self.modules)):
self.g.append(self.modules[-i - 1].backward(self.y[-i - 2],self.g[i]))
self.gradInput = self.g[-1]
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
"""
Should gather all parameters in a list.
"""
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
"""
Should gather all gradients w.r.t parameters in a list.
"""
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
```
# Layers
- input: **`batch_size x n_feats1`**
- output: **`batch_size x n_feats2`**
```
np.sum([[0,1,2],[3,4,5]], axis=0)
class Linear(Module):
"""
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_feature).
"""
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
self.output = np.transpose(self.W.dot(input.transpose()) + self.b.reshape((len(self.b), -1)))
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = gradOutput.dot(self.W)
return self.gradInput
def accGradParameters(self, input, gradOutput):
self.gradW = input.transpose().dot(gradOutput).transpose()
self.gradb = np.sum(gradOutput, axis=0)
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
```
This one is probably the hardest but as others only takes 5 lines of code in total.
- input: **`batch_size x n_feats`**
- output: **`batch_size x n_feats`**
```
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
probs = np.exp(self.output)
self.output = probs/np.sum(probs, axis=1, keepdims=True)
return self.output
def updateGradInput(self, input, gradOutput):
#YEAAAHHH! TENSORS!!!
input = np.subtract(input, input.max(axis=1, keepdims=True))
probs = np.exp(input)
probs = probs/np.sum(probs, axis=1, keepdims=True)
probs_reshaped = probs.reshape(input.shape[0], input.shape[1], 1)
self.gradInput = np.einsum('lji, lj->lji', np.einsum('i, jk', np.ones(input.shape[0]), np.eye(input.shape[1])), probs) - \
np.einsum('...j,...k', probs_reshaped, np.einsum('...kj', probs_reshaped)).reshape(input.shape[0], input.shape[1], input.shape[1])
self.gradInput = np.einsum('...j,...jk', gradOutput, self.gradInput).reshape(input.shape[0], input.shape[1])
return self.gradInput
def __repr__(self):
return "SoftMax"
```
Implement [**dropout**](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf). The idea and implementation is really simple: just multimply the input by $Bernoulli(p)$ mask.
This is a very cool regularizer. In fact, when you see your net is overfitting try to add more dropout.
While training (`self.training == True`) it should sample a mask on each iteration (for every batch). When testing this module should implement identity transform i.e. `self.output = input`.
- input: **`batch_size x n_feats`**
- output: **`batch_size x n_feats`**
```
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
if self.training == True:
self.mask = np.random.binomial(1, self.p, size=input.shape)
self.output=np.multiply(self.mask, input)
else:
self.output = input
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(self.mask, gradOutput)
return self.gradInput
def __repr__(self):
return "Dropout"
```
# Activation functions
Here's the complete example for the **Rectified Linear Unit** non-linearity (aka **ReLU**):
```
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput , input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
```
Implement [**Leaky Rectified Linear Unit**](http://en.wikipedia.org/wiki%2FRectifier_%28neural_networks%29%23Leaky_ReLUs). Expriment with slope.
```
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
self.output = np.maximum(input, self.slope*input)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput, input > 0) + \
np.multiply(gradOutput, input <= 0) * self.slope
return self.gradInput
def __repr__(self):
return "LeakyReLU"
```
# Criterions
Criterions are used to score the models answers.
```
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
"""
Given an input and a target, compute the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateOutput`.
"""
return self.updateOutput(input, target)
def backward(self, input, target):
"""
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateGradInput`.
"""
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
"""
Function to override.
"""
return self.output
def updateGradInput(self, input, target):
"""
Function to override.
"""
return self.gradInput
def __repr__(self):
"""
Pretty printing. Should be overrided in every module if you want
to have readable description.
"""
return "Criterion"
```
The **MSECriterion**, which is basic L2 norm usually used for regression, is implemented here for you.
```
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
self.output = np.sum(np.power(input - target,2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = (input - target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
```
You task is to implement the **ClassNLLCriterion**. It should implement [multiclass log loss](http://scikit-learn.org/stable/modules/model_evaluation.html#log-loss). Nevertheless there is a sum over `y` (target) in that formula,
remember that targets are one-hot encoded. This fact simplifies the computations a lot. Note, that criterions are the only places, where you divide by batch size.
```
class ClassNLLCriterion(Criterion):
def __init__(self):
a = super(ClassNLLCriterion, self)
super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
eps = 1e-15
input_clamp = np.clip(input, eps, 1 - eps)
self.output = -1 * np.einsum('ik,ik', target, np.log(input_clamp)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )
self.gradInput = -1 * np.einsum('ik,ik->ik', target, 1.0/(input_clamp)) / input.shape[0]
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
```
| true |
code
| 0.748174 | null | null | null | null |
|
```
import tensorflow as tf
from tensorflow import keras
print( 'Tensorflow : ',tf.__version__)
print( ' |-> Keras : ',keras.__version__)
```
# Text generation with LSTM
This notebook contains the code samples found in Chapter 8, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
----
[...]
## Implementing character-level LSTM text generation
Let's put these ideas in practice in a Keras implementation. The first thing we need is a lot of text data that we can use to learn a
language model. You could use any sufficiently large text file or set of text files -- Wikipedia, the Lord of the Rings, etc. In this
example we will use some of the writings of Nietzsche, the late-19th century German philosopher (translated to English). The language model
we will learn will thus be specifically a model of Nietzsche's writing style and topics of choice, rather than a more generic model of the
English language.
## Preparing the data
Let's start by downloading the corpus and converting it to lowercase:
```
#import keras
import numpy as np
path = keras.utils.get_file(
'nietzsche.txt',
origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('Corpus length:', len(text))
```
Next, we will extract partially-overlapping sequences of length `maxlen`, one-hot encode them and pack them in a 3D Numpy array `x` of
shape `(sequences, maxlen, unique_characters)`. Simultaneously, we prepare a array `y` containing the corresponding targets: the one-hot
encoded characters that come right after each extracted sequence.
```
# Length of extracted character sequences
maxlen = 60
# We sample a new sequence every `step` characters
step = 3
# This holds our extracted sequences
sentences = []
# This holds the targets (the follow-up characters)
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('Number of sequences:', len(sentences))
# List of unique characters in the corpus
chars = sorted(list(set(text)))
print('Unique characters:', len(chars))
# Dictionary mapping unique characters to their index in `chars`
char_indices = dict((char, chars.index(char)) for char in chars)
# Next, one-hot encode the characters into binary arrays.
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
```
## Building the network
Our network is a single `LSTM` layer followed by a `Dense` classifier and softmax over all possible characters. But let us note that
recurrent neural networks are not the only way to do sequence data generation; 1D convnets also have proven extremely successful at it in
recent times.
```
#from keras import layers
model = keras.models.Sequential()
model.add(keras.layers.LSTM(128, input_shape=(maxlen, len(chars))))
model.add(keras.layers.Dense(len(chars), activation='softmax'))
```
Since our targets are one-hot encoded, we will use `categorical_crossentropy` as the loss to train the model:
```
optimizer = keras.optimizers.RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
```
## Training the language model and sampling from it
Given a trained model and a seed text snippet, we generate new text by repeatedly:
* 1) Drawing from the model a probability distribution over the next character given the text available so far
* 2) Reweighting the distribution to a certain "temperature"
* 3) Sampling the next character at random according to the reweighted distribution
* 4) Adding the new character at the end of the available text
This is the code we use to reweight the original probability distribution coming out of the model,
and draw a character index from it (the "sampling function"):
```
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
```
Finally, this is the loop where we repeatedly train and generated text. We start generating text using a range of different temperatures
after every epoch. This allows us to see how the generated text evolves as the model starts converging, as well as the impact of
temperature in the sampling strategy.
```
import random
import sys
for epoch in range(1, 60):
print('epoch', epoch)
# Fit the model for 1 epoch on the available training data
model.fit(x, y,
batch_size=128,
epochs=1)
# Select a text seed at random
start_index = random.randint(0, len(text) - maxlen - 1)
generated_text = text[start_index: start_index + maxlen]
print('--- Generating with seed: "' + generated_text + '"')
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('------ temperature:', temperature)
sys.stdout.write(generated_text)
# We generate 400 characters
for i in range(400):
sampled = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(generated_text):
sampled[0, t, char_indices[char]] = 1.
preds = model.predict(sampled, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = chars[next_index]
generated_text += next_char
generated_text = generated_text[1:]
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
As you can see, a low temperature results in extremely repetitive and predictable text, but where local structure is highly realistic: in
particular, all words (a word being a local pattern of characters) are real English words. With higher temperatures, the generated text
becomes more interesting, surprising, even creative; it may sometimes invent completely new words that sound somewhat plausible (such as
"eterned" or "troveration"). With a high temperature, the local structure starts breaking down and most words look like semi-random strings
of characters. Without a doubt, here 0.5 is the most interesting temperature for text generation in this specific setup. Always experiment
with multiple sampling strategies! A clever balance between learned structure and randomness is what makes generation interesting.
Note that by training a bigger model, longer, on more data, you can achieve generated samples that will look much more coherent and
realistic than ours. But of course, don't expect to ever generate any meaningful text, other than by random chance: all we are doing is
sampling data from a statistical model of which characters come after which characters. Language is a communication channel, and there is
a distinction between what communications are about, and the statistical structure of the messages in which communications are encoded. To
evidence this distinction, here is a thought experiment: what if human language did a better job at compressing communications, much like
our computers do with most of our digital communications? Then language would be no less meaningful, yet it would lack any intrinsic
statistical structure, thus making it impossible to learn a language model like we just did.
## Take aways
* We can generate discrete sequence data by training a model to predict the next tokens(s) given previous tokens.
* In the case of text, such a model is called a "language model" and could be based on either words or characters.
* Sampling the next token requires balance between adhering to what the model judges likely, and introducing randomness.
* One way to handle this is the notion of _softmax temperature_. Always experiment with different temperatures to find the "right" one.
| true |
code
| 0.605507 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/amir1m/learning-ml/blob/master/FCML_CoinToss.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from scipy.special import comb
from scipy.stats import beta
import matplotlib.pyplot as plt
import numpy as np
import time
def get_binomial_prob(N,y,r):
return (comb(N, y) * (r ** y) * (1 - r) ** (N - y) )
def get_binomial_prob_dist(N = 10,r = 0.5):
prob_y = []
for y in range(0,N+1,1):
prob_y.append(get_binomial_prob(N, y, r))
return prob_y
def plot_dist(prob_y):
N = len(prob_y)
plt.bar(range(0,N), prob_y)
plt.xticks(ticks=range(0,N))
plt.xlabel("y")
plt.ylabel("P(y)")
plt.plot(range(0,N), prob_y)
plot_dist(get_binomial_prob_dist(N = 10, r = 0.5))
plot_dist(get_binomial_prob_dist(N = 10, r = 0.9))
```
## Bayesian
```
N = 10
Y_N = 6
p_yn_given_r = []
for r in np.arange(0,1, 0.1):
p_yn_given_r.append(get_binomial_prob(N, Y_N, r))
plt.plot(np.arange(0,1, 0.1), p_yn_given_r, 'b')
N = 100
Y_N = 70
p_yn_given_r = []
for r in np.arange(0, 1, 0.1):
p_yn_given_r.append(get_binomial_prob(N, Y_N, r))
print(N, Y_N, r)
print(get_binomial_prob(N, Y_N, r))
plt.plot(np.arange(0,1, 0.1), p_yn_given_r, 'r')
```
## Three Scenarios
### No prior knowledge (3.3.1)
```
plt.plot(beta.pdf(np.linspace(0.0, 1.0, 100),1,1))
plt.plot(beta.pdf(np.linspace(0.0, 1.0, num = 10), 2,1)) # For one Toss as Head
tosses = [0,1,2,3,4,10]
heads = [0,1,1,2,3,6]
a = 1.0
b = 1.0
prob_yn_r = []
r_values = np.linspace(0.0,1.0, 10)
expectations = []
variance = []
for i in range(0,6):
N = tosses[i]
yN = heads[i]
delta = yN + a
gamma = N - yN + b
expectations.append(delta / (delta + gamma))
variance.append((delta * gamma) / ((delta + gamma)**2 * (delta + gamma + 1)))
print("Toss %d: \n\t Heads = %f, delta = %f, gamma = %f, expectations = %f, variance = %f" %(N,yN, delta, gamma,expectations[i],variance[i]))
prob_yn_r.append(beta.pdf(r_values, delta, gamma ))
figure, axes = plt.subplots(nrows=3, ncols=2)
i = 0
for row in range(3):
for col in range(2):
axes[row, col].plot(r_values, prob_yn_r[i])
axes[row, col].set_title('Tosses=%d,Heads:%d'%(tosses[i], heads[i]))
i += 1
figure.tight_layout(pad=1.0)
figure, axes = plt.subplots(nrows=1, ncols=2)
axes[0].plot(tosses,expectations)
axes[0].set_title("No. of tosses Vs Expectations")
axes[0].set_xlabel("No. of Coin Tosses")
axes[0].set_ylabel("Expectations")
axes[1].plot(tosses,variance)
axes[1].set_title("No. of tosses Vs Varince")
axes[1].set_xlabel("No. of Coin Tosses")
axes[1].set_ylabel("Expectations")
figure.tight_layout(pad=3.0)
```
| true |
code
| 0.472257 | null | null | null | null |
|
# 08. Pseudo-Random Numbers, Simulating from Some Discrete and Continuous Random Variables
## [Inference Theory 1](https://lamastex.github.io/scalable-data-science/infty/2018/01/)
©2018 Raazesh Sainudiin. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- The $Uniform(0,1)$ RV
- The $Bernoulli(\theta)$ RV
- Simulating from the $Bernoulli(\theta)$ RV
- The Equi-Probable $de\,Moivre(k)$ RV
- Simulating from the Equi-Probable $de\,Moivre(k)$ RV
- The $Uniform(\theta_1, \theta_2)$ RV
- Simulating from the $Uniform(\theta_1, \theta_2)$ RV
- The $Exponential(\lambda)$ RV
- Simulating from the $Exponential(\lambda)$ RV
- The standard $Cauchy$ RV
- Simulating from the standard $Cauchy$ RV
- Investigating running means
- Replicable samples
- A simple simulation
In the last notebook, we started to look at how we can produce realisations from the most elementary $Uniform(0,1)$ random variable.
i.e., how can we produce samples $(x_1, x_2, \ldots, x_n)$ from $X_1, X_2, \ldots, X_n$ $\overset{IID}{\thicksim}$ $Uniform(0,1)$?
What is SageMath doing when we ask for random()?
```
random()
```
We looked at how Modular arithmetic and number theory gives us pseudo-random number generators.
We used linear congruential generators (LCG) as simple pseudo-random number generators.
Remember that "pseudo-random" means that the numbers are not really random. We saw that some linear congruential generators (LCG) have much shorter, more predictable, patterns than others and we learned what makes a good LCG.
We introduced the pseudo-random number generator (PRNG) called the Mersenne Twister that we will use for simulation purposes in this course. It is based on more sophisticated theory than that of LCG but the basic principles of recurrence relations are the same.
# The $Uniform(0,1)$ Random Variable
Recall that the $Uniform(0,1)$ random variable is the fundamental model as we can transform it to any other random variable, random vector or random structure. The PDF $f$ and DF $F$ of $X \sim Uniform(0,1)$ are:
$f(x) = \begin{cases} 0 & \text{if} \ x \notin [0,1] \\ 1 & \text{if} \ x \in [0,1] \end{cases}$
$F(x) = \begin{cases} 0 & \text{if} \ x < 0 \\ 1 & \text{if} \ x > 1 \\ x & \text{if} \ x \in [0,1] \end{cases}$
We use the Mersenne twister pseudo-random number generator to mimic independent and identically distributed draws from the $uniform(0,1)$ RV.
In Sage, we use the python random module to generate pseudo-random numbers for us. (We have already used it: remember randint?)
random() will give us one simulation from the $Uniform(0,1)$ RV:
```
random()
```
If we want a whole simulated sample we can use a list comprehension. We will be using this technique frequently so make sure you understand what is going on. "for i in range(3)" is acting like a counter to give us 3 simulated values in the list we are making
```
[random() for i in range(3)]
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
```
If we do this again, we will get a different sample:
```
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
```
Often is it useful to be able to replicate the same random sample. For example, if we were writing some code to do some simulations using samples from a PRNG, and we "improved" the way that we were doing it, how would we want to test our improvement? If we could replicate the same samples then we could show that our new code was equivalent to our old code, just more efficient.
Remember when we were using the LCGs, and we could set the seed $x_0$? More sophisticated PRNGs like the Mersenne Twister also have a seed. By setting this seed to a specified value we can make sure that we can replicate samples.
```
?set_random_seed
set_random_seed(256526)
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
initial_seed()
```
Now we can replicate the same sample again by setting the seed to the same value:
```
set_random_seed(256526)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
set_random_seed(2676676766)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
```
We can compare some samples visually by plotting them:
```
set_random_seed(256526)
listOfUniformSamples = [(i,random()) for i in range(100)]
plotsSeed1 = points(listOfUniformSamples)
t1 = text('Seed 1 = 256626', (60,1.2), rgbcolor='blue',fontsize=10)
set_random_seed(2676676766)
plotsSeed2 = points([(i,random()) for i in range(100)],rgbcolor="red")
t2 = text('Seed 2 = 2676676766', (60,1.2), rgbcolor='red',fontsize=10)
bothSeeds = plotsSeed1 + plotsSeed2
t31 = text('Seed 1 and', (30,1.2), rgbcolor='blue',fontsize=10)
t32 = text('Seed 2', (65,1.2), rgbcolor='red',fontsize=10)
show(graphics_array( (plotsSeed1+t1,plotsSeed2+t2, bothSeeds+t31+t32)),figsize=[9,3])
```
### YouTry
Try looking at the more advanced documentation and play a bit.
```
?sage.misc.randstate
```
(end of You Try)
---
---
### Question:
What can we do with samples from a $Uniform(0,1)$ RV? Why bother?
### Answer:
We can use them to sample or simulate from other, more complex, random variables.
# The $Bernoulli(\theta)$ Random Variable
The $Bernoulli(\theta)$ RV $X$ with PMF $f(x;\theta)$ and DF $F(x;\theta)$ parameterised by some real $\theta\in [0,1]$ is a discrete random variable with only two possible outcomes.
$f(x;\theta)= \theta^x (1-\theta)^{1-x} \mathbf{1}_{\{0,1\}}(x) =
\begin{cases}
\theta & \text{if} \ x=1,\\
1-\theta & \text{if} \ x=0,\\
0 & \text{otherwise}
\end{cases}$
$F(x;\theta) =
\begin{cases}
1 & \text{if} \ 1 \leq x,\\
1-\theta & \text{if} \ 0 \leq x < 1,\\
0 & \text{otherwise}
\end{cases}$
Here are some functions for the PMF and DF for a $Bernoulli$ RV along with various useful functions for us in the sequel. Let's take a quick look at them.
```
def bernoulliPMF(x, theta):
'''Probability mass function for Bernoulli(theta).
Param x is the value to find the Bernoulli probability mass of.
Param theta is the theta parameterising this Bernoulli RV.'''
retValue = 0
if x == 1:
retValue = theta
elif x == 0:
retValue = 1 - theta
return retValue
def bernoulliCDF(x, theta):
'''DF for Bernoulli(theta).
Param x is the value to find the Bernoulli cumulative density function of.
Param theta is the theta parameterising this Bernoulli RV.'''
retValue = 0
if x >= 1:
retValue = 1
elif x >= 0:
retValue = 1 - theta
# in the case where x < 0, retValue is the default of 0
return retValue
# PFM plot
def pmfPlot(outcomes, pmf_values):
'''Returns a pmf plot for a discrete distribution.'''
pmf = points(zip(outcomes,pmf_values), rgbcolor="blue", pointsize='20')
for i in range(len(outcomes)):
pmf += line([(outcomes[i], 0),(outcomes[i], pmf_values[i])], rgbcolor="blue", linestyle=":")
# padding
pmf += point((0,1), rgbcolor="black", pointsize="0")
return pmf
# CDF plot
def cdfPlot(outcomes, cdf_values):
'''Returns a DF plot for a discrete distribution.'''
cdf_pairs = zip(outcomes, cdf_values)
cdf = point(cdf_pairs, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(cdf_pairs)):
x, kheight = cdf_pairs[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = cdf_pairs[k-1] # unpack previous tuple
cdf += line([(previous_x, previous_height),(x, previous_height)], rgbcolor="grey")
cdf += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
cdf += line([(x, previous_height),(x, kheight)], rgbcolor="blue", linestyle=":")
# padding
max_index = len(outcomes)-1
cdf += line([(outcomes[0]-0.2, 0),(outcomes[0], 0)], rgbcolor="grey")
cdf += line([(outcomes[max_index],cdf_values[max_index]),(outcomes[max_index]+0.2, cdf_values[max_index])], rgbcolor="grey")
return cdf
def makeFreqDictHidden(myDataList):
'''Make a frequency mapping out of a list of data.
Param myDataList, a list of data.
Return a dictionary mapping each data value from min to max in steps of 1 to its frequency count.'''
freqDict = {} # start with an empty dictionary
sortedMyDataList = sorted(myDataList)
for k in sortedMyDataList:
freqDict[k] = myDataList.count(k)
return freqDict # return the dictionary created
def makeEMFHidden(myDataList):
'''Make an empirical mass function from a data list.
Param myDataList, list of data to make emf from.
Return list of tuples comprising (data value, relative frequency) ordered by data value.'''
freqs = makeFreqDictHidden(myDataList) # make the frequency counts mapping
totalCounts = sum(freqs.values())
relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()] # use a list comprehension
numRelFreqPairs = zip(freqs.keys(), relFreqs) # zip the keys and relative frequencies together
numRelFreqPairs.sort() # sort the list of tuples
return numRelFreqPairs
from pylab import array
def makeEDFHidden(myDataList):
'''Make an empirical distribution function from a data list.
Param myDataList, list of data to make emf from.
Return list of tuples comprising (data value, cumulative relative frequency) ordered by data value.'''
freqs = makeFreqDictHidden(myDataList) # make the frequency counts mapping
totalCounts = sum(freqs.values())
relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()] # use a list comprehension
relFreqsArray = array(relFreqs)
cumFreqs = list(relFreqsArray.cumsum())
numCumFreqPairs = zip(freqs.keys(), cumFreqs) # zip the keys and culm relative frequencies together
numCumFreqPairs.sort() # sort the list of tuples
return numCumFreqPairs
# EPMF plot
def epmfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
epmf_pairs = makeEMFHidden(samples)
epmf = point(epmf_pairs, rgbcolor = "blue", pointsize="20")
for k in epmf_pairs: # for each tuple in the list
kkey, kheight = k # unpack tuple
epmf += line([(kkey, 0),(kkey, kheight)], rgbcolor="blue", linestyle=":")
# padding
epmf += point((0,1), rgbcolor="black", pointsize="0")
return epmf
# ECDF plot
def ecdfPlot(samples):
'''Returns an empirical probability mass function plot from samples data.'''
ecdf_pairs = makeEDFHidden(samples)
ecdf = point(ecdf_pairs, rgbcolor = "red", faceted = false, pointsize="20")
for k in range(len(ecdf_pairs)):
x, kheight = ecdf_pairs[k] # unpack tuple
previous_x = 0
previous_height = 0
if k > 0:
previous_x, previous_height = ecdf_pairs[k-1] # unpack previous tuple
ecdf += line([(previous_x, previous_height),(x, previous_height)], rgbcolor="grey")
ecdf += points((x, previous_height),rgbcolor = "white", faceted = true, pointsize="20")
ecdf += line([(x, previous_height),(x, kheight)], rgbcolor="blue", linestyle=":")
# padding
ecdf += line([(ecdf_pairs[0][0]-0.2, 0),(ecdf_pairs[0][0], 0)], rgbcolor="grey")
max_index = len(ecdf_pairs)-1
ecdf += line([(ecdf_pairs[max_index][0], ecdf_pairs[max_index][1]),(ecdf_pairs[max_index][0]+0.2, ecdf_pairs[max_index][1])],rgbcolor="grey")
return ecdf
```
We can see the effect of varying $\theta$ interactively:
```
@interact
def _(theta=(0.5)):
'''Interactive function to plot the bernoulli pmf and cdf.'''
if theta <=1 and theta >= 0:
outcomes = (0, 1) # define the bernoulli outcomes
print "Bernoulli (", RR(theta).n(digits=2), ") pmf and cdf"
# pmf plot
pmf_values = [bernoulliPMF(x, theta) for x in outcomes]
pmf = pmfPlot(outcomes, pmf_values) # this is one of our own, hidden, functions
# cdf plot
cdf_values = [bernoulliCDF(x, theta) for x in outcomes]
cdf = cdfPlot(outcomes, cdf_values) # this is one of our own, hidden, functions
show(graphics_array([pmf, cdf]),figsize=[8,3])
else:
print "0 <= theta <= 1"
```
Don't worry about how these plots are done: you are not expected to be able to understand all of these details now.
Just use them to see the effect of varying $\theta$.
## Simulating a sample from the $Bernoulli(\theta)$ RV
We can simulate a sample from a $Bernoulli$ distribution by transforming input from a $Uniform(0,1)$ distribution using the floor() function in Sage. In maths, $\lfloor x \rfloor$, the 'floor of $x$' is the largest integer that is smaller than or equal to $x$. For example, $\lfloor 3.8 \rfloor = 3$.
```
z=3.8
floor(z)
```
Using floor, we can do inversion sampling from the $Bernoulli(\theta)$ RV using the the $Uniform(0,1)$ random variable that we said is the fundamental model.
We will introduce inversion sampling more formally later. In general, inversion sampling means using the inverse of the CDF $F$, $F^{[-1]}$, to transform input from a $Uniform(0,1)$ distribution.
To simulate from the $Bernoulli(\theta)$, we can use the following algorithm:
### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG, $\qquad \qquad \text{where, } \sim$ means "sample from"
- $\theta$, the parameter
### Output:
$x \thicksim Bernoulli(\theta)$
### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor u + \theta \rfloor$
- Return $x$
We can illustrate this with SageMath:
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
x = floor(u + theta)
x
```
To make a number of simulations, we can use list comprehensions again:
```
theta = 0.5
n = 20
randomUs = [random() for i in range(n)]
simulatedBs = [floor(u + theta) for u in randomUs]
simulatedBs
```
To make modular reusable code we can package up what we have done as functions.
The function `bernoulliFInverse(u, theta)` codes the inverse of the CDF of a Bernoulli distribution parameterised by `theta`. The function `bernoulliSample(n, theta)` uses `bernoulliFInverse(...)` in a list comprehension to simulate n samples from a Bernoulli distribution parameterised by theta, i.e., the distribution of our $Bernoulli(\theta)$ RV.
```
def bernoulliFInverse(u, theta):
'''A function to evaluate the inverse CDF of a bernoulli.
Param u is the value to evaluate the inverse CDF at.
Param theta is the distribution parameters.
Returns inverse CDF under theta evaluated at u'''
return floor(u + theta)
def bernoulliSample(n, theta):
'''A function to simulate samples from a bernoulli distribution.
Param n is the number of samples to simulate.
Param theta is the bernoulli distribution parameter.
Returns a simulated Bernoulli sample as a list'''
us = [random() for i in range(n)]
return [bernoulliFInverse(u, theta) for u in us] # use bernoulliFInverse in a list comprehension
```
Note that we are using a list comprehension and the built-in SageMath `random()` function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of n. Inside the body of the function we assign this list to a variable named `us` (i.e., u plural). We then use another list comprehension to make our simulated sample. This list comprehension works by calling our function `bernoulliFInverse(...)` and passing in values for theta together with each u in us in turn.
Let's try a small number of samples:
```
theta = 0.2
n = 10
samples = bernoulliSample(n, theta)
samples
```
Now lets explore the effect of interactively varying n and $\theta$:
```
@interact
def _(theta=(0.5), n=(10,(0..1000))):
'''Interactive function to plot samples from bernoulli distribution.'''
if theta >= 0 and theta <= 1:
print "epmf and ecdf for ", n, " samples from Bernoulli (", theta, ")"
samples = bernoulliSample(n, theta)
# epmf plot
epmf = epmfPlot(samples) # this is one of our hidden functions
# ecdf plot
ecdf = ecdfPlot(samples) # this is one of our hidden functions
show(graphics_array([epmf, ecdf]),figsize=[8,3])
else:
print "0 <= theta <=1, n>0"
```
You can vary $\theta$ and $n$ on the interactive plot. You should be able to see that as $n$ increases, the empirical plots should get closer to the theoretical $f$ and $F$.
### YouTry
Check that you understand what `floor` is doing. We have put some extra print statements into our demonstration of floor so that you can see what is going on in each step. Try evaluating this cell several times so that you see what happens with different values of `u`.
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
print "u is", u
print "u + theta is", (u + theta)
print "floor(u + theta) is", floor(u + theta)
```
In the cell below we use floor to get 1's and 0's from the pseudo-random u's given by random(). It is effectively doing exactly the same thing as the functions above that we use to simulate a specified number of $Bernoulli(\theta)$ RVs, but the why that it is written may be easier to understand. If `floor` is doing what we want it to, then when `n` is sufficiently large, we'd expect our proportion of `1`s to be close to `theta` (remember Kolmogorov's axiomatic motivations for probability!). Try changing the value assigned to the variable `theta` and re-evaluting the cell to check this.
```
theta = 0.7 # theta must be such that 0 <= theta <= 1
listFloorResults = [] # an empty list to store results in
n = 100000 # how many iterations to do
for i in range(n): # a for loop to do something n times
u = random() # generate u
x = floor(u + theta) # use floor
listFloorResults.append(x) # add x to the list of results
listFloorResults.count(1)*1.0/len(listFloorResults) # proportion of 1s in the results
```
# The equi-probable $de~Moivre(\theta)$ Random Variable
The $de~Moivre(\theta_1,\theta_2,\ldots,\theta_k)$ RV is the natural generalisation of the $Bernoulli (\theta)$ RV to more than two outcomes. Take a die (i.e. one of a pair of dice): there are 6 possible outcomes from tossing a die if the die is a normal six-sided one (the outcome is which face is the on the top). To start with we can allow the possibility that the different faces could be loaded so that they have different probabilities of being the face on the top if we throw the die. In this case, k=6 and the parameters $\theta_1$, $\theta_2$, ...$\theta_6$ specify how the die is loaded, and the number on the upper-most face if the die is tossed is a $de\,Moivre$ random variable parameterised by $\theta_1,\theta_2,\ldots,\theta_6$.
If $\theta_1=\theta_2=\ldots=\theta_6= \frac{1}{6}$ then we have a fair die.
Here are some functions for the equi-probable $de\, Moivre$ PMF and CDF where we code the possible outcomes as the numbers on the faces of a k-sided die, i.e, 1,2,...k.
```
def deMoivrePMF(x, k):
'''Probability mass function for equi-probable de Moivre(k).
Param x is the value to evaluate the deMoirve pmf at.
Param k is the k parameter for an equi-probable deMoivre.
Returns the evaluation of the deMoivre(k) pmf at x.'''
if (int(x)==x) & (x > 0) & (x <= k):
return 1.0/k
else:
return 0
def deMoivreCDF(x, k):
'''DF for equi-probable de Moivre(k).
Param x is the value to evaluate the deMoirve cdf at.
Param k is the k parameter for an equi-probable deMoivre.
Returns the evaluation of the deMoivre(k) cdf at x.'''
return 1.0*x/k
@interact
def _(k=(6)):
'''Interactive function to plot the de Moivre pmf and cdf.'''
if (int(k) == k) and (k >= 1):
outcomes = range(1,k+1,1) # define the outcomes
pmf_values = [deMoivrePMF(x, k) for x in outcomes]
print "equi-probable de Moivre (", k, ") pmf and cdf"
# pmf plot
pmf = pmfPlot(outcomes, pmf_values) # this is one of our hidden functions
# cdf plot
cdf_values = [deMoivreCDF(x, k) for x in outcomes]
cdf = cdfPlot(outcomes, cdf_values) # this is one of our hidden functions
show(graphics_array([pmf, cdf]),figsize=[8,3])
else:
print "k must be an integer, k>0"
```
### YouTry
Try changing the value of k in the above interact.
## Simulating a sample from the equi-probable $de\,Moivre(k)$ random variable
We use floor ($\lfloor \, \rfloor$) again for simulating from the equi-probable $de \, Moivre(k)$ RV, but because we are defining our outcomes as 1, 2, ... k, we just add 1 to the result.
```
k = 6
u = random()
x = floor(u*k)+1
x
```
To simulate from the equi-probable $de\,Moivre(k)$, we can use the following algorithm:
#### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG
- $k$, the parameter
#### Output:
- $x \thicksim \text{equi-probable } de \, Moivre(k)$
#### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor uk \rfloor + 1$
- return $x$
We can illustrate this with SageMath:
```
def deMoivreFInverse(u, k):
'''A function to evaluate the inverse CDF of an equi-probable de Moivre.
Param u is the value to evaluate the inverse CDF at.
Param k is the distribution parameter.
Returns the inverse CDF for a de Moivre(k) distribution evaluated at u.'''
return floor(k*u) + 1
def deMoivreSample(n, k):
'''A function to simulate samples from an equi-probable de Moivre.
Param n is the number of samples to simulate.
Param k is the bernoulli distribution parameter.
Returns a simulated sample of size n from an equi-probable de Moivre(k) distribution as a list.'''
us = [random() for i in range(n)]
return [deMoivreFInverse(u, k) for u in us]
```
A small sample:
```
deMoivreSample(15,6)
```
You should understand the `deMoivreFInverse` and `deMoivreSample` functions and be able to write something like them if you were asked to.
You are not expected to be to make the interactive plots below (but this is not too hard to do by syntactic mimicry and google searches!).
Now let's do some interactive sampling where you can vary $k$ and the sample size $n$:
```
@interact
def _(k=(6), n=(10,(0..500))):
'''Interactive function to plot samples from equi-probable de Moivre distribution.'''
if n > 0 and k >= 0 and int(k) == k:
print "epmf and ecdf for ", n, " samples from equi-probable de Moivre (", k, ")"
outcomes = range(1,k+1,1) # define the outcomes
samples = deMoivreSample(n, k) # get the samples
epmf = epmfPlot(samples) # this is one of our hidden functions
ecdf = ecdfPlot(samples) # this is one of our hidden functions
show(graphics_array([epmf, ecdf]),figsize=[10,3])
else:
print "k>0 must be an integer, n>0"
```
Try changing $n$ and/or $k$. With $k = 40$ for example, you could be simulating the number on the first ball for $n$ Lotto draws.
### YouTry
A useful counterpart to the floor of a number is the ceiling, denoted $\lceil \, \rceil$. In maths, $\lceil x \rceil$, the 'ceiling of $x$' is the smallest integer that is larger than or equal to $x$. For example, $\lceil 3.8 \rceil = 4$. We can use the ceil function to do this in Sage:
```
ceil(3.8)
```
Try using `ceil` to check that you understand what it is doing. What would `ceil(0)` be?
# Inversion Sampler for Continuous Random Variables
When we simulated from the discrete RVs above, the $Bernoulli(\theta)$ and the equi-probable $de\,Moivre(k)$, we transformed some $u \thicksim Uniform(0,1)$ into some value for the RV.
Now we will look at the formal idea of an inversion sampler for continuous random variables. Inversion sampling for continuous random variables is a way to simulate values for a continuous random variable $X$ using $u \thicksim Uniform(0,1)$.
The idea of the inversion sampler is to treat $u \thicksim Uniform(0,1)$ as some value taken by the CDF $F$ and find the value $x$ at which $F(X \le x) = u$.
To find x where $F(X \le x) = u$ we need to use the inverse of $F$, $F^{[-1]}$. This is why it is called an **inversion sampler**.
Formalising this,
### Proposition
Let $F(x) := \int_{- \infty}^{x} f(y) \,d y : \mathbb{R} \rightarrow [0,1]$ be a continuous DF with density $f$, and let its inverse $F^{[-1]} $ be:
$$ F^{[-1]}(u) := \inf \{ x : F(x) = u \} : [0,1] \rightarrow \mathbb{R} $$
Then, $F^{[-1]}(U)$ has the distribution function $F$, provided $U \thicksim Uniform(0,1)$ ($U$ is a $Uniform(0,1)$ RV).
Note:
The infimum of a set A of real numbers, denoted by $\inf(A)$, is the greatest lower bound of every element of $A$.
Proof
The ``one-line proof'' of the proposition is due to the following equalities:
$$P(F^{[-1]}(U) \leq x) = P(\inf \{ y : F(y) = U)\} \leq x ) = P(U \leq F(x)) = F(x), \quad \text{for all } x \in \mathbb{R} . $$
# Algorithm for Inversion Sampler
#### Input:
- A PRNG for $Uniform(0,1)$ samples
- A procedure to give us $F^{[-1]}(u)$, inverse of the DF of the target RV $X$ evaluated at $u$
#### Output:
- A sample $x$ from $X$ distributed according to $F$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u)$
# The $Uniform(\theta_1, \theta_2)$RV
We have already met the$Uniform(\theta_1, \theta_2)$ RV.
Given two real parameters $\theta_1,\theta_2 \in \mathbb{R}$, such that $\theta_1 < \theta_2$, the PDF of the $Uniform(\theta_1,\theta_2)$ RV $X$ is:
$$f(x;\theta_1,\theta_2) =
\begin{cases}
\frac{1}{\theta_2 - \theta_1} & \text{if }\theta_1 \leq x \leq \theta_2\text{,}\\
0 & \text{otherwise}
\end{cases}
$$
and its DF given by $F(x;\theta_1,\theta_2) = \int_{- \infty}^x f(y; \theta_1,\theta_2) \, dy$ is:
$$
F(x; \theta_1,\theta_2) =
\begin{cases}
0 & \text{if }x < \theta_1 \\
\frac{x-\theta_1}{\theta_2-\theta_1} & \text{if}~\theta_1 \leq x \leq \theta_2,\\
1 & \text{if} x > \theta_2
\end{cases}
$$
For example, here are the PDF, CDF and inverse CDF for the $Uniform(-1,1)$:
<img src="images/UniformMinus11ThreeCharts.png" width=800>
As usual, we can make some SageMath functions for the PDF and CDF:
```
# uniform pdf
def uniformPDF(x, theta1, theta2):
'''Uniform(theta1, theta2) pdf function f(x; theta1, theta2).
x is the value to evaluate the pdf at.
theta1, theta2 are the distribution parameters.'''
retvalue = 0 # default return value
if x >= theta1 and x <= theta2:
retvalue = 1.0/(theta2-theta1)
return retvalue
# uniform cdf
def uniformCDF(x, theta1, theta2):
'''Uniform(theta1, theta2) CDF or DF function F(x; theta1, theta2).
x is the value to evaluate the cdf at.
theta1, theta2 are the distribution parameters.'''
retvalue = 0 # default return value
if (x > theta2):
retvalue = 1
elif (x > theta1): # else-if
retvalue = (x - theta1) / (theta2-theta1)
# if (x < theta1), retvalue will be 0
return retvalue
```
Using these functions in an interactive plot, we can see the effect of changing the distribution parameters $\theta_1$ and $\theta_2$.
```
@interact
def InteractiveUniformPDFCDFPlots(theta1=0,theta2=1):
if theta2 > theta1:
print "Uniform(", + RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") pdf and cdf"
p1 = line([(theta1-1,0), (theta1,0)], rgbcolor='blue')
p1 += line([(theta1,1/(theta2-theta1)), (theta2,1/(theta2-theta1))], rgbcolor='blue')
p1 += line([(theta2,0), (theta2+1,0)], rgbcolor='blue')
p2 = line([(theta1-1,0), (theta1,0)], rgbcolor='red')
p2 += line([(theta1,0), (theta2,1)], rgbcolor='red')
p2 += line([(theta2,1), (theta2+1,1)], rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
print "theta2 must be greater than theta1"
```
# Simulating from the $Uniform(\theta_1, \theta_2)$ RV
We can simulate from the $Uniform(\theta_1,\theta_2)$ using the inversion sampler, provided that we can get an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\theta_1,\theta_2)$:
$$
u = \frac{x-\theta_1}{\theta_2-\theta_1} \quad \iff \quad x = (\theta_2-\theta_1)u+\theta_1 \quad \iff \quad F^{[-1]}(u;\theta_1,\theta_2) = \theta_1+(\theta_2-\theta_1)u
$$
<img src="images/Week7InverseUniformSampler.png" width=600>
## Algorithm for Inversion Sampler for the $Uniform(\theta_1, \theta_2)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\theta_1$, $\theta_2$
#### Output:
- A sample $x \thicksim Uniform(\theta_1, \theta_2)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = (\theta_1 + u(\theta_2 - \theta_1))$
- Return $x$
We can illustrate this with SageMath by writing a function to calculate the inverse of the CDF of a uniform distribution parameterised by theta1 and theta2. Given a value between 0 and 1 for the parameter u, it returns the height of the inverse CDF at this point, i.e. the value in the range theta1 to theta2 where the CDF evaluates to u.
```
def uniformFInverse(u, theta1, theta2):
'''A function to evaluate the inverse CDF of a uniform(theta1, theta2) distribution.
u, u should be 0 <= u <= 1, is the value to evaluate the inverse CDF at.
theta1, theta2, theta2 > theta1, are the uniform distribution parameters.'''
return theta1 + (theta2 - theta1)*u
```
This function transforms a single $u$ into a single simulated value from the $Uniform(\theta_1, \theta_2)$, for example:
```
u = random()
theta1, theta2 = 3, 6
uniformFInverse(u, theta1, theta2)
```
Then we can use this function inside another function to generate a number of samples:
```
def uniformSample(n, theta1, theta2):
'''A function to simulate samples from a uniform distribution.
n > 0 is the number of samples to simulate.
theta1, theta2 (theta2 > theta1) are the uniform distribution parameters.'''
us = [random() for i in range(n)]
return [uniformFInverse(u, theta1, theta2) for u in us]
```
The basic strategy is the same as for simulating $Bernoulli$ and $de \, Moirve$ samples: we are using a list comprehension and the built-in SAGE random() function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of n. Inside the body of the function we assign this list to a variable named us (i.e., u plural). We then use another list comprehension to make our simulated sample. This list comprehension works by calling our function uniformFInverse(...) and passing in values for theta1 and theta2 together with each u in us in turn.
You should be able to write simple functions like uniformFinverse and uniformSample yourself.
Try this for a small sample:
```
param1 = -5
param2 = 5
nToGenerate = 30
myUniformSample = uniformSample(nToGenerate, param1, param2)
print(myUniformSample)
```
Much more fun, we can make an interactive plot which uses the uniformSample(...) function to generate and plot while you choose the parameters and number to generate (you are not expected to be able to make interactive plots like this):
```
@interact
def _(theta1=0, theta2=1, n=(1..5000)):
'''Interactive function to plot samples from uniform distribution.'''
if theta2 > theta1:
if n == 1:
print n, "uniform(", + RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") sample"
else:
print n, "uniform(", + RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") samples"
sample = uniformSample(n, theta1, theta2)
pts = zip(range(1,n+1,1),sample) # plot so that first sample is at x=1
p=points(pts)
p+= text(str(theta1), (0, theta1), fontsize=10, color='black') # add labels manually
p+= text(str(theta2), (0, theta2), fontsize=10, color='black')
p.show(xmin=0, xmax = n+1, ymin=theta1, ymax = theta2, axes=false, gridlines=[[0,n+1],[theta1,theta2]], figsize=[7,3])
else:
print "Theta1 must be less than theta2"
```
We can get a better idea of the distribution of our sample using a histogram (the minimum sample size has been set to 50 here because the automatic histogram generation does not do a very good job with small samples).
```
import pylab
@interact
def _(theta1=0, theta2=1, n=(50..5000), Bins=5):
'''Interactive function to plot samples from uniform distribution as a histogram.'''
if theta2 > theta1:
sample = uniformSample(n, theta1, theta2)
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(sample, Bins, normed=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram')
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
print "Theta1 must be less than theta2"
```
# The $Exponential(\lambda)$ Random Variable
For a given $\lambda$ > 0, an $Exponential(\lambda)$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x;\lambda) =\begin{cases}\lambda e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
$$
F(x;\lambda) =\begin{cases}1 - e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
An exponential distribution is useful because is can often be used to model inter-arrival times or making inter-event measurements (if you are familiar with the $Poisson$ distribution, a discrete distribution, you may have also met the $Exponential$ distribution as the time between $Poisson$ events). Here are some examples of random variables which are sometimes modelled with an exponential distribution:
time between the arrival of buses at a bus-stop
distance between roadkills on a stretch of highway
In SageMath, the we can use `exp(x)` to calculate $e^x$, for example:
```
x = 3.0
exp(x)
```
We can code some functions for the PDF and DF of an $Exponential$ parameterised by lambda like this $\lambda$.
**Note** that we cannot or should not use the name `lambda` for the parameter because in SageMath (and Python), the term `lambda` has a special meaning. Do you recall lambda expressions?
```
def exponentialPDF(x, lam):
'''Exponential pdf function.
x is the value we want to evaluate the pdf at.
lam is the exponential distribution parameter.'''
return lam*exp(-lam*x)
def exponentialCDF(x, lam):
'''Exponential cdf or df function.
x is the value we want to evaluate the cdf at.
lam is the exponential distribution parameter.'''
return 1 - exp(-lam*x)
```
You should be able to write simple functions like `exponentialPDF` and `exponentialCDF` yourself, but you are not expected to be able to make the interactive plots.
You can see the shapes of the PDF and CDF for different values of $\lambda$ using the interactive plot below.
```
@interact
def _(lam=('lambda',0.5),Xmax=(5..100)):
'''Interactive function to plot the exponential pdf and cdf.'''
if lam > 0:
print "Exponential(", RR(lam).n(digits=2), ") pdf and cdf"
from pylab import arange
xvalues = list(arange(0.1, Xmax, 0.1))
p1 = line(zip(xvalues, [exponentialPDF(y, lam) for y in xvalues]), rgbcolor='blue')
p2 = line(zip(xvalues, [exponentialCDF(y, lam) for y in xvalues]), rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
print "Lambda must be greater than 0"
```
We are going to write some functions to help us to do inversion sampling from the $Exponential(\lambda)$ RV.
As before, we need an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\lambda)$
### YouTry later
Show that
$$
F^{[-1]}(u;\lambda) =\frac{-1}{\lambda} \ln(1-u)
$$
$\ln = \log_e$ is the natural logarthim.
(end of You try)
---
---
# Simulating from the $Exponential(\lambda)$ RV
Algorithm for Inversion Sampler for the $Exponential(\lambda)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\lambda$
### Output:
- sample $x \thicksim Exponential(\lambda)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = \frac{-1}{\lambda}\ln(1-u)$
- Return $x$
The function `exponentialFInverse(u, lam)` codes the inverse of the CDF of an exponential distribution parameterised by `lam`. Given a value between 0 and 1 for the parameter `u`, it returns the height of the inverse CDF of the exponential distribution at this point, i.e. the value where the CDF evaluates to `u`. The function `exponentialSample(n, lam)` uses `exponentialFInverse(...)` to simulate `n` samples from an exponential distribution parameterised by `lam`.
```
def exponentialFInverse(u, lam):
'''A function to evaluate the inverse CDF of a exponential distribution.
u is the value to evaluate the inverse CDF at.
lam is the exponential distribution parameter.'''
# log without a base is the natural logarithm
return (-1.0/lam)*log(1 - u)
def exponentialSample(n, lam):
'''A function to simulate samples from an exponential distribution.
n is the number of samples to simulate.
lam is the exponential distribution parameter.'''
us = [random() for i in range(n)]
return [exponentialFInverse(u, lam) for u in us]
```
We can have a look at a small sample:
```
lam = 0.5
nToGenerate = 30
sample = exponentialSample(nToGenerate, lam)
print sorted(sample) # recall that sorted makes a new sorted list
```
You should be able to write simple functions like `exponentialFinverse` and `exponentialSample` yourself by now.
The best way to visualise the results is to use a histogram. With this interactive plot you can explore the effect of varying lambda and n:
```
import pylab
@interact
def _(lam=('lambda',0.5), n=(50,(10..10000)), Bins=(5,(1,1000))):
'''Interactive function to plot samples from exponential distribution.'''
if lam > 0:
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(exponentialSample(n, lam), Bins, normed=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram')
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
print "Lambda must be greater than 0"
```
# The Standard $Cauchy$ Random Variable
A standard $Cauchy$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x) =\frac{1}{\pi(1+x^2)}\text{,}\,\, -\infty < x < \infty
$$
$$
F(x) = \frac{1}{\pi}\tan^{-1}(x) + 0.5
$$
The $Cauchy$ distribution is an interesting distribution because the expectation does not exist:
$$
\int \left|x\right|\,dF(x) = \frac{2}{\pi} \int_0^{\infty} \frac{x}{1+x^2}\,dx = \left(x \tan^{-1}(x) \right]_0^{\infty} - \int_0^{\infty} \tan^{-1}(x)\, dx = \infty \ .
$$
In SageMath, we can use the `arctan` function for $tan^{-1}$, and `pi` for $\pi$ and code some functions for the PDF and DF of the standard Cauchy as follows.
```
def cauchyPDF(x):
'''Standard Cauchy pdf function.
x is the value to evaluate the pdf at.'''
return 1.0/(pi.n()*(1+x^2))
def cauchyCDF(x):
'''Standard Cauchy cdf function.
x is the value to evaluate the cdf at.'''
return (1.0/pi.n())*arctan(x) + 0.5
```
You can see the shapes of the PDF and CDF using the plot below. Note from the PDF $f$ above is defined for $-\infty < x < \infty$. This means we should set some arbitrary limits on the minimum and maximum values to use for the x-axis on the plots. You can change these limits interactively.
```
@interact
def _(lower=(-4), upper=(4)):
'''Interactive function to plot the Cauchy pdf and cdf.'''
if lower < upper:
print "Standard Cauchy pdf and cdf"
p1 = plot(cauchyPDF, lower,upper, rgbcolor='blue')
p2 = plot(cauchyCDF, lower,upper, rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
print "Upper must be greater than lower"
```
#### Constructing a standard $Cauchy$ RVs
- Place a double light sabre (i.e., one that can shoot its lazer beam from both ends, like that of Darth Mole in Star Wars) on a cartesian axis so that it is centred on $(0, 1)$.
- Randomly spin it (so that its spin angle to the x-axis is $\theta \thicksim Uniform (0, 2\pi)$).
- Let it come to rest.
- The y-coordinate of the point of intersection with the y-axis is a standard Cauchy RV.
You can see that we are equally likely to get positive and negative values (the density function of the standard $Cauchy$ RV is symmetrical about 0) and whenever the spin angle is close to $\frac{\pi}{4}$ ($90^{\circ}$) or $\frac{3\pi}{2}$ ($270^{\circ}$), the intersections will be a long way out up or down the y-axis, i.e. very negative or very positive values. If the light sabre is exactly parallel to the y-axis there will be no intersection: a $Cauchy$ RV $X$ can take values $-\infty < x < \infty$
<img src="images/Week7CauchyLightSabre.png" width=300>
## Simulating from the standard $Cauchy$
We can perform inversion sampling on the $Cauchy$ RV by transforming a $Uniform(0,1)$ random variable into a $Cauchy$ random variable using the inverse CDF.
We can get this by replacing $F(x)$ by $u$ in the expression for $F(x)$:
$$
\frac{1}{\pi}tan^{-1}(x) + 0.5 = u
$$
and solving for $x$:
$$
\begin{array}{lcl} \frac{1}{\pi}tan^{-1}(x) + 0.5 = u & \iff & \frac{1}{\pi} tan^{-1}(x) = u - \frac{1}{2}\\ & \iff & tan^{-1}(x) = (u - \frac{1}{2})\pi\\ & \iff & tan(tan^{-1}(x)) = tan((u - \frac{1}{2})\pi)\\ & \iff & x = tan((u - \frac{1}{2})\pi) \end{array}
$$
## Inversion Sampler for the standard $Cauchy$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
#### Output:
- A sample $x \thicksim \text{standard } Cauchy$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = tan((u - \frac{1}{2})\pi)$
- Return $x$
The function `cauchyFInverse(u)` codes the inverse of the CDF of the standard Cauchy distribution. Given a value between 0 and 1 for the parameter u, it returns the height of the inverse CDF of the standard $Cauchy$ at this point, i.e. the value where the CDF evaluates to u. The function `cauchySample(n`) uses `cauchyFInverse(...)` to simulate `n` samples from a standard Cauchy distribution.
```
def cauchyFInverse(u):
'''A function to evaluate the inverse CDF of a standard Cauchy distribution.
u is the value to evaluate the inverse CDF at.'''
return RR(tan(pi*(u-0.5)))
def cauchySample(n):
'''A function to simulate samples from a standard Cauchy distribution.
n is the number of samples to simulate.'''
us = [random() for i in range(n)]
return [cauchyFInverse(u) for u in us]
```
And we can visualise these simulated samples with an interactive plot:
```
@interact
def _(n=(50,(0..5000))):
'''Interactive function to plot samples from standard Cauchy distribution.'''
if n == 1:
print n, "Standard Cauchy sample"
else:
print n, "Standard Cauchy samples"
sample = cauchySample(n)
pts = zip(range(1,n+1,1),sample)
p=points(pts)
p+= text(str(floor(min(sample))), (0, floor(min(sample))), \
fontsize=10, color='black') # add labels manually
p+= text(str(ceil(max(sample))), (0, ceil(max(sample))), \
fontsize=10, color='black')
p.show(xmin=0, xmax = n+1, ymin=floor(min(sample)), \
ymax = ceil(max(sample)), axes=false, \
gridlines=[[0,n+1],[floor(min(sample)),ceil(max(sample))]],\
figsize=[7,3])
```
Notice how we can get some very extreme values This is because of the 'thick tails' of the density function of the $Cauchy$ RV. Think about this in relation to the double light sabre visualisation. We can see effect of the extreme values with a histogram visualisation as well. The interactive plot below will only use values between lower and upper in the histogram. Try increasing the sample size to something like 1000 and then gradually widening the limits:
```
import pylab
@interact
def _(n=(50,(0..5000)), lower=(-4), upper=(4), Bins=(5,(1,100))):
'''Interactive function to plot samples from
standard Cauchy distribution.'''
if lower < upper:
if n == 1:
print n, "Standard Cauchy sample"
else:
print n, "Standard Cauchy samples"
sample = cauchySample(n) # the whole sample
sampleToShow=[c for c in sample if (c >= lower and c <= upper)]
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(sampleToShow, Bins, normed=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram, values between ' \
+ str(floor(lower)) + ' and ' + str(ceil(upper)))
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
print "lower must be less than upper"
```
# Running means
When we introduced the $Cauchy$ distribution, we noted that the expectation of the $Cauchy$ RV does not exist. This means that attempts to estimate the mean of a $Cauchy$ RV by looking at a sample mean will not be successful: as you take larger and larger samples, the effect of the extreme values will still cause the sample mean to swing around wildly (we will cover estimation properly soon). You are going to investigate the sample mean of simulated $Cauchy$ samples of steadily increasing size and show how unstable this is. A convenient way of doing this is to look at a running mean. We will start by working through the process of calculating some running means for the $Uniform(0,10)$, which do stabilise. You will then do the same thing for the $Cauchy$ and be able to see the instability.
We will be using the pylab.cumsum function, so we make sure that we have it available. We then generate a sample from the $Uniform(0,10)$
```
from pylab import cumsum
nToGenerate = 10 # sample size to generate
theta1, theta2 = 0, 10 # uniform parameters
uSample = uniformSample(nToGenerate, theta1, theta2)
print(uSample)
```
We are going to treat this sample as though it is actually 10 samples of increasing size:
- sample 1 is the first element in uSample
- sample 2 contains the first 2 elements in uSample
- sample 3 contains the first 3 elements in uSample
- ...
- sample10 contains the first 10 elements in uSample
We know that a sample mean is the sum of the elements in the sample divided by the number of elements in the sample $n$:
$$
\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i
$$
We can get the sum of the elements in each of our 10 samples with the cumulative sum of `uSample`.
We use `cumsum` to get the cumulative sum. This will be a `pylab.array` (or `numpy.arrat`) type, so we use the `list` function to turn it back into a list:
```
csUSample = list(cumsum(uSample))
print(csUSample)
```
What we have now is effectively a list
$$\left[\displaystyle\sum_{i=1}^1x_i, \sum_{i=1}^2x_i, \sum_{i=1}^3x_i, \ldots, \sum_{i=1}^{10}x_i\right]$$
So all we have to do is divide each element in `csUSample` by the number of elements that were summed to make it, and we have a list of running means
$$\left[\frac{1}{1}\displaystyle\sum_{i=1}^1x_i, \frac{1}{2}\sum_{i=1}^2x_i, \frac{1}{3}\sum_{i=1}^3x_i, \ldots, \frac{1}{10}\sum_{i=1}^{10}x_i\right]$$
We can get the running sample sizes using the `range` function:
```
samplesizes = range(1, len(uSample)+1,1)
samplesizes
```
And we can do the division with list comprehension:
```
uniformRunningMeans = [csUSample[i]/samplesizes[i] for i in range(nToGenerate)]
print(uniformRunningMeans)
```
We could pull all of this together into a function which produced a list of running means for sample sizes 1 to $n$.
```
def uniformRunningMeans(n, theta1, theta2):
'''Function to give a list of n running means from uniform(theta1, theta2).
n is the number of running means to generate.
theta1, theta2 are the uniform distribution parameters.
return a list of n running means.'''
sample = uniformSample(n, theta1, theta2)
from pylab import cumsum # we can import in the middle of code!
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
Have a look at the running means of 10 incrementally-sized samples:
```
nToGenerate = 10
theta1, theta2 = 0, 10
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(range(1, len(uRunningMeans)+1,1),uRunningMeans)
p = points(pts)
show(p, figsize=[5,3])
```
Recall that the expectation $E_{(\theta_1, \theta_2)}(X)$ of a $X \thicksim Uniform(\theta_1, \theta_2) = \frac{(\theta_1 +\theta_2)}{2}$
In our simulations we are using $\theta_1 = 0$, $\theta_2 = 10$, so if $X \thicksim Uniform(0,10)$, $E(X) = 5$
To show that the running means of different simulations from a $Uniform$ distribution settle down to be close to the expectation, we can plot say 5 different groups of running means for sample sizes $1, \ldots, 1000$. We will use a line plot rather than plotting individual points.
```
nToGenerate = 1000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
redshade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(xvalues,uRunningMeans)
if (i == 0):
p = line(pts, rgbcolor = (redshade,0,1))
else:
p += line(pts, rgbcolor = (redshade,0,1))
show(p, figsize=[5,3])
```
### YouTry!
Your task is to now do the same thing for some standard Cauchy running means.
To start with, do not put everything into a function, just put statements into the cell(s) below to:
Make variable for the number of running means to generate; assign it a small value like 10 at this stage
Use the cauchySample function to generate the sample from the standard $Cauchy$; have a look at your sample
Make a named list of cumulative sums of your $Cauchy$ sample using list and cumsum, as we did above; have a look at your cumulative sums
Make a named list of sample sizes, as we did above
Use a list comprehension to turn the cumulative sums and sample sizes into a list of running means, as we did above
Have a look at your running means; do they make sense to you given the individual sample values?
Add more cells as you need them.
When you are happy that you are doing the right things, **write a function**, parameterised by the number of running means to do, that returns a list of running means. Try to make your own function rather than copying and changing the one we used for the $Uniform$: you will learn more by trying to do it yourself. Please call your function `cauchyRunningMeans`, so that (if you have done everything else right), you'll be able to use some code we will supply you with to plot the results.
Try checking your function by using it to create a small list of running means. Check that the function does not report an error and gives you the kind of list you expect.
When you think that your function is working correctly, try evaluating the cell below: this will put the plot of 5 groups of $Uniform(0,10)$ running means beside a plot of 5 groups of standard $Cauchy$ running means produced by your function (as usual, you are not expected to be able to produce plots like this one).
```
nToGenerate = 10000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
shade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
problemStr="" # an empty string
# use try to catch problems with cauchyRunningMeans functions
try:
cRunningMeans = cauchyRunningMeans(nToGenerate)
##cRunningMeans = hiddenCauchyRunningMeans(nToGenerate)
cPts = zip(xvalues, cRunningMeans)
except NameError, e:
# cauchyRunningMeans is not defined
cRunningMeans = [1 for c in range(nToGenerate)] # default value
problemStr = "No "
except Exception, e:
# some other problem with cauchyRunningMeans
cRunningMeans = [1 for c in range(nToGenerate)]
problemStr = "Problem with "
uPts = zip(xvalues, uRunningMeans)
cPts = zip(xvalues, cRunningMeans)
if (i < 1):
p1 = line(uPts, rgbcolor = (shade, 0, 1))
p2 = line(cPts, rgbcolor = (1-shade, 0, shade))
cauchyTitleMax = max(cRunningMeans) # for placement of cauchy title
else:
p1 += line(uPts, rgbcolor = (shade, 0, 1))
p2 += line(cPts, rgbcolor = (1-shade, 0, shade))
if max(cRunningMeans) > cauchyTitleMax:
cauchyTitleMax = max(cRunningMeans)
titleText1 = "Uniform(" + str(theta1) + "," + str(theta2) + ") running means" # make title text
t1 = text(titleText1, (nToGenerate/2,theta2), rgbcolor='blue',fontsize=10)
titleText2 = problemStr + "standard Cauchy running means" # make title text
t2 = text(titleText2, (nToGenerate/2,ceil(cauchyTitleMax)+1), rgbcolor='red',fontsize=10)
show(graphics_array((p1+t1,p2+t2)),figsize=[10,5])
```
# Replicable samples
Remember that we know how to set the seed of the PRNG used by `random()` with `set_random_seed`? If we wanted our sampling functions to give repeatable samples, we could also pass the functions the seed to use. Try making a new version of `uniformSample` which has a parameter for a value to use as the random number generator seed. Call your new version `uniformSampleSeeded` to distinguish it from the original one.
Try out your new `uniformSampleSeeded` function: if you generate two samples using the same seed they should be exactly the same. You could try using a large sample and checking on sample statistics such as the mean, min, max, variance etc, rather than comparing small samples by eye.
Recall that you can also give parameters default values in SageMath. Using a default value means that if no value is passed to the function for that parameter, the default value is used. Here is an example with a very simple function:
```
def simpleDefaultExample(x, y=0):
'''A simple function to demonstrate default parameter values.
x is the first parameter, with no default value.
y is the second parameter, defaulting to 0.'''
return x + y
```
Note that parameters with default values need to come after parameters without default values when we define the function.
Now you can try the function - evaluate the following cells to see what you get:
```
simpleDefaultExample (1,3) # specifying two arguments for the function
simpleDefaultExample (1) # specifying one argument for the function
# another way to specify one argument for the function
simpleDefaultExample (x=6)
# this will give an error because x has no default value
simpleDefaultExample()
# this will also give an error because x has no default value
simpleDefaultExample (y=9)
```
Try making yet another version of the uniform sampler which takes a value to be used as a random number generator seed, but defaults to `None` if no value is supplied for that parameter. `None` is a special Python type.
```
x = None
type(x)
```
Using `set_random_seed(None)` will mean that the random seed is actually reset to a new ('random') value. You can see this by testing what happens when you do this twice in succession and then check what seed is being used with `initial_seed`:
```
set_random_seed(None)
initial_seed()
set_random_seed(None)
initial_seed()
```
Do another version of the `uniformSampleSeeded` function with a default value for the seed of `None`.
Check your function again by testing with both when you supply a value for the seed and when you don't.
### YouTry
### A Simple Simulation
We could use the samplers we have made to do a very simple simulation. Suppose the inter-arrival times, in minutes, of Orbiter buses at an Orbiter stop follows an $Exponential(\lambda = 0.1)$ distribution. Also suppose that this is quite a popular bus stop, and the arrival of people is very predictable: one new person will arrive in each whole minute. This means that the longer another bus takes in coming, the more people arrive to join the queue. Also suppose that the number of free seats available on any bus follows a $de\, Moivre(k=40)$ distribution, i.e, there are equally like to to be 1, or 2, or 3 ... or 40 spare seats. If there are more spare seats than people in the queue, everyone can get onto the bus and nobody is left waiting, but if there are not enough spare seats some people will be left waiting for the next bus. As they wait, more people arrive to join the queue....
This is not very realistic - we would want a better model for how many people arrive at the stop at least, and for the number of spare seats there will be on the bus. However, we are just using this as a simple example that you can do using the RVs you know how to simulate samples from.
Try to code this example yourself, using our suggested steps. We have put our version the code into a cell below, but you will get more out of this example by trying to do it yourself first.
#### Suggested steps:
- Get a list of 100 $Exponential(\lambda = 0.1)$ samples using the `exponentialSamples` function. Assign the list to a variable named something like `busTime`s. These are your 100 simulated bus inter-arrival times.
- Choose a value for the number of people who will be waiting at the busstop when you start the simulation. Call this something like `waiting`.
- Make a list called something like `leftWaiting`, which to begin with contains just the value assigned to `waiting`.
- Make an empty list called something like `boardBus`.
- Start a for loop which takes each element in `busTimes` in turn, i.e. each bus inter-arrival time, and within the for loop:
- Calculate the number of people arriving at the stop as the floor of the time taken for that bus to arrive (i.e., one person for each whole minute until the bus arrives)
- Add this to the number of people waiting (e.g., if the number of arrivals is assigned to a variable arrivals, then waiting = waiting + arrivals will increment the value assigned to the waiting variable by the value of arrivals).
- Simulate a value for the number of seats available on the bus as one simulation from a $de \, Moirve(k=40)$ RV (it may be easier to use `deMoirveFInverse` rather than `deMoivrveSample` because you only need one value - remember that you will have to pass a simulated $u \thicksim Uniform(0,1)$ to `deMoivreFInverse` as well as the value of the parameter $k$).
- The number of people who can get on the bus is the minimum of the number of people waiting in the queue and the number of seats on the bus. Calculate this value and assign it to a variable called something like `getOnBus`.
- Append `getOnBus` to the list `boardBus`.
- Subtract `getOnBus` from the number of people waiting, waiting (e.g., `waiting = waiting - getOnBus` will decrement waiting by the number of people who get on the bus).
- Append the new value of `waiting` to the list `leftWaiting`.
- That is the end of the for loop: you now have two lists, one for the number of people waiting at the stop and one for the number of people who can board each bus as it arrives.
### YouTry!
Here is our code to do the bust stop simulation.
Yours may be different - maybe it will be better!
```
buses = 100
lam = 0.1
busTimes = exponentialSample(buses,lam)
waiting = 0 # how many people are waiting at the start of the simulation
boardBus = [] # empty list
leftWaiting = [waiting] # list with just waiting in it
for time in busTimes: # for each bus inter-arrival time
arrivals = floor(time) # people who arrive at the stop before the bus gets there
waiting = waiting + arrivals # add them to the queue
busSeats = deMoivreFInverse(random(), 40) # how many seats available on the bus
getOnBus = min(waiting, busSeats) # how many people can get on the bus
boardBus.append(getOnBus) # add to the list
waiting = waiting - getOnBus # take the people who board the bus out of the queue
leftWaiting.append(waiting) # add to the list
print(leftWaiting) # look at the leftWaiting list
```
We could do a visualisation of this, showing the number of people able to board the bus and the number of people left by the height of lines on the plot.
```
p1 = line([(0.5,0),(0.5,leftWaiting[0])])
from pylab import cumsum
csBusTimes=list(cumsum(busTimes))
for i in range(1, len(leftWaiting), 1):
p1+= line([(csBusTimes[i-1],0),(csBusTimes[i-1],boardBus[i-1])], rgbcolor='green')
p1+= line([(csBusTimes[i-1]+.01,0),(csBusTimes[i-1]+.01,leftWaiting[i])], rgbcolor='red')
t1 = text("Boarding the bus", (csBusTimes[len(busTimes)-1]/3,max(max(boardBus),max(leftWaiting))+1), rgbcolor='green',fontsize=10)
t2 = text("Waiting", (csBusTimes[len(busTimes)-1]*(2/3),max(max(boardBus),max(leftWaiting))+1), rgbcolor='red',fontsize=10)
xaxislabel = text("Time", (csBusTimes[len(busTimes)-1],-10),fontsize=10,color='black')
yaxislabel = text("People", (-50,max(max(boardBus),max(leftWaiting))+1),fontsize=10,color='black')
show(p1+t1+t2+xaxislabel+yaxislabel,figsize=[8,5])
```
You could try the effect on your simulation of changing the $Exponential$ parameter $\lambda$, or some of the other factors involved.
#### Solution for CauchyRunningMeans
```
def hiddenCauchyRunningMeans(n):
'''Function to give a list of n running means from standardCauchy.
n is the number of running means to generate.'''
sample = cauchySample(n)
from pylab import cumsum
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
| true |
code
| 0.503235 | null | null | null | null |
|
# Hill Climbing
---
In this notebook, we will train hill climbing with adaptive noise scaling with OpenAI Gym's Cartpole environment.
### 1. Import the Necessary Packages
```
import gym
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
```
### 2. Define the Policy
```
env = gym.make('CartPole-v0')
print('observation space:', env.observation_space)
print('action space:', env.action_space)
class Policy():
def __init__(self, s_size=4, a_size=2):
self.w = 1e-4*np.random.rand(s_size, a_size) # weights for simple linear policy: state_space x action_space
def forward(self, state):
x = np.dot(state, self.w)
return np.exp(x)/sum(np.exp(x))
def act(self, state):
probs = self.forward(state)
#action = np.random.choice(2, p=probs) # option 1: stochastic policy
action = np.argmax(probs) # option 2: deterministic policy
return action
```
### 3. Train the Agent with Stochastic Policy Search
```
env = gym.make('CartPole-v0')
env.seed(0)
np.random.seed(0)
policy = Policy()
def hill_climbing(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100, noise_scale=1e-2):
"""Implementation of hill climbing with adaptive noise scaling.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
gamma (float): discount rate
print_every (int): how often to print average score (over last 100 episodes)
noise_scale (float): standard deviation of additive noise
"""
scores_deque = deque(maxlen=100)
scores = []
best_R = -np.Inf
best_w = policy.w
for i_episode in range(1, n_episodes+1):
rewards = []
state = env.reset()
for t in range(max_t):
action = policy.act(state)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
discounts = [gamma**i for i in range(len(rewards)+1)]
R = sum([a*b for a,b in zip(discounts, rewards)])
if R >= best_R: # found better weights
best_R = R
best_w = policy.w
noise_scale = max(1e-3, noise_scale / 2)
policy.w += noise_scale * np.random.rand(*policy.w.shape)
else: # did not find better weights
noise_scale = min(2, noise_scale * 2)
policy.w = best_w + noise_scale * np.random.rand(*policy.w.shape)
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
policy.w = best_w
break
return scores
scores = hill_climbing()
```
### 4. Plot the Scores
```
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
### 5. Watch a Smart Agent!
```
env = gym.make('CartPole-v0')
state = env.reset()
for t in range(2000): #Default of range(200) was way too short!
action = policy.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
| true |
code
| 0.569015 | null | null | null | null |
|
# 프로젝트 1. 영화 리뷰 감정 분석
**RNN 을 이용해 IMDB 데이터를 가지고 텍스트 감정분석을 해 봅시다.**
이번 책에서 처음으로 접하는 텍스트 형태의 데이터셋인 IMDB 데이터셋은 50,000건의 영화 리뷰로 이루어져 있습니다.
각 리뷰는 다수의 영어 문장들로 이루어져 있으며, 평점이 7점 이상의 긍정적인 영화 리뷰는 2로, 평점이 4점 이하인 부정적인 영화 리뷰는 1로 레이블링 되어 있습니다. 영화 리뷰 텍스트를 RNN 에 입력시켜 영화평의 전체 내용을 압축하고, 이렇게 압축된 리뷰가 긍정적인지 부정적인지 판단해주는 간단한 분류 모델을 만드는 것이 이번 프로젝트의 목표입니다.
```
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchtext import data, datasets
# 하이퍼파라미터
BATCH_SIZE = 64
lr = 0.001
EPOCHS = 10
USE_CUDA = torch.cuda.is_available()
DEVICE = torch.device("cuda" if USE_CUDA else "cpu")
print("다음 기기로 학습합니다:", DEVICE)
# 데이터 로딩하기
print("데이터 로딩중...")
TEXT = data.Field(sequential=True, batch_first=True, lower=True)
LABEL = data.Field(sequential=False, batch_first=True)
trainset, testset = datasets.IMDB.splits(TEXT, LABEL)
TEXT.build_vocab(trainset, min_freq=5)
LABEL.build_vocab(trainset)
# 학습용 데이터를 학습셋 80% 검증셋 20% 로 나누기
trainset, valset = trainset.split(split_ratio=0.8)
train_iter, val_iter, test_iter = data.BucketIterator.splits(
(trainset, valset, testset), batch_size=BATCH_SIZE,
shuffle=True, repeat=False)
vocab_size = len(TEXT.vocab)
n_classes = 2
print("[학습셋]: %d [검증셋]: %d [테스트셋]: %d [단어수]: %d [클래스] %d"
% (len(trainset),len(valset), len(testset), vocab_size, n_classes))
class BasicGRU(nn.Module):
def __init__(self, n_layers, hidden_dim, n_vocab, embed_dim, n_classes, dropout_p=0.2):
super(BasicGRU, self).__init__()
print("Building Basic GRU model...")
self.n_layers = n_layers
self.embed = nn.Embedding(n_vocab, embed_dim)
self.hidden_dim = hidden_dim
self.dropout = nn.Dropout(dropout_p)
self.gru = nn.GRU(embed_dim, self.hidden_dim,
num_layers=self.n_layers,
batch_first=True)
self.out = nn.Linear(self.hidden_dim, n_classes)
def forward(self, x):
x = self.embed(x)
h_0 = self._init_state(batch_size=x.size(0))
x, _ = self.gru(x, h_0) # [i, b, h]
h_t = x[:,-1,:]
self.dropout(h_t)
logit = self.out(h_t) # [b, h] -> [b, o]
return logit
def _init_state(self, batch_size=1):
weight = next(self.parameters()).data
return weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
def train(model, optimizer, train_iter):
model.train()
for b, batch in enumerate(train_iter):
x, y = batch.text.to(DEVICE), batch.label.to(DEVICE)
y.data.sub_(1) # 레이블 값을 0과 1로 변환
optimizer.zero_grad()
logit = model(x)
loss = F.cross_entropy(logit, y)
loss.backward()
optimizer.step()
def evaluate(model, val_iter):
"""evaluate model"""
model.eval()
corrects, total_loss = 0, 0
for batch in val_iter:
x, y = batch.text.to(DEVICE), batch.label.to(DEVICE)
y.data.sub_(1) # 레이블 값을 0과 1로 변환
logit = model(x)
loss = F.cross_entropy(logit, y, reduction='sum')
total_loss += loss.item()
corrects += (logit.max(1)[1].view(y.size()).data == y.data).sum()
size = len(val_iter.dataset)
avg_loss = total_loss / size
avg_accuracy = 100.0 * corrects / size
return avg_loss, avg_accuracy
model = BasicGRU(1, 256, vocab_size, 128, n_classes, 0.5).to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
best_val_loss = None
for e in range(1, EPOCHS+1):
train(model, optimizer, train_iter)
val_loss, val_accuracy = evaluate(model, val_iter)
print("[이폭: %d] 검증 오차:%5.2f | 검증 정확도:%5.2f" % (e, val_loss, val_accuracy))
# 검증 오차가 가장 적은 최적의 모델을 저장
if not best_val_loss or val_loss < best_val_loss:
if not os.path.isdir("snapshot"):
os.makedirs("snapshot")
torch.save(model.state_dict(), './snapshot/txtclassification.pt')
best_val_loss = val_loss
model.load_state_dict(torch.load('./snapshot/txtclassification.pt'))
test_loss, test_acc = evaluate(model, test_iter)
print('테스트 오차: %5.2f | 테스트 정확도: %5.2f' % (test_loss, test_acc))
```
| true |
code
| 0.797103 | null | null | null | null |
|
# Tribolium embryo morphometry over time in Napari
Authors: Robert Haase, Daniela Vorkel, 2020
This is the pyclesperanto version of a workflow earlier [published for clij2](https://clij.github.io/clij2-docs/md/tribolium_morphometry/).
[ImageJ Macro original](https://github.com/clij/clij2-docs/tree/master/src/main/macro/tribolium_morphometry.ijm)
This script is an example of heavy GPU-accelerated processing. It is recommended to use a dedicated
graphics card with at least 8 GB of GDDR6 memory. Otherwise, it may be quite slow.
Let's start by checking that pyclesperanto is installed and which GPU it uses.
```
"""
import pyclesperanto_prototype as cle
import numpy as np
# show all graphics cards
#print(cle._tier0._pycl.filter_devices())
# show only GPU devices
print(cle._tier0._pycl.filter_devices(dev_type='gpu'))
# selecting an Nvidia RTX
cle.select_device("Quadro M2200")
print("Using OpenCL device " + cle.get_device().name)
"""
%gui qt
```
## Load a data set
The dataset shows a *Tribolium castaneum* embryo, imaged by a custom light sheet microscope, at a wavelength of 488nm (Imaging credits: Daniela Vorkel, Myers lab, MPI CBG).
The data set has been resampled to a voxel size of 1x1x1 microns. The embryo expresses nuclei-GFP. We will use the dataset to detect nuclei and to generate an estimated cell-segmentation.
All processing steps are performed in 3D space.
```
from aicspylibczi import CziFile
import imgfile_tools as imf
from aicsimageio import AICSImage
from skimage import data
import napari
import dask
import dask.array as da
from IPython.display import display, HTML
from dask import delayed
filename = r"c:\Testdata_Zeiss\LatticeLightSheet\LS_Mitosis_T=150-300.czi"
# get the metadata
md, addmd = imf.get_metadata(filename)
czi = CziFile(filename)
def load_image(czi, t=0):
zstack = czi.read_image(S=0, T=t)
return zstack
#lazy_imread = delayed(load_image)
#reader = lazy_imread(czi, t=0) # doesn't actually read the file
#array = reader.compute() # *now* it reads.
"""
sample = imread(filenames[0])
lazy_imread = delayed(imread) # lazy reader
lazy_arrays = [lazy_imread(fn) for fn in filenames]
dask_arrays = [
da.from_delayed(delayed_reader, shape=sample.shape, dtype=sample.dtype)
for delayed_reader in lazy_arrays
]
# Stack into one large dask.array
stack = da.stack(dask_arrays, axis=0)
stack.shape # (nfiles, nz, ny, nx)
# in jupyter notebook the repr of a dask stack provides a useful visual:
stack
"""
sp = [md['SizeC'], md['SizeZ'], md['SizeY'], md['SizeX']]
# create dask stack of lazy image readers
lazy_process_image = dask.delayed(load_image) # lazy reader
lazy_arrays = [lazy_process_image(czi, t=t) for t in range(0, md['SizeT'])]
dask_arrays = [
da.from_delayed(lazy_array, shape=sp, dtype=md['NumPy.dtype'])
for lazy_array in lazy_arrays
]
# Stack into one large dask.array
dask_stack = da.stack(dask_arrays, axis=0)
print(dask_stack.shape)
dask_stack
viewer = napari.Viewer()
# configure napari automatically based on metadata and show stack
layers = imf.show_napari(viewer, dask_stack, md,
blending='additive',
gamma=0.85,
add_mdtable=True,
rename_sliders=True)
from napari.utils import nbscreenshot
nbscreenshot(viewer)
```
| true |
code
| 0.532607 | null | null | null | null |
|
# Practice Assignment: Understanding Distributions Through Sampling
** *This assignment is optional, and I encourage you to share your solutions with me and your peers in the discussion forums!* **
To complete this assignment, create a code cell that:
* Creates a number of subplots using the `pyplot subplots` or `matplotlib gridspec` functionality.
* Creates an animation, pulling between 100 and 1000 samples from each of the random variables (`x1`, `x2`, `x3`, `x4`) for each plot and plotting this as we did in the lecture on animation.
* **Bonus:** Go above and beyond and "wow" your classmates (and me!) by looking into matplotlib widgets and adding a widget which allows for parameterization of the distributions behind the sampling animations.
Tips:
* Before you start, think about the different ways you can create this visualization to be as interesting and effective as possible.
* Take a look at the histograms below to get an idea of what the random variables look like, as well as their positioning with respect to one another. This is just a guide, so be creative in how you lay things out!
* Try to keep the length of your animation reasonable (roughly between 10 and 30 seconds).
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
# generate 4 random variables from the random, gamma, exponential, and uniform distributions
x1 = np.random.normal(-2.5, 1, 10000)
x2 = np.random.gamma(2, 1.5, 10000)
x3 = np.random.exponential(2, 10000)+7
x4 = np.random.uniform(14,20, 10000)
# plot the histograms
plt.figure(figsize=(9,3))
plt.hist(x1, normed=True, bins=20, alpha=0.5)
plt.hist(x2, normed=True, bins=20, alpha=0.5)
plt.hist(x3, normed=True, bins=20, alpha=0.5)
plt.hist(x4, normed=True, bins=20, alpha=0.5);
plt.axis([-7,21,0,0.6])
plt.text(x1.mean()-1.5, 0.5, 'x1\nNormal')
plt.text(x2.mean()-1.5, 0.5, 'x2\nGamma')
plt.text(x3.mean()-1.5, 0.5, 'x3\nExponential')
plt.text(x4.mean()-1.5, 0.5, 'x4\nUniform')
fig , ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True, sharey=True)
for ax in [ax1, ax2, ax3, ax4]:
for label in ax.get_xticklabels() + ax.get_yticklabels():
label.set_visible(True)
for ax in fig.get_axes():
ax.clear()
x1 = np.random.normal(0, 1, 10000)
x2 = np.random.gamma(2, 1.5, 10000)
x3 = np.random.exponential(2, 10000)
x4 = np.random.uniform(0, 1, 10000)
ax1.hist(x1, normed=True, bins=20, alpha=0.5)
ax2.hist(x2, normed=True, bins=20, alpha=0.5)
ax3.hist(x3, normed=True, bins=20, alpha=0.5)
ax4.hist(x4, normed=True, bins=20, alpha=0.5);
# plt.text(x1.mean()-1.5, 0.5, 'x1\nNormal')
# ax2.text(x2.mean()-1.5, 0.5, 'x2\nGamma')
# ax3.text(x3.mean()-1.5, 0.5, 'x3\nExponential')
# ax4.text(x4.mean()-1.5, 0.5, 'x4\nUniform')
fig.get_axes()
```
| true |
code
| 0.70094 | null | null | null | null |
|
# Tutorial 07: Networks from Custom Templates
In the previous tutorial, we discussed how OpenStreetMap files can be simulated in Flow. These networks, however, may at time be imperfect, as we can see in the toll section of the Bay Bridge (see the figure below). The simulators SUMO and Aimsun both possess methods for augmenting the network after they have been imported, and store the changes in their own versions of the initial template (whether it was generated via a custom network class or a network imported from OpenStreetMap). In order to utilize these newly generated networks, we demonstrate in this tutorial how simulator-generated template files can be imported when running a simulation in Flow.
<img src="img/osm_to_template.png">
<center> **Figure 1**: Example benefit of converting OpenStreetMap to a custom template </center>
The remainder of the tutorial is organized as follows. In section 1, we begin by importing the classic set of parameters. In section 2, we introduce the template files that are used as examples for importing the template files. In section 3, we present how custom SUMO network templates, i.e. the generated .net.xml files, can be modified and simulated in Flow for the purposed of improving network features. Finally, in section 4, we demonstrate how custom Aimsun network files can be simulated in Flow.
## 1. Importing Modules
Before we begin, let us import all relevant Flow parameters as we have done for previous tutorials. If you are unfamiliar with these parameters, you are encouraged to review tutorial 1.
```
# the TestEnv environment is used to simply simulate the network
from flow.envs import TestEnv
# the Experiment class is used for running simulations
from flow.core.experiment import Experiment
# the base network class
from flow.networks import Network
# all other imports are standard
from flow.core.params import VehicleParams
from flow.core.params import NetParams
from flow.core.params import InitialConfig
from flow.core.params import EnvParams
# create some default parameters parameters
env_params = EnvParams()
initial_config = InitialConfig()
vehicles = VehicleParams()
vehicles.add('human', num_vehicles=1)
```
## 2. Example Network
In this tutorial, we use the [Luxembourg SUMO Traffic (LuST) Network](https://github.com/lcodeca/LuSTScenario) as an example use case. This example consists of a well-calibrated model of vehicles in Luxembourg. A representation of the simulation can be seen in the figure below.
<img src="img/LuST_network.png" width="500">
<center><b>Figure 2</b>: Simulation of the LuST network </center>
Before, continuing with this tutorial, please begin by cloning the LuST scenario repository by running the following command.
git clone https://github.com/lcodeca/LuSTScenario.git
Once you have cloned the repository, please modify the code snippet below to match correct location of the repository's main directory.
```
LuST_dir = "/path/to/LuSTScenario"
```
## 3. Sumo Network Files
Sumo generates several network and simulation-specifc template files prior to starting a simulation. This procedure when creating custom networks and networks from OpenStreetMap is covered by the network class. Three of these files (\*.net.xml, \*.rou.xml, and vtype.add.xml) can be imported once again via the network class to recreate a previously designed network.
We start by creating the simulation parameters:
```
from flow.core.params import SumoParams
sim_params = SumoParams(render=True, sim_step=1)
```
### 3.1 Importing Network (\*.net.xml) Files
The \*.net.xml file covers the network geometry within a simulation, and can be imported independently of the SUMO route file (see section 1.2). This can be done through the `template` parameter within `NetParams` as follows:
```
import os
net_params = NetParams(
template=os.path.join(LuST_dir, "scenario/lust.net.xml"),
)
```
This network alone, similar to the OpenStreetMap file, does not cover the placement of vehicles or the routes vehicles can traverse. These, however, can be defined a they were in the previous tutorial for importing networks from OpenStreetMap. For the LuST network, this looks something similar to the following code snippet (note that the specific edges were not spoken for any specific reason).
```
# specify the edges vehicles can originate on
initial_config = InitialConfig(
edges_distribution=["-32410#3"]
)
# specify the routes for vehicles in the network
class TemplateNetwork(Network):
def specify_routes(self, net_params):
return {"-32410#3": ["-32410#3"]}
```
The simulation can then be executed as follows:
```
# create the network
network = TemplateNetwork(
name="template",
net_params=net_params,
initial_config=initial_config,
vehicles=vehicles
)
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
network=network
)
# run the simulation for 1000 steps
exp = Experiment(env=env)
_ = exp.run(1, 1000)
```
### 3.2 Importing Additional Files
Sumo templates will at times contain files other than the network templates that can be used to specify the positions, speeds, and properties of vehicles at the start of a simulation, as well as the departure times of vehicles while the network is running and the routes that all these vehicles are meant to traverse. All these files can also be imported under the `template` attribute in order to recreate the simulation in it's entirety.
When incorporating files other that the net.xml file to the simulation, the template attribute is treated as a dictionary instead, with a different element for each of the additional files that are meant to be imported. Starting with the net.xml file, it is added to the template attribute as follows:
```
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml")
}
)
```
#### 3.2.1 Vehicle Type (vtype.add.xml)
The vehicle types file describing the properties of different vehicle types in the network. These include parameters such as the max acceleration and comfortable deceleration of drivers. This file can be imported via the "vtype" attribute in template.
Note that, when vehicle information is being imported from a template file, the `VehicleParams` object does not need be modified, unless you would like additionally vehicles to enter the network as well.
```
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml"),
# features associated with the properties of drivers
"vtype": os.path.join(LuST_dir, "scenario/vtype.add.xml")
}
)
# we no longer need to specify anything in VehicleParams
new_vehicles = VehicleParams()
```
#### 3.2.2 Route (\*.rou.xml)
Next, the routes can be imported from the \*.rou.xml files that are generated by SUMO. These files help define which cars enter the network at which point in time, whether it be at the beginning of a simulation or some time during it run. The route files are passed to the "rou" key in the templates attribute. Moreover, since the vehicle routes can be spread over multiple files, the "rou" key that a *list* of string filenames.
```
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml"),
# features associated with the properties of drivers
"vtype": os.path.join(LuST_dir, "scenario/vtypes.add.xml"),
# features associated with the routes vehicles take
"rou": [os.path.join(LuST_dir, "scenario/DUARoutes/local.0.rou.xml"),
os.path.join(LuST_dir, "scenario/DUARoutes/local.1.rou.xml"),
os.path.join(LuST_dir, "scenario/DUARoutes/local.2.rou.xml")]
}
)
# we no longer need to specify anything in VehicleParams
new_vehicles = VehicleParams()
```
#### 3.2.3 Running the Modified Simulation
Finally, the fully imported simulation can be run as follows.
**Warning**: the network takes time to initialize while the departure positions and times and vehicles are specified.
```
# create the network
network = Network(
name="template",
net_params=new_net_params,
vehicles=new_vehicles
)
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
network=network
)
# run the simulation for 100000 steps
exp = Experiment(env=env)
_ = exp.run(1, 100000)
```
## 4. Aimsun Network Files
Flow can run templates that have been created in Aimsun and saved into an \*.ang file. Although it is possible to have control over the network, for instance add vehicles and monitor them directly from Flow, this tutorial only covers how to run the network.
We will use the template located at `tutorials/networks/test_template.ang`, which looks like this:
<img src="img/test_template.png">
<center><b>Figure 2</b>: Simulation of <code>test_template.ang</code> in Aimsun</center>
It contains two input and three output centroids that define the centroid configuration `Centroid Configuration 910`. The inflows are defined by two OD matrices, one for the type `Car` (in blue), the other for the type `rl` (in red). Note that there is no learning in this tutorial so the two types both act as regular cars. The two OD matrices form the traffic demand `Traffic Demand 925` that is used by the network `Dynamic Scenario 927`. Finally, the experiment `Micro SRC Experiment 928` and the replication `Replication 930` are created, and we will run this replication in the following.
First, we create the Aimsun-specific simulation parameters:
```
from flow.core.params import AimsunParams
sim_params = AimsunParams(
sim_step=0.1,
render=True,
emission_path='data',
replication_name="Replication 930",
centroid_config_name="Centroid Configuration 910"
)
```
As you can see, we need to specify the name of the replication we want to run as well as the centroid configuration that is to be used. There is an other optional parameter, `subnetwork_name`, that can be specified if only part of the network should be simulated. Please refer to the documentation for more information.
The template can then be imported as follows:
```
import os
import flow.config as config
net_params = NetParams(
template=os.path.join(config.PROJECT_PATH,
"tutorials/networks/test_template.ang")
)
```
Finally, we can run the simulation by specifying `'aimsun'` as the simulator to be used:
```
network = Network(
name="template",
net_params=net_params,
initial_config=initial_config,
vehicles=vehicles
)
env = TestEnv(
env_params,
sim_params,
network,
simulator='aimsun'
)
exp = Experiment(env)
exp.run(1, 1000)
```
| true |
code
| 0.591428 | null | null | null | null |
|
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("whitegrid")
```
# Model: xgboost
```
from sklearn.metrics import classification_report, confusion_matrix, plot_confusion_matrix, roc_auc_score, roc_curve, precision_recall_curve
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from xgboost.sklearn import XGBClassifier
```
## Data
Load the dataset, applying no major transformations to it.
```
data = pd.read_csv('../dataset/creditcard.csv')
data.head()
X = data.drop(columns=['Class'])
y = data['Class']
```
Since the data is largely unbalanced we must use a stratified sampling to make sure we get both negative and positive samples to train with.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0, stratify=y)
```
## Pipeline (build)
```
numeric_feature_indexes = slice(0, 30)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', XGBClassifier(objective= 'binary:logistic'))
])
num_features_type_map = {feature: 'float64' for feature in X_train.columns[numeric_feature_indexes]}
X_train = X_train.astype(num_features_type_map)
X_test = X_test.astype(num_features_type_map)
```
## Pipeline (train)
```
model = pipeline.fit(X_train, y_train, classifier__eval_metric='auc')
model
```
## Pipeline (evaluate)
```
y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))
disp = plot_confusion_matrix(model, X_test, y_test, display_labels=['normal', 'fraudulent'], cmap=plt.cm.Blues)
disp.ax_.grid(False)
```
Some great material is available here: https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/
```
y_pred_proba = pipeline.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
fig, ax = plt.subplots(figsize=(5,5))
ax.plot(fpr,tpr,label=f"auc {auc:2.2f}")
ax.legend(loc=4)
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate');
precision, recall, _ = precision_recall_curve(y_test, y_pred_proba)
fig, ax = plt.subplots(figsize=(5,5))
no_skill = len(y_test[y_test==1]) / len(y_test)
ax.plot([0, 1], [no_skill, no_skill], linestyle='--', label='No Skill')
ax.plot(recall, precision)
ax.set_xlabel('Precision')
ax.set_ylabel('Recall');
```
## Tune the pipeline
```
parameters = {
'classifier__max_depth': range (2, 10, 1),
'classifier__n_estimators': range(60, 220, 40),
'classifier__learning_rate': [0.1, 0.01, 0.05]
}
grid_search = GridSearchCV(
estimator=pipeline,
param_grid=parameters,
scoring = 'roc_auc',
n_jobs = 3,
cv = 3,
verbose=True
)
grid_search.fit(X_train, y_train)
```
Plot the outcome of the model search.
```
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(grid_search.cv_results_['mean_test_score'])
ax.set_ylabel("Average AUC score")
ax.set_xlabel("Model candidate")
sns.despine(offset=10)
```
| true |
code
| 0.666185 | null | null | null | null |
|
# This notebook refines the model using only playable songs:
### While generating new songs to play we only want to generate songs with difficulty of 8. Therefore we will load the refined model and then finetune it only to operate on our desired songs
```
from pathlib import Path
import pandas as pd
import re
#Get the list of all of the step files
step_files = list(Path("C:/Users/brent/Desktop/StepMania 5").rglob("*.[dD][wW][iI]"))
#Get the list of all of the step files
song_files = list(Path("C:/Users/brent/Desktop/StepMania 5").rglob("*.[mM][pP][3]"))
def process_song(path, title):
#Open File
text_file = open(path, "r")
lines = text_file.readlines()
text_file.close()
#Combine all text into single line
song = "".join(lines)
#Remove newline characters
song = re.sub('\n', '', song)
#Split on semicolon and then add the semicolons back into the respective lines
song = song.split(';')
song = [line+';' for line in song][:-1]
#Remove lines that start with 2 // (some files had this for some reason)
song = [line for line in song if (line.find('//') == -1)]
#Create a dataframe of the song
df = pd.DataFrame()
df[title] = song
return df
def pull_all_step_patterns(song, row):
song = song[row].str.split(":", n = 3, expand = True)
song = song[song[0].isin(["#SINGLE","#SOLO"])]
return song
def remove_leading_zeroes(songs):
"""Take a song step file and remove the leading zeroes"""
songs[3] = songs[3].str.replace(r"^0+","")
return songs
def fastaiFormat(songs):
"""Take a list of step files and make it into a format for fastai NLP"""
songs = songs.reset_index()
songs = songs[[1,3]]
songs.columns = ['label','text']
#Split the song into characters with spaces
songs['text'] = songs['text'].apply(lambda x: " ".join(x))
#Remove the trailing semicolon as we can add it back in when we are done predicting songs
songs['text'] = songs['text'].apply(lambda x: x[:-1])
return songs
def selectedDifficulty(songs, low=1, high=10):
"""Filters the songs only within a specific difficulty range given by low and high (inclusive)"""
songs = songs[pd.to_numeric(songs[2]).between(low,high)]
return songs
def join_all_step_patterns(step_files):
"""Create a dataframe of all songs for a fastai training model."""
songs = pd.DataFrame()
for row, path in enumerate(step_files):
df = process_song(path, row)
df = pull_all_step_patterns(df, row)
songs = pd.concat([songs,df])
songs = remove_leading_zeroes(songs)
songs = selectedDifficulty(songs, low=8, high=8)
songs = fastaiFormat(songs)
return songs
songs = join_all_step_patterns(step_files)
songs.head()
songs.to_csv("songs_8.csv", index=False)
```
# Refine our Language Model
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
import os
from pathlib import Path
import pandas as pd
import re
import string
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
import os
cwd = os.getcwd()
path = Path(cwd)
all_letters = list(string.printable + string.whitespace)
#We don't want to remove repetition in the DDR song as that is part of it
customtokenizer = Tokenizer(pre_rules= [], post_rules=[])
processors = [TokenizeProcessor(tokenizer=customtokenizer, mark_fields=False),
NumericalizeProcessor(vocab=Vocab.create(all_letters, max_vocab=1000, min_freq=0))]
data = (TextList.from_csv(path, "songs_8.csv", cols='text', processor=processors)
.split_by_rand_pct(0.2)
.label_for_lm()
.databunch(bs=96))
data.save('data_block_lm4.pkl')
data_lm = load_data(path, 'data_block_lm4.pkl',bs=96)
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn.load('fine_tuned_3')
learn.load_encoder('fine_tuned_enc_3')
learn.fit_one_cycle(4, 1e-2, moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7))
learn.fit_one_cycle(10, 1e-4, moms=(0.8,0.7))
learn.save('fine_tuned_4')
learn.save_encoder('fine_tuned_enc_4')
TEXT = ""
N_WORDS = 200
N_SENTENCES = 1
print("\n".join(learn.predict(TEXT, N_WORDS, temperature=0.50) for _ in range(N_SENTENCES)))
```
# Now we work on the classifier
```
from pathlib import Path
import pandas as pd
import re
import string
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
import os
cwd = os.getcwd()
path = Path(cwd)
#Try out the datablock API to see if we can replicate and use either no tokenization or our custom tokenizer
all_letters = list(string.printable + string.whitespace)
#We don't want to remove repetition in the DDR song as that is part of it
customtokenizer = Tokenizer(pre_rules= [], post_rules=[])
processors = [TokenizeProcessor(tokenizer=customtokenizer, mark_fields=False),
NumericalizeProcessor(vocab=Vocab.create(all_letters, max_vocab=1000, min_freq=0))]
data_clas = (TextList.from_csv(path, 'songs_8.csv', cols='text', processor=processors)
.split_by_rand_pct(0.2)
.label_from_df('label')
.databunch(bs=12))
data_clas.save('data_clas_4.pkl')
data_clas = load_data(path, 'data_clas_4.pkl', bs=12)
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.3)
learn.load_encoder('fine_tuned_enc_4')
learn.fit_one_cycle(5, 1e-2, moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7))
learn.fit_one_cycle(10, 1e-4, moms=(0.8,0.7))
learn.save('fine_tuned_classifier_4')
learn.save_encoder('fine_tuned_enc_classifier_4')
```
## What are the most frequently misclassified?
```
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data_clas.valid_ds)==len(losses)==len(idxs)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
```
| true |
code
| 0.479443 | null | null | null | null |
|
# Solving the Taxi Problem Using SARSA
### Goal:
Say our agent is the driving the taxi. There are totally four locations and the agent has to
pick up a passenger at one location and drop at the another. The agent will receive +20
points as a reward for successful drop off and -1 point for every time step it takes. The agent
will also lose -10 points for illegal pickups and drops. So the goal of our agent is to learn to
pick up and drop passengers at the correct location in a short time without boarding any illegal
passengers.
First, we import all necessary libraries and initialize the environment
```
import random
import gym
env = gym.make('Taxi-v2')
```
The environment is shown below, where the letters (R, G, Y, B) represents the different
locations and a tiny yellow colored rectangle is the taxi driving by our agent.
```
env.render()
```
Now, we initialize, Q table has a dictionary which stores state-action pair specifying value of performing an action in
a state s.
```
Q = {}
for s in range(env.observation_space.n):
for a in range(env.action_space.n):
Q[(s,a)] = 0.0
```
Then, we define a function for performing epsilon-greedy policy. In epsilon-greedy policy, either we select best action with probability 1-epsilon or we explore new action with probability epsilon
```
def epsilon_greedy(state, epsilon):
if random.uniform(0,1) < epsilon:
return env.action_space.sample()
else:
return max(list(range(env.action_space.n)), key = lambda x: Q[(state,x)])
```
Now we initialize necessary variables
alpha - TD learning rate
gamma - discount factor <br>
epsilon - epsilon value in epsilon greedy policy
```
alpha = 0.85
gamma = 0.90
epsilon = 0.8
```
Now, we perform SARSA!!
```
for i in range(4000):
# we store cumulative reward of each episodes in r
r = 0
# initialize the state,
state = env.reset()
# select the action using epsilon-greedy policy
action = epsilon_greedy(state,epsilon)
while True:
# env.render()
# then we perform the action and move to the next state, and receive the reward
nextstate, reward, done, _ = env.step(action)
# again, we select the next action using epsilon greedy policy
nextaction = epsilon_greedy(nextstate,epsilon)
# we calculate the Q value of previous state using our update rule
Q[(state,action)] += alpha * (reward + gamma * Q[(nextstate,nextaction)]-Q[(state,action)])
# finally we update our state and action with next action and next state
action = nextaction
state = nextstate
# store the rewards
r += reward
# we will break the loop, if we are at the terminal state of the episode
if done:
break
print("total reward: ", r)
env.close()
```
| true |
code
| 0.416648 | null | null | null | null |
|
# Deploying Tensorflow models on Verta
Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc); a function (e.g., squaring a number, making a DB function etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application.) See more [here](https://docs.verta.ai/verta/registry/concepts).
This notebook provides an example of how to deploy a Tensorflow model on Verta as a Verta Standard Model either via convenience functions (for Keras) or by extending [VertaModelBase](https://verta.readthedocs.io/en/master/_autogen/verta.registry.VertaModelBase.html?highlight=VertaModelBase#verta.registry.VertaModelBase).
## 0. Imports
```
import os
import tensorflow as tf
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
import os
# Ensure credentials are set up, if not, use below
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
# os.environ['VERTA_HOST'] =
from verta import Client
client = Client(os.environ['VERTA_HOST'])
```
## 1. Model Training
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
```
## 2. Register Model
```
registered_model = client.get_or_create_registered_model(
name="mnist", labels=["computer-vision", "tensorflow"])
```
### 2.1 Register from the model object
#### If you are in the same file where you have the model object handy, use the code below to package the model
```
from verta.environment import Python
model_version_from_obj = registered_model.create_standard_model_from_keras(
model, environment=Python(requirements=["tensorflow"]), name="v1")
```
### 2.2 (OR) Register a serialized version of the model using the VertaModelBase
```
model.save("mnist.tf_saved_model")
from verta.registry import VertaModelBase
class MNISTModel(VertaModelBase):
def __init__(self, artifacts):
import tensorflow as tf
self.model = tf.keras.models.load_model(
artifacts["mnist_model"])
def predict(self, input_data):
output = []
for input_data_point in input_data:
reshaped_data = tf.reshape(input_data_point, (1, 28, 28))
output.append(self.model(reshaped_data).numpy().tolist())
return output
# test locally
mnist_model1 = MNISTModel({"mnist_model" : "mnist.tf_saved_model/"})
mnist_model1.predict([x_test[0]])
model_version_from_cls = registered_model.create_standard_model(
MNISTModel,
environment=Python(["tensorflow"]),
name="v2",
artifacts={"mnist_model" : "mnist.tf_saved_model/"}
)
```
### 2.3 (OR) Register a serialized version of the model using the VertaModelBase (Variation: take in a base64 encoded input vs. a tensor)
```
class MNISTModel2(VertaModelBase):
def __init__(self, artifacts):
import tensorflow as tf
import base64
self.model = tf.keras.models.load_model(artifacts["mnist_model"])
def predict(self, input_data):
# decode base64
import base64
output = []
for input_data_point in input_data:
decoded_data = base64.b64decode(input_data_point["img_bytes"])
decoded_data = tf.io.decode_image(decoded_data)
decoded_data = tf.reshape(decoded_data, (1, 28, 28))
output.append(self.model(decoded_data).numpy().tolist())
return output
# test locally
import base64
mnist_model2 = MNISTModel2({"mnist_model" : "mnist.tf_saved_model/"})
with open("2.png", "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
print(mnist_model2.predict([{"img_bytes" : encoded_string}]))
model_version_from_cls_base64 = registered_model.create_standard_model(
MNISTModel2,
environment=Python(["tensorflow"]),
name="v3",
artifacts={"mnist_model" : "mnist.tf_saved_model/"}
)
```
## 3. Deploy model to endpoint
```
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_obj, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_cls, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_cls_base64, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
with open("2.png", "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
print(deployed_model.predict([{"img_bytes" : encoded_string}]))
```
---
| true |
code
| 0.614134 | null | null | null | null |
|
# Simulated Sky Signal in time domain
In this lesson we will use the TOAST Operator `OpSimPySM` to create timestreams for an instrument given a sky model.
```
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
import toast
import healpy as hp
import numpy as np
env = toast.Environment.get()
env.set_log_level("DEBUG")
```
## Scanning strategy
Before being able to scan a map into a timestream we need to define a scanning strategy
and get pointing information for each channel.
We use the same **satellite** scanning used in lesson 2 about scanning strategies,
see the `02_Simulated_Scan_Strategies/simscan_satellite.ipynb` for more details.
```
focal_plane = fake_focalplane()
focal_plane.keys()
focal_plane["0A"]["fwhm_arcmin"]
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 0.5 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 64 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
nobs = 366
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
from toast.todmap import TODSatellite, slew_precession_axis
detquat = {ch: focal_plane[ch]["quat"] for ch in focal_plane}
# Create distributed data
comm = toast.Comm()
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
obsname = "{:03d}".format(ob)
obsfirst = ob * (obs_samples + 1)
obsstart = 24 * 3600.0
tod = TODSatellite(
comm.comm_group,
detquat,
obs_samples,
firstsamp=obsfirst,
firsttime=obsstart,
rate=samplerate,
spinperiod=p_beta,
spinangle=beta,
precperiod=p_alpha,
precangle=alpha,
coord="E",
hwprpm=hwprpm
)
qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
slew_precession_axis(
qprec,
firstsamp=obsfirst,
samplerate=samplerate,
degday=deg_per_day,
)
tod.set_prec_axis(qprec=qprec)
obs = dict()
obs["tod"] = tod
data.obs.append(obs)
from toast.todmap import (
get_submaps_nested,
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Compute the locally hit pixels
localpix, localsm, subnpix = get_submaps_nested(data, nside)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
comm=data.comm.comm_world,
size=npix,
nnz=1,
dtype=np.int64,
submap=subnpix,
local=localsm,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes (a No-op in this case)
hits.allreduce()
%matplotlib inline
hp.mollview(hits.data.flatten(), nest=True)
```
## Define PySM parameters and instrument bandpasses
Then we define the sky model parameters, choosing the desired set of `PySM` models and then we specify the band center and the bandwidth for a top-hat bandpass.
Currently top-hat bandpasses are the only type supported by the operator, in the future we will implement arbitrary bandpasses.
Then bandpass parameters can be added directly to the `focal_plane` dictionary:
```
for ch in focal_plane:
focal_plane[ch]["bandcenter_ghz"] = 70
focal_plane[ch]["bandwidth_ghz"] = 10
focal_plane[ch]["fwhm"] = 60*2
pysm_sky_config = ["s1", "f1", "a1", "d1"]
```
## Run the OpSimPySM operator
The `OpSimPySM` operator:
* Creates top-hat bandpasses arrays (frequency axis and weights) as expected by `PySM`
* Loops by channel and for each:
* Creates a `PySMSky` object just with 1 channel at a time
* Executes `PySMSky` to evaluate the sky models and bandpass-integrate
* Calls `PySM` to perform distributed smoothing with `libsharp`
* Gathers the map on the first MPI process
* Applies coordinate transformation if necessary (not currently implemented in `libsharp`)
* Use the `DistMap` object to communicate to each process the part of the sky they observe
* Calls `OpSimScan` to rescan the map to a timeline
```
from toast.todmap import OpSimPySM
OpSimPySM?
opsim_pysm = OpSimPySM(
comm=None,
pysm_model=pysm_sky_config,
nside=nside,
apply_beam=True,
debug=True,
focalplanes=[focal_plane],
subnpix=subnpix,
localsm=localsm
)
opsim_pysm.exec(data)
```
### Plot output timelines
```
%matplotlib inline
import matplotlib.pyplot as plt
tod = data.obs[0]['tod']
pix = tod.cache.reference("pixels_0A")
import toast.qarray as qa
theta, phi, pa = qa.to_angles(tod.read_pntg(detector="0A"))
pix
num = 10000
plt.figure(figsize=(7, 5))
plt.plot(np.degrees(theta[:num]), tod.cache.reference("signal_0A")[:num], ".")
plt.xlabel("$Colatitude [deg]$")
plt.ylabel("$Signal [ \mu K_{RJ} ]$");
```
### Bin the output to a map
```
from numba import njit
@njit
def just_make_me_a_map(output_map, signals):
"""Temperature only binner
Bins a list of (pix, signal) tuples into an output map,
it does not support polarization, so it just averages it out.
Parameters
----------
output_map : np.array
already zeroed output map
signals : numba.typed.List of (np.array[int64] pix, np.array[np.double] signal)
Returns
-------
hits : np.array[np.int64]
hitmap
"""
hits = np.zeros(len(output_map), dtype=np.int64)
for pix, signal in signals:
for p,s in zip(pix, signal):
output_map[p] += s
hits[p] += 1
output_map[hits != 0] /= hits[hits != 0]
return hits
from numba.typed import List
signals = List()
for obs in data.obs:
for ch in focal_plane:
signals.append((obs["tod"].cache.reference("pixels_%s" % ch), obs["tod"].cache.reference("signal_%s" % ch)))
output_map = np.zeros(npix, dtype=np.double)
h = just_make_me_a_map(output_map, signals)
hp.mollview(h, title="hitmap", nest=True)
hp.mollview(output_map, nest=True, min=0, max=1e-3, cmap="coolwarm")
hp.gnomview(output_map, rot=(0,0), xsize=5000, ysize=2000, cmap="coolwarm", nest=True, min=0, max=1e-2)
```
### Custom sky components
* `pysm_component_objects`: pass custom PySM component objects, see for example the [WebSkyCIB](https://so-pysm-models.readthedocs.io/en/latest/api/so_pysm_models.WebSkyCIB.html#so_pysm_models.WebSkyCIB) model in the [so_pysm_models](https://github.com/simonsobs/so_pysm_models) repository, it provides a Cosmic Infrared Background computed from
| true |
code
| 0.682587 | null | null | null | null |
|
# Image Classification using Pre-trained model
## Step 1- Download the model
```
!omz_downloader --name inception-resnet-v2-tf
```
## Step 2 - Import the libraries
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
from openvino.runtime import Core
from pathlib import Path
from IPython.display import Markdown
```
## Step 3 - Convert the model to IR
```
# The paths of the source and converted models
model_path = Path("/home/chetan/public/inception-resnet-v2-tf/inception_resnet_v2.pb")
ir_path = Path(model_path).with_suffix(".xml")
# Construct the command for Model Optimizer
mo_command = f"""mo
--input_model "{model_path}"
--input_shape "[1,299,299,3]"
--mean_values="[127.5,127.5,127.5]"
--scale_values="[127.5]"
--data_type FP16
--output_dir "{model_path.parent}"
"""
mo_command = " ".join(mo_command.split())
print("Model Optimizer command to convert TensorFlow to OpenVINO:")
display(Markdown(f"`{mo_command}`"))
# Run Model Optimizer
print("Exporting TensorFlow model to IR... This may take a few minutes.")
! $mo_command
```
## Load the model
```
# Load the converted model
ie = Core()
model = ie.read_model(model="/home/chetan/public/inception-resnet-v2-tf/inception_resnet_v2.xml")
compiled_model = ie.compile_model(model=model, device_name="CPU")
```
## Get Model Information
```
input_layer = next(iter(compiled_model.inputs))
output_layer = next(iter(compiled_model.outputs))
network_input_shape = input_layer.shape
```
## Load an Image
```
## # The MobileNet network expects images in RGB format
image = cv2.cvtColor(cv2.imread(filename="data/Bengal-tiger-1.jpg"), code=cv2.COLOR_BGR2RGB)
# Resize image to network input image shape
resized_image = cv2.resize(src=image, dsize=(299, 299))
# Transpose image to network input shape
input_image = np.expand_dims(resized_image, 0)
plt.imshow(image);
```
## Inference
```
# Option 1
result = compiled_model([input_image])[output_layer]
result_index = np.argmax(result)
print('Result index', result_index)
# Convert the inference result to a class name.
imagenet_classes = open("/home/chetan/public/inception-resnet-v2-tf/labels.txt").read().splitlines()
print('Predicted class:', imagenet_classes[result_index])
# Option 2
request = compiled_model.create_infer_request()
request.infer(inputs={input_layer.any_name: input_image})
result = request.get_output_tensor(output_layer.index).data
result_index = np.argmax(result)
# Convert the inference result to a class name.
imagenet_classes = open("/home/chetan/public/inception-resnet-v2-tf/labels.txt").read().splitlines()
print('Predicted class:', imagenet_classes[result_index])
```
| true |
code
| 0.601798 | null | null | null | null |
|
<div style="text-align: right">Dino Konstantopoulos, 3 June 2021</div>
# Introducing sentence transformers
A python package called **sentence-transformers** that has specifically been optimized for doing semantic textual similarity searches. The model creates a 1024-dimensional embedding for each sentence, and the similarity between two such sentences can then be calculated by the cosine similarity between the corresponding two vectors.
A cosine similarity of 1 means the questions are identical (the angle is 0), and a cosine similarity of -1 means the questions are very different.
# ARC Classification dataset
The [ARC question classification dataset](https://allenai.org/data/arc-classification) is a dataset of 1700 questions. that went offline last week. But I found it on Amazon.
We can use it as our testing ground to experiment with the affinity of our sentence embeddings.
**Approach 1**: The transformer model outputs a 1024-dimensional vector for each token in our sentence. Then, we can mean-pool the vectors to generate a single sentence-level vector.
**Approach 2**: We can also calculate the cosine distance between each token in our query and each token in the sentence-to-compare-with, and then mean-pool the cosine angles. Calculating the cosine similarity between all token embeddingslets us see the contributions of each token towards the final similarity score and explaining what the model is doing.
>**Research Question**: Should we take the mean of all token embeddings ***prior*** to calculating cosine similarity between different sentence embeddings? Or should we see how each token embedding from the query is aligned against token embeddings in potentially matching questions? What is the best approach for our **belief models**?
# Install libraries required
# Experiment
Let's pretend that the first question in our dataset is our original query, and try to find the closest matching entry from the rest of the questions, and contrast our approaches.
# Download ARC dataset
```
!wget https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip
from zipfile import ZipFile
with ZipFile('ARC-V1-Feb2018.zip', "r") as zip_obj:
zip_obj.extractall("data")
```
# Import dataset into Pandas
```
import pandas as pd
import numpy as np
df = pd.read_csv("./data/ARC-V1-Feb2018-2/ARC-Easy/ARC-Easy-Train.csv")
```
# Load transformer model
```
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModel
from itertools import zip_longest
import torch
def grouper(iterable, n, fillvalue=None):
"""Taken from: https://docs.python.org/3/library/itertools.html#itertools-recipes"""
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
def mean_pooling(model_output, attention_mask):
"""
Mean pooling to get sentence embeddings. See:
https://huggingface.co/sentence-transformers/paraphrase-distilroberta-base-v1
"""
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) # Sum columns
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Sentences to embed
df = df[df.question.str.contains('\?')]
df.question = [s.split('?')[0] + '?' for s in df.question]
# Fetch the model & tokenizer from transformers library
model_name = 'sentence-transformers/stsb-roberta-large'
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
# Create sentence embeddings
```
sentence_embeddings = []
token_embeddings = []
# Embed 8 sentences at a time
for sentences in tqdm(grouper(df.question.tolist(), 8, None)):
# Ignore sentences with None
valid_sentences = [s for s in sentences if s]
# Tokenize input
encoded_input = tokenizer(valid_sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
# Create word embeddings
model_output = model(**encoded_input)
# For each sentence, store a list of token embeddings; i.e. a 1024-dimensional vector for each token
for i, sentence in enumerate(valid_sentences):
tokens = tokenizer.convert_ids_to_tokens(encoded_input['input_ids'][i])
embeddings = model_output[0][i]
token_embeddings.append(
[{"token": token, "embedding": embedding.detach().numpy()} for token, embedding in zip(tokens, embeddings)]
)
# Pool to get sentence embeddings; i.e. generate one 1024 vector for the entire sentence
sentence_embeddings.append(
mean_pooling(model_output, encoded_input['attention_mask']).detach().numpy()
)
# Concatenate all of the embeddings into one numpy array of shape (n_sentences, 1024)
sentence_embeddings = np.concatenate(sentence_embeddings)
```
# Perform Search & Show Search Context
```
from IPython.core.display import display, HTML
from sklearn.preprocessing import normalize
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
# Noralize the data
norm_data = normalize(sentence_embeddings, norm='l2')
# Set QUERY & BEST MATCH IDs
QUERY_ID = 0
scores = np.dot(norm_data, norm_data[QUERY_ID].T)
MATCH_ID = np.argsort(scores)[-2]
def get_token_embeddings(embeddings_word):
"""Returns a list of tokens and list of embeddings"""
tokens, embeddings = [], []
for word in embeddings_word:
if word['token'] not in ['<s>', '<pad>', '</pad>', '</s>']:
tokens.append(word['token'].replace('Ġ', ''))
embeddings.append(word['embedding'])
return tokens, normalize(embeddings, norm='l2')
# Get tokens & token embeddings
query_tokens, query_token_embeddings = get_token_embeddings(token_embeddings[QUERY_ID])
match_tokens, match_token_embeddings = get_token_embeddings(token_embeddings[MATCH_ID])
# Calculate cosine similarity between all tokens in query and match sentences
attention = (query_token_embeddings @ match_token_embeddings.T)
def plot_attention(src, trg, attention):
"""Plot 2D plot of cosine similarities"""
fig = plt.figure(dpi=150)
ax = fig.add_subplot(111)
cax = ax.matshow(attention, interpolation='nearest')
clb = fig.colorbar(cax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xticklabels([''] + src, rotation=90)
ax.set_yticklabels([''] + trg)
plot_attention(match_tokens, query_tokens, attention)
lengths_query
attention.shape
```
# How to run
Since I have trouble loading conda environments on my Jupyter notebook, I run the code in a python file on the command line.
# To think about
Our first experiments should be to see which of the two approaches outlined herein produce best results with the ARC dataset.
Also for next week, think about how can we combine LDA with transformer sentence embeddings.
# Using `SentenceTransformer`
```
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('bert-base-nli-mean-tokens')
sentences = ['This framework generates embeddings for each input sentence',
'A package that maps sentences into embeddings.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
# Loading `stsb-roberta-large`
Which I found [here](https://huggingface.co/models).
This take 5 hours to run!
```
from sentence_transformers import SentenceTransformer
#model = SentenceTransformer('bert-base-nli-mean-tokens')
model = SentenceTransformer('stsb-roberta-large')
```
| true |
code
| 0.758044 | null | null | null | null |
|
# Basic CNN based digit recognizer
In this tutorial we shall go through a bangla digit recognizer model in details. Our model is going to be based on a convolutional neural network (CNN). The focus is to get familiar with the components of a bangla digit recognizer framework. There are three steps in building this digit recognizer, <br>
**Step 1 : Process the data.<br>
Step 2 : Design the model.<br>
Step 3 : Train the model.**
```
# Importing necessary libraries
import numpy as np
import os
import glob
import cv2
import matplotlib.pyplot as plt
import pandas as pd
import pickle
from keras.utils import to_categorical
from keras.layers import Dense, Input, Conv2D, Flatten, MaxPool2D, Activation
from keras.models import Model
from keras.callbacks import ModelCheckpoint
from keras import backend as K
```
While writing the codes, files and folder was organized in the following way
* Numta
* code
* data
* model
* Final_DB
The `code` folder contains this jupyter notebook, the processed images will be placed in the `data` folder, the trained model will be saved in the `model` folder, and the `Final_DB` folder has the raw image datasets.
## Step 1: Process the data
Our dataset comes from six different source. For this tutorial we are using only dataset **A**.
```
#Declaring constants
FIG_WIDTH=16 # Width of figure
ROW_HEIGHT=3 # Height of each row when showing a figure which consists of multiple rows
RESIZE_DIM=28 # The images will be resized to 28x28 pixels
project_dir='..'
# We hall get all the filepaths by using glob.glob() function
paths_train_a=glob.glob(os.path.join(project_dir,'Final_DB','training-a','*.png'))
paths_test_a=glob.glob(os.path.join(project_dir,'Final_DB','testing-a','*.png'))
path_label_train_a=os.path.join(project_dir,'Final_DB','training-a.csv')
path_label_test_a=os.path.join(project_dir,'Final_DB','testing-a.csv')
```
### Define some utility functions
We shall write some helper functions to process and visualize the images.
```
def get_key(path):
# seperates the key of an image from the filepath
key=path.split(sep=os.sep)[-1]
return key
def get_data(paths_img,path_label,resize_dim=None,rescale=True):
'''reads images from the filepaths, resizes them, and returns them in a numpy array
Args:
paths_img: image filepaths
path_label: image label filepath
Returns:
X: group of images
y: categorical true labels
'''
X=[] # initialize empty list for resized images
for i,path in enumerate(paths_img):
img=cv2.imread(path,cv2.IMREAD_GRAYSCALE) # read image, image size is 180x180
if resize_dim!=None:
img=cv2.resize(img,(resize_dim,resize_dim),interpolation=cv2.INTER_AREA) # resize image to 28x28
if rescale==True:
img=img/255
X.append(np.expand_dims(img,axis=2)) # expand image to 28x28x1 and append to the list.
# display progress
if i==len(paths_img)-1:
end='\n'
else: end='\r'
print('processed {}/{}'.format(i+1,len(paths_img)),end=end)
X=np.array(X) # tranform list to numpy array
df = pd.read_csv(path_label) # read labels
df=df.set_index('filename')
y_label=[df.loc[get_key(path)]['digit'] for path in paths_img] # get the labels corresponding to the images
y=to_categorical(y_label,10) # transfrom integer value to categorical variable
return X, y
def imshow_group(X,y=None,y_pred=None,n_per_row=10,phase='processed'):
'''helper function to visualize a group of images along with their categorical true labels (y).
Args:
X: group of images
y: categorical true labels
y_pred: predicted class probabilities
n_per_row: number of images per row to be plotted
'''
n_sample=len(X)
img_dim=X.shape[1]
j=np.ceil(n_sample/n_per_row)
fig=plt.figure(figsize=(FIG_WIDTH,HEIGHT_PER_ROW*j))
for i,img in enumerate(X):
plt.subplot(j,n_per_row,i+1)
img_sq=np.squeeze(img,axis=2)
plt.imshow(img_sq,cmap='gray')
if y is not None:
plt.title(np.argmax(y[i]))
if y_pred is not None:
top_n=3 # top 3 predictions with highest probabilities
ind_sorted=np.argsort(y_pred[i])[::-1]
h=img_dim+4
for k in range(top_n):
string='pred: {} ({:.0f}%)\n'.format(ind_sorted[k],y_pred[i,ind_sorted[k]]*100)
plt.text(img_dim/2, h, string, horizontalalignment='center',verticalalignment='center')
h+=4
plt.axis('off')
plt.show()
```
Next we are going to use the `get_data()` function to process all the images from dataset **A**
```
X_train_a,y_train_a=get_data(paths_train_a,path_label_train_a,resize_dim=RESIZE_DIM)
X_test_a,y_test_a=get_data(paths_test_a,path_label_test_a,resize_dim=RESIZE_DIM)
```
Let's see some samples of the processed data.
```
X_sample=X_train_a[:40]
y_sample=y_train_a[:40]
X_sample.shape
imshow_group(X=X_sample,y=y_sample)
```
Next, we are going to randomly choose 80% of the training data and use it to train our neural network. The remaining 20% images are going to be our validation data.
```
indices=list(range(len(X_train_a)))
np.random.shuffle(indices)
ind=int(len(indices)*0.80)
X_train=X_train_a[indices[:ind]] # train data
y_train=y_train_a[indices[:ind]]
X_val=X_train_a[indices[-(len(indices)-ind):]] # validation data
y_val=y_train_a[indices[-(len(indices)-ind):]]
```
## Step 2: Design the model
In this step we shall design our neural network model. We are going to build a small model based on the classic LeNet architecture. We shall use only three convolutional layers. Each convolution layer has rectified linear unit (ReLU) activation which is followed by a max pooling layer. The convolution layers are followed by two dense layers.
```
def get_model():
input_layer=Input(shape=(RESIZE_DIM,RESIZE_DIM,1))
x=Conv2D(filters=8,kernel_size=(5,5),padding='valid', activation='relu')(input_layer)
x=MaxPool2D(pool_size=(2,2),strides=2,padding='valid')(x)
x=Conv2D(filters=16,kernel_size=(3,3),padding='valid', activation='relu')(x)
x=MaxPool2D(pool_size=(2,2),strides=2,padding='valid')(x)
x=Conv2D(filters=32,kernel_size=(3,3),padding='valid', activation='relu')(x)
x=MaxPool2D(pool_size=(2,2),strides=2,padding='valid')(x)
x=Flatten()(x)
x=Dense(units=64)(x)
x=Dense(units=10)(x)
output_layer=Activation('softmax')(x)
model=Model(inputs=input_layer,outputs=output_layer)
model.compile(loss='categorical_crossentropy',metrics=['accuracy'],optimizer='adam')
return model
model=get_model()
model.summary()
```
## Step 3: Train the model
```
path_model=os.path.join(project_dir,'model','model_tutorial.h5') # save model at this location after each epoch
K.tensorflow_backend.clear_session() # destroys the current graph and builds a new one
model=get_model() # create the model
K.set_value(model.optimizer.lr,1e-3) # set the learning rate
# fit the model
h=model.fit(x=X_train,
y=y_train,
batch_size=512,
epochs=100,
verbose=1,
validation_data=(X_val,y_val),
shuffle=True,
callbacks=[
ModelCheckpoint(filepath=path_model),
]
)
```
After 100 epochs training accuracy is 92% and valiadation accuracy is 90%.
Let's evaluate the model performance on the test set
```
model.evaluate(X_test_a,y_test_a)
```
The loss and accuracy is similar to the validation set.
## Result Analysis
Let's observe the images which is misclassified by our model.
```
predictions=model.predict(X_test_a) # get predictions for all the test data
# get the indice of the images which were incorrectly labeled
incorrect_ind=[]
for i,pred in enumerate(predictions):
if np.argmax(y_test_a[i])!=np.argmax(pred):
incorrect_ind.append(i)
# let's observe some samples of the incorrect data
X_inc=X_test_a[incorrect_ind[:40]]
y_inc=predictions[incorrect_ind[:40]]
y_true=y_test_a[incorrect_ind[:40]]
imshow_group(X=X_inc,y=y_true,y_pred=y_inc)
```
Our model misclassifies often misclassifies '5' as '6', '9' as '1', among other mistakes. Since the neural network architecture used in this tutorial is shallow, has a simple architecture and not fine-tuned for this problem, its performance is not quite satisfactory. A deeper state of the art architecture should yield better results which will be investigated in future notebooks.
| true |
code
| 0.669691 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/OUCTheoryGroup/colab_demo/blob/master/02_Unsupervised_Segmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Unsupervised Image Segmentation. *ICASSP* 2018
**图片无监督语义分割**,作者是东京大学的 Asako Kanezaki ,这里采用的是曾伊言修改的代码。
GITHUB地址:https://github.com/Yonv1943/Unsupervised-Segmentation/tree/master
知乎链接:https://zhuanlan.zhihu.com/p/68528056
原作者的算法要运行30秒左右,这里的代码只需要5秒钟就可以取得相同的效果。
```
# 首先:下载待处理的图像,这里选择的是 tiger.jpg 这张图
! wget https://raw.githubusercontent.com/Yonv1943/Unsupervised-Segmentation/master/image/tiger.jpg
import os
import time
import cv2
import numpy as np
from skimage import segmentation
import torch
import torch.nn as nn
from matplotlib import pyplot as plt
```
论文的总体框架如下:

完整算法如下:

其中,$Net()$ 为作者使用的一个全卷积网络,接收输入图像进行特征提取,该网络由三层卷积组成,具体如下:
| | kernel | dim | stride | padding | activation |
|:--:|:--:|:--:|:--:|:--:|:--:|
|conv2d| 3x3 | 100 | 1 | 1 | RelU, BatchNorm |
|conv2d| 3x3 | 100 | 1 | 1 | RelU, BatchNorm |
|conv2d| 1x1 | 100 | 1 | 1 | BatchNorm |
为了提高效率,曾伊言对网络进行了改进,使用四层卷积,仿照SENet ,使用3x3与1x1交替,膨胀64 与 压缩32。网络的实现代码如下:
```
class MyNet(nn.Module):
def __init__(self, inp_dim, mod_dim1, mod_dim2):
super(MyNet, self).__init__()
self.seq = nn.Sequential(
nn.Conv2d(inp_dim, mod_dim1, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(mod_dim1),
nn.ReLU(inplace=True),
nn.Conv2d(mod_dim1, mod_dim2, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(mod_dim2),
nn.ReLU(inplace=True),
nn.Conv2d(mod_dim2, mod_dim1, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(mod_dim1),
nn.ReLU(inplace=True),
nn.Conv2d(mod_dim1, mod_dim2, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(mod_dim2),
)
def forward(self, x):
return self.seq(x)
```
## 1. 初始化参数
train_epoch 指定最大迭代 $2^6 = 64$ 个 epoch;inp_dim指输入图像是3通道; mod_dim1 和 mod_dim2 指网络为 64、32通道交替,因为是对原作者代码进行了修改,因此命名前加了 mod
```
input_image_path = 'tiger.jpg'
train_epoch = 2 ** 6
inp_dim = 3
mod_dim1 = 64
mod_dim2 = 32
gpu_id = 0
# if the label number small than it, break loop
min_label_num = 4
# if the label number small than it, start to show result image.
max_label_num = 256
start_time0 = time.time()
torch.cuda.manual_seed_all(1943)
np.random.seed(1943)
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id) # choose GPU:0
image = cv2.imread(input_image_path)
```
## 2. 超像素分割
这里使用了Efficient Graph-Based Image Segmentation - Felzenszwalb (MIT)2004 基于图的超像素分割算法 (简称Felz算法)。具体细节不过多介绍。对于超像素分割,有两个算法,一个是 Felz算法,另一个是 SLIC 算法。论文原作者使用的是 SLIC 算法,曾伊言推荐使用 Felz 算法替代 SLIC 算法。具体原因在知乎专栏里说的比较清楚,这里不再介绍。
```
seg_map = segmentation.felzenszwalb(image, scale=32, sigma=0.5, min_size=64)
plt.imshow(seg_map)
seg_map = seg_map.flatten()
seg_lab = [np.where(seg_map == u_label)[0]
for u_label in np.unique(seg_map)]
```
上面的代码首先进行超像素分割,分割结果保存在 seg_map 里。一共分割得到 616 个区域,各个区域像素的 index 保存在 seg_lab 数组里。
## 3. 算法训练
超像素分割的结果可以看作是**『预分类』**:相似颜色和纹理的像素保存相同的label。比如例子里的 tiger图,超像素分类得到616个区域,分别分配 0 至 615 的标签。
使用上面提到的CNN,对输入图片进行分类,分类的目标是:使输出的分割结果,每一个超像素内分配的标签一致,训练到收敛。
具体来说,把图像输入CNN得到一个图为 output,在 output 里,每个像素被分配一个 label (因为网络最后一层是32个 feature map,用 argmax 取值最大的那个为 label ,因此,label 的范围是 0 到 31)。统计每个超像素里像素的 label,以数量最多的为目标,放到一个 target 的图里,计划 output 和 target 间的交叉熵损失,然后反向传播。
经过多轮训练,CNN会逐步实现具备相同语义信息的小区块合并,得到大区块。(代码设置里,当最终只剩下4个区域时,会停止迭代。)
```
'''train init'''
device = torch.device("cuda" if torch.cuda.is_available() else 'cpu')
tensor = image.transpose((2, 0, 1))
tensor = tensor.astype(np.float32) / 255.0
tensor = tensor[np.newaxis, :, :, :]
tensor = torch.from_numpy(tensor).to(device)
model = MyNet(inp_dim, mod_dim1, mod_dim2).to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=5e-2, momentum=0.9)
image_flatten = image.reshape((-1, 3))
color_avg = np.random.randint(255, size=(max_label_num, 3))
show = image
'''train loop'''
start_time1 = time.time()
model.train()
for batch_idx in range(train_epoch):
'''forward'''
optimizer.zero_grad()
output = model(tensor)[0]
output = output.permute(1, 2, 0).view(-1, mod_dim2)
target = torch.argmax(output, 1)
im_target = target.data.cpu().numpy()
'''refine'''
for inds in seg_lab:
u_labels, hist = np.unique(im_target[inds], return_counts=True)
im_target[inds] = u_labels[np.argmax(hist)]
'''backward'''
target = torch.from_numpy(im_target)
target = target.to(device)
loss = criterion(output, target)
loss.backward()
optimizer.step()
'''show image'''
un_label, lab_inverse = np.unique(im_target, return_inverse=True, )
if un_label.shape[0] < max_label_num: # update show
img_flatten = image_flatten.copy()
if len(color_avg) != un_label.shape[0]:
color_avg = [np.mean(img_flatten[im_target == label], axis=0, dtype=np.int) for label in un_label]
for lab_id, color in enumerate(color_avg):
img_flatten[lab_inverse == lab_id] = color
show = img_flatten.reshape(image.shape)
print('Loss:', batch_idx, loss.item())
if len(un_label) < min_label_num:
break
'''save'''
time1 = time.time() - start_time1
print('TimeUsed: %.2f' % time1)
cv2.imwrite("seg_%s_%ds.jpg" % (input_image_path[6:-4], time1), show)
plt.imshow(show)
```
## 4. 总结
**曾伊言对算法的理解:** 深度学习CNN在整个无监督语义分割任务中,承担的任务是:对经典机器学习无监督语义分割的细粒度预分类结果进行处理。并在迭代中,逐步对小区块进行合并,最后得到符合人类预期的语义分割结果。
但是,这个方法也有明显的**缺点**:那就是鲁棒性不强,算法受参数影响大(包括梯度下降法的参数,与机器学习的预分类算法的参数),并且算法多次随机重启的结果会有不同。
| true |
code
| 0.776983 | null | null | null | null |
|
# Monte Carlo Simulations with the Efficient Frontier
### Summary of Efficient Frontier
The Efficient fronter is a set of optimal portfolios that offer the highest expected return for a defined level of risk. It provides a great visualization on how to choose an optimal portfolio mathematically. _*Risk is defined as the assests actual return differing from our expected return.*_
"The efficient frontier is the set of optimal portfolios that offer the highest expected return for a defined level of risk or the lowest risk for a given level of expected return. Portfolios that lie below the efficient frontier are sub-optimal because they do not provide enough return for the level of risk." - Investopedia
# <center>Founder: Harry Markowitz</center>

Harry Markowitz introduced the efficient frontier theory in 1952 and later won a Nobel Memorial Prize in economics for the Modern Portfolio Theory in 1990. This theory is widely taught in every introductory Financial Course throughout the United States. His theory is written in detail in a paper: *Portfolio Selection* (1952).
# Summary
I will simulate weights on individual companies within a given portfolio to obtain an understanding on what return to risk is desired by the individual.
I picked 10 or so companies that are spread out in their corresponding Industries such that we have a relatively "low" correlation with each other.
# Companies
### Google | NVIDIA | Facebook
### Wells Fargo | Pfizer | COKE
### Disney | IMAX | Catepillar
### Southwest Airlines
```
import re
from io import StringIO
from datetime import datetime, timedelta
import requests
import pandas as pd
import numpy as np
```
# Obtaining the Data
### Companies of Interest (with their associated ticker)
| Technology | Finance | Health | Consumer | Entertainment | Industrials | Transportation |
| --- | --- | --- |--- | --- | --- | --- |
| (GOOG) Google | (WFC) Wells Fargo | (PFE) Pfizer | (COKE) Coke |(DIS) Disney | (CAT) Catepillar |(LUV) Southwest Airlines|
| (NVDA) NVIDIA | --- | --- | --- | (IMAX) IMAX | --- | --- |
| (FB) Facebook | --- | --- | --- | --- | --- | --- |
```
# Getting Data from 6 years back
# I will use the most recent 1 year to determine how well I would have done if I follow the efficient frontier.
# The market is open 252 times in a given year.
# I will get the adjusted close as my main data.
import pandas_datareader as pdr
from datetime import datetime
def get_historical_Data(tickers):
"""
This function returns a pd dataframe with all of the adjusted closing information
"""
data = pd.DataFrame()
names = list()
for i in tickers:
data = pd.concat([data,pdr.get_data_yahoo(symbols=i, start=datetime(2013, 10, 11), end=datetime(2020, 10, 11)).iloc[:,5]], axis = 1)
names.append(i)
data.columns = names
return data
# The ticker names of the companies that we will be looking at.
ticks = ["GOOG", "NVDA", "FB", "WFC","DIS", "IMAX", "LUV", "PFE", "COKE", "CAT"]
d = get_historical_Data(ticks)
print(d.shape)
# Most Recent Data
d.tail()
# Saving the most recent year data such that we can compare...
# Called dT (DataTest)
dT = d.iloc[d.shape[0] - 252:,:] # Data test
# Update the "Training" or "data full"
d = d.iloc[:d.shape[0] - 252,:] # Data Train for the Simulation
print("Testing Data dimensions: ", dT.shape)
print("Training Data dimensions:", d.shape)
dT # Test
d # Train
```
# Understanding Returns
```
from scipy import stats
expected_returns_a = d.pct_change() # Daily returns from trading day to day...
expected_returns_a.columns = ticks # Setting the Column names
expected_returns_aA = pd.DataFrame(expected_returns_a.mean()*250) # Annualizing the average rate of return
expected_returns_aA = expected_returns_aA.T # Transpose the values
dar = d.pct_change().iloc[1:,:]+1 # dar = portfolio returns for each period (in this case day to day)
# 6 is the number of years I am working with (Note: Remember that earlier I've took out a year for training purposes.)
gar = pd.DataFrame(np.prod(dar)**(1/float(6)) - 1) # Geometric Average Rate of Return
# print(gar)
full_return_annual = (pd.concat([expected_returns_aA.T, gar], axis = 1))
# DO NOTE that Arithmetic Average Return is not usually an appropriate method
# for calculating the average return and telling others...
# Example: Returns are the following (50%, 30%, -50%) on a yearly basis (jan 1st to dec 31st)
# Average: (50 + 30 - 50) / 3 = 10% average rate of return. This is not a great "representation of how well you done"
# Example
# Start with initial value of $ 100 Dollars:
# First year becomes 150.
# Second Year becomes 190.
# Third year becomes 97.5. You LOST money.
# Geometric Average: (also known as the Compounded annual growth rate)
# Using the example from above...
# ((1+ 0.5) * (1 + 0.3) * (0.5))^(1/3) - 1
# ((1.5)*(1.3)*(0.5))^(1/3) - 1
# .9916 - 1
# -0.0084
# or (-0.84) % average ANNUAL rate of return (more accurate gauge as to how well you've done.)
full_return_annual.columns = ["Average Arithmetic Returns", "Average Geometric Returns"]
print("Expected Annual Returns ", expected_returns_aA)
print("dar", dar)
print("Full Annual Return", full_return_annual)
```
# Equations Utilized
## Measuring the Adjusted Risk of Return
Measures the risk adjusted rate of return of a portfolio.
$$
\begin{aligned}
Sharpe Ratio = \frac{R_p - R_f}{\sigma_p}
\end{aligned}
$$
$\sigma_p$ = Standard Deviation of Portfolio \
$R_p$ = Return of Portfolio \
$R_f$ = Return of Risk Free Instrument
\
Rule of Thumb:
Sharpe Ratio < 1 sub-optimal... There is most likely a better option \
Sharpe Ratio > 1 is acceptable \
Sharpe Ratio > 2 is VERY good \
Sharpe Ratio > 3 is EXCELLENT!
# Volatility
$$
\begin{aligned}
\sum_{i=0}^N \sum_{j=0}^N {\sigma_{ij}}{X_i X}
\end{aligned}
$$
$X$ = Weights in Portfolio \
$\sigma_{ij}$ = Variance - Covariance Matrix
# Expected Return
$$
\begin{aligned}
\sum_{i=0}^N X_i \mu_i
\end{aligned}
$$
\
$X$ = Weights in Porfolio \
$\mu_i$ = Arithmetic Average Rate of Return for $i^{th}$ security
```
# Storing lists that retain returns, volatility, and weights of the Simulated portfolios
portfolio_returns = []
portfolio_volatility = []
sharpe_ratio = []
# This is what is going to be randomized
stock_weights = []
# Number of Indiviudal securities that will be a part of the portfolio
num_assets = len(ticks)
# Number of simulated iterations
num_portfolios = 100000
# Getting the covariance matrix
# Gets a percentage change one day to the next
daily_returns = d.pct_change()
# Converting daily returns to annual returns (standardizing to a year)
annual_returns = (daily_returns.mean() * 250) + 1
# Obtaining the covariance of annual
cov_daily = daily_returns.cov() # Covariance
cov_annual = cov_daily*250 # Covariance Annualized
print(annual_returns)
# Setting seed of interpretability
np.random.seed(3)
# Filling in the lists with a simulated return, risk, and a given weight
# num_portfolios
for i in range(num_portfolios):
# Randomly assign weights
weights = np.random.random(num_assets)
# Standardize the weights
weights /= np.sum(weights)
returns = (np.dot(weights, (annual_returns)))
volatility = np.sqrt(np.dot(weights.T, np.dot(cov_annual, weights)))
"""
sharpe ratio: This calculates the risk adjusted return
It suggests that adding assets to a portfolio that have low correlation can decrease portfolio risk without
sacrificing return
"""
sharpe = ((returns-1) / volatility)
sharpe_ratio.append(sharpe)
portfolio_returns.append(returns-1)
portfolio_volatility.append(volatility)
stock_weights.append(weights)
# Storing the portfolio values
portfolio = {'Returns': portfolio_returns,
'Volatility': portfolio_volatility,
'Sharpe Ratio': sharpe_ratio}
# Add an additional entry to the portfolio such that each indivudal weight is incorporated for its corresponding company
for counter,symbol in enumerate(ticks):
portfolio[symbol+' Weight'] = [Weight[counter] for Weight in stock_weights]
# make a nice dataframe of the extended dictionary
df = pd.DataFrame(portfolio)
df
# PLotting the efficient frontier.
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# Finding the Optimal Portfolio
min_volatility = df['Volatility'].min()
max_sharpe = df['Sharpe Ratio'].max()
# use the min, max values to locate and create the two special portfolios
sharpe_portfolio = df.loc[df['Sharpe Ratio'] == max_sharpe]
min_variance_port = df.loc[df['Volatility'] == min_volatility]
# plot frontier, max sharpe & min Volatility values with a scatterplot
plt.style.use('fivethirtyeight')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.scatter(x=sharpe_portfolio['Volatility'], y=sharpe_portfolio['Returns'], c='red', marker='D', s=200)
plt.scatter(x=min_variance_port['Volatility'], y=min_variance_port['Returns'], c='blue', marker='D', s=200 )
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# Additional Details
r_ef = pd.concat([min_variance_port.T,sharpe_portfolio.T], axis = 1)
r_ef.columns = ["Minimum Risk Adjusted Values", "Max Risk Adjusted Values"]
print(r_ef)
```
# If I were to invest 1,000 USD last year... what would I have now?
```
amount_invest = 1000
expected_return = pd.DataFrame(amount_invest * (1+r_ef.iloc[0,:]))
print("----------------------------------------------------------------")
print(" Expected Returns on my Portfolio")
print("----------------------------------------------------------------")
print(expected_return.T)
print("")
print("----------------------------------------------------------------")
print("If I invested", amount_invest,"USD on |", dT.index[0],"| I would have...")
actual_return = (dT.iloc[dT.shape[0]-1,:] - dT.iloc[0,:]) / ( dT.iloc[0,:])
# Multipling the weights to the price at the beginning of the year
beg_price = (dT.iloc[0,:])
end_price = dT.iloc[dT.shape[0]-1,:]
print("----------------------------------------------------------------")
# Weights derived from the Efficient Frontier Portfolio
# Weights for Minimum Risk
w = np.array(r_ef.iloc[3:,0])
percentage_change = (end_price - beg_price)/(beg_price)+1
print("Using the Portfolio Weights for Minimum Risk Return Portfolio")
money_left = sum(w * percentage_change* amount_invest)
print("")
print(" Starting balance $ 1000 : Ending with $ ",round(money_left, 2))
print("")
print("----------------------------------------------------------------")
print("Using the Portfolio Weights Maximized Risk-Return Portfolio")
# Weights for Maxmimum Risk
w1 = np.array(r_ef.iloc[3:,1])
money_left1 = sum(w1 * percentage_change* amount_invest)
print("")
print(" Starting balance $ 1000 : Ending with $ ", round(money_left1,2))
print("")
# Other models to take a look at...
# That try to predict a securities rate of return
# CAPM
# CCAPM
# ICAPM
# Fama French 3 factor, 4 factor, and 5 factor model.
```
| true |
code
| 0.652103 | null | null | null | null |
|
**Copyright 2018 Google LLC.**
Licensed under the Apache License, Version 2.0 (the "License");
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Training a Simple Neural Network, with tensorflow/datasets Data Loading
[](https://colab.research.google.com/github/google/jax/blob/main/docs/notebooks/neural_network_with_tfds_data.ipynb)
_Forked from_ `neural_network_and_data_loading.ipynb`

Let's combine everything we showed in the [quickstart notebook](https://colab.research.google.com/github/google/jax/blob/main/notebooks/quickstart.ipynb) to train a simple neural network. We will first specify and train a simple MLP on MNIST using JAX for the computation. We will use `tensorflow/datasets` data loading API to load images and labels (because it's pretty great, and the world doesn't need yet another data loading library :P).
Of course, you can use JAX with any API that is compatible with NumPy to make specifying the model a bit more plug-and-play. Here, just for explanatory purposes, we won't use any neural network libraries or special APIs for builidng our model.
```
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
```
## Hyperparameters
Let's get a few bookkeeping items out of the way.
```
# A helper function to randomly initialize weights and biases
# for a dense neural network layer
def random_layer_params(m, n, key, scale=1e-2):
w_key, b_key = random.split(key)
return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,))
# Initialize all layers for a fully-connected neural network with sizes "sizes"
def init_network_params(sizes, key):
keys = random.split(key, len(sizes))
return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]
layer_sizes = [784, 512, 512, 10]
param_scale = 0.1
step_size = 0.01
num_epochs = 10
batch_size = 128
n_targets = 10
params = init_network_params(layer_sizes, random.PRNGKey(0))
```
## Auto-batching predictions
Let us first define our prediction function. Note that we're defining this for a _single_ image example. We're going to use JAX's `vmap` function to automatically handle mini-batches, with no performance penalty.
```
from jax.scipy.special import logsumexp
def relu(x):
return jnp.maximum(0, x)
def predict(params, image):
# per-example predictions
activations = image
for w, b in params[:-1]:
outputs = jnp.dot(w, activations) + b
activations = relu(outputs)
final_w, final_b = params[-1]
logits = jnp.dot(final_w, activations) + final_b
return logits - logsumexp(logits)
```
Let's check that our prediction function only works on single images.
```
# This works on single examples
random_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,))
preds = predict(params, random_flattened_image)
print(preds.shape)
# Doesn't work with a batch
random_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28))
try:
preds = predict(params, random_flattened_images)
except TypeError:
print('Invalid shapes!')
# Let's upgrade it to handle batches using `vmap`
# Make a batched version of the `predict` function
batched_predict = vmap(predict, in_axes=(None, 0))
# `batched_predict` has the same call signature as `predict`
batched_preds = batched_predict(params, random_flattened_images)
print(batched_preds.shape)
```
At this point, we have all the ingredients we need to define our neural network and train it. We've built an auto-batched version of `predict`, which we should be able to use in a loss function. We should be able to use `grad` to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use `jit` to speed up everything.
## Utility and loss functions
```
def one_hot(x, k, dtype=jnp.float32):
"""Create a one-hot encoding of x of size k."""
return jnp.array(x[:, None] == jnp.arange(k), dtype)
def accuracy(params, images, targets):
target_class = jnp.argmax(targets, axis=1)
predicted_class = jnp.argmax(batched_predict(params, images), axis=1)
return jnp.mean(predicted_class == target_class)
def loss(params, images, targets):
preds = batched_predict(params, images)
return -jnp.mean(preds * targets)
@jit
def update(params, x, y):
grads = grad(loss)(params, x, y)
return [(w - step_size * dw, b - step_size * db)
for (w, b), (dw, db) in zip(params, grads)]
```
## Data Loading with `tensorflow/datasets`
JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don't include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let's just use them instead of reinventing anything. We'll use the `tensorflow/datasets` data loader.
```
import tensorflow_datasets as tfds
data_dir = '/tmp/tfds'
# Fetch full datasets for evaluation
# tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1)
# You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy
mnist_data, info = tfds.load(name="mnist", batch_size=-1, data_dir=data_dir, with_info=True)
mnist_data = tfds.as_numpy(mnist_data)
train_data, test_data = mnist_data['train'], mnist_data['test']
num_labels = info.features['label'].num_classes
h, w, c = info.features['image'].shape
num_pixels = h * w * c
# Full train set
train_images, train_labels = train_data['image'], train_data['label']
train_images = jnp.reshape(train_images, (len(train_images), num_pixels))
train_labels = one_hot(train_labels, num_labels)
# Full test set
test_images, test_labels = test_data['image'], test_data['label']
test_images = jnp.reshape(test_images, (len(test_images), num_pixels))
test_labels = one_hot(test_labels, num_labels)
print('Train:', train_images.shape, train_labels.shape)
print('Test:', test_images.shape, test_labels.shape)
```
## Training Loop
```
import time
def get_train_batches():
# as_supervised=True gives us the (image, label) as a tuple instead of a dict
ds = tfds.load(name='mnist', split='train', as_supervised=True, data_dir=data_dir)
# You can build up an arbitrary tf.data input pipeline
ds = ds.batch(batch_size).prefetch(1)
# tfds.dataset_as_numpy converts the tf.data.Dataset into an iterable of NumPy arrays
return tfds.as_numpy(ds)
for epoch in range(num_epochs):
start_time = time.time()
for x, y in get_train_batches():
x = jnp.reshape(x, (len(x), num_pixels))
y = one_hot(y, num_labels)
params = update(params, x, y)
epoch_time = time.time() - start_time
train_acc = accuracy(params, train_images, train_labels)
test_acc = accuracy(params, test_images, test_labels)
print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time))
print("Training set accuracy {}".format(train_acc))
print("Test set accuracy {}".format(test_acc))
```
We've now used the whole of the JAX API: `grad` for derivatives, `jit` for speedups and `vmap` for auto-vectorization.
We used NumPy to specify all of our computation, and borrowed the great data loaders from `tensorflow/datasets`, and ran the whole thing on the GPU.
| true |
code
| 0.73325 | null | null | null | null |
|
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Intro" data-toc-modified-id="Intro-1"><span class="toc-item-num">1 </span>Intro</a></span></li><li><span><a href="#Save-and-Restore-Variables" data-toc-modified-id="Save-and-Restore-Variables-2"><span class="toc-item-num">2 </span><a href="https://www.tensorflow.org/programmers_guide/saved_model" target="_blank">Save and Restore Variables</a></a></span></li><li><span><a href="#Save-and-Restore-a-Model" data-toc-modified-id="Save-and-Restore-a-Model-3"><span class="toc-item-num">3 </span><a href="https://www.tensorflow.org/programmers_guide/saved_model" target="_blank">Save and Restore a Model</a></a></span></li><li><span><a href="#Serving-Client" data-toc-modified-id="Serving-Client-4"><span class="toc-item-num">4 </span>Serving Client</a></span></li></ul></div>
# Intro
Notebook revolving around the use and concepts of [Tensorflow](https://www.tensorflow.org/).
```
import os
from os.path import join
import sys
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
%matplotlib notebook
#%matplotlib inline
models_data_folder = "/Users/amartinelli/Documents/models/"
```
# [Save and Restore Variables](https://www.tensorflow.org/programmers_guide/saved_model)
```
# dummy variables
#v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer)
#v2 = tf.get_variablea("v2", shape=[5], initializer=tf.zeros_initializer)
v1 = tf.Variable(tf.constant(0), name='v1')
v2 = tf.Variable(tf.constant(5), name='v2')
# dummy operations
inc_v1 = v1.assign(v1+1)
dec_v2 = v2.assign(v2-1)
# Save variables
# def init op and saver
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
# run some operations and save sessions
with tf.Session() as sess:
sess.run(init_op)
inc_v1.op.run()
dec_v2.op.run()
save_path = saver.save(sess,
join(models_data_folder, 'tmp', "model.ckpt"))
print("Model saved in {}".format(save_path))
# test behavior in new session (need to rerun initializer)
with tf.Session() as sess:
sess.run(init_op)
print(v1.eval())
print(inc_v1.eval())
print(v1.eval())
# Restore Variables
# need to redefine the variable
v1 = tf.Variable(tf.constant(0), name='v1')
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess,
join(models_data_folder, 'tmp', "model.ckpt"))
#now v1 should have the value we previously saved
print(v1.eval())
```
# [Save and Restore a Model](https://www.tensorflow.org/programmers_guide/saved_model)
Uses *SavedModelBuilder* instead of *Saver*. Should this be done only for serving? In what way can I reload a model saved with the former and retrain?
```
# directory where model will be exported
# include version info in model path as required by TF
version = 0
export_dir = join(models_data_folder, "tf_test_models_export", str(version))
# dummy model
x = tf.Variable(tf.constant(0), name='x')
y = tf.Variable(tf.constant(5), name='y')
f = tf.multiply(x, y, name='f')
# save model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#consider difference between eval and run
#see: https://stackoverflow.com/questions/33610685/in-tensorflow-what-is-the-difference-between-session-run-and-tensor-eval
#sess.run(f, feed_dict={x:3.0, y:5.0})
fval = f.eval(feed_dict={x:3.0, y:5.0})
print(fval)
# Init builder
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
# Build info for inputs and outputs tensors
#??Is the key associated with the tensor name?
inputs = {
'x' : tf.saved_model.utils.build_tensor_info(x),
'y' : tf.saved_model.utils.build_tensor_info(y)
}
outputs = {
'f' : tf.saved_model.utils.build_tensor_info(f)
}
# Define signature (set of inputs and outputs for the graph)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs=inputs,
outputs=outputs,
# method used for the inference
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)
)
# Add meta-graph (dataflow graph, variables, assets, and signatures)
# to the builder
builder.add_meta_graph_and_variables(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
# ??
signature_def_map={
'predict' : prediction_signature
},
# ??
#legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
)
# Finally save builder
builder.save()
# Restore model
# redefine target
x = tf.Variable(tf.constant(1), name='x')
y = tf.Variable(tf.constant(5), name='y')
#f = tf.Operation(None, name='f')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#print(f.eval())
mg = tf.saved_model.loader.load(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
export_dir
)
f = tf.get_default_graph().get_operation_by_name("f")
# ??Why session graph keeps getting new operations?
# isn't it clean every time we exit the "with" scope
#print(sess.graph.get_operations())
print(sess.run(f))
```
# Serving Client
Needs
pip install grpcio grpcio-tools
Plus Tensorflow Serving API files.
```
from grpc.beta import implementations
# reference local copy of Tensorflow Serving API Files
sys.path.append(os.path.join(os.getcwd(), *[os.pardir]*2, 'ext_libs'))
import lib.predict_pb2 as predict_pb2
import lib.prediction_service_pb2 as prediction_service_pb2
host='127.0.0.1'
port=9000
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
# build request
request = predict_pb2.PredictRequest()
request.model_spec.name = 'ed' # model name, as given to bazel script
request.model_spec.signature_name = 'predict' # as defined in ModelBuilder
# define inputs
x = 3
y = 4
x_tensor = tf.contrib.util.make_tensor_proto(x, dtype=tf.int32)
y_tensor = tf.contrib.util.make_tensor_proto(y, dtype=tf.int32)
request.inputs['x'].CopyFrom(x_tensor)
request.inputs['y'].CopyFrom(y_tensor)
# call prediction on the server
result = stub.Predict(request, timeout=10.0)
result
```
| true |
code
| 0.514217 | null | null | null | null |
|
## Coding Exercise #0707
### 1. Convolutional Neural Network (color images):
```
import numpy as np
import pandas as pd
# import tensorflow as tf
# from keras.datasets.cifar10 import load_data
import tensorflow.compat.v1 as tf
from tensorflow.keras.datasets.cifar10 import load_data
import matplotlib.pyplot as plt
tf.disable_v2_behavior()
%matplotlib inline
```
#### 1.1. Download the data:
More information about the dataset can be found [here](https://www.cs.toronto.edu/~kriz/cifar.html).
```
(X_train, y_train), (X_test, y_test) = load_data()
n_train_size = X_train.shape[0]
```
#### 1.2. Take a look at the dataset:
```
# Images already reshaped as 32x32.
# 3 Color channels.
# y is not one-hot-encoded yet.
print("Training data X shape: {}".format(X_train.shape))
print("Training data y shape: {}".format(y_train.shape))
print("\n")
print("Testing data X shape: {}".format(X_test.shape))
print("Testing data y shape: {}".format(y_test.shape))
```
Visualization.
```
i_image= 123 # Image index. You can change it at will.
a_single_image= X_train[i_image,:,:,:]
plt.imshow(a_single_image) # Display as a color image.
plt.show()
# Check for the minimum and maximum pixel value.
print("MIN : {}".format(a_single_image.min()))
print("MAX : {}".format(a_single_image.max()))
```
#### 1.3. Data preprocessing:
```
# Scaling.
X_train = X_train/255
X_test = X_test/255
# One-Hot-Encoding.
y = np.concatenate([y_train[:,0],y_test[:,0]],axis=0)
y = np.array(pd.get_dummies(y, drop_first=False)) # drop_frist = False for one-hot-encoding.
y_train = y[:n_train_size,:]
y_test = y[n_train_size:,:]
```
#### 1.4. Define the hyperparameters and placeholders:
```
batch_size = 8
n_epochs = 50001
learn_rate = 0.0001
drop_prob = 0.5 # For the dropout layer.
X_ph = tf.placeholder(tf.float32, [None, 32, 32, 3]) # 'None' means any number of rows (observations or batch_size)
y_ph = tf.placeholder(tf.float32,[None, 10])
drop_prob_ph = tf.placeholder(tf.float32) # The drop probability at the dropout layer is a hyperparameter.
```
#### 1.5. Define the Variables:
The configuration of the first convolution layer is as following:
- Kernel height = 7.
- Kernel width = 7.
- In_chanels = **3 (color)**.
- Out_channels = 32 (number of feature maps).
We need Variables with the folllowing shapes:
- Shape of the weight matrix = [kernel_height, kernel_width, in_channels, out_channels].
- Shape of the bias = [out_channels].
```
# Variables are defined according to the specifications mentioned above.
W1 = tf.Variable(initial_value=tf.random_normal([7,7,3,32], mean=0, stddev=0.1))
b1 = tf.Variable(initial_value=tf.fill([32], 0.1))
```
The configuration of the second convolution layer is as following:
- Kernel height = 7.
- Kernel width = 7.
- In_chanels = 32 (out_channels from the previous convolution layer).
- Out_channels = 64 (number of feature maps).
Again, we need Variables with the folllowing shapes:
- Shape of the weight matrix = [kernel_height, kernel_width, in_channels, out_channels].
- Shape of the bias = [out_channels].
```
# Variables are defined according to the specifications mentioned above.
W2 = tf.Variable(initial_value=tf.random_normal([7,7,32,64], mean=0, stddev=0.1))
b2 = tf.Variable(initial_value=tf.fill([64], 0.1))
```
We do the following considerations for the flattened fully connected layer:
- We will apply convolution twice with padding and there will be no image size reduction.
- We will also apply max pooling twice with stride = 2 (vertically and horizontally).
- At each max pooling with stride = 2, the image size is halved. Thus, **(32/2)/2 = 8** will be the size (vertical and horizontal) of the resulting final image.
- In the previous layer there were 64 output channels (feature maps).
- Considering all these facts, there should be **8x8x64 = 4096** nodes in the flattened layer.
- Finally, we will shrink the output from this layer to 1024.
```
# Variables are defined according to the specifications mentioned above.
W3 = tf.Variable(initial_value=tf.random_normal([4096,1024], mean=0, stddev=0.1))
b3 = tf.Variable(initial_value=tf.fill([1024], 0.1))
```
We do the following considerations for the final output layer:
- There are 1024 nodes to match with the output from the previous layer.
- We should shrink the output once more because there are 10 different labels (digits 0~9).
```
# Variables are defined according to the specifications mentioned above.
W4 = tf.Variable(initial_value=tf.random_normal([1024,10], mean=0, stddev=0.1))
b4 = tf.Variable(initial_value=tf.fill([10], 0.1))
```
#### 1.6. Define the deep learning model (CNN):
Explanation of the arguments:
- padding = 'SAME' to apply a padding. padding = 'VALID' to apply no padding.
- ksize = [1, kernel_height, kernel_width, 1]
- strides = [1, stride_vertical, stride_horizontal,1]
```
# 1st Convolution layer.
y1 = tf.nn.conv2d(X_ph, W1, strides=[1, 1, 1, 1], padding='SAME') + b1
conv1 = tf.nn.relu(y1) # Apply the ReLu activation function.
# 1st Pooling layer.
pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# 2nd Convolution layer.
y2 = tf.nn.conv2d(pool1, W2, strides=[1, 1, 1, 1], padding='SAME') + b2
conv2 = tf.nn.relu(y2) # Apply the ReLu activation function.
# 2nd Pooling layer.
pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Flattened full layer.
conv2_flattened = tf.reshape(pool2, [-1,4096]) # 8x8x64 = 4096.
y3 = tf.matmul(conv2_flattened, W3) + b3
full_layer = tf.nn.relu(y3) # Apply the ReLu activation function.
# Dropout layer.
dropout_layer = tf.nn.dropout(full_layer, rate = drop_prob_ph)
# Output layer.
y_model = tf.matmul(dropout_layer, W4) + b4 # No activation function. Softmax at the output layer is optional.
```
#### 1.7. Define the loss function and the optimizer:
```
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_ph, logits=y_model))
optimizer = tf.train.AdamOptimizer(learning_rate = learn_rate)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
```
#### 1.8. Training and Testing:
```
with tf.Session() as sess:
sess.run(init)
for i in range(n_epochs):
idx_rnd = np.random.choice(range(n_train_size),batch_size,replace=False) # Random sampling w/o replacement for the batch indices.
batch_X, batch_y = X_train[idx_rnd,:,:] , y_train[idx_rnd] # Sample a batch!
my_feed = {X_ph:batch_X, y_ph:batch_y, drop_prob_ph:drop_prob}
sess.run(train, feed_dict = my_feed)
if i % 500 == 0:
correct_predictions = tf.equal(tf.argmax(y_ph, axis=1), tf.argmax(y_model, axis=1)) # In argmax(), axis=1 means horizontal direction.
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32)) # Recast the Boolean as float32 first. Then calculate the mean.
my_feed = {X_ph:X_test, y_ph:y_test, drop_prob_ph:0.0} # No dropout for testing.
accuracy_value = sess.run(accuracy, feed_dict = my_feed)
print("Step = {} , Accuracy = {:5.3f} \n".format(i, accuracy_value))
```
| true |
code
| 0.686843 | null | null | null | null |
|
```
import pandas as pd
import numpy as np
```
# 自由现金流估值法 DCF
有以下四种模型:
1. 零增长模型
2. 不变增长模型
3. 两阶段模型
4. 三阶段模型
不同的是自由现金流的使用和贴现的方式不同。
**计算步骤**:
1. 计算自由现金流并依据相应的方法折现($\star\star\star\star\star$, the most important, this is what the code solves)
2. 计算股权价值= 1.+金融资产+长期股权投资-公司债务
3. 计算少数股东比例
4. 归属于上市公司股东的价值=股权价值$\times$(1-少数股东比例)
5. 每股内在价值=归属于上市公司股东的价值/股本
其中,
- 经营资产自由现金流=公司维持原有生产经营规模前提下的增量现金流入=经营活动现金流量净额-保全性资本支出=经营活动现金流量净额-固定资产折旧-无形资产和长期待摊费用摊销-处置长期资产的损失
- $WACC=k_d\times\frac{D}{D+E}\times(1-t)+k_e\times\frac{E}{D+E}$。其中债务资本成本率=债务资本总额/债务资本平均金额$\times$100%=(财务费用+汇兑收益)/(期初债务资本+期末债务资本)/2;股权资本成本率应该高于同期的国债利率,加上股票投资的风险溢价,我们普遍设置为8%;t为公司实际所得税税率=1-净利润/税前利润。
- 公司债务=有息债务
- 少数股东比例=$\frac{少数股东权益}{股东权益合计}$
- 股本=市值/股价
$$
\begin{aligned}
&零增长模型:V=\frac{FCF}{WACC}\\
&不变增长模型:V=\frac{FCF(1+g)}{WACC-g}\\
&两阶段模型:V=\sum_{t=1}^n\frac{{FCF}_t}{(1+WACC)^t}+\frac{TV}{(1+WACC)^n},\ \ 其中TV=\frac{FCF_n(1+g_2)}{WACC-g_2}\\
&三阶段模型:V=\sum_{t=1}^n\frac{{FCF}_0(1+g_1)}{(1+WACC)^t}+\sum_{t=n+1}^m\frac{{FCF}_n(1+g_2)}{(1+WACC)^t}+\frac{FCF_{n+m}(1+g_3)}{(WACC-g_3)(1+WACC)^{n+m}}\\
\end{aligned}
$$
零增长模型适用于成熟稳定、没有增长的公司,每年的自由现金流也保持在一个稳定的金额水平,类似于永续年金;如果该类公司的自由现金流全部用于发放股利现金,那么其得出的结果与股利贴现模型非常接近。
不变增长模型适用于成熟的公司,未来的自由现金流以非常缓慢的速度增长。
在两阶段模型中,投资者的预期回报WACC至少要高于总体的经济增长率;不变增长率g2通常小于WACC,反之,意味着很长时间以后公司的规模将超过总体经济规模。
在三阶段模型中,假设所有的公司经历三个阶段:成长阶段、过渡阶段和稳定阶段。三个阶段的成长率由高到低,稳定阶段保持较低增长率的不变增长。
```
#=== 变量
file_name = 'sz000977' # 股票代码
time = 4 # 采用最近n期的数据(季)
zero_change = False # 是否为零增长模型
one_change = False # 是否为不变增长模型
two_change = True # 是否为两阶段模型
three_change = False # 是否为三阶段模型
g1, g2, g3 = 0.2, 0.03, 0.01 # 增长速度,分别是后三个模型的g,如果使用不变增长模型,则不需要更改后两个
t1, t2 = np.arange(1, 3), np.arange(1,2) # 某阶段的几年,两阶段与三阶段模型需要,注意最大值减一为实际值
#=== functions
def read_file(file_name):
# 读取股票基本数据
df = pd.read_csv(r'youraddress\%s.csv' % file_name, encoding='GBK', skiprows=1, parse_dates=['交易日期'])
df = df[['股票代码', '股票名称', '交易日期', '总市值', '净利润TTM', '收盘价']]
print(df.tail(5))
# 读取股票财务数据
finance_df = pd.read_csv(r'youraddress\%s.csv' % file_name, parse_dates=['财报日期', '财报发布日期'], skiprows=1, encoding='gbk')
finance_df = finance_df.resample('Q', on='财报日期').first()
del finance_df['财报日期']
finance_df.reset_index(inplace=True)
finance_df.dropna(subset=['财报发布日期'], inplace=True)
finance_df.sort_values(by='财报发布日期', inplace=True)
return df, finance_df
def merge_data(df, finance_df):
add_columns = ['B_货币资金',
'B_交易性金融资产',
'B_衍生金融资产',
'B_应收票据及应收账款',
'B_应收票据',
'B_应收账款',
'B_应收款项融资',
'B_应收利息',
'B_应收股利',
'B_其他应收款',
'B_买入返售金融资产',
'B_发放贷款及垫款',
'B_可供出售金融资产',
'B_持有至到期投资',
'B_长期应收款',
'B_长期股权投资',
'B_投资性房地产',
'B_所有者权益(或股东权益)合计',
'C_经营活动产生的现金流量净额',
'B_短期借款',
'B_交易性金融负债',
'B_应付利息',
'B_应付短期债券',
'B_一年内到期的非流动负债',
'B_长期借款',
'B_应付债券',
'B_租赁负债',
'B_长期应付款(合计)',
'R_财务费用',
'R_汇兑收益',
'R_四、利润总额',
'R_减:所得税费用',
'C_固定资产折旧、油气资产折耗、生产性物资折旧', 'C_无形资产摊销', 'C_长期待摊费用摊销', 'C_处置固定资产、无形资产和其他长期资产的损失',
'B_少数股东权益']
col = ['财报发布日期', '财报日期'] + add_columns
stock_df = pd.merge_asof(df, finance_df[col], left_on='交易日期', right_on='财报日期', direction='backward')
print(stock_df.columns)
return stock_df
def data_been_prepared(now_df, stock_df):
now_df[['股票代码', '股票名称', '交易日期', '总市值', '财报发布日期', '财报日期', '净利润TTM', '收盘价']] = stock_df[['股票代码', '股票名称', '交易日期', '总市值', '财报发布日期', '财报日期', '净利润TTM', '收盘价']]
now_df['金融资产'] = 0
now_df['公司债务'] = 0
for factor1 in ['B_货币资金',
'B_交易性金融资产',
'B_衍生金融资产',
'B_应收票据及应收账款',
'B_应收票据',
'B_应收账款',
'B_应收款项融资',
'B_应收利息',
'B_应收股利',
'B_其他应收款',
'B_买入返售金融资产',
'B_发放贷款及垫款',
'B_可供出售金融资产',
'B_持有至到期投资',
'B_长期应收款',
'B_投资性房地产',
'B_长期股权投资']:
now_df['金融资产'] += stock_df[factor1]
for factor2 in ['B_短期借款',
'B_交易性金融负债',
'B_应付利息',
'B_应付短期债券',
'B_一年内到期的非流动负债',
'B_长期借款',
'B_应付债券',
'B_租赁负债',
'B_长期应付款(合计)']:
now_df['公司债务'] += stock_df[factor2]
now_df['债务资本成本总额'] = stock_df['R_财务费用'] + stock_df['R_汇兑收益']
now_df['经营资产自由现金流'] = stock_df['C_经营活动产生的现金流量净额'] - stock_df['C_固定资产折旧、油气资产折耗、生产性物资折旧'] - stock_df['C_无形资产摊销'] - stock_df['C_长期待摊费用摊销'] - stock_df['C_处置固定资产、无形资产和其他长期资产的损失']
now_df['实际企业所得税税率'] = 1 - ((stock_df['R_四、利润总额'] - stock_df['R_减:所得税费用']) / stock_df['R_四、利润总额'])
now_df['少数股东权益比例'] = stock_df['B_少数股东权益'] / stock_df['B_所有者权益(或股东权益)合计']
now_df['债务占比'] = now_df['公司债务'] / (stock_df['B_所有者权益(或股东权益)合计'] + now_df['公司债务'])
now_df.drop_duplicates(subset=['财报日期'], inplace=True)
now_df.reset_index(inplace=True)
del now_df['index']
print(now_df.tail(10))
return now_df
def cal_WACC(now_df, time):
WACC = (now_df['债务资本成本总额'] / ((now_df['公司债务'] + now_df['公司债务'].shift(time)) / 2) * now_df['债务占比'] * (1-now_df['实际企业所得税税率'])) + (0.09 * (1-now_df['债务占比']))
return WACC.tolist()[-time]
def fcf_discounted(now_df, WACC, time, zero_change, one_change, two_change, three_change, g1, g2, g3, t1, t2):
value = (now_df.loc[: ,'金融资产'].tolist()[-time] - now_df.loc[: ,'公司债务'].tolist()[-time])
if zero_change == True:
FCF = now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] / WACC
if one_change == True:
FCF = (now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * (1+g1)) / (WACC - g1)
if two_change == True:
temp_sum = 0
for _ in t1:
temp = now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * ((1+g1) ** _) / ((1+WACC) ** _)
temp_sum = temp + temp_sum
FCF = ((now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * ((1+g1) ** (t1[-1]-1)) * (1+g2)) / ((WACC-g2)*((1+WACC)**t1[-1]))) + temp_sum
if three_change == True:
temp_sum1, temp_sum2 = 0, 0
for _ in t1:
temp1 = now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * ((1+g1) ** _)
temp = temp1 / ((1+WACC) ** _)
temp_sum1 = temp + temp_sum1
for _ in t2:
temp = temp1 * ((1+g2) ** _) / ((1+WACC) ** (_+t1[-1]))
temp_sum2 = temp + temp_sum2
FCF = (temp1 * ((1+g2) ** t2) * (1+g3)) / ((WACC-g3)*((1+WACC)**(t1[-1]+t2[-1]))) + temp_sum1 + temp_sum2
FCF_plus_value = (FCF + value) * (1 - now_df.loc[: ,'少数股东权益比例'].tolist()[-time])
result = FCF_plus_value / (now_df.loc[: ,'总市值'].tolist()[-time] / now_df.loc[: ,'收盘价'].tolist()[-time]) # 股票内在价值,价值/股数
print('归属于上市公司股东的价值:', FCF_plus_value, '\n', '股票内在价值:', result)
return FCF_plus_value, result
def statistics(now_df, time):
PE1 = now_df.loc[: ,'总市值'].tolist()[-time] / now_df.loc[: ,'净利润TTM'].tolist()[-time] # 市盈率
PE2 = FCF_plus_value / now_df.loc[: ,'净利润TTM'].tolist()[-time]
print('原PE: ', PE1, '估值PE:', PE2)
for time_n in [1, 2, 3, time, time+1, time+2, time+3]:
print('前%s个季度股票价格' % (time_n-1), now_df.loc[: ,'收盘价'].tolist()[-time_n]) # 股票收盘价
```
### 主程序
用来计算股票内在价值
```
#=== main
df, finance_df = read_file(file_name)
stock_df = merge_data(df, finance_df)
now_df = pd.DataFrame()
now_df = data_been_prepared(now_df, stock_df)
WACC = cal_WACC(now_df, time)
print('=============================')
print('WACC is ', WACC)
FCF_plus_value, result = fcf_discounted(now_df, WACC, time, zero_change, one_change, two_change, three_change, g1, g2, g3, t1, t2)
statistics(now_df, time)
```
### 循环测试
以下是进行g变量等情况的测试
```
a = []
for b in np.arange(0.01, 0.1, 0.01):
#=== main
g1 = b
df, finance_df = read_file(file_name)
stock_df = merge_data(df, finance_df)
now_df = pd.DataFrame()
now_df = data_been_prepared(now_df, stock_df)
print('=============================')
FCF_plus_value, result = fcf_discounted(now_df, WACC, time, zero_change, one_change, two_change, three_change, g1, g2, g3)
statistics(now_df, time)
a.append(result)
a.append(b)
print(a)
# pd.set_option('display.max_columns', 100)
finance_df = pd.read_csv(r'C:\Users\xueli\python_file\stock_quant\sina_financial_data\%s.csv' % file_name, parse_dates=['财报日期', '财报发布日期'], skiprows=1, encoding='gbk')
finance_df.columns.values.tolist()
```
| true |
code
| 0.267193 | null | null | null | null |
|
< [Classes](PythonIntroCh7.ipynb) | [Contents](PythonIntro.ipynb) | [File I/O](PythonIntroCh9.ipynb) >
# 8. Modules
## 8.1 Introduction
Last lesson we covered the killer topic of Classes. As you can remember, classes are neat combinations of variables and functions in a nice, neat package. Programming lingo calls this feature encapsulation, but regardless of what it is called, it's a really cool feature for keeping things together so the code can be used in many instances in lots of places. Of course, you've got to ask, "how do I get my classes to many places, in many programs?". The answer is to put them into a module, to be imported into other programs.
## 8.2 Module? What's a Module?
A module is a Python file that (generally) has only definitions of variables, functions, and classes. For example, a module might look like this, which we store in a file `moduletest.py`:
```Python
### EXAMPLE PYTHON MODULE
# Define some variables:
numberone = 1
ageofqueen = 78
# define some functions
def printhello():
print("hello")
def timesfour(input):
print(eval(input) * 4)
# define a class
class Piano:
def __init__(self):
self.type = input("What type of piano? ")
self.height = input("What height (in feet)? ")
self.price = input("How much did it cost? ")
self.age = input("How old is it (in years)? ")
def printdetails(self):
print("This piano is a/an " + self.height + " foot", end=" ")
print(self.type, "piano, " + self.age, "years old and costing\
" + self.price + " dollars.")
```
As you see, a module looks pretty much like your normal Python program.
So what do we do with a module? We `import` bits of it (or all of it) into other programs.
To import all the variables, functions and classes from `moduletest.py` into another program you are writing, we use the `import` operator. For example, to import `moduletest.py` into your main program (`mainprogram.py`), you would have this:
```Python
### mainprogam.py
### IMPORTS ANOTHER MODULE
import moduletest
```
This assumes that the module is in the same directory as `mainprogram.py`, or is a default module that comes with Python. You leave out the `.py` at the end of the file name - it is ignored. You normally put all `import` statements at the beginning of the Python file, but technically they can be anywhere. In order to use the items in the module in your main program, you use the following:
```Python
### USING AN IMPORTED MODULE
# Use the form modulename.itemname
# Examples:
print(moduletest.ageofqueen)
cfcpiano = moduletest.Piano()
cfcpiano.printdetails()
```
As you see, the modules that you import act very much like the classes we looked at last lesson - anything inside them must be preceded with `modulename.` for it to work.
## 8.3 More module thingummyjigs (in lack of a better title)
Wish you could get rid of the `modulename.` part that you have to put before every item you use from a module? No? Never? Well, I'll teach it to you anyway.
One way to avoid this hassle is to import only the wanted objects from the module. To do this, you use the `from` operator. You use it in the form of `from modulename import itemname`. Here is an example:
```Python
### IMPORT ITEMS DIRECTLY INTO YOUR PROGRAM
# import them
from moduletest import ageofqueen
from moduletest import printhello
# now try using them
print(ageofqueen)
printhello()
```
What is the point of this? Well, maybe you could use it to make your code a little more readable. If we get into heaps of modules inside modules, it could also remove that extra layer of crypticness.
If you wanted to, you could import everything from a module in this way by using `from modulename import *`. Of course, this can be troublesome if there are objects in your program with the same name as some items in the module. With large modules, this can easily happen, and can cause many a headache. A better way to do this would be to import a module in the normal way (without the `from` operator) and then assign items to a local name:
```Python
### ASSIGNING ITEMS TO A LOCAL NAME
# Assigning to a local name
timesfour = moduletest.timesfour
# Using the local name
print(timesfour(565))
```
This way, you can remove some crypticness, AND have all of the items from a certain module.
A final handy way to import modules is with an alias. Maybe you want to change a name because you've already used the same name for something else in your program, another module you imported uses the same name, or maybe you want to abbreviate a longer name that you use a lot. We can then use the `as` operator. That looks like this:
```Python
### IMPORT A MODULE WITH AN ALIAS
# import module
import moduletest as mt
# use module
print(mt.age)
cfcpiano = mt.Piano()
cfcpiano.printdetails()
```
## 8.4 Conclusion
That's it! A very simple lesson, but now you can organise your programs very neatly. In fact, now it is incredibly easy to make programs that can grow in complexity without ending up with one cryptic file that is full of bugs.
Modules are great for importing code. Next lesson, we learn about file input and output, and the saving of information inside classes, to be retrieved later. Will be great!
< [Classes](PythonIntroCh7.ipynb) | [Contents](PythonIntro.ipynb) | [File I/O](PythonIntroCh9.ipynb) >
| true |
code
| 0.544075 | null | null | null | null |
|
# Functions
If you find yourself doing the same thing over and over again in your code, it might be time to write a function.
Functions are blocks of reusable code -- little boxes that (usually) take inputs and return outputs. In Excel, `=SUM()` is a function. `print()` is one of Python's built-in function.
You can also _define your own functions_. This can save you some typing, and it will help separate your code into logical, easy-to-read pieces.
### Syntax
Functions start with the `def` keyword -- short for _define_, because you're defining a function -- then the name of the function, then parentheses (sometimes with the names of any `arguments` your function requires inside the parentheses) and then a colon. The function's code sits inside an indented block immediately below that line. In most cases, a function will `return` a value at the end.
Here is a function that takes a number and returns that number multiplied by 10:
```
def times_ten(number):
return number * 10
```
The `number` variable is just a placeholder for the values we're going to hand the function as input. We could have called that argument name "banana" and things would be just fine, though it would be confusing for people reading your code.
### Calling a function
By itself, a function doesn't do anything. We have built a tiny machine to multiply a number by 10. But it's just sitting on the workshop bench, waiting for us to use it.
Let's use it.
```
two_times_10 = times_ten(2)
print(two_times_10)
```
### Arguments
Functions can accept _positional_ arguments or _keyword_ arguments.
If your function uses _positional_ arguments, the order in which you pass arguments to the function matters. Here is a function that prints out a message based on its input (a person's name and their hometown).
```
def greet(name, hometown):
return f'Hello, {name} from {hometown}!'
```
Now let's call it.
```
print(greet('Cody', 'Pavillion, WY'))
```
If we change the order of the arguments, nonsense ensues.
```
print(greet('Pavillion, WY', 'Cody'))
```
Using _keyword_ arguments requires us to specify what value belongs to what argument, and it allows us to set a default value for the argument -- values that the function will use if you fail to pass any arguments when you call it. We could rewrite our function like this:
```
def greet(name='Cody', hometown='Pavillion, WY'):
return f'Hello, {name} from {hometown}!'
```
And now it doesn't matter what order we pass in the arguments, because we're defining the keyword that they belong to:
```
print(greet(hometown='Pittsburgh, PA', name='Jacob'))
```
What happens if we call the `greet()` function without any arguments at all, now? It'll use the default arguments.
```
print(greet())
```
### Lambda expressions
Sometimes, you'll see code that looks like this:
```python
df['new_column'] = df['old_column'].apply(lambda x: x[0])
```
That stuff inside the `apply()` parentheses? That's called a [_lambda expression_](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions), a time-saving way to turn loose a simple function on some values without having to write out a function with `def`. (It's a Python thing, not specific to pandas, but for our purposes that's probably where you'll see them most often.)
This code is equivalent but takes longer to write:
```python
def take_first_char(value):
return value[0]
df['new_column'] = df['old_column'].apply(take_first_char)
```
### More resources
- [TutorialsPoint post on functions](https://www.tutorialspoint.com/python/python_functions.htm)
- [LearnPython tutorial](https://www.learnpython.org/en/Functions)
- [Software Carpentry tutorial](https://swcarpentry.github.io/python-novice-inflammation/06-func/)
- [Hitchhiker's Guide to Python: Function Arguments](http://docs.python-guide.org/en/latest/writing/style/#function-arguments)
| true |
code
| 0.355887 | null | null | null | null |
|
# Convolutional Neural Networks with Tensorflow
"Deep Learning" is a general term that usually refers to the use of neural networks with multiple layers that synthesize the way the human brain learns and makes decisions. A convolutional neural network is a kind of neural network that extracts *features* from matrices of numeric values (often images) by convolving multiple filters over the matrix values to apply weights and identify patterns, such as edges, corners, and so on in an image. The numeric representations of these patterns are then passed to a fully-connected neural network layer to map the features to specific classes.
## Building a CNN
There are several commonly used frameworks for creating CNNs. In this notebook, we'll build a simple example CNN using Tensorflow. The example is a classification model that can classify an image as a circle, a triangle, or a square.
### Import framework
First, let's import the Tensorflow libraries we'll need.
```
import tensorflow
from tensorflow import keras
print('TensorFlow version:',tensorflow.__version__)
print('Keras version:',keras.__version__)
```
### Preparing the Data
Before we can train the model, we need to prepare the data. We'll divide the feature values by 255 to normalize them as floating point values between 0 and 1, and we'll split the data so that we can use 70% of it to train the model, and hold back 30% to validate it. When loading the data, the data generator will assing "hot-encoded" numeric labels to indicate which class each image belongs to based on the subfolders in which the data is stored. In this case, there are three subfolders - *circle*, *square*, and *triangle*, so the labels will consist of three *0* or *1* values indicating which of these classes is associated with the image - for example the label [0 1 0] indicates that the image belongs to the second class (*square*).
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
data_folder = 'data/shapes'
img_size = (128, 128)
batch_size = 30
print("Getting Data...")
datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values
validation_split=0.3) # hold back 30% of the images for validation
print("Preparing training dataset...")
train_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='training') # set as training data
print("Preparing validation dataset...")
validation_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='validation') # set as validation data
classnames = list(train_generator.class_indices.keys())
print("class names: ", classnames)
```
### Defining the CNN
Now we're ready to create our model. This involves defining the layers for our CNN, and compiling them for multi-class classification.
```
# Define a CNN classifier network
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
# Define the model as a sequence of layers
model = Sequential()
# The input layer accepts an image and applies a convolution that uses 32 6x6 filters and a rectified linear unit activation function
model.add(Conv2D(32, (6, 6), input_shape=train_generator.image_shape, activation='relu'))
# Next we;ll add a max pooling layer with a 2x2 patch
model.add(MaxPooling2D(pool_size=(2,2)))
# We can add as many layers as we think necessary - here we'll add another convolution, max pooling, and dropout layer
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# And another set
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# A dropout layer randomly drops some nodes to reduce inter-dependencies (which can cause over-fitting)
model.add(Dropout(0.2))
# Now we'll flatten the feature maps and generate an output layer with a predicted probability for each class
model.add(Flatten())
model.add(Dense(train_generator.num_classes, activation='sigmoid'))
# With the layers defined, we can now compile the model for categorical (multi-class) classification
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(model.summary())
```
### Training the Model
With the layers of the CNN defined, we're ready to train the model using our image data. In the example below, we use 5 iterations (*epochs*) to train the model in 30-image batches, holding back 30% of the data for validation. After each epoch, the loss function measures the error (*loss*) in the model and adjusts the weights (which were randomly generated for the first iteration) to try to improve accuracy.
> **Note**: We're only using 5 epochs to minimze the training time for this simple example. A real-world CNN is usually trained over more epochs than this. CNN model training is processor-intensive, involving a lot of matrix and vector-based operations; so it's recommended to perform this on a system that can leverage GPUs, which are optimized for these kinds of calculation. This will take a while to complete on a CPU-based system - status will be displayed as the training progresses.
```
# Train the model over 5 epochs using 30-image batches and using the validation holdout dataset for validation
num_epochs = 5
history = model.fit(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = num_epochs)
```
### View the Loss History
We tracked average training and validation loss history for each epoch. We can plot these to verify that loss reduced as the model was trained, and to detect *overfitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase.
```
%matplotlib inline
from matplotlib import pyplot as plt
epoch_nums = range(1,num_epochs+1)
training_loss = history.history["loss"]
validation_loss = history.history["val_loss"]
plt.plot(epoch_nums, training_loss)
plt.plot(epoch_nums, validation_loss)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()
```
### Evaluate Model Performance
We can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class.
```
# Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
print("Generating predictions from validation data...")
# Get the image and label arrays for the first batch of validation data
x_test = validation_generator[0][0]
y_test = validation_generator[0][1]
# Use the moedl to predict the class
class_probabilities = model.predict(x_test)
# The model returns a probability value for each class
# The one with the highest probability is the predicted class
predictions = np.argmax(class_probabilities, axis=1)
# The actual labels are hot encoded (e.g. [0 1 0], so get the one with the value 1
true_labels = np.argmax(y_test, axis=1)
# Plot the confusion matrix
cm = confusion_matrix(true_labels, predictions)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(len(classnames))
plt.xticks(tick_marks, classnames, rotation=85)
plt.yticks(tick_marks, classnames)
plt.xlabel("Predicted Shape")
plt.ylabel("True Shape")
plt.show()
```
### Using the Trained Model
Now that we've trained the model, we can use it to predict the class of a new image.
```
from tensorflow.keras import models
from random import randint
import os
%matplotlib inline
# Function to create a random image (of a square, circle, or triangle)
def create_image (size, shape):
from random import randint
import numpy as np
from PIL import Image, ImageDraw
xy1 = randint(10,40)
xy2 = randint(60,100)
col = (randint(0,200), randint(0,200), randint(0,200))
img = Image.new("RGB", size, (255, 255, 255))
draw = ImageDraw.Draw(img)
if shape == 'circle':
draw.ellipse([(xy1,xy1), (xy2,xy2)], fill=col)
elif shape == 'triangle':
draw.polygon([(xy1,xy1), (xy2,xy2), (xy2,xy1)], fill=col)
else: # square
draw.rectangle([(xy1,xy1), (xy2,xy2)], fill=col)
del draw
return np.array(img)
# Save the trained model
modelFileName = 'models/shape_classifier.h5'
model.save(modelFileName)
del model # deletes the existing model variable
# Create a random test image
classnames = os.listdir(os.path.join('data', 'shapes'))
classnames.sort()
img = create_image ((128,128), classnames[randint(0, len(classnames)-1)])
plt.axis('off')
plt.imshow(img)
# The model expects a batch of images as input, so we'll create an array of 1 image
imgfeatures = img.reshape(1, img.shape[0], img.shape[1], img.shape[2])
# We need to format the input to match the training data
# The generator loaded the values as floating point numbers
# and normalized the pixel values, so...
imgfeatures = imgfeatures.astype('float32')
imgfeatures /= 255
# Use the classifier to predict the class
model = models.load_model(modelFileName) # loads the saved model
class_probabilities = model.predict(imgfeatures)
# Find the class predictions with the highest predicted probability
class_idx = np.argmax(class_probabilities, axis=1)
print (classnames[int(class_idx[0])])
```
In this notebook, you used Tensorflow to train an image classification model based on a convolutional neural network.
| true |
code
| 0.759694 | null | null | null | null |
|
### DCGANs `MNIST` dataset.
```
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Reshape, Conv2DTranspose, MaxPooling2D, UpSampling2D, LeakyReLU
from tensorflow.keras.activations import relu
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
from packaging.version import parse as parse_version
```
### Loading the `mnist` dataset.
```
(ds_train, ds_test_), ds_info = tfds.load('mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True)
batch_size = 256
def preprocess(image, label):
image = tf.cast(image, tf.float32)
image = image/255.
return image, image
ds_train = ds_train.map(preprocess)
ds_train = ds_train.cache() # put dataset into memory
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_test = ds_test_.map(preprocess).batch(batch_size).cache().prefetch(batch_size)
# return label for testing
def preprocess_with_label(image, label):
image = tf.cast(image, tf.float32)
image = tf.math.round(image/255.)
return image, label
ds_test_label = ds_test_.map(preprocess_with_label).batch(1000)
def Encoder(z_dim):
inputs = layers.Input(shape=[28,28,1])
x = inputs
x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x)
x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x)
x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = Flatten()(x)
out = Dense(z_dim)(x)
return Model(inputs=inputs, outputs=out, name='encoder')
def Decoder(z_dim):
inputs = layers.Input(shape=[z_dim])
x = inputs
x = Dense(7*7*64, activation='relu')(x)
x = Reshape((7,7,64))(x)
x = Conv2D(filters=64, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = UpSampling2D((2,2))(x)
x = Conv2D(filters=32, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
x = UpSampling2D((2,2))(x)
out = Conv2D(filters=1, kernel_size=(3,3), strides=1, padding='same', activation='sigmoid')(x)
#return out
return Model(inputs=inputs, outputs=out, name='decoder')
class Autoencoder:
def __init__(self, z_dim):
self.encoder = Encoder(z_dim)
self.decoder = Decoder(z_dim)
model_input = self.encoder.input
model_output = self.decoder(self.encoder.output)
self.model = Model(model_input, model_output)
autoencoder = Autoencoder(z_dim=10)
model_path = "./models/autoencoder.h5"
os.makedirs("./models", exist_ok=True)
checkpoint = ModelCheckpoint(model_path,
monitor= "val_loss",
verbose=1,
save_best_only=True,
mode= "auto",
save_weights_only = False)
early = EarlyStopping(monitor= "val_loss",
mode= "auto",
patience = 5)
callbacks_list = [checkpoint, early]
autoencoder.model.compile(
loss = "mse",
optimizer=tf.keras.optimizers.RMSprop(learning_rate=3e-4))
#metrics=[tf.keras.losses.BinaryCrossentropy()])
autoencoder.model.fit(ds_train, validation_data=ds_test,
epochs = 100, callbacks = callbacks_list)
images, labels = next(iter(ds_test))
autoencoder.model = load_model(model_path)
outputs = autoencoder.model.predict(images)
# Display
grid_col = 10
grid_row = 2
f, axarr = plt.subplots(grid_row, grid_col, figsize=(grid_col*1.1, grid_row))
i = 0
for row in range(0, grid_row, 2):
for col in range(grid_col):
axarr[row,col].imshow(images[i,:,:,0], cmap='gray')
axarr[row,col].axis('off')
axarr[row+1,col].imshow(outputs[i,:,:,0], cmap='gray')
axarr[row+1,col].axis('off')
i += 1
f.tight_layout(0.1, h_pad=0.2, w_pad=0.1)
plt.show()
```
| true |
code
| 0.84607 | null | null | null | null |
|
# Random Forests - Redux
From Fastai ML1 [Lesson 1 Intro to Random Forests](https://github.com/fastai/fastai/blob/master/courses/ml1/lesson1-rf.ipynb)
This notebook turned into a redux of my [first RF Code Along](https://github.com/WNoxchi/Kaukasos/blob/master/FAML1/Lesson1-RandomForests.ipynb) with notes.
---
## 1 Imports
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
PATH = "../../data/competitions/bluebook-for-bulldozers/"
!ls {PATH}
```
## 2. Data
`low_memory=False` tells Pandas to read more of the file to decide what the types are.
`parse_dates=[...]` is used for any columns that contain dates.
```
df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False, parse_dates=['saledate'])
```
Entering a DataFrame to display it will truncate it if it's too long.
This function sets the truncation threshold to 1000 rows & cols.
```
def display_all(df):
with pd.option_context("display.max_rows", 1000):
with pd.option_context("display.max_columns", 1000):
display(df)
```
`df_raw.tail()` will show the last few rows of the DataFrame. By default it shows the
cols at top and rows on side. There're a lot of cols, so using `.transpose()`
displays the table on its side.
```
# display_all(df_raw.tail().transpose())
# display_all(df_raw.describe(include='all').transpose())
# df_raw.head()
```
[RMSLE](https://www.kaggle.com/c/bluebook-for-bulldozers#evaluation) is used in the Kaggle competition. So by taking the log of all sale prices, we can just use RMSE later to calculate our loss. RMSLE: $Σ\big(($log(prediction) - log(actual)$)^2\big)$ : this means ratios not absolutes.
Here we also replace a column w/ a new column:
```
df_raw.SalePrice = np.log(df_raw.SalePrice)
```
### 2.2.2 Initial Processing
A Random Forest is smth of a universal machine learning technique. Could be a category or a continous variable - and can predict w/ cols of almost any kind (pixel data, zip codes, etc). RFs genrly don't overfit (prevention: easy). RFs don't genrly req validation sets, & can tell you how well they generalize - even w/ only 1 dataset. RFs have few if any statistical assumptions of your data, & req v.few pieces of feature engineering.
`model.fit(`__`Independant Variables`__`, `__`Dependent Variables`__`)`
Indep: used to predict; Dep: predicted. `pandas.DataFrame.drop(..)` returns a new DataFrame w/ a list of rows/cols removed. So we use everything but the SalePrice to predict the SalePrice.
```
model = RandomForestRegressor(n_jobs=-1) # n_jobs: number of cores to use. -1 ==> all
model.fit(df_raw.drop('SalePrice', axis=1), df_raw.SalePrice)
```
This dataset contains a mix of **continuous** and __categorical__ variables. Most ML models (incl. RFs) req numbers -- so we need to convert all our cols to numbers.
`sklearn.ensemble.RandomForestRegressor`: predict __continuous__ variables
`sklearn.ensemble.RandomForestClassifier`: predict __categorical__ variables
---
One issue is `saledate` was parsed as a date $ \longrightarrow $ as a number. But if we look at it, it isn't a number, it's a `datetime64` -- which is __not__ a number. So we need to do our first bit of feature engineering.
```
df_raw.saledate[:5]
```
Inside `fastai.structured` is a function called `add_datepart`, which we'll use to fix this.
__Overview of `add_datepart`:__
1. We pass in a dataframe and a field (in this case `'saledate'`) to `add_datepart(df, fldname)`. We can't do `df.fieldname` because that'd return a field called 'fieldname'. So `df[fldname]` is how we grab a column when that column name is stored in the variable `fldname`. This gives us the field itself, the `pd.Series`.
2. `add_datepart` then goes through a list of date attribute strings ('Year', 'Month', 'Dayofyear', etc) and builds new columns by looking them up in `fld`'s datetime attributes (`fld.dt`).
3. It finally drops the original `fldname` column (`'saledate'` here) because it isn't numerical.
---
***NOTE***: `'saledate'` is a date type because we told Pandas to make it such via `parse_dates=["saledate"]`. That's why it has the relevant datetime attributes.
```
add_datepart(df_raw, 'saledate')
df_raw.saleYear.head()
```
Now the datatype for `'saledate'` is numerical (`int64`). If we check the columns of the DataFrame we'll see the new ones added by `add_datepart`:
```
df_raw.columns
```
This isn't enough. One more bit of feature engineering is needed: there are strings in the dataset (`'Low'`, `'High'`) etc. FastAI has function to automatically create categorical variables for all strings - by creating a column (backend) mapping integers to strings.
FastAI also has a `apply_cats` function to preserve training-set category mappings for validation & test set use.
```
df_raw.head()
train_cats(df_raw)
```
Now we can access categorical variables as `.cat`attributes just as we could with `.dt` for datetime:
```
df_raw.UsageBand.cat.categories
```
'High', 'Low', 'Medium' in `UsageBand` will be seen by the RF as cats `0`, `1`, `2`. It'll form a split first on either `0` vs `1, 2`, or `2` vs `0, 1`. That translates to 'High' vs 'Low' & 'Medium' or 'Medium' vs 'High' & 'Low'. That's a bit odd, and though the DT can get to a correct split regardless, by using a sensible ordering we can ensure it gets there in fewer splits - thus improving our model.
So we reorder 'High', 'Low', 'Medium' st. they're ordered wrt the category numbers, ie: so that any split starts by comparing 'High' and 'Low':
'High','Medium','Low' $\longrightarrow$ 0, 1, 2
`ordered=True` preserved supplied order, `inplace=True` changes the DataFrame in place instead of returning a new one.
```
df_raw.UsageBand.cat.set_categories(['High','Medium','Low'], ordered=True, inplace=True)
print(df_raw.UsageBand[100:110])
print(df_raw.UsageBand.cat.codes[100:110])
```
### 2.2.3 Preprocessing
We still have a number of Null values. Here we display the fraction of Null values in each category:
```
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
```
First we'll save our dataframe to disk in feather format since we have a good starting point.
```
os.makedirs(PATH + 'tmp', exist_ok=True)
df_raw.to_feather(PATH + 'tmp/raw')
```
Now we want to replace the string categories with their numeric codes, handle missing continous values, and pull out the dependant variable (`SalePrice`) into a separate variable. The `fastai.structured.proc_df` is what we'll use to do this.
---
**Overview of `proc_df`:**
`df:` DataFrame | `y_fld`: name of dependent variable
• Makes copy of DataFrame. • Grabs y values. • Drops DepVar from DataFrame. • Then fixes missing via `fastai.structured.fix_missing`.
>**Overview of `fix_missing`:**
>
>• Check that the column has missing values (`pd.isnull(col).sum() != 0`). • Create new column with same name as original + '_na' that's a boolean column w/ **1** any time a value is missing, **0** otherwise. • Then replace all Null values with the columns median.
>
>ie: All NaNs replaced by col's median. New col added keeping track of NaNs.
That is done for numeric variables (cols) -- Pandas automatically handles categorical variables by setting them to `-1` if missing.
• Then call `fastai.structured.numericalize`.
>**Overview of `numericalize`:**
>
>• If column is **Not** numeric and **is** a categorical type: replace column w/ it's category codes (integers) + 1.
---
```
df, y, nans = proc_df(df_raw, 'SalePrice')
```
'SalePrice' is now absent from the DataFrame's columns, and all columns with a non-zero value for null-fractions have corresponding '_na' columns.
```
df.columns
```
If we check the DataFrame, we see that everything is now a number:
```
df.head()
```
Now we have something we can pass into a Random-Forest Regressor.
```
model = RandomForestRegressor(n_jobs=-1)
model.fit(df, y)
model.score(df, y)
```
***NOTE***: Random Forests are *trivially* parallelizable, meaning computation time more/less linearly scales (negatively) with number of CPUs.
The score is the R$^2$ value. Range is < 1. 1 is perfect. If your R$^2$ score is < 0 your model is worse than predicting the mean. [FastAI ML1 L2 bit on R2](https://youtu.be/blyXCk4sgEg?t=718). **Gist of R$^2$:** *the ratio between how good your model is (RMSE) vs. how good is the naïve mean model (RMSE)*.
We'll create a simple validation set to test this. The dataset is sorted by date, so the most recent `n` rows will make up the validation set.
```
def split_vals(a, n): return a[:n].copy(), a[n:].copy()
n_valid = 12000 # same as Kaggle's test set size # 12000 rows for val set
n_trn = len(df) - n_valid # all else in trn set
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df, n_trn)
y_train, y_valid = split_vals(y, n_trn)
X_train.shape, y_train.shape, X_valid.shape
```
## 3. Random Forests
### 3.1 Base Model
Now we'll run our model again, but with the separate training and validation sets:
```
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
model = RandomForestRegressor(n_jobs=-1)
%time model.fit(X_train, y_train)
print_score(model)
```
There was some overfitting going on -- but this 0.252 loss gets into the top 25% of the Kaggle public leaderboard.
---
[Fast.ai ML1 L2](https://youtu.be/blyXCk4sgEg?t=1114) p.much picks up from here.
Since the data/competition is predicted time-series data -- you want your validation set to reflect that by being a range of consecutive dates (specifically some tail slice of the dataset).
### 3.2 Speeding things up
One way to speed up iteration time for model development is to use the `subset` parameter in `proc_df`: which returns only a subset of the data to work on. This returns a randomly sampled subset of the data.
We need to make sure our train-subset doesn't overlap with our validation set. We also want to use our original val set, and **not** overwrite it, so of the 30k subset, we set the first 20k (this may overlap a bit..) to be training, and throw the rest away.
* Create `df_trn`, `y_trn` from a random 30k subset of `df_raw`.
* Set `X_train`, `y_train` to be the first 20k rows of `df_trn`, `y_trn`.
```
df_trn, y_trn, nans = proc_df(df_raw, 'SalePrice', subset=30000)
X_train, _ = split_vals(df_trn, 20000)
y_train, _ = split_vals(y_trn, 20000)
model = RandomForestRegressor(n_jobs=-1) ## initialize Model
%time model.fit(X_train, y_train) ## train Model
print_score(model) ## run predictions - still using orig valset
```
Loss Train, | Loss Valid, | R2 Train, | R2 Loss
### 3.3 Single Tree
Scikit-Learn calls trees estimators. `max_depth` is depth of splits. `bootstrap` controls random:on/off for Random Forest.
```
# A small Deterministic Decision Tree
model = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1)
model.fit(X_train, y_train)
print_score(model)
```
`fastai.structured.draw_tree` lets us visualize Decision Trees. `model.estimators_[0]` returns the 1st estimator from an array.
```
# model.estimators_
draw_tree(model.estimators_[0], df_trn, precision=3)
```
We have 20k samples at the start of the this tree - because that's what we made the training set as when we split our data.
Looking at first node: in our whole dataset (X_train) there're 20k rows, the average sale price is ~10.1, and if we built a model where we used that average all the time: then our MSE would be 0.452. Iow that's the denominator of an R2. This 1st node is the most basic model: a tree with zero splits - just predict the average.
The best single split the RF is able to make is based on whether the Coupler_System is ≤ 0.5 (True / False). If it does that, the MSE of Coupler_System > 0.5 (False) goes to 0.114: a large improvement. In the other group: Coupler_System ≤ 0.5 (True) improves slightly to 0.397. The False group is a small fraction: ~1,700/20,000.
---
**How to find the best possible split with a Single Decision Tree?**:
* **For each** categorical variable:
* **For each** value of that variable:
* **Find** the split with the Minimum weighted-average MSE
* **Return** the categorry:value split w/ Minimum weighted-average MSE
Equivalent to this is to take the MSE of a hypothetical model where all values of a binary split are set to their decisions average.
---
Now we can improve this Decision Tree by setting `depth=None` to continue until each leaf node has only one decision possible for it. If we do that (surprise) we get a model that perfectly overfits our data. Our validation loss/error is not 1, but is better than our shallow tree.
```
model = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1)
model.fit(X_train, y_train)
print_score(model)
```
### 3.4 Bagging
#### 3.4.1 Intro to Bagging
We can improve these D.Trees by making forests of them. We create forests with a statistical technique called 'bagging'. Any kind of model can be bagged. A Random Forest is a way of bagging trees.
What if we created `N` different models, each of which was only somewhat predictive, but the models weren't at all corelated with eachother. That means the `N` models would had to have found different insights into the relationships in the data. If you took the average of those `N` models, you're effectively bringing in the insights from each of them. This is Ensembling.
What if we made a bunch of these big, deep, strongly-overfit trees, but each one only gets a random 1/10th of the data. So each tree will be perfect on that subset but bad at the rest. So each of the trees will be better than nothing, and all overfit in different ways on different things because they use different random samples.
So they all have errors, but the errors are random. The average of a bunch of random errors is **Zero**.
So if we take the average of all these trees (ea. of which trained on a dfnt rand subset) the errors will average out to zero and what's left is the true relationship. *That* is a **Random Forest**. [Lecture 2](https://youtu.be/blyXCk4sgEg?t=2971)
---
1. Grab random subset of data.
2. Build a Decision Tree on it.
3. Put that D.Tree aside and repeat `N` times
4. For each D.Tree make predictions by running test data through tree to get to leaf node
5. Take average in that leaf node $\forall$ the trees
6. Average them all together.
To do that we call `RandomForestRegressor`. An estimator (specfd by `n_estimators`) is a D.Tree.
---
The key insight is to construct multitple models that are better than nothing and where the errors are as much as possible uncorrelated with eachother. If the errors are correlated this breaks down.
For subsets, Scikit-Learn picks out `n` rows *with* replacement. This is called bootstrapping. This on average represents 62.3% of the rows, with a bunch represented multiple times. ***(I think this means 62.3% rows used on any given tree).***
[Lecture 2:](https://youtu.be/blyXCk4sgEg?t=3146) So instead of picking out a 1/10 of the rows, of an `n` row dataset, we pick out `n` rows with replacement, which on average represnts 62.3% of the rows, many of them multiple times.
*Aside:* The Whole point of modeling Machine Learning is to find a model that tells you which variables are important and how they interact together to drive your independent variable. The difference between using 'Tree Space / Random-Forest Space' and 'Euclidean Space' to find nearest neighbors is the difference between a model that makes good predictions and one that makes meaningless predictions.
---
In **Bagging** you want each of your individual estimators / trees to be as predictive as possible and for their predictions to be as uncorrelated as possible. The inventor of RFs in the 1990s spoke about this: trying to come up with predictive but poorly-correlated trees.
Recent research has shown correlation is more important than individual predictiveness: so recent methods focus on creating trees which are less accurate on their own, and aim to minimize correlation between trees. Scikit-Learn has an ExtraTrees[Regressor/Classifier] with the exact same API as RandomForest[R/C] (and can be dropped in to replace it) which stands for "Extremely-Randomized Trees Model" Instead of trying every split of every variable, it randomly tries a few splits of a few variables. It's much faster to train, has much more randomness, and in that time you can build more trees and get better generalization.
```
model = RandomForestRegressor(n_jobs=-1) # default is 10 estimators
model.fit(X_train, y_train)
print_score(model)
```
We'll grab predictions for each individual tree and look at one example. After you've built a RF, each tree is stored in the attribute: `.estimators_`
```
preds = np.stack([t.predict(X_valid) for t in model.estimators_])
preds[:,0], np.mean(preds[:,0]), y_valid[0] # print first tree's predictions
preds.shape # 12,000 predictions for each of 10 trees
```
Notice that most of the predictions were a bit off, but the mean of all of them is pretty good. 9.459 avg, 9.105 actual.
Here's a plot going through each of the 10 trees, taking the mean of all the predictions up to the i$^{th}$ tree (1st tree, 1st 2 trees, 1st 3 .. etc), and plot the R$^2$. Notice the final value on the plot matches the R$^2$ value of the model up above.
```
plt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]);
```
Also note the plot's flattening out. (tested in original notebook and course nb): adding more trees won't improve the model much beyond this point.
The `n_estimators` hyperparameter is chosen based on:
1. amount of time you have for fitting
2. point of diminshing returns
More trees slows model fit/train time, but fewer trees can still offer valuable insight. J.Howard will often work through a day with a 20-tree RF, and at the end expand that to a 1000-tree model.
#### 3.4.2 Out-of-Bag (OoB) Score
Sometimes your dataset will be too small to create a validation set and a good model at the same time. There's a trick unique to Random Forests for this:
Recognize that some for each tree, some dataset rows did not get used. So pass those rows through those trees as their validation sets.
So you end up with a different validation set for each tree. Now to calculate our prediction we average all the trees where that row was not used for training. As long as you have enough trees every row will appear in the OoB sample for one of them at least.
So you create an OoB prediction by averaging all the trees you didn't use to train each individual row, then calculate your RMSE, R2, etc on that.
You can do this automatically by specifying the `oob_score=True` parameter in Scikit-Learn's `RandomForestRegressor`, creating a `.oob_score_` attribute in the resulting model.
```
model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
model.fit(X_train, y_train)
print_score(model)
```
The OoB Score will usually slightly underestimate the generalizability of the model -- the more trees you have, the less the underestimation of the model - but it works well enough anyway.
### 3.5 Reducing Over-Fitting
#### 3.5.1 Subsampling
One of the easiest ways to avoid over-fitting is also one of the best ways to speed up analysis: *subsampling*. Let's return to using our full dataset, so we can demonstrate the impact of this technique.
___
***NOTE***: before we took a subset of 30k rows of the data and built every model on that. Meaning every tree in the RF is a different subset of that subset of 30k. Why not pick a totally different subset of 30k each time. ie: leave the entire dataset of 300k records as is, and if we want to make things faster: pick a different subset of 30k each time. So rather than bootstrapping the entire set of rows: let's just randomly sample a subset of data.
---
So we'll do this by calling `proc_df` without our subset parameter:
```
df_trn, y_trn, nans = proc_df(df_raw, 'SalePrice')
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
```
The basic idea is this: rather than limit the total amount of data that our model can access, let's instead limit it to a *different* random subset per tree. That way, given enough trees, the model can still see *all* of the data, but for each individual tree, it'll be just as fast as if we had cut down our dataset as before.
Calling `fastai.structurered.set_rf_samples(n)` will change Scikit-learn's random forests to give each tree a random sample of `n` random rows.
When we do this, now when we run a RF, it's not going to bootstrap an entire set of 391k rows (len(X_train)), it'll just grab a subset of 20k rows. So when we run `set_rf_samples(20000)` it'll still run just as quickly as if we'd've originally done a random sample of 20k, but now every tree can have access to the entire dataset.
So if we use enough D.Trees, the RF will eventually see everything.
```
set_rf_samples(20000)
model = RandomForestRegressor(n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Now with 10 estimators (default) we get an R2 of 0.858.
```
model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Increasing to 40 esimates increases our R2 score from 0.858 to 0.877.
---
`set_rf_samples` will be very useful for working with massive structured datasets.
Unfortunately as of 31/10/2017 (and now 25/3/2018) it will not change how OoB is calculated (`set_rf_samples` is a hack that replaces scikit-learn's internal function call with a lambda function with the desired behavior). OoB should be turned off for now when `set_rf_samples` is used.
***NOTE*** to **reset** the RF sampling: call `fastai.structured.reset_rf_samples()`
---
When doing EDA (Exploratory Data Analysis) ie: when working and probing at a problem / doing interactive machine learning, [J.Howard](https://youtu.be/blyXCk4sgEg?t=4791) will use `set_rf_samples` (subsets) and reasonable small forests, because:
> all the insights that I'm going to get are exactly the same as the big ones, but I can run them in 3 or 4 seconds instead of hours.
> this is one of the biggest tips I can give you, and very very few people in industry or academia actually do this. Most people run all of their models on all of the data all of the time using their best possible parameters, which is just pointless.
> if you're trying to find out which features are important and how they're related to each other and so forth: having that 4th decimal place of accuracy isn't going to change any of your insights at all.
> do most of your models on a large enough sample size that your accuracy is reasonable (w/n a reasonable distance of the best accuracy you can get) and is taking a small number of seconds to train - so you can interactively do your analysis.
#### 3.5.2 Tree-building Parameters
We revert to using a full bootstrap sample in order to show the impact of other over-fitting avoidance methods.
```
reset_rf_samples()
```
Let's get a baseline for this full set to compare to.
```
model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Each of the estimators will train all the way down until the leaf nodes have 1 sample in them. **NOTE** that our OoB score is better than our validation R2 score (.89278) because our validation set is **not** a random sample: it's a different time period, and it's much harder to predict an entirely different time period than it is to predict random dates.
---
Another way to reduce over-fitting is to grow our trees less deeply. We do this by specifying (with `min_samples_leaf`) that we require some minimum number of rows in every leafe node. This has 2 benefits:
* There are fewer decision rules for each leaf node; simpler models should generalize better
* The predictions are made by averaging more rows in the leaf node, resulting in less volatility
example: `min_samples_leaf=3`: stop training the tree further when your leaf node has 3 or less samples in it.
In practice this means there'll be 1 or 2 fewer levels of decisions being made, which means about half or a quarter the number of actual decision criteria we have to do -- so it'll train quicker. It means also when we look at an individual tree, rather than just taking 1 point, we're taking the average of at least 3 points -- so we expect each tree to generalize a bit better; but ea. tree is also likely to be less powerful on its own.
```
model = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Values of **1, 3, 5, 10, 25** tend to work well for `min_samples_leaf`.
If working with a massive dataset without subsampling, you may need values of hundreds or thousands.
---
Here we see increasing `min_samples_leaf` from 1 to 3 has increased our Validation R$^2$ from 0.898 to 0.903. So it's a slight improvement and trains a bit faster.
---
We can also increase the amount of variation amongst the trees by not only using a sample of rows for each tree, but also using a sample of *columns* for each *split*. We do this by specifying `max_features`, which is the proportion of features to randomly select from at each split.
```
model = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Our model now has a validation R2 of 0.906. Our RMSE of log(price) has dropped from 0.233 to 0.229 as well. How good is that? Well on the [Kaggle public leaderboard](https://www.kaggle.com/c/bluebook-for-bulldozers/leaderboard) a loss of 0.2289 puts us in the top 20 of the competition. That's with a *"totally brainless random forest with some totally brainless minor hyperparameter tuning."*
> This why the Random Forest is such an important - not just first step but often only step in Machine Learning. Because it's hard to screw it up (even when we didn't tune the hyperparameters we still got a good result), and a small amt of hypar tuning got a much better result.
>So any kind of model - specifically Linear-type models which have a whole bunch of statistical assumptions and require a bunch of prereqs to get right before they start to work at all - can really throw you off track because they can give you totally wrong answers about how accurate the predictions can be.
>The Random Forest generally-speaking tends to work on most datasets most of the time with most sets of hypars.
-- [J.Howard Fast.ai ML1 Lecture 2](https://youtu.be/blyXCk4sgEg?t=5370)
Random Forests work because the trees are p.much infinitely flexible. Even with a categorical variable - if there're particular categories which have different levels of price: it can gradually zoom in on those groups by using multiple splits.
You can help it by telling it the order of your CatVar, but even if you don't: it's okay, it'll just take a few more decisions to get there.
In a Linear model, or almost *any* other kind of model, especially non-tree models, encoding CatVars the way RFs do won't work - because there's no linear relationship between arbitrary identifiers.
---
>What does `max_features` do? The idea is that the less correlated your trees are w.eachother, the better. Imagine you had 1 column that was so much better than all the others at being predictive, that every single tree you built - regardless of which subset of rows - always started with that column. So the trees will all be pretty similar.
>But you can imagine there might be some interaction of variables where that interaction is more important than that individual column. So if every tree always fits on the same thing the 1st time, you're not going to get much variation in those trees.
>So what we do is in addition to just taking a subset of rows: we then at every single split point take a different subset of columns.
>This is slightly different than row sampling. In row-sampling each new tree is based on a random set of rows. For column sampling every individual binary split we choose from a different subset of columns.
>In other words: rather than looking at every possible level of every possible column: we look at every possible level of a random subset of columns. And each binary split / decision point we use a different random subset.
>How many? you get to pick. `max_features=0.5` means randomly choose half of them. The default is to use all of them. There are also some special parameters you can pass in (sqrt, log, etc).
>In practice I've found good values to be 1, 0.5, log(2), or sqrt -- that'll give you a nice bit of variation.
-- [J.Howard Fast.ai ML1 Lecture 2](https://youtu.be/blyXCk4sgEg?t=5049)
---
As an example: here's what the Random Forest sees when it's making it's split decisions:
```
df_raw.fiProductClassDesc.cat.codes
df_raw.fiProductClassDesc.cat.categories
```
| true |
code
| 0.469763 | null | null | null | null |
|
<img src="http://upload.wikimedia.org/math/7/5/2/752fd6396a9c9d026f10eccb39ddca15.png"/>
$$V(x) = w\left(\frac{L}{2} - x\right)$$
$$M(x) = \frac{w}{2}\left(L x - x^2\right)$$
$$\theta(x) = \frac{- w}{2 EI}\left(\frac{L x^2}{2} - \frac{x^3}{3} +C\right)$$
$$\Delta(x) = \frac{- w}{2 EI}\left(\frac{L x^3}{6} - \frac{x^4}{12} +Cx + D \right)$$
$$\Delta(0) = \frac{-w}{2 EI}\left(\frac{L\cdot 0^3}{6} - \frac{0^4}{12} +C\cdot 0 + D \right) = 0 \therefore D = 0$$
$$\Delta(L) = \frac{-w}{2 EI}\left(\frac{L^4}{6} - \frac{L^4}{12} +CL \right) = 0 $$
$$\frac{L^4}{6} - \frac{L^4}{12} +CL = 0 $$
$$ CL = \frac{L^4}{12} - \frac{L^4}{6} $$
$$ CL = \frac{L^4}{12} - \frac{2 L^4}{12} $$
$$ CL = - \frac{L^4}{12} $$
$$ C = -\frac{L^3}{12}$$
$$\Delta(x) = \frac{- w}{2 EI}\left(\frac{L x^3}{6} - \frac{x^4}{12} -\frac{L^3}{12}x \right)$$
$$\theta(x) = \frac{-w}{2 EI}\left(\frac{L x^2}{2} - \frac{x^3}{3} - \frac{L^3}{12}\right)$$
$$\theta(0) = \frac{-w}{2 EI}\left(\frac{L \cdot 0^2}{2} - \frac{0^3}{3} - \frac{L^3}{12}\right)$$
$$\theta(0) = \frac{-w}{2 EI}\left(- \frac{L^3}{12}\right)$$
$$\theta(0) = \frac{w L^3}{24 EI}$$
$$\frac{\theta(0)}{\Delta(L/2)} = \frac{\frac{w L^3}{24 EI}}{\frac{5 w L^4}{384 E I}}$$
$$\frac{\theta(0)}{\Delta(L/2)} = \frac{384}{5\cdot 24\cdot L}$$
$$\frac{\theta(0)}{\Delta(L/2)} = \frac{16}{5 L}$$
$$\theta(0) = \frac{16}{5 L}\Delta(L/2)$$
$$P_0 = (x_0,y_0)\text{, known}$$
$$P_1 = (x_1,y_1)\text{, known}$$
$$C_0 = (x_2,y_2)$$
$$C_1 = (x_3,y_3)$$
$$P_I = (x_m,y_m)\text{, known}$$
$$y - y_m = \frac{\frac{y_0+y_3}{2}-y_m}{\frac{x_0+x_3}{2}-x_m}(x-x_m)\text{, known}$$
$$\frac{\frac{y_0+y_3}{2}-y_m}{\frac{x_0+x_3}{2}-x_m} = -\frac{x_3-x_0}{y_3-y_0}\text{, known}$$
$$y - y_c= m_{perp} (x-x_c)$$
$$y - y_c= m_{perp} x-m_{perp} x_c$$
$$y = m_{perp} x-m_{perp} x_c + y_c$$
$$y = m_{perp} x + (-m_{perp} x_c + y_c)$$
$$y = m_{perp} x + b_{perp}$$
$$y=a x^4 + b x^3 +c x^2 +d x + e$$
$$ 0 = a x^4 + b x^3 +c x^2 +(d-m_{perp}) x + (e-b_{perp})$$
$$\Delta(x) = \frac{- w}{2 EI}\left(- \frac{x^4}{12}+\frac{L x^3}{6} -\frac{L^3}{12}x \right)$$
$$\Delta(x) = \frac{w}{2 EI}\left(\frac{x^4}{12} - \frac{L x^3}{6} +\frac{L^3}{12}x \right)$$
$$y_0 - m_0 x_0 = - m_0 x_1 +y_1 \tag{1}$$
$$y_3-m_1 x_3 = -m_1 x_2+y_2\tag{2}$$
$$x_m -\frac{x_0+x_3}{8}= \frac{3}{8}x_1+\frac{3}{8}x_2 \tag{3}$$
$$y_m - \frac{y_0+y_3}{8}=\frac{3}{8}y_1+\frac{3}{8}y_2 \tag{4}$$
$$Y = \begin{bmatrix}y_0 - m_0 x_0\\y_3-m_1 x_3\\x_m -\frac{x_0+x_3}{8}\\y_m - \frac{y_0+y_3}{8}\end{bmatrix}\qquad E = \begin{bmatrix}-m_0&0&1&0\\0&-m_1&0&1\\3/8&3/8&0&0\\0&0&3/8&3/8\end{bmatrix}\qquad C = \begin{bmatrix}x_1\\x_2\\y_1\\y_2\end{bmatrix}$$
$$[Y]=[E][C]$$
$$[C]=[E]^{-1}[Y]$$
```
# Path CurveManip.py
from IPython.display import SVG
from numpy import matrix
from numpy.linalg import inv, pinv
from numpy import transpose as T
from collections import namedtuple
from numpy import sin, cos, tan, array, pi
import numpy as np
# from SVG_lib import point
def rotate(point, base, angle, DEBUG = False):
"Rotates the point about the bas by the angle"
R = matrix(((cos(angle),-sin(angle)),(sin(angle),cos(angle))))
point = array(point)
base = array(base)
tmp = point - base
R_tmp = array(T(R*T(matrix(tmp)))).reshape((1,2))
R_point = array(R_tmp[0]+T(base))#.reshape((1,2))
if DEBUG:
Debug_rotate = namedtuple('Debug_rotate','point angle_deg tmp R_tmp_size R_tmp base R_point')
debug = Debug_rotate(point, angle/pi*180, tmp, R_tmp.size, R_tmp, base, R_point)
print(debug)
print()
return R_point
def translate(point, vector):
"Returns a point (list) that is displaced from the original point be the vector (list)"
new_point = [x0+dx for x0,dx in zip(point, vector)]
return new_point
def reflect_y_axis(point):
"returns a point mirrored about the y axis"
px, py = point
return [-px, py]
def reflect_x_axis(point):
"returns a point mirrored about the x axis"
px, py = point
return [px, -py]
def mirror(point, mirror_line = [(0,0),(0,-1)]):
"Mirror a point about a line defined by two points"
p0, p1 = mirror_line
# Find angle of mirror line
angle = np.arctan2((p1[1]-p0[1]),(p1[0]-p0[0]))
# Rotate all points to make mirror line parallel to y-axis
flip_angles = [-angle,-pi/2]
for flip_angle in flip_angles:
p0 = rotate(p0,[0,0],flip_angle)
p1 = rotate(p1,[0,0],flip_angle)
point = rotate(point,[0,0],flip_angle)
if round((p0[0]-p1[0])*10000)!=0: #check for errors
er = "problem with fil_angle. post rotate x0, x1 = {}, {}".format(p0[0],p1[0])
raise(RuntimeError(er))
# translaste points so mirror line is on y-axis
point = translate(point,[-p0[0],0])
point = reflect_y_axis(point)
# translate back to original location
point = translate(point,[p0[0],0])
# rotate to original angle
flip_angles = [pi/2,angle]
for flip_angle in flip_angles:
point = rotate(point,[0,0],flip_angle)
p_x, p_y = float(point[0]), float(point[1])
return [p_x, p_y]
```
| true |
code
| 0.40592 | null | null | null | null |
|
_Lambda School Data Science_
This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty?
# Decision Trees
#### Objectives
- clean data with outliers
- impute missing values
- use scikit-learn for decision trees
- understand why decision trees are useful to model non-linear, non-monotonic relationships and feature interactions
- get and interpret feature importances of a tree-based model
#### Links
- A Visual Introduction to Machine Learning
- [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/)
- [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)
- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)
- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)
- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)
- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._
- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)
### Libraries
#### category_encoders
You aren't required to use [category_encoders](https://github.com/scikit-learn-contrib/categorical-encoding), but it's recommended.
If you're working locally, you already installed it, probably with this shell command: `conda install -c conda-forge category_encoders`
If you're using Google Colab, you need to reinstall it every time you restart all runtimes: `pip install category_encoders`
#### scikit-learn version 0.21.2
Until recently, scikit-learn required graphviz to visualize decision trees, and it could be a pain to install. But sklearn's newest versions have a [plot_tree](https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html) function that uses matplotlib!
Google Colab already has version 0.21.2. But if you're running Anaconda locally, you may need to upgrade.
You can check your version with this Python code: `import sklearn; print(sklearn.__version__)`
If necessary, you can update your version with this shell command: `conda update scikit-learn`
This isn't required to do your assignment, but it's required to run this lecture notebook.
#### pdpbox
[PDPbox](https://github.com/SauceCat/PDPbox) stands for "Partial Dependence Plot toolbox." It's a tool for model interpretation & visualization.
You can install it on Colab or locally with this shell command: `pip install pdpbox`
This also isn't required to do your assignment, but it's used in the lecture notebook.
```
# !pip install pdpbox category_encoders
```
## Clean data with outliers, impute missing values (example solutions)
```
# !pip install category_encoders
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler
LOCAL = '../data/tanzania/'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/tanzania/'
train = pd.merge(pd.read_csv(WEB + 'train_features.csv'),
pd.read_csv(WEB + 'train_labels.csv'))
test = pd.read_csv(WEB + 'test_features.csv')
sample_submission = pd.read_csv(WEB + 'sample_submission.csv')
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
train.shape, val.shape, test.shape
```
Some of the locations are at ["Null Island"](https://en.wikipedia.org/wiki/Null_Island) instead of Tanzania.
```
sns.jointplot(x='longitude', y='latitude', data=train);
```
#### Define a function to wrangle train, validate, and test sets in the same way.
Fix the location, and do more data cleaning and feature engineering.
```
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace them with the column mean.
cols_with_zeros = ['construction_year', 'longitude', 'latitude']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col] = X[col].fillna(X[col].mean())
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract year from date_recorded
X['year_recorded'] = X['date_recorded'].dt.year
# quantity & quantity_group are duplicates, so drop one
X = X.drop(columns='quantity_group')
# for categoricals with missing values, fill with the category 'MISSING'
categoricals = X.select_dtypes(exclude='number').columns
for col in categoricals:
X[col] = X[col].fillna('MISSING')
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
```
Now the locations look better.
```
sns.relplot(x='longitude', y='latitude', hue='status_group',
data=train, alpha=0.1);
```
#### Select features
```
# The status_group column is the target
target = 'status_group'
# Get a dataframe with all train columns except the target & id
train_features = train.drop(columns=[target, 'id'])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
# Combine the lists
features = numeric_features + categorical_features
```
#### Encode categoricals, scale features, fit and score Logistic Regression model, make predictions
```
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
# Encoder: fit_transform on train, transform on val & test
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
X_test_encoded = encoder.transform(X_test)
# Scaler: fit_transform on train, transform on val & test
scaler = RobustScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
X_test_scaled = scaler.transform(X_test_encoded)
# Model: Fit on train, score on val, predict on test
model = LogisticRegression(solver='lbfgs', multi_class='auto', n_jobs=-1)
model.fit(X_train_scaled, y_train)
print('Validation Accuracy', model.score(X_val_scaled, y_val))
y_pred = model.predict(X_test_scaled)
# Write submission csv file
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('submission-02.csv', index=False)
```
#### Get and plot coefficients
```
coefficients = pd.Series(model.coef_[0], X_train_encoded.columns)
plt.figure(figsize=(10,30))
coefficients.sort_values().plot.barh(color='grey');
```
## Use scikit-learn for decision trees
### Compare a Logistic Regression with 2 features, longitude & latitude ...
### ... versus a Decision Tree Classifier with 2 features, longitude & latitude
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
## Understand why decision trees are useful to model non-linear, non-monotonic relationships and feature interactions
#### What does _(non)monotonic_ mean?!?!
- See Figures 1-3 in Wikipedia's article, [Monotonic function](https://en.wikipedia.org/wiki/Monotonic_function)
- See [World Population Growth, 1700-2010](https://ourworldindata.org/world-population-growth-past-future). World Population is non-linear and monotonic. Annual growth rate is non-linear and non-monotonic.
- See [Accidents per Mile Driven, by Driver Age](http://howwedrive.com/2009/02/20/whats-the-real-risk-of-older-drivers/). This is non-linear and non-monotonic.
#### What does _feature interactions_ mean?!?!
- See the explanation in [_Interpretable Machine Learning_, Chapter 5.4.1, Feature Interaction](https://christophm.github.io/interpretable-ml-book/interaction.html#feature-interaction).
- See the exploration in this notebook, under the heading ***Interlude #2: Simple housing***
### Visualize decision tree
https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html
### Make 3 heatmaps, with longitude & latitude
- Actual % of functional waterpumps
- Decision Tree predicted probability of functional waterpumps
- Logistic Regression predicted probability of functional waterpumps
### Interlude #1: predicting golf putts
(1 feature, non-linear, regression)
https://statmodeling.stat.columbia.edu/2008/12/04/the_golf_puttin/
```
columns = ['distance', 'tries', 'successes']
data = [[2, 1443, 1346],
[3, 694, 577],
[4, 455, 337],
[5, 353, 208],
[6, 272, 149],
[7, 256, 136],
[8, 240, 111],
[9, 217, 69],
[10, 200, 67],
[11, 237, 75],
[12, 202, 52],
[13, 192, 46],
[14, 174, 54],
[15, 167, 28],
[16, 201, 27],
[17, 195, 31],
[18, 191, 33],
[19, 147, 20],
[20, 152, 24]]
putts = pd.DataFrame(columns=columns, data=data)
putts['rate of success'] = putts['successes'] / putts['tries']
putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts');
```
#### Compare Linear Regression ...
```
from sklearn.linear_model import LinearRegression
putts_X = putts[['distance']]
putts_y = putts['rate of success']
lr = LinearRegression()
lr.fit(putts_X, putts_y)
print('R^2 Score', lr.score(putts_X, putts_y))
ax = putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts')
ax.plot(putts_X, lr.predict(putts_X));
```
#### ... versus a Decision Tree Regressor
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html
```
import graphviz
from ipywidgets import interact
from sklearn.tree import DecisionTreeRegressor, export_graphviz
def viztree(decision_tree, feature_names):
dot_data = export_graphviz(decision_tree, out_file=None, feature_names=feature_names,
filled=True, rounded=True)
return graphviz.Source(dot_data)
def putts_tree(max_depth=1):
tree = DecisionTreeRegressor(max_depth=max_depth)
tree.fit(putts_X, putts_y)
print('R^2 Score', tree.score(putts_X, putts_y))
ax = putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts')
ax.step(putts_X, tree.predict(putts_X), where='mid')
plt.show()
display(viztree(tree, feature_names=['distance']))
interact(putts_tree, max_depth=(1,6,1));
```
### Interlude #2: Simple housing
(2 features, regression)
https://christophm.github.io/interpretable-ml-book/interaction.html#feature-interaction
```
columns = ['Price', 'Good Location', 'Big Size']
data = [[300000, 1, 1],
[200000, 1, 0],
[250000, 0, 1],
[150000, 0, 0]]
house = pd.DataFrame(columns=columns, data=data)
house
```
#### Compare Linear Regression ...
```
house_X = house.drop(columns='Price')
house_y = house['Price']
lr = LinearRegression()
lr.fit(house_X, house_y)
print('R^2', lr.score(house_X, house_y))
print('Intercept \t', lr.intercept_)
coefficients = pd.Series(lr.coef_, house_X.columns)
print(coefficients.to_string())
```
#### ... versus a Decision Tree Regressor
```
tree = DecisionTreeRegressor()
tree.fit(house_X, house_y)
print('R^2', tree.score(house_X, house_y))
viztree(tree, feature_names=house_X.columns)
```
### Simple housing, with a twist: _Feature Interaction_
```
house.loc[0, 'Price'] = 400000
house_X = house.drop(columns='Price')
house_y = house['Price']
house
```
#### Compare Linear Regression ...
```
lr = LinearRegression()
lr.fit(house_X, house_y)
print('R^2', lr.score(house_X, house_y))
print('Intercept \t', lr.intercept_)
coefficients = pd.Series(lr.coef_, house_X.columns)
print(coefficients.to_string())
```
#### ... versus a Decision Tree Regressor
```
tree = DecisionTreeRegressor()
tree.fit(house_X, house_y)
print('R^2', tree.score(house_X, house_y))
viztree(tree, feature_names=house_X.columns)
```
## Get and interpret feature importances of a tree-based model
# Assignment
- Start a clean notebook, or continue with yesterday's assignment notebook.
- Continue to participate in our Kaggle competition with the Tanzania Waterpumps data.
- Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.
- Try a Decision Tree Classifier.
- Submit new predictions.
- Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- Create visualizations and share on Slack.
- Read more about decision trees and tree ensembles. You can start with the links at the top of this notebook.
- Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html):
> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
| true |
code
| 0.631481 | null | null | null | null |
|
```
from openeye import oechem, oedepict
import oenotebook as oenb
import pandas as pd
def depict_smiles(smiles):
mol = oechem.OEMol()
oechem.OESmilesToMol(mol,smiles)
return oenb.draw_mol(mol)
depict_smiles(smiles)
```
## SM11
Initial mol is the same as the tautomer: SM11_micro018 and SM11_micro020
SM11 resonance structures:
('SM11_micro018', 'SM11_micro020')
```
mol_ID = "SM11"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles1 = df_microstates[df_microstates["microstate ID"] == "SM11_micro018"]["canonical isomeric SMILES"].values[0]
print(smiles1)
smiles2 = df_microstates[df_microstates["microstate ID"] == "SM11_micro020"]["canonical isomeric SMILES"].values[0]
print(smiles2)
depict_smiles(smiles1)
depict_smiles(smiles2)
These are the same structure so I will deprecate SM11_micro018.
```
## SM18
('SM18_micro008', 'SM18_micro023')
('SM18_micro008', 'SM18_micro024')
('SM18_micro008', 'SM18_micro036')
('SM18_micro023', 'SM18_micro024')
('SM18_micro023', 'SM18_micro036')
('SM18_micro024', 'SM18_micro036')
('SM18_micro002', 'SM18_micro018')
('SM18_micro002', 'SM18_micro022')
('SM18_micro018', 'SM18_micro022')
('SM18_micro004', 'SM18_micro006')
('SM18_micro004', 'SM18_micro014')
('SM18_micro006', 'SM18_micro014')
```
mol_ID = "SM18"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles008 = df_microstates[df_microstates["microstate ID"] == "SM18_micro008"]["canonical isomeric SMILES"].values[0]
smiles023 = df_microstates[df_microstates["microstate ID"] == "SM18_micro023"]["canonical isomeric SMILES"].values[0]
smiles024 = df_microstates[df_microstates["microstate ID"] == "SM18_micro024"]["canonical isomeric SMILES"].values[0]
smiles036 = df_microstates[df_microstates["microstate ID"] == "SM18_micro036"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles008)
depict_smiles(smiles023)
depict_smiles(smiles024)
depict_smiles(smiles036)
```
SM18_micro008, SM18_micro023, SM18_micro024, and SM18_micro036 are resonance structures. SM18_micro023, SM18_micro024, and SM18_micro036 will be deprecated.
```
smiles002 = df_microstates[df_microstates["microstate ID"] == "SM18_micro002"]["canonical isomeric SMILES"].values[0]
smiles018 = df_microstates[df_microstates["microstate ID"] == "SM18_micro018"]["canonical isomeric SMILES"].values[0]
smiles022 = df_microstates[df_microstates["microstate ID"] == "SM18_micro022"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles002)
depict_smiles(smiles018)
depict_smiles(smiles022)
```
SM18_micro002, SM18_micro018, and SM18_micro022 are resonance structures. SM18_micro018 and SM18_micro022 will be deprecated.
```
smiles004 = df_microstates[df_microstates["microstate ID"] == "SM18_micro004"]["canonical isomeric SMILES"].values[0]
smiles006 = df_microstates[df_microstates["microstate ID"] == "SM18_micro006"]["canonical isomeric SMILES"].values[0]
smiles014 = df_microstates[df_microstates["microstate ID"] == "SM18_micro014"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles004)
depict_smiles(smiles006)
depict_smiles(smiles014)
```
SM18_micro004, SM18_micro006, and SM18_micro014 are resonance structures. SM18_micro006 and SM18_micro014 will be deprecated.
## SM23
SM23 resonance structures:
('SM23_micro001', 'SM23_micro003')
('SM23_micro001', 'SM23_micro009')
('SM23_micro001', 'SM23_micro023')
('SM23_micro001', 'SM23_micro031')
('SM23_micro001', 'SM23_micro032')
('SM23_micro001', 'SM23_micro037')
('SM23_micro003', 'SM23_micro009')
('SM23_micro003', 'SM23_micro023')
('SM23_micro003', 'SM23_micro031')
('SM23_micro003', 'SM23_micro032')
('SM23_micro003', 'SM23_micro037')
('SM23_micro009', 'SM23_micro023')
('SM23_micro009', 'SM23_micro031')
('SM23_micro009', 'SM23_micro032')
('SM23_micro009', 'SM23_micro037')
('SM23_micro023', 'SM23_micro031')
('SM23_micro023', 'SM23_micro032')
('SM23_micro023', 'SM23_micro037')
('SM23_micro031', 'SM23_micro032')
('SM23_micro031', 'SM23_micro037')
('SM23_micro032', 'SM23_micro037')
```
mol_ID = "SM23"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles001 = df_microstates[df_microstates["microstate ID"] == "SM23_micro001"]["canonical isomeric SMILES"].values[0]
smiles003 = df_microstates[df_microstates["microstate ID"] == "SM23_micro003"]["canonical isomeric SMILES"].values[0]
smiles009 = df_microstates[df_microstates["microstate ID"] == "SM23_micro009"]["canonical isomeric SMILES"].values[0]
smiles023 = df_microstates[df_microstates["microstate ID"] == "SM23_micro023"]["canonical isomeric SMILES"].values[0]
smiles031 = df_microstates[df_microstates["microstate ID"] == "SM23_micro031"]["canonical isomeric SMILES"].values[0]
smiles032 = df_microstates[df_microstates["microstate ID"] == "SM23_micro032"]["canonical isomeric SMILES"].values[0]
smiles037 = df_microstates[df_microstates["microstate ID"] == "SM23_micro037"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles001)
depict_smiles(smiles003)
depict_smiles(smiles009)
depict_smiles(smiles023)
depict_smiles(smiles031)
depict_smiles(smiles032)
depict_smiles(smiles037)
```
These are all resonance structures of the same microstate:
"SM23_micro001", "SM23_micro003", "SM23_micro009", "SM23_micro023", "SM23_micro031", "SM23_micro032", "SM23_micro037"
The following will be deprecated:
"SM23_micro003", "SM23_micro009", "SM23_micro023", "SM23_micro031", "SM23_micro032", "SM23_micro037"
## SM24
SM24 resonance structures:
('SM24_micro001', 'SM24_micro012')
('SM24_micro001', 'SM24_micro018')
('SM24_micro007', 'SM24_micro019')
('SM24_micro007', 'SM24_micro021')
('SM24_micro011', 'SM24_micro015')
('SM24_micro012', 'SM24_micro018')
('SM24_micro019', 'SM24_micro021')
```
mol_ID = "SM24"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles001 = df_microstates[df_microstates["microstate ID"] == "SM24_micro001"]["canonical isomeric SMILES"].values[0]
smiles012 = df_microstates[df_microstates["microstate ID"] == "SM24_micro012"]["canonical isomeric SMILES"].values[0]
smiles018 = df_microstates[df_microstates["microstate ID"] == "SM24_micro018"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles001)
depict_smiles(smiles012)
depict_smiles(smiles018)
```
SM24_micro001, SM24_micro012 and SM24_micro018 are resonance structures. SM24_micro012 and SM24_micro018 will be deprecated.
```
smiles007 = df_microstates[df_microstates["microstate ID"] == "SM24_micro007"]["canonical isomeric SMILES"].values[0]
smiles019 = df_microstates[df_microstates["microstate ID"] == "SM24_micro019"]["canonical isomeric SMILES"].values[0]
smiles021 = df_microstates[df_microstates["microstate ID"] == "SM24_micro021"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles007)
depict_smiles(smiles019)
depict_smiles(smiles021)
```
SM24_micro007, SM24_micro019 and SM24_micro021 are resonance structures. SM24_micro019 and SM24_micro021 will be deprecated.
```
smiles011 = df_microstates[df_microstates["microstate ID"] == "SM24_micro011"]["canonical isomeric SMILES"].values[0]
smiles015 = df_microstates[df_microstates["microstate ID"] == "SM24_micro015"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles011)
depict_smiles(smiles015)
```
SM24_micro011 and SM24_micro015 are resonance structures. SM24_micro015 will be deprecated.
| true |
code
| 0.283757 | null | null | null | null |
|
# Getting started with machine learning <br> using scikit-learn
## James Bourbeau
### Big Data Madison Meetup
April 24, 2018
### GitHub repo with materials:
https://github.com/jrbourbeau/big-data-madison-ml-sklearn <br>
### Slides:
https://jrbourbeau.github.io/big-data-madison-ml-sklearn
### Contact:
E-mail: james@jamesbourbeau.com
GitHub: [jrbourbeau](https://github.com/jrbourbeau)
Twitter: [\__jrbourbeau__](https://twitter.com/__jrbourbeau__)
LinkedIn: [jrbourbeau](https://www.linkedin.com/in/jrbourbeau/)
Source code for `plotting` Python module can be found on GitHub with the rest of the materials for this talk
```
import plotting
import numpy as np
np.random.seed(2)
%matplotlib inline
```
## Supervised machine learning workflow

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
## Outline
- What is machine learning?
- Classical programming vs. machine learning
- Supervised machine learning
- scikit-learn:
- Data representation
- Estimator API
- Example algorithm: decision tree classifier
- Model validation
- Cross validation
- Validation curves
# Machine learning vs. classical programming
## Classical programming
- Devise a set of rules (an algorithm) that are used to accomplish a task
- For example, labeling e-mails as either "spam" or "not spam"
```
def spam_filter(email):
"""Function that labels an email as 'spam' or 'not spam'
"""
if 'Act now!' in email.contents:
label = 'spam'
elif 'hotmail.com' in email.sender:
label = 'spam'
elif email.contents.count('$') > 20:
label = 'spam'
else:
label = 'not spam'
return label
```
## Machine learning
- "Field of study that gives computers the ability to learn without being explicitly programmed" — Arthur Samuel (1959)
- "A machine-learning system is trained rather than explicitly programmed. It’s presented with many examples relevant to a task, and it finds statistical structure in these examples that eventually allows the system to come up with rules for automating the task." — Francois Chollet, _Deep Learning with Python_
## Supervised machine learning
- From a labeled dataset, an algorithm learns a mapping between input data and the desired output label
- Goal is to have model generalize well to future, yet unseen, data
- Supervised machine learning is further divided into two types of problems:
- Classification — Labels are discrete. E.g. determine if a picture is of a cat, dog, or person.
- Regression — Labels are continuous. E.g. predict home prices.
```
plotting.plot_classification_vs_regression()
```
# Machine learning in Python with scikit-learn
## scikit-learn
- Popular Python machine learning library
- Designed to be a [well documented](http://scikit-learn.org/stable/) and approachable for non-specialist
- Built on top of NumPy and SciPy
- scikit-learn can be easily installed with `pip` or `conda`
- `pip install scikit-learn`
- `conda install scikit-learn`
## Data representation in scikit-learn
- Training dataset is described by a pair of matrices, one for the input data and one for the output
- Most commonly used data formats are a NumPy `ndarray` or a Pandas `DataFrame` / `Series`
- Each row of these matrices corresponds to one sample of the dataset
- Each column represents a quantitative piece of information that is used to describe each sample (called "features")
```
plotting.plot_data_representation()
```
## Iris dataset
- Dataset consists of 150 samples (individual flowers) that have 4 features: sepal length, sepal width, petal length, and petal width (all in cm)
- Each sample is labeled by its species: Iris Setosa, Iris Versicolour, Iris Virginica
- Task is to develop a model that predicts iris species
- Iris dataset is freely available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/iris)

## Loading the iris dataset
```
import pandas as pd
iris = pd.read_csv('iris.csv')
iris = iris.sample(frac=1, random_state=2).reset_index(drop=True)
iris.head()
# Only include first two training features (sepal length and sepal width)
feature_columns = ['sepal_length', 'sepal_width']
X = iris[feature_columns].values
y = iris['species'].values
print(f'First 5 samples in X: \n{X[:5]}')
print(f'First 5 labels in y: \n{y[:5]}')
plotting.plot_2D_iris()
```
## Estimators in scikit-learn
- Algorithms are implemented as estimator classes in scikit-learn
- Each estimator in scikit-learn is extensively documented (e.g. the [KNeighborsClassifier documentation](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)) with API documentation, user guides, and example usages.
```
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor
from sklearn.svm import SVC, SVR
from sklearn.linear_model import LinearRegression, LogisticRegression
```
- A model is an instance of one of these estimator classes
```
model = KNeighborsClassifier(n_neighbors=5)
print(model)
```
## Estimator API
<br>
```python
class Estimator(BaseClass):
def __init__(self, **hyperparameters):
# Setup Estimator here
def fit(self, X, y):
# Implement algorithm here
return self
def predict(self, X):
# Get predicted target from trained model
# Note: fit must be called before predict
return y_pred
```
<br>
See [API design for machine learning software:
experiences from the scikit-learn project](https://arxiv.org/pdf/1309.0238.pdf) for a discusses of the API design choices for scikit-learn
## Training a model — fit then predict
```
# Create the model
model = KNeighborsClassifier(n_neighbors=5)
# Fit the model
model.fit(X, y)
# Get model predictions
y_pred = model.predict(X)
y_pred[:10]
```
# Example algorithm: decision tree classifier
## Decision tree classifier
Idea behind the decision tree algorithm is to sequentially partition a training dataset by asking a series of questions.

<p style="font-size:14px">
Image source: Raschka, Sebastian, and Vahid Mirjalili. <a href="https://www.amazon.com/Python-Machine-Learning-scikit-learn-TensorFlow/dp/1787125939">Python Machine Learning</a>, 2nd Ed. Packt Publishing, 2017.
</p>
## Node splitting to maximize purity

## Features of decision tree classifier
- Easy to understand and interpretable model
- Requires little data preparation
- Can model non-linear relationships
- Building block for more advanced models (e.g. random forests, boosted decision trees)
## Decision tree classifier in scikit-learn
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=2)
clf.fit(X, y)
```
## Visualizing decision trees — tree graph
```
plotting.plot_decision_tree(clf)
```
## Visualizing decision trees — decision regions
```
plotting.plot_tree_decision_regions(clf)
```
# Model validation
## Model performance metrics
- There are many different performance metrics for classification and regression problems. Which metric you should use depends on the particular problem you are working on
- Many commonly used performance metrics are built into the `metrics` subpackage in scikit-learn
- Custom user-defined scoring function can be created using the `sklearn.metrics.make_scorer` function
```
# Classification metrics
from sklearn.metrics import (accuracy_score, precision_score,
recall_score, f1_score, log_loss)
# Regression metrics
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
y_true = [0, 1, 1, 3, 2]
y_pred = [0, 2, 1, 3, 1]
accuracy_score(y_true, y_pred)
mean_squared_error(y_true, y_pred)
```
## Separate training & testing sets
- A trained model will generally perform better on data that was used to train it
- Want to measure how well a model generalizes to new, unseen data
- Need to have two separate datasets. One for training models and one for evaluating model performance
- scikit-learn has a convenient `train_test_split` function that randomly splits a dataset into a testing and training set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=2)
print(f'X.shape = {X.shape}')
print(f'X_test.shape = {X_test.shape}')
print(f'X_train.shape = {X_train.shape}')
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
print(f'training accuracy = {accuracy_score(y_train, clf.predict(X_train))}')
print(f'testing accuracy = {accuracy_score(y_test, clf.predict(X_test))}')
```
## Model selection — hyperparameter optimization
- Choose model hyperparameter values to avoid under- and over-fitting
- Under-fitting — model isn't sufficiently complex enough to properly model the dataset at hand
- Over-fitting — model is too complex and begins to learn the noise in the training dataset

<p style="font-size:14px">
Image source: <a href="http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html">Underfitting vs. Overfitting</a> in scikit-learn examples
</p>
## $k$-fold cross validation diagram

<p style="font-size:14px">
Image source: Raschka, Sebastian, and Vahid Mirjalili. <a href="https://www.amazon.com/Python-Machine-Learning-scikit-learn-TensorFlow/dp/1787125939">Python Machine Learning</a>, 2nd Ed. Packt Publishing, 2017.
</p>
## Cross validation in scikit-learn
```
from sklearn.model_selection import cross_validate
clf = DecisionTreeClassifier(max_depth=2)
scores = cross_validate(clf, X_train, y_train,
scoring='accuracy', cv=10,
return_train_score=True)
print(scores.keys())
test_scores = scores['test_score']
train_scores = scores['train_score']
print(test_scores)
print(train_scores)
print('\n10-fold CV scores:')
print(f'training score = {np.mean(train_scores)} +/- {np.std(train_scores)}')
print(f'validation score = {np.mean(test_scores)} +/- {np.std(test_scores)}')
```
## Validation curves
Validation curves are a good way to diagnose if a model is under- or over-fitting
```
plotting.plot_validation_curve()
plotting.plot_max_depth_validation(clf, X_train, y_train)
```
## Hyperparameter tuning via GridSearchCV
- In practice, you'll want to optimize many different hyperparameter values simultaneously
- The `GridSearchCV` object in scikit-learn's `model_selection` subpackage can be used to scan over many different hyperparameter combinations
- Calculates cross-validated training and testing scores for each hyperparameter combinations
- The combination that maximizes the testing score is deemed to be the "best estimator"
```
from sklearn.model_selection import GridSearchCV
# Instantiate a model
clf = DecisionTreeClassifier()
# Specify hyperparameter values to test
parameters = {'max_depth': range(1, 20),
'criterion': ['gini', 'entropy']}
# Run grid search
gridsearch = GridSearchCV(clf, parameters, scoring='accuracy', cv=10)
gridsearch.fit(X_train, y_train)
# Get best model
print(f'gridsearch.best_params_ = {gridsearch.best_params_}')
print(gridsearch.best_estimator_)
```
## Supervised machine learning workflow

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
## Step 1 — Separate training and testing datasets

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=2)
```
## Steps 2 & 3 — Optimize hyperparameters via cross validation

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
clf = DecisionTreeClassifier()
parameters = {'max_depth': range(1, 20),
'criterion': ['gini', 'entropy']}
gridsearch = GridSearchCV(clf, parameters, scoring='accuracy', cv=10)
gridsearch.fit(X_train, y_train)
print(f'gridsearch.best_params_ = {gridsearch.best_params_}')
best_clf = gridsearch.best_estimator_
best_clf
```
## Steps 4 — Model performance

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
y_pred = best_clf.predict(X_test)
test_acc = accuracy_score(y_test, y_pred)
print(f'test_acc = {test_acc}')
```
## Steps 5 — Train final model on full dataset

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
final_model = DecisionTreeClassifier(**gridsearch.best_params_)
final_model.fit(X, y)
```
## Iris classification problem
```
# Step 1: Get training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=2)
# Step 2: Use GridSearchCV to find optimal hyperparameter values
clf = DecisionTreeClassifier(random_state=2)
parameters = {'max_depth': range(1, 20),
'criterion': ['gini', 'entropy']}
gridsearch = GridSearchCV(clf, parameters, scoring='accuracy', cv=10)
gridsearch.fit(X_train, y_train)
print(f'gridsearch.best_params_ = {gridsearch.best_params_}')
# Step 3: Get model with best hyperparameters
best_clf = gridsearch.best_estimator_
# Step 4: Get best model performance from testing set
y_pred = best_clf.predict(X_test)
test_acc = accuracy_score(y_test, y_pred)
print(f'test_acc = {test_acc}')
# Step 5: Train final model on full dataset
final_model = DecisionTreeClassifier(random_state=2, **gridsearch.best_params_)
final_model.fit(X, y);
```
## Additional Resources
- _Python Machine Learning_ by Sebastian Raschka [[GitHub](https://github.com/rasbt/python-machine-learning-book-2nd-edition)][[Amazon](https://www.amazon.com/Python-Machine-Learning-scikit-learn-TensorFlow/dp/1787125939)]
- _Data Science Handbook_ by Jake VanderPlas [[GitHub](https://github.com/jakevdp/PythonDataScienceHandbook)][[Amazon](https://www.amazon.com/_/dp/1491912057?tag=oreilly20-20)]
- _The Elements of Statistical Learning_ by Hastie, Tibshirani and Friedman [[Free book!](https://web.stanford.edu/~hastie/ElemStatLearn/)]
- _Deep Learning_ by Ian Goodfellow, Yoshua Bengio, and Aaron Courville [[Amazon](https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618)]
# Thank you
## Any questions?
| true |
code
| 0.673165 | null | null | null | null |
|
# Generating 3D People in Scenes without People
Here we give a frontend demo of how to generate body meshes in a scene without people.
+ First, we use a pre-trained conditional VAE model to generate body meshes. Here we only show the one-stage model without scene loss.
+ Second, we perform scene geometry-aware fitting.
The code in this demo is slightly different from the code in other places. __To efficiently generate a large amount of body meshes for various scenes, we recommend to use the frontend sh scripts.__
## (1) loading dependencies, models and setup environments
```
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import sys, os, glob
import json
import argparse
import numpy as np
import scipy.io as sio
import open3d as o3d
# proj_path = '/is/ps2/yzhang/workspaces/PSI-internal'
proj_path = '/home/yzhang/workspaces/smpl-env-gen-3d-internal'
sys.path.append(proj_path)
sys.path.append(proj_path+'/source')
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init
import torch.optim as optim
from torch.optim import lr_scheduler
import smplx
from human_body_prior.tools.model_loader import load_vposer
from cvae import HumanCVAES1, HumanCVAES2, ContinousRotReprDecoder
import time
import smplx
from human_body_prior.tools.model_loader import load_vposer
import chamfer_pytorch.dist_chamfer as ext
```
We put some auxilary functions here, mainly for coordinate transform and file parsing.
```
def recover_global_T(x_batch, cam_intrisic, max_depth):
xt_batch = x_batch[:,:3]
xr_batch = x_batch[:,3:]
fx_batch = cam_intrisic[:,0,0]
fy_batch = cam_intrisic[:,1,1]
px_batch = cam_intrisic[:,0,2]
py_batch = cam_intrisic[:,1,2]
s_ = 1.0 / torch.max(px_batch, py_batch)
z = (xt_batch[:, 2]+1.0)/2.0 * max_depth
x = xt_batch[:,0] * z / s_ / fx_batch
y = xt_batch[:,1] * z / s_ / fy_batch
xt_batch_recoverd = torch.stack([x,y,z],dim=-1)
return torch.cat([xt_batch_recoverd, xr_batch],dim=-1)
def convert_to_3D_rot(x_batch):
xt = x_batch[:,:3]
xr = x_batch[:,3:9]
xb = x_batch[:,9:]
xr_mat = ContinousRotReprDecoder.decode(xr) # return [:,3,3]
xr_aa = ContinousRotReprDecoder.matrot2aa(xr_mat) # return [:,3]
return torch.cat([xt, xr_aa, xb], dim=-1)
def body_params_encapsulate(x_body_rec, to_numpy=True, batched=False):
if to_numpy:
x_body_rec_np = x_body_rec.detach().cpu().numpy()
else:
x_body_rec_np = x_body_rec
if batched:
body_params_batch_rec={}
body_params_batch_rec['transl'] = x_body_rec_np[:,:3]
body_params_batch_rec['global_orient'] = x_body_rec_np[:,3:6]
body_params_batch_rec['betas'] = x_body_rec_np[:,6:16]
body_params_batch_rec['body_pose'] = x_body_rec_np[:,16:48]
body_params_batch_rec['left_hand_pose'] = x_body_rec_np[:,48:60]
body_params_batch_rec['right_hand_pose'] = x_body_rec_np[:,60:]
return body_params_batch_rec
else:
n_batch = x_body_rec_np.shape[0]
rec_list = []
for b in range(n_batch):
body_params_batch_rec={}
body_params_batch_rec['transl'] = x_body_rec_np[b:b+1,:3]
body_params_batch_rec['global_orient'] = x_body_rec_np[b:b+1,3:6]
body_params_batch_rec['betas'] = x_body_rec_np[b:b+1,6:16]
body_params_batch_rec['body_pose'] = x_body_rec_np[b:b+1,16:48]
body_params_batch_rec['left_hand_pose'] = x_body_rec_np[b:b+1,48:60]
body_params_batch_rec['right_hand_pose'] = x_body_rec_np[b:b+1,60:]
rec_list.append(body_params_batch_rec)
return rec_list
def data_preprocessing(img, modality, target_domain_size=[128, 128]):
"""
input:
- img (depthmap or semantic map): [height, width].
- modality: 'depth' or 'seg'
output:
canvas: with shape of target_domain_size, where the input is in the
center tightly, with shape target_domain_size
factor: the resizing factor
"""
# prepare the canvas
img_shape_o = img.shape
canvas = torch.zeros([1,1]+target_domain_size, dtype=torch.float32,
device=torch.device("cuda"))
# filter out unavailable values
if modality == 'depth':
img[img>6.0]=6.0
if modality == 'seg':
img[img>41] = 41
## rescale to [-1,1]
max_val = torch.max(img)
_img = 2* img / max_val - 1.0
## put _img to the canvas
if img_shape_o[0]>= img_shape_o[1]:
factor = float(target_domain_size[0]) / img_shape_o[0]
target_height = target_domain_size[0]
target_width = int(img_shape_o[1] * factor) //2 *2
# for depth map we use bilinear interpolation in resizing
# for segmentation map we use bilinear interpolation as well.
# note that float semantic label is not real in practice, but
# helpful in our work
target_size = [target_height, target_width]
_img = _img.view(1,1,img_shape_o[0],img_shape_o[1])
img_resize = F.interpolate(_img, size=target_size, mode='bilinear',
align_corners=False)
na = target_width
nb = target_domain_size[1]
lower = (nb //2) - (na //2)
upper = (nb //2) + (na //2)
canvas[:,:,:, lower:upper] = img_resize
else:
factor = float(target_domain_size[1]) / img_shape_o[1]
target_height = int(factor*img_shape_o[0]) //2 *2
target_width = target_domain_size[1]
target_size = [target_height, target_width]
_img = _img.view(1,1,img_shape_o[0],img_shape_o[1])
img_resize = F.interpolate(_img, size=target_size, mode='bilinear',
align_corners=False)
na = target_height
nb = target_domain_size[0]
lower = (nb //2) - (na //2)
upper = (nb //2) + (na //2)
canvas[:,:,lower:upper, :] = img_resize
return canvas, factor, max_val
def scipy_matfile_parse(filename):
'''
parse data from files and put them to GPU
Note that this function is for demo, and is different from the ones used in other places.
'''
data = sio.loadmat(filename)
depth0_np = data['depth']
seg0_np = data['seg']
## change them to torch tensor
depth0 = torch.tensor(depth0_np, dtype=torch.float32, device=torch.device("cuda"))
seg0 = torch.tensor(seg0_np, dtype=torch.float32, device=torch.device("cuda"))
## pre_processing
depth, factor_d,max_d = data_preprocessing(depth0, 'depth', target_domain_size=[128, 128])
seg, factor_s,_ = data_preprocessing(seg0, 'seg', target_domain_size=[128, 128])
cam_intrinsic_np = data['cam'][0][0]['intrinsic']
cam_intrinsic = torch.tensor(cam_intrinsic_np, dtype=torch.float32, device=torch.device("cuda")).unsqueeze(0)
cam_extrinsic_np = data['cam'][0][0]['extrinsic']
cam_extrinsic_np = np.linalg.inv(cam_extrinsic_np)
cam_extrinsic = torch.tensor(cam_extrinsic_np, dtype=torch.float32, device=torch.device("cuda")).unsqueeze(0)
return depth, seg, max_d.view(1), cam_intrinsic, cam_extrinsic
```
## (2) Prepare the scene without people
Our method requires the following data about a scene:
+ depth map
+ semantic segmentation
+ the camera parameters (extrinsic and intrinsic)
+ the scene signed distance function (SDF)
+ the scene mesh
Note that SDF and scene mesh are only used for scene-geometry aware fitting. For generating body meshes with the CVAE model, only the first three attributes are sufficient.
Here we use the 'MPH16' scene in the __PROXE__ dataset.
```
scenename = 'MPH16'
proxe_path = '/home/yzhang/Videos/PROXE'
## read the depth and semantics
scene_matfile_path = os.path.join(proxe_path, 'snapshot_for_testing/MPH16_00157_01/rec_000000.mat')
depth, seg, max_d, cam_intrinsic, cam_extrinsic = scipy_matfile_parse(scene_matfile_path)
## read the sdf
with open(os.path.join(proxe_path, 'scenes_sdf',scenename+'.json')) as f:
sdf_data = json.load(f)
grid_min = np.array(sdf_data['min'])
grid_max = np.array(sdf_data['max'])
grid_dim = sdf_data['dim']
sdf = np.load(os.path.join(proxe_path, 'scenes_sdf', scenename + '_sdf.npy')).reshape(grid_dim, grid_dim, grid_dim)
## read the scene mesh
scene_mesh = o3d.io.read_triangle_mesh(os.path.join(proxe_path, 'scenes_downsampled', scenename+'.ply'))
scene_verts = np.asarray(scene_mesh.vertices)
scene_faces = np.asarray(scene_mesh.triangles)
## We could visualize the scene data, or skip this step.
import matplotlib.pyplot as plt
plt.subplot(1,2,1)
depth_processed_np = depth.detach().cpu().squeeze().numpy()
plt.imshow(depth_processed_np)
plt.subplot(1,2,2)
seg_processed_np = seg.detach().cpu().squeeze().numpy()
plt.imshow(seg_processed_np)
# # we use webGL to visualize 3D, which is a different case from running locally.
# # only works for point cloud visualization
# # note that visualizing 3D here may cause slow responses.
# pcd = o3d.geometry.PointCloud()
# pcd.points = scene_mesh.vertices
# pcd.colors = scene_mesh.vertex_colors
# from open3d import JVisualizer
# visualizer = JVisualizer()
# visualizer.add_geometry(pcd)
# visualizer.show()
```
## (3) Generating body meshes using the pre-trained conditional VAE model
For demonstration purposes, we only use the **one-stage model without scene loss**. For other models, the pipeline is the same.
```
testconfig={
'smplx_model_path': '/home/yzhang/body_models/VPoser',
'scene_model_ckpt': '/home/yzhang/workspaces/smpl-env-gen-3d-internal/data/resnet18.pth',
'vposer_ckpt_path': '/home/yzhang/body_models/VPoser/vposer_v1_0',
'device': torch.device("cuda" if torch.cuda.is_available() else "cpu"),
'ckpt_dir': 'checkpoints_v2/checkpoints_proxtrain_models1_batch32_epoch30_LR0.0003_LossVposer0.001_LossKL0.1_LossContact0.000001_LossCollision0.000001',
'n_samples': 5
}
### our conditional vae model
model_h = HumanCVAES1(latentD=256, # default value in our checkpoints
n_dim_body=75,# global T(3d) + global R(6d) + shape (10d) + pose (32d) + hand (24d)
scene_model_ckpt=None,
test=True)
# model_h = HumanCVAES2(latentD_g=256, # default value in our checkpoints
# latentD_l=256, # default value in our checkpoints
# n_dim_body=75,# global T(3d) + global R(6d) + shape (10d) + pose (32d) + hand (24d)
# scene_model_ckpt=None,
# test=True)
### VPoesr
vposer, _ = load_vposer(testconfig['vposer_ckpt_path'], vp_model='snapshot')
### smplx
body_mesh_model = smplx.create(testconfig['smplx_model_path'],
model_type='smplx',
gender='neutral', ext='npz',
num_pca_comps=12,
create_global_orient=True,
create_body_pose=True,
create_betas=True,
create_left_hand_pose=True,
create_right_hand_pose=True,
create_expression=True,
create_jaw_pose=True,
create_leye_pose=True,
create_reye_pose=True,
create_transl=True,
batch_size=testconfig['n_samples']
)
## setup models and load checkpoints
model_h.eval()
model_h.to(testconfig['device'])
vposer.to(testconfig['device'])
body_mesh_model.to(testconfig['device'])
ckp_path = sorted(glob.glob(os.path.join(testconfig['ckpt_dir'],'epoch-*.ckp')),
key=os.path.getmtime)[-1]
checkpoint = torch.load(ckp_path)
print('[INFO] load checkpoints: ' + ckp_path)
model_h.load_state_dict(checkpoint['model_h_state_dict'])
```
Run the following code block to sample body configurations.
```
## generating body configurations
### concatenate depth and seg
xs = torch.cat([depth, seg],dim=1)
xs_n = xs.repeat(testconfig['n_samples'], 1,1,1)
### model inference
xhnr_gen= model_h.sample(xs_n)
### recover to the original translation/orientation range
xhn_gen = convert_to_3D_rot(xhnr_gen)
xh_gen = recover_global_T(xhn_gen, cam_intrinsic.repeat(testconfig['n_samples'],1,1),
max_d.repeat(testconfig['n_samples']))
```
In the following, we visualize the generated body configurations.
```
## visualizing a body mesh. Note that we use WebGL, which may cause slow responses or even stuck.
body_params = body_params_encapsulate(xh_gen, to_numpy=False, batched=True)
body_params['body_pose'] = vposer.decode(body_params['body_pose'], output_type='aa').view(testconfig['n_samples'],-1)
smplx_out = body_mesh_model(**body_params)
smplx_verts = smplx_out.vertices.detach().cpu().numpy().squeeze()
cam_ext = cam_extrinsic.squeeze().detach().cpu().numpy()
### create a body point cloud
pcd_body_list = []
for body_index in range(testconfig['n_samples']):
# body_index = 20
pcd_body = o3d.geometry.PointCloud()
pcd_body.points = o3d.utility.Vector3dVector(smplx_verts[body_index])
pcd_body = pcd_body.uniform_down_sample(every_k_points=10)
### perform transformation
pcd_body.transform(cam_ext)
pcd_body_list.append(pcd_body)
### create a scene point cloud
pcd_scene = o3d.geometry.PointCloud()
pcd_scene.points = scene_mesh.vertices
pcd_scene.colors = scene_mesh.vertex_colors
pcd_scene = pcd_scene.uniform_down_sample(every_k_points=10)
### create coord frame
mesh_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(
size=0.6, origin=[0, 0, 0])
pcd_coord = o3d.geometry.PointCloud()
pcd_coord.points = mesh_frame.vertices
pcd_coord.colors = mesh_frame.vertex_colors
pcd_coord.transform(cam_ext)
### visualize in WebGL
from open3d import JVisualizer
visualizer = JVisualizer()
visualizer.add_geometry(pcd_scene)
visualizer.add_geometry(pcd_coord)
for body_index in range(testconfig['n_samples']):
visualizer.add_geometry(pcd_body_list[body_index])
visualizer.show()
```
### (4) scene geometry-aware fitting
One see that some generated body meshes are not physically plausible, either floating in the air or penetrating into the scene mesh. Therefore, we have this geometry-aware fitting to overcome these problems.
```
import torch.optim as optim
from torch.autograd import Variable
import chamfer_pytorch.dist_chamfer as ext
def get_contact_id(body_segments_folder, contact_body_parts=['L_Hand', 'R_Hand']):
contact_verts_ids = []
contact_faces_ids = []
for part in contact_body_parts:
with open(os.path.join(body_segments_folder, part + '.json'), 'r') as f:
data = json.load(f)
contact_verts_ids.append(list(set(data["verts_ind"])))
contact_faces_ids.append(list(set(data["faces_ind"])))
contact_verts_ids = np.concatenate(contact_verts_ids)
contact_faces_ids = np.concatenate(contact_faces_ids)
return contact_verts_ids, contact_faces_ids
def verts_transform(verts_batch, cam_ext_batch):
verts_batch_homo = F.pad(verts_batch, (0,1), mode='constant', value=1)
verts_batch_homo_transformed = torch.matmul(verts_batch_homo,
cam_ext_batch.permute(0,2,1))
verts_batch_transformed = verts_batch_homo_transformed[:,:,:-1]
return verts_batch_transformed
def cal_loss(xhr, xhr_rec, cam_ext_batch, s_verts_batch,
s_sdf_batch,s_grid_min_batch, s_grid_max_batch,
lossconfig, fittingconfig):
### reconstruction loss
loss_rec = lossconfig['weight_loss_rec']*F.l1_loss(xhr, xhr_rec)
xh_rec = convert_to_3D_rot(xhr_rec)
### vposer loss
vposer_pose = xh_rec[:,16:48]
loss_vposer = lossconfig['weight_loss_vposer'] * torch.mean(vposer_pose**2)
### contact loss
body_param_rec = body_params_encapsulate(xh_rec, to_numpy=False, batched=True)
body_param_rec['body_pose'] = vposer.decode(body_param_rec['body_pose'],
output_type='aa').view(xhr.shape[0], -1)
smplx_output = body_mesh_model(return_verts=True, **body_param_rec)
body_verts_batch = smplx_output.vertices #[b, 10475,3]
body_verts_batch = verts_transform(body_verts_batch, cam_ext_batch)
vid, fid = get_contact_id(body_segments_folder=fittingconfig['body_segments_folder'],
contact_body_parts=fittingconfig['contact_part'])
body_verts_contact_batch = body_verts_batch[:, vid, :]
dist_chamfer_contact = ext.chamferDist()
contact_dist, _ = dist_chamfer_contact(body_verts_contact_batch.contiguous(),
s_verts_batch.contiguous())
loss_contact = lossconfig['weight_contact'] * torch.mean(torch.sqrt(contact_dist+1e-4)
/(torch.sqrt(contact_dist+1e-4)+0.01))
### sdf collision loss
s_grid_min_batch = s_grid_min_batch.unsqueeze(1)
s_grid_max_batch = s_grid_max_batch.unsqueeze(1)
norm_verts_batch = (body_verts_batch - s_grid_min_batch) / (s_grid_max_batch - s_grid_min_batch) *2 -1
n_verts = norm_verts_batch.shape[1]
body_sdf_batch = F.grid_sample(s_sdf_batch.unsqueeze(1),
norm_verts_batch[:,:,[2,1,0]].view(-1, n_verts,1,1,3),
padding_mode='border')
# if there are no penetrating vertices then set sdf_penetration_loss = 0
if body_sdf_batch.lt(0).sum().item() < 1:
loss_sdf_pene = torch.tensor(0.0, dtype=torch.float32, device=self.device)
else:
loss_sdf_pene = body_sdf_batch[body_sdf_batch < 0].abs().mean()
loss_collision = lossconfig['weight_collision']*loss_sdf_pene
return loss_rec, loss_vposer, loss_contact, loss_collision
def fitting(xhr_in, cam_extrinsic,
s_verts, s_sdf, s_grid_min, s_grid_max, max_d,
fittingconfig, lossconfig):
batch_size = xhr_in.shape[0]
xhr_rec = Variable(torch.randn(batch_size,75).cuda(), requires_grad=True)
optimizer = optim.Adam([xhr_rec], lr=fittingconfig['init_lr_h'])
xhr_rec.data = xhr_in.clone()
cam_ext_batch = cam_extrinsic.repeat(batch_size, 1,1)
max_d_batch = max_d.repeat(batch_size)
s_verts_batch = s_verts.repeat(batch_size, 1,1)
s_sdf_batch = s_sdf.repeat(batch_size, 1,1,1)
s_grid_min_batch = s_grid_min.repeat(batch_size, 1)
s_grid_max_batch = s_grid_max.repeat(batch_size, 1)
for ii in range(fittingconfig['num_iter']):
optimizer.zero_grad()
loss_rec, loss_vposer, loss_contact, loss_collision = cal_loss(xhr_in, xhr_rec, cam_ext_batch, s_verts_batch,
s_sdf_batch,s_grid_min_batch, s_grid_max_batch,
lossconfig, fittingconfig)
loss = loss_rec + loss_vposer + loss_contact + loss_collision
if fittingconfig['verbose']:
print('[INFO][fitting] iter={:d}, l_rec={:f}, l_vposer={:f}, l_contact={:f}, l_collision={:f}'.format(
ii, loss_rec.item(), loss_vposer.item(),
loss_contact.item(), loss_collision.item()) )
loss.backward(retain_graph=True)
optimizer.step()
### recover global translation and orientation
xh_rec = convert_to_3D_rot(xhr_rec)
return xh_rec
fittingconfig={'init_lr_h': 0.05,
'num_iter': 50,
'contact_part': ['back','butt','L_Hand','R_Hand','L_Leg',
'R_Leg','thighs'],
'body_segments_folder': os.path.join(proxe_path,'body_segments'),
'verbose': True
}
lossconfig={
'weight_loss_rec': 1,
'weight_loss_vposer':0.01,
'weight_contact': 0.1,
'weight_collision' : 0.5
}
### put scene to tensors
s_verts = torch.tensor(scene_verts, dtype=torch.float32).cuda().unsqueeze(0)
s_grid_min = torch.tensor(grid_min, dtype=torch.float32).cuda().unsqueeze(0)
s_grid_max = torch.tensor(grid_max, dtype=torch.float32).cuda().unsqueeze(0)
s_sdf = torch.tensor(sdf, dtype=torch.float32).cuda().unsqueeze(0)
xhr_gen = recover_global_T(xhnr_gen, cam_intrinsic.repeat(testconfig['n_samples'],1,1),
max_d.repeat(testconfig['n_samples']))
xh_fitting = fitting(xhr_gen, cam_extrinsic,
s_verts, s_sdf, s_grid_min, s_grid_max, max_d,
fittingconfig, lossconfig)
## visualizing a body mesh. Note that we use WebGL, which may cause slow responses or even stuck.
body_params = body_params_encapsulate(xh_fitting, to_numpy=False, batched=True)
body_params['body_pose'] = vposer.decode(body_params['body_pose'], output_type='aa').view(testconfig['n_samples'],-1)
smplx_out = body_mesh_model(**body_params)
smplx_verts = smplx_out.vertices.detach().cpu().numpy().squeeze()
cam_ext = cam_extrinsic.squeeze().detach().cpu().numpy()
### create a body point cloud
pcd_body_list = []
for body_index in range(testconfig['n_samples']):
# body_index = 20
pcd_body = o3d.geometry.PointCloud()
pcd_body.points = o3d.utility.Vector3dVector(smplx_verts[body_index])
pcd_body = pcd_body.uniform_down_sample(every_k_points=10)
### perform transformation
pcd_body.transform(cam_ext)
pcd_body_list.append(pcd_body)
### create a scene point cloud
pcd_scene = o3d.geometry.PointCloud()
pcd_scene.points = scene_mesh.vertices
pcd_scene.colors = scene_mesh.vertex_colors
pcd_scene = pcd_scene.uniform_down_sample(every_k_points=10)
### create coord frame
mesh_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(
size=0.6, origin=[0, 0, 0])
pcd_coord = o3d.geometry.PointCloud()
pcd_coord.points = mesh_frame.vertices
pcd_coord.colors = mesh_frame.vertex_colors
pcd_coord.transform(cam_ext)
### visualize in WebGL
from open3d import JVisualizer
visualizer = JVisualizer()
visualizer.add_geometry(pcd_scene)
visualizer.add_geometry(pcd_coord)
for body_index in range(testconfig['n_samples']):
visualizer.add_geometry(pcd_body_list[body_index])
visualizer.show()
```
| true |
code
| 0.59131 | null | null | null | null |
|
# 02 - XOR Modell mit TensorFlow
```
# see https://aimatters.wordpress.com/2016/01/16/solving-xor-with-a-neural-network-in-tensorflow/
import tensorflow as tf
import time
```
#### Trainings- und Testdaten
```
XOR_X = [[0,0],[0,1],[1,0],[1,1]]
XOR_Y = [[0],[1],[1],[0]]
```
#### Weight und Bias definieren
```
x_ = tf.placeholder(tf.float32, shape=[4,2], name = 'x-input')
y_ = tf.placeholder(tf.float32, shape=[4,1], name = 'y-input')
Weight1 = tf.Variable(tf.random_uniform([2,2], -1, 1, seed=80636), name = "Weight1")
Weight2 = tf.Variable(tf.random_uniform([2,1], -1, 1, seed=80636), name = "Weight2")
Bias1 = tf.Variable(tf.zeros([2]), name = "Bias1")
Bias2 = tf.Variable(tf.zeros([1]), name = "Bias2")
```
#### Layer definieren

```
with tf.name_scope("layer2") as scope:
A2 = tf.sigmoid(tf.matmul(x_, Weight1) + Bias1)
with tf.name_scope("layer3") as scope:
Hypothesis = tf.sigmoid(tf.matmul(A2, Weight2) + Bias2)
with tf.name_scope("cost") as scope:
cost = tf.reduce_mean(( (y_ * tf.log(Hypothesis)) +
((1 - y_) * tf.log(1.0 - Hypothesis)) ) * -1)
with tf.name_scope("train") as scope:
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
```
#### initialisieren
```
init = tf.global_variables_initializer()
sess = tf.Session()
```
#### TensorBoard
```
writer = tf.summary.FileWriter("./logs/xor_logs/xor_tf", sess.graph)
```
#### Training
```
sess.run(init)
t_start = time.clock()
for i in range(100001):
sess.run(train_step, feed_dict={x_: XOR_X, y_: XOR_Y})
if i % 10000 == 0:
print('Epoch ', i)
print('Hypothesis ', sess.run(Hypothesis, feed_dict={x_: XOR_X, y_: XOR_Y}))
print('cost ', sess.run(cost, feed_dict={x_: XOR_X, y_: XOR_Y}))
print('Weight1 ', sess.run(Weight1))
print('Bias1 ', sess.run(Bias1))
print('Weight2 ', sess.run(Weight2))
print('Bias2 ', sess.run(Bias2))
t_end = time.clock()
print('Elapsed time ', t_end - t_start)
```
#### Ergebnis
```
print sess.run(Hypothesis, feed_dict={x_: XOR_X, y_: XOR_Y})
```
#### freeze Modell und als TensorFlow-Datei speichern
```
freeze_var_names = list(set(v.op.name for v in tf.global_variables()))
print freeze_var_names
output_names = [Hypothesis.op.name]
print output_names
from tensorflow.python.framework.graph_util import remove_training_nodes
sub_graph_def = remove_training_nodes(sess.graph_def)
from tensorflow.python.framework import graph_util
frozen_graph = graph_util.convert_variables_to_constants(sess,
sub_graph_def,
output_names,
freeze_var_names)
graph_path = tf.train.write_graph(frozen_graph, "models", "xor_tf.pb", as_text=False)
print('%s written' % graph_path)
```
## uTensor
utensor-cli convert models/xor2n.pb --output-nodes=layer3_3/Sigmoid
unsupported op type in uTensor: Sigmoid
| true |
code
| 0.517205 | null | null | null | null |
|
# Ridge regression and model selection
Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python which is based on the book by James et al. Intro to Statistical Learning.
## Loading data
```
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
%matplotlib inline
plt.style.use('ggplot')
datafolder = "../data/"
def loo_risk(X,y,regmod):
"""
Construct the leave-one-out square error risk for a regression model
Input: design matrix, X, response vector, y, a regression model, regmod
Output: scalar LOO risk
"""
loo = LeaveOneOut()
loo_losses = []
for train_index, test_index in loo.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
regmod.fit(X_train,y_train)
y_hat = regmod.predict(X_test)
loss = np.sum((y_hat - y_test)**2)
loo_losses.append(loss)
return np.mean(loo_losses)
def emp_risk(X,y,regmod):
"""
Return the empirical risk for square error loss
Input: design matrix, X, response vector, y, a regression model, regmod
Output: scalar empirical risk
"""
regmod.fit(X,y)
y_hat = regmod.predict(X)
return np.mean((y_hat - y)**2)
# In R, I exported the dataset from package 'ISLR' to a csv file.
df = pd.read_csv(datafolder+'Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
```
## Ridge Regression
```
alphas = 10**np.linspace(10,-2,100)*0.5
ridge = Ridge()
coefs = []
for a in alphas:
ridge.set_params(alpha=a)
ridge.fit(scale(X), y)
coefs.append(ridge.coef_)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.axis('tight')
plt.xlabel('lambda')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization');
```
The above plot shows that the Ridge coefficients get larger when we decrease lambda.
## Exercises
__Exercise 1__ Plot the LOO risk and the empirical risk as a function of lambda.
```
alphas = np.linspace(30,1,100)
rcv = RidgeCV(alphas = alphas, store_cv_values=True,normalize=True)
rcv.fit(X,y)
cv_vals = rcv.cv_values_
LOOr = cv_vals.mean(axis=0)
EMPr = []
for a in alphas:
ridge.set_params(alpha=a)
ridge.fit(scale(X), y)
EMPr.append(emp_risk(X,y,ridge))
plt.plot(alphas,LOOr)
plt.xlabel('lambda')
plt.ylabel('Risk')
plt.title('LOO Risk for Ridge');
plt.show()
plt.plot(alphas,EMPr)
plt.xlabel('lambda')
plt.ylabel('Risk')
plt.title('Emp Risk for Ridge');
plt.show()
```
__Exercise 2__ Implement and test forward stagewise regression (recall that stagewise and stepwise are different).
```
n,p = X.shape
Xsc = scale(X)
ysc = scale(y)
```
I'll implement a different variant of forward stagewise, where the correlation updates the current beta vector by adding them.
```
MSEiter = []
res = ysc
beta = np.zeros(p)
tol = 1e-2
corrmax = 1.
while corrmax > tol:
res_corr = Xsc.T.dot(scale(res)) / n
jmax, corrmax = max(enumerate(np.abs(res_corr)), key=lambda x: x[1])
beta[jmax] = beta[jmax] + res_corr[jmax]
res = ysc - Xsc.dot(beta)
MSE = np.sum(res**2.)
MSEiter.append(MSE)
beta
lm = LinearRegression()
lm.fit(Xsc,ysc)
lm.coef_
```
| true |
code
| 0.676887 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/0201shj/CNN-Cats-Dogs/blob/main/4_2_aug_pretrained_ipynb.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
%matplotlib inline
!ls -l
!unzip training_data.zip
!unzip validation_data.zip
import glob
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
IMG_DIM = (150, 150)
train_files = glob.glob('training_data/*')
train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files]
train_imgs = np.array(train_imgs)
train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files]
validation_files = glob.glob('validation_data/*')
validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files]
validation_imgs = np.array(validation_imgs)
validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files]
print('Train dataset shape:', train_imgs.shape,
'\tValidation dataset shape:', validation_imgs.shape)
train_imgs_scaled = train_imgs.astype('float32')
validation_imgs_scaled = validation_imgs.astype('float32')
train_imgs_scaled /= 255
validation_imgs_scaled /= 255
batch_size = 50
num_classes = 2
epochs = 50
input_shape = (150, 150, 3)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
# encode wine type labels
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)
print(train_labels[0:5], train_labels_enc[0:5])
train_datagen = ImageDataGenerator( zoom_range=0.3, rotation_range=50, # rescale=1./255,
width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2,
horizontal_flip=True, fill_mode='nearest')
val_datagen = ImageDataGenerator() # rescale=1./255
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)
from tensorflow.keras.applications import vgg16
from tensorflow.keras.models import Model
import tensorflow.keras
vgg = vgg16.VGG16(include_top=False, weights='imagenet',
input_shape=input_shape)
output = vgg.layers[-1].output
output = tensorflow.keras.layers.Flatten()(output)
vgg_model = Model(vgg.input, output)
vgg_model.trainable = False
for layer in vgg_model.layers:
layer.trainable = False
vgg_model.summary()
import pandas as pd
pd.set_option('max_colwidth', -1)
layers = [(layer, layer.name, layer.trainable) for layer in vgg_model.layers]
pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable'])
print("Trainable layers:", vgg_model.trainable_weights)
bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1])
print(bottleneck_feature_example.shape)
plt.imshow(bottleneck_feature_example[0][:,:,0])
def get_bottleneck_features(model, input_imgs):
features = model.predict(input_imgs, verbose=0)
return features
train_features_vgg = get_bottleneck_features(vgg_model, train_imgs_scaled)
validation_features_vgg = get_bottleneck_features(vgg_model, validation_imgs_scaled)
print('Train Bottleneck Features:', train_features_vgg.shape,
'\tValidation Bottleneck Features:', validation_features_vgg.shape)
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, InputLayer
from tensorflow.keras.models import Sequential
from tensorflow.keras import optimizers
model = Sequential()
model.add(vgg_model)
model.add(Dense(512, activation='relu', input_dim=input_shape))
model.add(Dropout(0.3))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['accuracy'])
model.summary()
history = model.fit_generator(train_generator, epochs=50,
validation_data=val_generator, verbose=1)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Pre-trained CNN (Transfer Learning) Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = list(range(1,51))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, 51, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, 51, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
model.save('4_2-pretrained Aug_cnn.h5')
```
| true |
code
| 0.713968 | null | null | null | null |
|
# Replication - Likelihood Approximation: Additional 1 (Large P) - Table
Here we provide a notebook to replicate the simulation results for the likelihood approximations. These are additional simualtions to evaluate the impact of the number of covariates P on the approximation.
This produced the table from the supplement.
The notebook replicates the results in:
- /out/simulation/tables/likelihood_approx_MPE_additional1.csv
- /out/simulation/tables/likelihood_approx_MAPE_additional1.csv
The main script can be found at:
- /scripts/simulation/tables/likelihood_approx_additional1.py
```
# google colab specific - installing probcox
!pip3 install probcox
# Modules
# =======================================================================================================================
import os
import sys
import shutil
import subprocess
import tqdm
import numpy as np
import pandas as pd
import torch
from torch.distributions import constraints
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
import probcox as pcox
dtype = torch.FloatTensor
np.random.seed(90834)
torch.manual_seed(873645)
# Custom function for evaluation
# =======================================================================================================================
# run the approximation 1000 times for a given setting and return MPE/MAPE
def run(surv, pred, batch, est):
total_obs = surv.shape[0]
total_events = torch.sum(surv[:, -1] == 1).numpy().tolist()
sampling_proportion = [total_obs, batch, total_events, None]
ll = []
ll2 = []
while len(ll) <=1000:
idx = np.unique(np.concatenate((np.random.choice(np.where(surv[:, -1]==1)[0], 2, replace=False), np.random.choice(range(surv.shape[0]), batch-2, replace=False))))
sampling_proportion[-1] = torch.sum(surv[idx, -1]).numpy().tolist()
if torch.sum(surv[idx, -1]) > 1:
e = pcox.CoxPartialLikelihood(pred=pred[idx], sampling_proportion=sampling_proportion).log_prob(surv=surv[idx]).detach().numpy()
MPE = ((e-est)/est)
MAPE = np.abs(MPE)
ll.append(MPE.tolist())
ll2.append(MAPE.tolist())
return(np.mean(ll), np.mean(ll2))
# Simulation Settings
# =======================================================================================================================
I = [10000] # individuals
P = [500, 1000] # covariates
C = [0.5, 0.75, 0.95, 0.99] # censorship
B = [64, 128, 256, 512] # batch size
# Simulation
# =======================================================================================================================
res = np.zeros((8, 4))
res2 = np.zeros((8, 4))
sim_n =[]
ii = 0
jj = 0
for p in P:
# make baselinehazard
cond = True
scale = 100
while cond:
theta = np.random.normal(0, 0.01, (p, 1))
TVC = pcox.TVC(theta=theta, P_binary=int(p/2), P_continuous=int(p/2), dtype=dtype)
TVC.make_lambda0(scale=scale)
s = np.sum([torch.sum(TVC.sample()[0][:, -1]).numpy() for ii in (range(1000))])/1000
if np.logical_and(s>=0.1, s<=0.9):
cond = False
scale = scale/5
theta_ = torch.normal(0, 0.01, (p, 1)).type(dtype)
for i in I:
for c in C:
# make dataset
print('s')
surv, X = TVC.make_dataset(obs=i, fraction_censored=c)
sim_n.append('I(N): ' + str(i) + '(' + str(surv.shape[0]) + ')' +', P: ' + str(p) + ', C: ' + str(c))
pred = torch.mm(X, theta_).type(dtype)
est = pcox.CoxPartialLikelihood(pred=pred, sampling_proportion=None).log_prob(surv=surv).detach().numpy()
# fit to batch
for b in tqdm.tqdm(B):
res[ii, jj], res2[ii, jj] = run(surv=surv, pred=pred, batch=b, est=est)
jj += 1
ii += 1
jj = 0
res = np.round(res, 2)
res2 = np.round(res2, 2)
MPE = pd.DataFrame(np.concatenate((np.asarray(sim_n)[:, None], res.astype(str)), axis=1))
MAPE = pd.DataFrame(np.concatenate((np.asarray(sim_n)[:, None], res2.astype(str)), axis=1))
MPE
MAPE
```
| true |
code
| 0.420124 | null | null | null | null |
|
In this notebook we will be using the smtd_preprocessing.py file which is a preprocessing pileline for twitter data to pre-process our tweets and then train our own twitter embeddings. <br>
We can find pre-trained twitter embedding
```
import os
import sys
import pandas as pd
from gensim.models import Word2Vec
import warnings
warnings.filterwarnings('ignore')
from nltk.tokenize import TweetTokenizer
tweet_tokenizer = TweetTokenizer()
PATH = "path to repo"
preprocessing_path = PATH+"/practical-nlp/Ch8/O5smtd_preprocessing.py"
sys.path.append(os.path.abspath(preprocessing_path))
import O5_smtd_preprocessing
```
Let's use the dir() function to find all the properties and methods in the package.
```
dir(smtd_preprocessing)
```
## Read Data
Let's read the data. Normally in csv files the values are separated by a ','.<br> In this case, it is separated by a ';' so we will specify the delimiter as ';'.
```
datapath = "/home/etherealenvy/github/practical-nlp/Ch8/Data/sts_gold_tweet.csv"
df = pd.read_csv(datapath,error_bad_lines=False,delimiter=";")
#let's have a loof at the dataset
df.head()
#pre-process tweets using our package
df['tweet'] = df['tweet'].apply(lambda x: smtd_preprocessing.process_TweetText(x))
df['tweet'] = df['tweet'].apply(lambda x: tweet_tokenizer.tokenize(x))
tweets = df['tweet'].values
```
## Train Embeddings
Let's train our own embeddings.
```
#CBOW
import time
start = time.time()
word2vec_tweet = Word2Vec(tweets,min_count=5, sg=0)
end = time.time()
print("CBOW Model Training Complete.\nTime taken for training is:{:.5f} sec ".format((end-start)))
#Summarize the loaded model
print("Summary of the model:",word2vec_tweet)
#Summarize vocabulary
words = list(word2vec_tweet.wv.vocab)
print("Small part of Vocabulary of our model:",words[:10])
#Acess vector for one word
print("Acess Vector for the word 'lol'",word2vec_tweet['lol'])
from gensim.models import Word2Vec, KeyedVectors #To load the model
import warnings
warnings.filterwarnings('ignore') #ignore any generated warnings
import numpy as np
import matplotlib.pyplot as plt #to generate the t-SNE plot
from sklearn.manifold import TSNE #scikit learn's TSNE
#Preprocessing our models vocabulary to make better visualizations
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
words_vocab= list(word2vec_tweet.wv.vocab)#all the words in the vocabulary.
print("Size of Vocabulary:",len(words_vocab))
print("Few words in Vocabulary",words_vocab[:50])
#Let us remove the stop words from this it will help making the visualization cleaner
stopwords_en = stopwords.words('english')
words_vocab_without_sw = [word.lower() for word in words_vocab if not word in stopwords_en]
print("Size of Vocabulary without stopwords:",len(words_vocab_without_sw))
print("Few words in Vocabulary without stopwords",words_vocab_without_sw[:30])
#The size didnt reduce much after removing the stop words so lets try visualizing only a selected subset of words
#With the increase in the amount of data, it becomes more and more difficult to visualize and interpret
#In practice, similar words are combined into groups for further visualization.
keys = ['weekend','twitter','mcdonalds','coffee']
embedding_clusters = []
word_clusters = []
for word in keys:
embeddings = []
words = []
for similar_word, _ in word2vec_tweet.most_similar(word, topn=10):
words.append(similar_word)
embeddings.append(word2vec_tweet[similar_word])
embedding_clusters.append(embeddings)#apending access vector of all similar words
word_clusters.append(words)#appending list of all smiliar words
print("Embedding clusters:",embedding_clusters[0][0])#Access vector of the first word only
print("Word Clousters:",word_clusters[:2])
```
## Visualization
We will visualize our embeddings using T-SNE. If you do not know aht T-SNE is or have forgotten please refer to Ch3 in the book. We will be using the T-SNE code previously introduced in a notebook from Ch3 which can be found [here](https://github.com/practical-nlp/practical-nlp/blob/master/Ch3/09_Visualizing_Embeddings_Using_TSNE.ipynb).
```
from sklearn.manifold import TSNE
import numpy as np
embedding_clusters = np.array(embedding_clusters)
n, m, k = embedding_clusters.shape #geting the dimensions
tsne_model_en_2d = TSNE(perplexity=10, n_components=2, init='pca', n_iter=1500, random_state=2020)
embeddings_en_2d = np.array(tsne_model_en_2d.fit_transform(embedding_clusters.reshape(n * m, k))).reshape(n, m, 2) #reshaping it into 2d so we can visualize it
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
%matplotlib inline
#script for constructing two-dimensional graphics using Matplotlib
def tsne_plot_similar_words(labels, embedding_clusters, word_clusters, a=0.7):
plt.figure(figsize=(16, 9))
for label, embeddings, words in zip(labels, embedding_clusters, word_clusters):
x = embeddings[:,0]
y = embeddings[:,1]
plt.scatter(x, y, alpha=a, label=label)
for i, word in enumerate(words):
plt.annotate(word, alpha=0.5, xy=(x[i], y[i]), xytext=(5, 2),
textcoords='offset points', ha='right', va='bottom', size=8)
plt.legend(loc=4)
plt.grid(True)
plt.show()
tsne_plot_similar_words(words_vocab_without_sw, embeddings_en_2d, word_clusters)
```
| true |
code
| 0.528047 | null | null | null | null |
|
# 1η εργαστηριακή άσκηση: Εισαγωγή στις γλωσσικές αναπαραστάσεις
<h2><center> Περιγραφή </center></h2>
__Σκοπός__ αυτού του μέρους της 1ης εργαστηριακής άσκησης είναι να γίνει μια εισαγωγή σε διαφορετικές γλωσσικές αναπαραστάσεις και τη χρήση τους για γλωσσικά tasks. Στο πρώτο μέρος θα εμπλουτίσουμε τον ορθογράφο που φτιάξαμε στην προπαρασκευή με character level και word level unigram γλωσσικά μοντέλα. Στο δεύτερο μέρος θα κάνουμε μια εισαγωγή στις λεξικές αναπαραστάσεις bag-of-words και word2vec και θα τις χρησιμοποιήσουμε σε ένα απλό πρόβλημα ταξινόμησης.
<h2><center> Μέρος 1: Ορθογράφος </h2></center>
Αρχικά κατεβάζουμε το corpus που θα χρησιμοποιήσουμε. Θα ασχοληθούμε με το βιβλίο __War of the Worlds__ όπως και στην προπαρασκευή έτσι ώστε να μπορούμε να συγκρίνουμε τα αποτελέσματα πάνω στο ίδιο corpus. Με την παρακάτω εντολή, λοιπόν, το κατεβάζουμε από το project Gutenberg σε plain txt μορφή και το αποθηκεύουμε με το όνομα __War.txt__.
```
! wget -c http://www.gutenberg.org/files/36/36-0.txt -O War.txt
```
### Βήμα 10: Εξαγωγή στατιστικών
Στο βήμα αυτό θα κατασκευάσουμε 2 πηγές στατιστικών για τα γλωσσικά μας μοντέλα, μία __word/token level__ και μία __character level__.
Για το βήμα αυτό αλλά και για την συνέχεια της άσκησης θα χρειαστούμε ορισμένες συναρτήσεις που υλοποιήθηκαν στην προπαρασκευή και μας βοηθάνε στην επεξεργασία του corpus. Συγκεκριμένα έχουμε τις εξής συναρτήσεις (η περιγραφή της λειτουργίας τους βρίσκεται στην προπαρασκευή):
__1. identity_preprocess:__
```
# Gets a string as input and just returns the same string.
def identity_preprocess(string_var):
return string_var
```
__2. read_path:__
```
# Reads a file tokenizing each line.
def read_path(file_path, preprocess = identity_preprocess):
# Initilize the list of processed lines
processed_lines = []
# Open file to read mode
with open(file_path, "r") as f:
for line in f:
# Omit spaces
if not line.isspace():
processed_lines.extend(preprocess(line))
return processed_lines
```
__3. tokenize:__
```
import string
# Tokenize a sttring
def tokenize(s):
# Remove possible spaces from the start or the end of the string and
# turn all letters lowercase.
s = s.strip().lower()
# Remove all punctuations, symbols and numbers from the string leaving
# only lowercase alphabetical letters.
s = "".join((char for char in s if char not in string.punctuation and not char.isdigit()))
# Replace new line characters with spaces
s = s.replace('\n',' ')
# Split the string in every space resulting in a list of tokens
res = s.split(" ")
return res
```
__4. get_tokens:__
```
# Get all separate tokens from a file.
def get_tokens(file_path):
tokens = read_path(file_path, tokenize)
distinct_tokens = list(dict.fromkeys(tokens))
return distinct_tokens
```
__5. get_alphabet:__
```
# Get the alphabet of a file given its tokens.
def get_alphabet(tokens):
alphabet = []
for token in tokens:
alphabet.extend(list(token))
alphabet = list(dict.fromkeys(alphabet))
return alphabet
```
Τώρα, λοιπόν, που έχουμε ορίσει τις συναρτήσεις που χρειαζόμαστε από την προπαρασκευή μπορούμε να συνεχίσουμε κανονικά στο βήμα 10.
__α) token level:__ Πρέπει να εξάγουμε την πιθανότητα εμφάνισης κάθε token (λέξης) του βιβλίου και να την αποθηκεύσουμε σε ένα λεξικό με __key το token και value την πιθανότητα εμφάνισής του__.
__Διαδικασία: __
- Θα φτιάξουμε μία συνάρτηση η οποία θα δέχεται ως όρισμα το path του corpus και θα επιστρέφει το ζητούμενο λεξικό. Αρχικά, θα αποθηκεύει σε μία λίστα όλα τα tokens χρησιμοποιώντας την συνάρτηση get_tokens και θα αρχικοποιεί το λεξικό μας με αυτά τα tokens ως keys και με value ίσο με 0. Στη συνέχεια, για κάθε λέξη του corpus θα αυξάνουμε το αντίστοιχο value στο λεξικό μας. Έτσι αφού διαιρέσουμε και κάθε value με τον αριθμό όλων των λέξεων του βιβλίου (για να μετατραπεί σε μία πιθανότητα) θα έχουμε δημιουργήσει το ζητούμενο λεξικό.
```
def token_level(path):
# Keys of the dictionary are all discrete tokens.
keys = get_tokens(path)
# Initialize the dictionary with the above keys and all values equal to 0.
dict_token = dict.fromkeys(keys, 0)
# Get a list with all the words containing in the corpus.
words = read_path(path, tokenize)
# For each word increase the value of the corresponding key.
for word in words:
dict_token[word] += 1
# Divide each value with the total number of words to get the probability of each key.
dict_token = {k: v / len(words) for k, v in dict_token.items()}
return dict_token
```
- Καλούμε την συνάρτηση που ορίσαμε παραπάνω και αποθηκεύουμε το λεξικό μας ως __dict_token__.
```
# Get the dictionary of the frequency of each token.
dict_token = token_level("War.txt")
```
__β) character level:__ Εδώ πρέπει να εξάγουμε την πιθανότητα εμφάνισης κάθε χαρακτήρα του corpus και, αντίστοιχα με πριν, να την αποθηκεύσουμε σε ένα λεξικό με key τον χαρακτήρα και value την πιθανότητα εμφάνισής του.
__Διαδικασία:__
- Αντίστοιχα λοιπόν παραπάνω θα φτιάξουμε μία παρόμοια συνάρτηση, η οποία αυτή τη φορά θα κάνει την ίδια διαδικασία για κάθε χαρακτήρα του corpus αντί για κάθε λέξη. Εδώ θα χρησιμοποιηθεί η συνάρτηση get_alphabet η οποία θα μας δώσει τα keys του λεξικού μας. Τα values θα υπολογιστούν διατρέχοντας μία φορά όλα το βιβλίο και αυξάνοντας κάθε φορά κατά 1 το value του που αντιστοιχεί στον χαρακτήρα που συναντάμε. Τέλος, πρέπει να διαιρέσουμε με όλους τους εμφανιζόμενους χαρακτήρες.
```
def character_level(path):
# Keys of the dictionary are the alphabet of the corpus.
keys = get_alphabet(get_tokens(path))
# Initialize the dictionary with the above keys and all values equal to 0.
dict_character = dict.fromkeys(keys, 0)
# Get a list with all the words containing in the corpus.
words = read_path(path, tokenize)
# Counter that will keep track of all the characters in the corpus.
total = 0
# For each letter of each word increase the corresponding value.
for word in words:
for char in list(word):
total += 1
dict_character[char] += 1
# Divide each value with the total number of characters to get the probability of each key.
dict_character = {k: v / total for k, v in dict_character.items()}
return dict_character
```
Καλούμε την συνάρτηση που ορίσαμε παραπάνω και αποθηκεύουμε το λεξικό μας ως __dict_character__.
```
dict_character = character_level("War.txt")
```
Ολοκληρώνοντας, λοιπόν, το βήμα 10 έχουμε δύο λεξικά που αποτελούν τις πηγές στατιστικών για τα γλωσσικά μας μοντέλα, ένα word/token level και ένα character level.
### Βήμα 11: Κατασκευή μετατροπέων FST
Για τη δημιουργία του ορθογράφου θα χρησιμοποιήσουμε μετατροπείς βασισμένους στην απόσταση Levenshtein. Θα χρησιμοποιήσουμε 3 τύπους από edits κάθε ένα από τα οποία χαρακτηρίζεται από ένα κόστος. Έχουμε:
- __εισαγωγές χαρακτήρων__
- __διαγραφές χαρακτήρων__
- __αντικαταστάσεις χαρακτήρων__
__α)__ Στο βήμα αυτό θα υπολογίσουμε την μέση τιμή των βαρών του word level μοντέλου που κατασκευάσαμε στο βήμα 10α, το οποίο θα αποτελεί το κόστος w των edits. Συγκεκριμένα, αφού έχουμε την πιθανότητα εμφάνισης κάθε λέξης, το βάρος της ορίζεται ως ο αρνητικός λογάριθμος της πιθανότητας εμφάνισής της, δηλαδή __w = -log(P)__. Υπολογίζοντας, λοιπόν, το βάρος κάθε λέξης και παίρνοντας την μέση τιμή όλων των βαρών έχουμε το κόστος w, το οποίο επειδή προκύπτει από το token level μοντέλο το ονομάζουμε __w_token__.
```
from math import log10
# Calculate weight of each word.
token_weights = {k:(-log10(v)) for k,v in dict_token.items()}
# Get the mean value of weigths.
w_token = sum(token_weights.values()) / len(token_weights.values())
```
__β)__ Στο βήμα αυτό θα κατασκευάσουμε τον μετατροπέα μας με μία κατάσταση που υλοποιεί την απόσταση Levenshtein αντιστοιχίζοντας:
- Kάθε χαρακτήρα στον εαυτό του με βάρος 0 __(no edit)__.
- Kάθε χαρακτήρα στο <epsilon\> (ε) με βάρος w __(deletion)__.
- Tο <epsilon\> σε κάθε χαρακτήρα με βάρος w __(insertion)__.
- Kάθε χαρακτήρα σε κάθε άλλο χαρακτήρα με βάρος w __(substitution)__.
Όπως και στην προπαρασκευή θα ορίσουμε την συνάρτηση format_arc η οποία διαμορφώνει μία γραμμή του αρχείου περιγραφής του κάθε FST. Συγκεκριμένα δέχεται ως όρισμα τα __src__, __dest__, __ilabel__, __olabel__ και το __weight__ (με default τιμή το 0) και τα επιστρέφει στην κατάλληλη μορφή όπως αναφέρεται και εδώ http://www.openfst.org/twiki/bin/view/FST/FstQuickTour#CreatingFsts/.
```
def format_arc(src, dest, ilabel, olabel, weight=0):
return (str(src) + " " + str(dest) + " " + str(ilabel) + " " + str(olabel) + " " + str(weight))
```
Ακόμη, από την στιγμή που θα κατασκευάσουμε ορισμένα FSTs θα χρειαστούμε ένα αρχείο __chars.syms__ το οποίο θα αντιστοιχίζει κάθε χαρακτήρα του αλφαβήτου με έναν αύξοντα ακέραιο αριθμό. Η διαδικασία αυτή έγινε στο βήμα 4 της προπαρασκευής και περιλαμβάνει την συνάρτηση alphabet_to_int όπως βλέπουμε και παρακάτω:
```
def alphabet_to_int(alphabet):
# Open file
f = open("chars.syms", "w")
# Match epsilon to 0
f.write("EPS" + 7*" " + str(0) + '\n')
num = 21
for character in alphabet:
# Match every other character to an increasing index
f.write(character + 7*" " + str(num) + '\n')
num += 1
f.close()
alphabet_to_int(get_alphabet(get_tokens("War.txt")))
```
Στη συνέχεια, διαμορφώνουμε το αρχείο περιγραφής του μετατροπεά μας σύμφωνα με τις παραπάνω αντιστοιχίσεις. Το αποτέλεσμα αποθηκεύεται στο αρχείο __transducer_token.fst__ (συμβολίζουμε το (ε) με "EPS").
```
# Get alphabet of the corpus
alphabet = get_alphabet(get_tokens("War.txt"))
# Open file to write mode
f = open("transducer_token.fst", "w")
for letter in alphabet:
# no edit
f.write(format_arc(0, 0, letter, letter) + "\n")
# deletion
f.write(format_arc(0, 0, letter, "EPS", w_token) + "\n")
# insertion
f.write(format_arc(0, 0, "EPS", letter, w_token) + "\n")
for i in range(len(alphabet)):
for j in range(len(alphabet)):
if i != j:
# substitution
f.write(format_arc(0, 0, alphabet[i], alphabet[j], w_token) + "\n")
# Make initial state also final state
f.write("0")
# Close file
f.close()
```
Αντίστοιχα με την προπαρασκευή τρέχουμε το παρακάτω shell command που κάνει compile τον μετατροπέα μας. Το binary αρχείο που προκύπτει με όνομα __transducer_token.fst__ είναι αυτό που θα χρησιμοποιήσουμε στις επόμενες λειτουργίες.
```
! fstcompile --isymbols=chars.syms --osymbols=chars.syms transducer_token.fst transducer_token.fst
```
__γ)__ Τώρα θα επαναλάβουμε την ίδια διαδικασία χρησιμοποιώντας το unigram γλωσσικό μοντέλο του βήματος 10β. Θα υπολογίσουμε αρχικά το νέο κόστος των edit το οποίο ισούται με τη μέση τιμή των βαρών του character level μοντέλου και στη συνέχεια θα γράψουμε στο αρχείο __transducer_char.fst__ την περιγραφή του μετατροπέα που θα χρησιμοποιεί το μοντέλο αυτό.
```
# Calculate weight of each character.
character_weigths = {k: (-log10(v)) for k,v in dict_character.items()}
# Get the mean value of weigths.
w_char = sum(character_weigths.values()) / len(character_weigths.values())
# Open file to write mode
f = open("transducer_char.fst", "w")
for letter in alphabet:
# no edit
f.write(format_arc(0, 0, letter, letter) + "\n")
# deletion
f.write(format_arc(0, 0, letter, "EPS", w_char) + "\n")
# insertion
f.write(format_arc(0, 0, "EPS", letter, w_char) + "\n")
for i in range(len(alphabet)):
for j in range(len(alphabet)):
if i != j:
# substitution
f.write(format_arc(0, 0, alphabet[i], alphabet[j], w_char) + "\n")
# Make initial state also final state
f.write("0")
# Close file
f.close()
! fstcompile --isymbols=chars.syms --osymbols=chars.syms transducer_char.fst transducer_char.fst
```
__δ)__ Αυτός είναι ένας αρκετά αφελής τρόπος για τον υπολογισμό των βαρών για κάθε edit. Αν τώρα είχαμε στη διάθεση μας ό,τι δεδομένα θέλουμε αυτό που θα κάναμε είναι ότι θα υπολογίζαμε τα βάρη με βάση το πόσο συχνά γίνεται αυτό το λάθος. Πιο συγκεκριμένα, θα υπολογίζαμε για κάθε σύμβολο του αλφαβήτου την πιθανότητα κάποιος να το διαγράψει, να το προσθέσει ή να το αντικαταστήσει με κάποιο άλλο. Στη συνέχεια, θα μετατρέπαμε αυτές τις πιθανότητες σε κόστη παίρνοντας τον αρνητικό λογάριθμο και θα είχαμε τα τελικά βάρη μας για κάθε σύμβολο στο deletion και το insertion και για κάθε δυάδα συμβόλων στο substitution. Ο υπολογισμός αυτός μπορεί να γίνει σε περίπτωση που είχουμε το ίδιο corpus αλλά με λάθη για να μπορούμε να βρούμε πολύ απλά τις μετρικές που θέλουμε.
### Βήμα 12: Κατασκευή γλωσσικών μοντέλων
__α)__ Στο βήμα αυτό θα κατασκευάσουμε έναν αποδοχέα με μία αρχική κατάσταση που θα αποδέχεται κάθε λέξη του λεξικού όπως αυτό ορίστηκε στην προπαρασκευή του εργαστηρίου στο βήμα 3α. Τώρα, όμως, ως βάρη θα χρησιμοποιήσουμε τον αρνητικό λογάριθμο της πιθανότητας εμφάνισης κάθε λέξης __-logP(w)__. Πρέπει το κόστος αυτό να κατανεμηθεί κάπως στην λέξη έτσι ώστε όλη η λέξη συνολικά να έχει το παραπάνω κόστος. Για λόγους βελτιστοποίησης και απλότητας προφανώς συμφέρει να βάλουμε όλο το κόστος της λέξης μόνο στην πρώτη ακμή της και τις υπόλοιπες να τις θέσουμε 0. Το αρχείο περιγραφής του αποδοχέα αποθηκεύεται ως __acceptor_token.fst__.
```
# Get tokens of the corpus (our acceptor should accept only these words)
tokens = get_tokens("War.txt")
# Open file to write mode
f = open("acceptor_token.fst", "w")
s = 1
for token in tokens:
cost = token_weights[token]
letters = list(token)
for i in range(0, len(letters)):
if i == 0:
# For each token make state 1 its first state
f.write(format_arc(1, s+1, letters[i], letters[i], cost) + "\n")
else:
f.write(format_arc(s, s+1, letters[i], letters[i]) + "\n")
s += 1
if i == len(letters) - 1:
# When reaching the end of a token go to final state 0 though an ε-transition
f.write(format_arc(s, 0, "EPS", "EPS") + "\n")
# Make state 0 final state
f.write("0")
# Close the file
f.close()
! fstcompile --isymbols=chars.syms --osymbols=chars.syms acceptor_token.fst acceptor_token.fst
```
__β)__ Στη συνέχεια καλούμε τις συναρτήσεις fstrmepsilon, fstdeterminize και fstminimize για να βελτιστοποιήσουμε το μοντέλο μας (η λειτουργία τους έχει αναφερθεί στην προπαρασκευή).
```
! fstrmepsilon acceptor_token.fst acceptor_token.fst
! fstdeterminize acceptor_token.fst acceptor_token.fst
! fstminimize acceptor_token.fst acceptor_token.fst
```
__γ)__ Τώρα θα επαναλάβουμε την ίδια διαδικασία για το character level γλωσσικό μοντέλο. Αυτό που θα αλλάξει δηλαδή είναι ότι αντί να τοποθετούμε στην πρώτη ακμή της λέξης το κόστος ολόκληρης της λέξης θα ορίζουμε για την μετάβαση σε κάθε γράμμα της λέξης το αντίστοιχο κόστος του. Σημειώνεται ότι αντίστοιχα με πριν το κόστος ενός χαρακτήρα ισούται με τον αρνητικό λογάριθμο της πιθανότητας εμφάνισής του. Το αρχείο περιγραφής του αποδοχέα αποθηκεύεται ως __acceptor_char.fst__.
```
# Get tokens of the corpus (our acceptor should accept only these words)
tokens = get_tokens("War.txt")
# Open file to write mode
f = open("acceptor_char.fst", "w")
s = 1
for token in tokens:
letters = list(token)
for i in range(0, len(letters)):
if i == 0:
# For each token make state 1 its first state
f.write(format_arc(1, s+1, letters[i], letters[i], character_weigths[letters[i]]) + "\n")
else:
f.write(format_arc(s, s+1, letters[i], letters[i], character_weigths[letters[i]]) + "\n")
s += 1
if i == len(letters) - 1:
# When reaching the end of a token go to final state 0 though an ε-transition
f.write(format_arc(s, 0, "EPS", "EPS") + "\n")
# Make state 0 final state
f.write("0")
# Close the file
f.close()
! fstcompile --isymbols=chars.syms --osymbols=chars.syms acceptor_char.fst acceptor_char.fst
! fstrmepsilon acceptor_char.fst acceptor_char.fst
! fstdeterminize acceptor_char.fst acceptor_char.fst
! fstminimize acceptor_char.fst acceptor_char.fst
```
### Βήμα 13: Κατασκευή ορθογράφων
Στο βήμα αυτό θα κατασκευάσουμε δύο ορθογράφους χρησιμοποιώντας τα FST από τα παραπάνω βήματα. Η διαδικασία για κάθε έναν ορθογράφο θα είναι ίδια με αυτή που ακολουθήθηκε στο βήμα 7 της προπαρασκευής.
__α)__ Ο πρώτος ορθογράφος που θα κατασκευάσουμε θα προκύψει συνθέτοντας τον word level transducer με το word level γλωσσικό μοντέλο.
Αρχικά θα ταξινομήσουμε τις εξόδους του transducer_token και τις εισόδους του acceptor_token με την συνάρτηση __fstarcsort__.
```
! fstarcsort --sort_type=olabel transducer_token.fst transducer_token.fst
! fstarcsort --sort_type=ilabel acceptor_token.fst acceptor_token.fst
```
Στη συνέχεια συνθέτουμε τον transducer_token με τον acceptor_token με την συνάρτηση fstcompose αποθηκεύοντας τον spell checker μας στο αρχείο __spell_checker1.fst__.
```
! fstcompose transducer_token.fst acceptor_token.fst spell_checker1.fst
```
__β)__ Ο δεύτερος ορθογράφος θα προκύψει συνθέτοντας τον word level tranducer με το unigram γλωσσικό μοντέλο.
Αρχικά θα ταξινομήσουμε τις εισόδους του acceptor_char με την συνάρτηση __fstarcsort__.
```
! fstarcsort --sort_type=ilabel acceptor_char.fst acceptor_char.fst
```
Στη συνέχεια συνθέτουμε τον transducer_token με τον acceptor_char με την συνάρτηση fstcompose αποθηκεύοντας τον spell checker μας στο αρχείο __spell_checker2.fst__.
```
! fstcompose transducer_token.fst acceptor_char.fst spell_checker2.fst
```
__γ)__ Η διαφορά των δύο ορθογράφων βρίσκεται στο γλωσσικό μοντέλο που χρησιμοποιούν. Συγκεκριμένα:
1. __Word-Level μοντέλο:__ Ο 1ος ορθογράφος για να διορθώσει μία λέξη κοιτάει (πέρα από τον αριθμό των edits) την συχνότητα εμφάνισης της κάθε λέξης στο corpus. Έτσι, διορθώνει μία λέξη σε μία άλλη που είναι πιο πιθανό να είχε εμφανιστεί.
2. __Unigram μοντέλο:__ Ο 2ος ορθογράφος για να διορθώσει μία λέξη κοιτάει (πέρα από τον αριθμό των edits) την συχνότητα εμφάνισης κάθε γράμματος της διορθωμένης λέξης. Έτσι, διορθώνει μία λέξη αλλάζοντας κάθε γράμμα της στο πιο πιθανό που ήταν να εμφανιστεί.
Για παράδειγμα έστω ότι έχουμε την λέξη __cit__ και οι δύο πιθανές λέξεις που βρίσκονται στο λεξικό μας και έχουν μόνο 1 αλλαγή είναι η __cat__ και η __cut__. Ο 1ος ορθογράφος πιθανώς να επιλέξει την cut επειδή είναι μία πιο συνιθισμένη λέξη. Από την άλλη, ο 2ο ορθογράφος μπορεί να επιλέξει την cat επειδή το γράμμα a εμφανίζεται πιο συχνά από το γράμμα u. Ένα αντίστοιχο παράδειγμα παρουσιάζεται στο τέλος του επόμενου βήματος όπου δίνουμε την λέξη qet στους δύο ορθογράφους.
### Βήμα 14: Αξιολόγηση των ορθογράφων
__α)__ Για να κάνουμε το evaluation των δύο ορθογράφων κατεβάζουμε το παρακάτω σύνολο δεδομένων:
```
! wget https://raw.githubusercontent.com/georgepar/python-lab/master/spell_checker_test_set
```
__β)__ Δημιουργούμε αρχικά μία συνάρτηση __predict__ η οποία δέχεται μία λέξη που πρέπει να διορθωθεί και γράφει σε ένα αρχείο __pred_word.fst__ την περιγραφή ενός FST το οποίο αποδέχεται την συγκεκριμένη λέξη. Το FST αυτό θα το κάνουμε στη συνέχεια compose με τον ορθογράφο για να πάρουμε το τελικό αποτέλεσμα.
```
def predict(word):
s= 1
letters = list(word)
# Open file to write mode
f = open("pred_word.fst", "w")
for i in range(0, len(letters)):
# For each letter of the word make a transition with zero weight
f.write(format_arc(s, s+1, letters[i], letters[i], 0) + '\n')
s += 1
if i == len(letters) - 1:
# When reaching the end the word make a ε-transition to the final state 0
f.write(format_arc(s, 0, "EPS", "EPS", 0) + '\n')
# Final state
f.write("0")
# Close the file
f.close()
```
Είμαστε έτοιμοι, λοιπόν, τώρα να αξιολογήσουμε τους δύο ορθογράφους. Θα επιλέξουμε 10 τυχαίες λέξεις από το evaluation set που κατεβάσαμε και θα τις διορθώσουμε χρησιμοποιώντας τους 2 ορθογράφους μας.
```
import random
random.seed(1)
test_words = []
for _ in range(10):
random_lines = random.choice(open('spell_checker_test_set').readlines())
test_words.append(random.choice(random_lines.strip('\n').split()[1:]))
for word in test_words:
print(word + ":" + " ",end='')
predict(word)
print("1: ",end='')
! ./predict.sh spell_checker1.fst
print(" 2: ",end='')
! ./predict.sh spell_checker2.fst
print('\n')
```
__γ)__ Παρατηρούμε ότι έχουν μία αρκετά καλή επίδοση οι δύο ορθογράφοι μας η οποία αυξάνοντας το corpus (το οποίο είναι μόνο ένα βιβλίο) θα μπορούσαν να γίνουν ακόμα καλύτεροι. Συγκεκριμένα:
- Ο 1ος ορθογράφος κατασκευάστηκε συνθέτοντας το word-level γλωσσικό μοντέλο με το word-level μετατροπέα. Αυτό σημαίνει, ότι ο ορθογράφος προσπαθεί να διορθώσει μία λέξη όχι μόνο λαμβάνοντας υπόψιν τις λιγότερες αλλαγές (όπως στην προπαρασκευή) αλλά και το πόσο πιθανή είναι η λέξη στην οποία θα μετατραπεί. Αυτό αυξάνει την επίδοσή του γιατί προφανώς όσο πιο πιθανή είναι μία λέξη τόσο και πιο πιθανό είναι να έχει γραφτεί λάθος. Ο μετατροπέας, τώρα, έγινε word-level έτσι ώστε να φέρουμε τα βάρη των edits στην ίδια τάξη μεγέθους με τα βάρη του γλωσσικού μοντέλου.
- Ο 2ος ορθογράφος κατασκευάστηκε συνθέτοντας το unigram γλωσσικό μοντέλο με το word-level μετατροπέα. Αυτό σημαίνει ότι ο ορθογράφος προσπαθεί να διορθώσει μία λέξη λαμβάνοντας υπόψιν αυτή τη φορά πόσο πιθανό είναι το γράμμα το οποίο θέλει να διορθώσει. Αυτό το γλωσσικό μοντέλο επίσης αυξάνει την απόδοση γιατί όσο πιο πιθανό είναι ένα γράμμα τόσο και πιο πιθανό είναι να έχει γραφτεί λάθος το συγκεκριμένο γράμμα. Τα βάρη του μετατροπέα τώρα κάνουν την ίδια δουλειά που αναφέρθηκε και παραπάνω.
Για να κατανοήσουμε καλύτερα την διαφορετική λειτουργία των 2 ορθογράφων δίνουμε ως είσοδο για διόρθωση την λέξη __qet__.
```
word = "qet"
print(word + ":" + " ",end='')
predict(word)
print("1: ",end='')
! ./predict.sh spell_checker1.fst
print(" 2: ",end='')
! ./predict.sh spell_checker2.fst
```
Παρατηρούμε ότι ο ορθογράφος με το word level γλωσσικό μοντέλο την διόρθωσε σε __get__, ενώ ο ορθογράφος με το unigram γλωσσικό μοντέλο την διόρθωσε σε __set__. Ο λόγος που συνέβη αυτό βρίσκεται στις πιθανότητες εμφάνισης κάθε λέξης αλλά και του συνολικού συνδυασμού των γραμμάτων κάθε λέξης.
```
print("Propability of word get: " + str(dict_token["get"]))
print("Propability of word set: " + str(dict_token["set"]))
print("Propability of characters g: " + str(dict_character["g"]))
print("Propability of characters s: " + str(dict_character["s"]))
```
Βλέπουμε ότι η πιθανότητα να δούμε get είναι μεγαλύτερη από το να δούμε set και γι´ αυτό ο word-level ορθογράφος μας που κοιτάει τα word-level βάρη επέλεξε να διορθώσει το qet σε get. Από την άλλη η πιθανότητα να δούμε s είναι μεγαλύτερη από το να δούμε g με αποτέλεσμα ο 2ος ορθογράφος που βασίζεται στις πιθανότητες εμφάνισης των γραμάτων διορθώνει την λέξη qet σε set.
<h2><center> Μέρος 2: Χρήση σημασιολογικών αναπαραστάσεων για ανάλυση συναισθήματος</center></h2>
Στο πρώτο μέρος της άσκησης ασχοληθήκαμε κυρίως με συντακτικά μοντέλα για την κατασκευή ενός ορθογράφου. Εδώ θα
ασχοληθούμε με τη __χρήση λεξικών αναπαραστάσεων για την κατασκευή ενός ταξινομητή συναισθήματος__ . Ως δεδομένα θα
χρησιμοποιήσουμε σχόλια για ταινίες από την ιστοσελίδα IMDB και θα τα ταξινομήσουμε σε θετικά και αρνητικά ως
προς το συναίσθημα.
### Βήμα 16: Δεδομένα και προεπεξεργασία
__α)__ Αρχικά κατεβάζουμε τα δεδομένα που θα χρησιμοποιήσουμε. Επειδή το αρχείο είναι μεγάλο η εντολή είναι σε σχόλιο σε περίπτωση που υπάρχει ήδη κατεβασμένο.
```
# ! wget -N http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
```
Στη συνέχεια το αποσυμπιέζουμε το αρχείου που κατεβάσαμε στον ίδιο φάκελο με το όνομα __aclImdb__.
```
# ! tar -zxf aclImdb_v1.tar.gz
```
Οι φάκελοι που μας ενδιαφέρουν είναι οι εξής:
- __train__ που περιέχει όλες τις κριτικές που θα χρησιμοπιήσουμε για την εκπαίδευση του μοντέλου μας και χωρίζεται σε:
- __train/pos__ το οποίο περιέχει αυτές που έχουν χαρακτηριστεί ως θετικές και
- __train/neg__ το οποίο περιέχει αυτές που έχουν χαρακτηριστεί ως αρνητικές.
- __test__ που περιέχει όλες τις κριτικές που θα χρησιμοποιήσουμε για να ελέγξουμε την επίδοση του μοντέλου μας και αντίστοιχα χωρίζεται σε:
- __test/pos__ με τις θετικές και
- __test/neg__ με τις αρνητικές.
__β)__ Στη συνέχεια πρέπει να διαβάσουμε και να προεπεξεργαστούμε τα δεδομένα μας. Ο κώδικας ανάγνωσης και κάποιες απλές συναρτήσεις προεπεξεργασίας (τα οποία μας δώθηκαν έτοιμα για διευκόλυνση) παρουσιάζονται παρακάτω.
- Αρχικά κάνουμε όλα τα απαραίτητα import.
```
import random
import os
import numpy as np
import re
try:
import glob2 as glob
except ImportError:
import glob
```
- Στη συνέχεια δηλώνουμε τα path των αρχείων που θα μας φανούνε χρήσιμα και κάποιες ακόμη μεταβλητές.
```
# Useful paths
data_dir = './aclImdb/'
train_dir = os.path.join(data_dir, 'train')
test_dir = os.path.join(data_dir, 'test')
pos_train_dir = os.path.join(train_dir, 'pos')
neg_train_dir = os.path.join(train_dir, 'neg')
pos_test_dir = os.path.join(test_dir, 'pos')
neg_test_dir = os.path.join(test_dir, 'neg')
# For memory limitations. These parameters fit in 8GB of RAM.
# If you have 16G of RAM you can experiment with the full dataset / W2V
MAX_NUM_SAMPLES = 5000
# Load first 1M word embeddings. This works because GoogleNews are roughly
# sorted from most frequent to least frequent.
# It may yield much worse results for other embeddings corpora
NUM_W2V_TO_LOAD = 1000000
# Fix numpy random seed for reproducibility
SEED = 42
np.random.seed(42)
```
- Η συνάρτηση __strip_punctuation__ δέχεται ως είσοδο ένα string και αντικαθιστά κάθε σύμβολό του που δεν είναι γράμμα με το κενό. Έτσι επιστρέφει ένα string που αποτελείται μόνο από κεφαλαία και μικρά γράμματα και κενά.
```
def strip_punctuation(s):
return re.sub(r'[^a-zA-Z\s]', ' ', s)
```
- Η συνάρτηση __preprocess__ δέχεται ένα string και απαλείφει τα σημεία στίξης χρησιμοποιώντας την strip_punctuation, μετατρέπει όλα τα γράμματα σε μικρά και, τέλος, αντικαθιστά τα συνεχόμενα κενά από ένα μόνο κενό.
```
def preprocess(s):
return re.sub('\s+',' ', strip_punctuation(s).lower())
```
- Η συνάρτηση __tokenize__ δέχεται ένα string και το διασπάσει στα κενά του, επιστρέφοντας μία λίστα με κάθε λέξη του string.
```
def tokenize(s):
return s.split(' ')
```
- Η συνάρτηση __preproc_tok__ δέχεται ένα string και επιστρέφει μία λίστα με τα tokens, τις λέξεις δηλαδή μόνο με μικρά γράμματα και χωρίς σημεία στίξης.
```
def preproc_tok(s):
return tokenize(preprocess(s))
```
- Η συνάρτηση __read_samples__ δέχεται ως ορίσματα το path ενός φακέλου που περιέχει τα samples και μία συνάρτηση preprocess (με default μία συνάρτηση που επιστρέφει ακριβώς όπως είναι το όρισμά της). Ανοίγει κάθε ένα από τα samples που είναι σε μορφή αρχείων .txt και καλεί την συνάρτηση preprocess. Το αποτέλεσμα __data__ είναι μία λίστα, όπου κάθε στοιχείο της αντιστοιχεί στο αποτέλεσμα της preprocess πάνω στην κάθε κριτική.
```
def read_samples(folder, preprocess=lambda x: x):
# Get all the .txt files that the folder contains
samples = glob.iglob(os.path.join(folder, '*.txt'))
data = []
for i, sample in enumerate(samples):
if MAX_NUM_SAMPLES > 0 and i == MAX_NUM_SAMPLES:
break
# Open the .txt file, preprocess each line and add the result to a list
with open(sample, 'r') as fd:
x = [preprocess(l) for l in fd][0]
data.append(x)
return data
```
- Η συνάρτηση __create_corpus__ δέχεται δύο λίστες που περιέχουν κριτικές για ταινίες με την πρώτη να έχει τις θετικές κριτικές και την δεύτερη τις αρνητικές. Επιστρέφει μία λίστα που περιέχει τις δωσμένες κριτικές σε τυχαία σειρά και μία λίστα που περιέχει το label της κάθε κριτικής. Ουσιαστικά αυτή η συνάρτηση δημιουργεί το training και το test set μας σε raw μορφή αφού η κάθε γραμμή είναι μία κριτική σε μορφή ενός string.
```
def create_corpus(pos, neg):
corpus = np.array(pos + neg)
y = np.array([1 for _ in pos] + [0 for _ in neg])
indices = np.arange(y.shape[0])
np.random.shuffle(indices)
return list(corpus), list(y)
```
Αφού ορίσαμε, λοιπόν, όλες μας τις συναρτήσεις τώρα πρέπει να διαβάσουμε τις κριτικές και την αντίστοιχη κατηγορία τους. Αυτό που θα κάνουμε είναι να δημιουργήσουμε τις εξής τέσσερις λίστες:
- __X_train_raw__ η οποία περιέχει όλες τις κριτικές που θα χρησιμοποιηθούν για το train του μοντέλου μας σε text μορφή.
- __Y_train__ η οποία περιέχει τα labels των παραπάνω κριτικών.
- __X_test_raw__ η οποία περιέχει όλες τις κριτικές που θα χρησιμοποιηθούν για το test του μοντέλου μας σε text μορφή.
- __Y_test__ η οποία περιέχει τα labels των παραπάνω κριτικών.
```
X_train_raw, Y_train = create_corpus(read_samples(pos_train_dir), read_samples(neg_train_dir))
X_test_raw, Y_test = create_corpus(read_samples(pos_test_dir), read_samples(neg_test_dir))
```
Μπορούμε να ελέγξουμε την 1η κριτική του training set και το αντιστοιχο label της για να δούμε ότι όλα πήγαν καλά.
```
print(X_train_raw[0])
print("Postive" if Y_train[0] else "Negative")
```
### Βήμα 17: Κατασκευή BOW αναπαραστάσεων και ταξινόμηση
Η πιο βασική αναπαράσταση για μια πρόταση είναι η χρήση __Bag of Words__. Σε αυτή την αναπαράσταση μια λέξη κωδικοποιείται σαν ένα one hot encoding πάνω στο λεξιλόγιο και μια πρόταση σαν το άθροισμα αυτών των encodings. Για παράδειγμα στο λεξιλόγιο [cat, dog, eat] η αναπαράσταση της λέξης cat είναι [1, 0,0], της λέξης dog [0, 1, 0] κοκ. Η αναπαράσταση της πρότασης dog eat dog είναι [0, 2, 1]. Επιπλέον μπορούμε να πάρουμε σταθμισμένο άθροισμα των one hot word encodings για την αναπαράσταση μιας πρότασης με βάρη TF-IDF (https://en.wikipedia.org/wiki/Tf–idf).
__α)__ Στην __Bag of Words__ αναπαράσταση υπολογίζουμε απλά πόσες φορές υπάρχει η κάθε λέξη στην κάθε κριτική. Έτσι, προκύπτει για κάθε κριτική ένας μεγάλος και αραιός πίνακας (με μήκος ίσο με το μέγεθος του λεξικου) που σε κάθε θέση του έχει τις φορές που παρουσιάζεται η κάθε λέξη στην κριτική. Αυτή η αναπαράσταση έχει δύο σημαντικά μειονεκτήματα τα οποία αντιμετωπίζονται με την προσθήκη βαρών __TF_IDF__. Συγκεκριμένα έχουμε ότι:
- Πρέπει να λαμβάνουμε υπόψιν και το μέγεθος της κάθε κριτικής γιατί άλλη βαρύτητα έχει η ύπαρξη μιας λέξης σε μία κριτική με μικρό μέγεθος και άλλη σε μία με μεγάλο. Γι' αυτό και στον πρώτο όρο της TF_IDF που είναι το __term frequency__ αφού υπολογίσουμε πόσες φορές υπάρχει μία λέξη στην κριτική, μετά διαιρούμε με τον συνολικό μέγεθος της κριτικής.
- Λέξεις οι οποίες είναι συνηθισμένες λαμβάνουν μεγάλο score σε κάθε κριτική χωρίς να πρέπει. Το νόημα είναι ότι οι σπάνιες λέξεις μας δίνουν περισσότερη πληροφορία από τις συνηθισμένες. Έτσι, ο δεύτερος όρος που είναι το __inverse document frequency__ είναι ο συνολικός αριθμός των κριτικών διαιρεμένος από τον αριθμό των κριτικών στις οποίες βρίσκεται η λέξη μας, με αποτέλεσμα ο όρος αυτός να αυξάνεται όσο πιο σπάνια είναι η λέξη.
__β)__ Τώρα θα χρησιμοποιήσουμε τον transformer CountVectorizer του sklearn για να εξάγουμε __μη σταθμισμένες BOW αναπαραστάσεις__.
```
from sklearn.feature_extraction.text import CountVectorizer
# Define the vectorizer using our preprocess and tokenize function.
vectorizer = CountVectorizer(analyzer = preproc_tok)
# Get training data X_train.
X_train = vectorizer.fit_transform(X_train_raw)
# Get test data X_test.
X_test = vectorizer.transform(X_test_raw)
```
__γ)__ Σε αυτό το στάδιο έχουμε τους πίνακες με τα training και τα test data και τα αντίστοιχα labels. Οπότε μπορούμε να εφαρμόσουμε τον ταξινομητή Linear Regression του sklearn για να ταξινομήσουμε τα σχόλια σε θετικά και αρνητικά.
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import zero_one_loss
# Define the clasifier
clf = LogisticRegression()
# Train the model
clf.fit(X_train, Y_train)
# Compute error on training data.
print("Training error =", zero_one_loss(Y_train, clf.predict(X_train)))
# Compute error on test data
print("Test error =", zero_one_loss(Y_test, clf.predict(X_test)))
```
__δ)__ Τώρα θα επαναλάβουμε την ίδια διαδικασία χρησιμοποιώντας τον TfidfVectorizer για την εξαγώγη TF-IDF αναπαραστάσεων.
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(analyzer = preproc_tok)
X_train = tfidf_vectorizer.fit_transform(X_train_raw)
X_test = tfidf_vectorizer.transform(X_test_raw)
# Define the clasifier
clf_tfidf = LogisticRegression()
# Train the model
clf_tfidf.fit(X_train, Y_train)
# Compute error on training data.
print("Training error =", zero_one_loss(Y_train, clf_tfidf.predict(X_train)))
# Compute error on test data
print("Test error =", zero_one_loss(Y_test, clf_tfidf.predict(X_test)))
```
#### Σύγκριση αποτελεσμάτων:
Παρατηρούμε ότι το test error μειώνεται κατά 1% περίπου όταν χρησιμοποιούμε βάρη TF-IDF για την αναπαράσταση μιας πρότασης. Το αποτέλεσμα αυτό ήταν αναμενόμενο γιατί όπως ειπώθηκε στο α) η αναπαράσταση αυτή καλύπτει κάποια κενά που είχε η μη σταθμισμένη BOW αναπαράσταση.
### Βήμα 18: Χρήση Word2Vec αναπαραστάσεων για ταξινόμηση
Ένας άλλος τρόπος για να αναπαραστήσουμε λέξεις και προτάσεις είναι να κάνουμε χρήση προεκπαιδευμένων embeddings. Σε αυτό το βήμα θα εστιάσουμε στα word2vec embeddings. Αυτά τα embeddings προκύπτουν από ένα νευρωνικό δίκτυο με ένα layer το οποίο καλείται να προβλέψει μια λέξη με βάση το context της (παράθυρο 3-5 λέξεων γύρω από αυτή). Αυτό ονομάζεται CBOW μοντέλο. Εναλλακτικά το δίκτυο καλείται να προβλέψει το context με βάση τη λέξη (skip-gram μοντέλο). Τα word2vec vectors είναι πυκνές (dense) αναπαραστάσεις σε λιγότερες διαστάσεις από τις BOW και κωδικοποιούν σημασιολογικά χαρακτηριστικά μιας λέξης με βάση την υπόθεση ότι λέξεις με παρόμοιο νόημα εμφανίζονται σε παρόμοια συγκείμενα (contexts). Μια πρόταση μπορεί να αναπαρασταθεί ως ο μέσος όρος των w2v διανυσμάτων κάθε λέξης που περιέχει (Neural Bag of Words).
Αρχικά θα επαναλάβουμε τα βήματα 9α, 9β της προπαρασκευής γιατί θα μας χρειαστούν για τα δύο πρώτα ερωτήματα.
- Διαβάζουμε το βιβλίο War of the Worlds που είχαμε κατεβάσει για το μέρος Α σε μία λίστα από tokenized προτάσεις.
```
import nltk
# We split the corpus in a list of tokenized sentences.
file_path = "War.txt"
tokenized_sentences = []
with open(file_path, "r") as f:
text = f.read()
sentences = nltk.sent_tokenize(text)
tokenized_sentences = [preproc_tok(sentence) for sentence in sentences]
```
- Xρησιμοποιούμε την κλάση Word2Vec του gensim για να εκπαιδεύσουμε 100-διάστατα word2vec embeddings με βάση τις παραπάνω προτάσεις. Θα χρησιμοποιήσουμε window = 5 και 1000 εποχές.
```
from gensim.models import Word2Vec
# Initialize word2vec. Context is taken as the 2 previous and 2 next words
myModel = Word2Vec(tokenized_sentences, window=5, size=100, workers=4)
# Train the model for 1000 epochs
myModel.train(tokenized_sentences, total_examples=len(tokenized_sentences), epochs=1000)
```
Η μεταβλητή __voc__ κρατάει το λεξικό μας ενώ η __dim__ το μέγεθος του κάθε embedding.
```
# get ordered vocabulary list
voc = myModel.wv.index2word
# get vector size
dim = myModel.vector_size
```
Η συνάρτηση __to_embeddings_Matrix__ δέχεται ως όρισμα το μοντέλο μας και επιστρέφει έναν 2-διάστατο πίνακα όπου κάθε γραμμή αναπαριστάσει ένα embedding και ένα λεξικό.
```
# Convert to numpy 2d array (n_vocab x vector_size)
def to_embeddings_Matrix(model):
embedding_matrix = np.zeros((len(model.wv.vocab), model.vector_size))
for i in range(len(model.wv.vocab)):
embedding_matrix[i] = model.wv[model.wv.index2word[i]]
return embedding_matrix, model.wv.index2word
```
__α)__ Σε αυτό το βήμα πρέπει να υπολογίσουμε το ποσοστό __out of vocabulary (OOV) words__ για τις παραπάνω αναπαραστάσεις.
```
tokens = get_tokens("War.txt")
oov = (1 - len(voc)/len(tokens)) * 100
print("Out of vocabulary words: " + str(oov) + "%")
```
__β)__ Τώρα χρησιμποιώντας αυτές τις αναπαραστάσεις θα κατασκευάσουμε ένα __Neural Bag of Words αναπαραστάσεων__ για κάθε σχόλιο στο corpus και θα εκπαιδεύσουμε ένα Logistic Regression μοντέλο για ταξινόμηση.
Αρχικά, αποθηκεύουμε τo training και το test set σε raw text μορφή.
```
X_train_raw, Y_train = create_corpus(read_samples(pos_train_dir), read_samples(neg_train_dir))
X_test_raw, Y_test = create_corpus(read_samples(pos_test_dir), read_samples(neg_test_dir))
```
Στη συνέχεια, για κάθε κριτική υπολογίζουμε το neural bag of words, που ορίζεται ως ο μέσος όρος των w2v διανυσμάτων κάθε λέξης που περιέχει.
```
# Initialize training set
X_train = np.zeros((len(X_train_raw), 100))
for row, sample in enumerate(X_train_raw):
words_included = 0
# Tokenize current review
sample_toks = preproc_tok(sample)
for tok in sample_toks:
# For each token check if it has a w2v representation
# and if yes add it.
if tok in myModel.wv:
X_train[row] += myModel.wv[tok]
words_included += 1
# Get the mean value
X_train[row] = X_train[row]/words_included
# Initialize test set
X_test = np.zeros((len(X_test_raw), 100))
for row, sample in enumerate(X_test_raw):
words_included = 0
# Tokenize current review
sample_toks = preproc_tok(sample)
for tok in sample_toks:
# For each token check if it has a w2v representation
# and if yes add it.
if tok in myModel.wv:
X_test[row] += myModel.wv[tok]
words_included += 1
# Get the mean value
X_test[row] = X_test[row]/words_included
# Define the clasifier
clf = LogisticRegression()
# Train the model
clf.fit(X_train, Y_train)
# Compute error on training data.
print("Training error =", zero_one_loss(Y_train, clf.predict(X_train)))
# Compute error on test data
print("Test error =", zero_one_loss(Y_test, clf.predict(X_test)))
```
Και τα δύο error είναι πάρα πολύ υψηλά με αποτέλεσμα το μοντέλο μας να έχει πάρα πολύ χαμηλή απόδοση. Η εξήγηση για αυτό είναι ότι έχουμε κατασκευάσει τα word embeddings με βάση ένα πάρα πολύ μικρό corpus το οποίο και έχει μικρό λεξικό (με αποτέλεσμα πολλές λέξεις να μην έχουν αναπαράσταση) και δεν βοηθάει στο να δημιουργηθούν παρόμοιες αναπαραστάσεις για κοντινά σημασιολογικά λέξεις (αυτό το παρατηρήσαμε και στην προπαρασκευή όταν είδαμε τις κοντινές σημασιολογικά λέξεις 10 τυχαίων λέξεν).
__γ, δ)__ Κατεβάζουμε το προεκπαιδευμένα GoogleNews vectors, τα φορτώνουμε με το gensim και εξάγουμε αναπαραστάσεις με βάση αυτά.
```
from gensim.models import KeyedVectors
googleModel = KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin',binary=True, limit=NUM_W2V_TO_LOAD)
```
Επαναλαμβάνουμε το ερώτημα 9γ της προπαρασκευής για να το συγκρίνουμε με τα GoogleNews.
```
selected_words = random.sample(voc, 10)
for word in selected_words:
# get most similar words
sim = myModel.wv.most_similar(word, topn=5)
print('"' + word + '"' + " is similar with the following words:")
for s in sim:
print('"' + s[0] + '"' + " with similarity " + str(s[1]))
print()
for word in selected_words:
# get most similar words
sim = googleModel.most_similar(word, topn=5)
print('"' + word + '"' + " is similar with the following words:")
for s in sim:
print('"' + s[0] + '"' + " with similarity " + str(s[1]))
print()
```
Αυτό που παρατηρούμε είναι ότι προφανώς με τα Google Vectors τα αποτελέσματα είναι εντυπωσικά αφού όλες οι κοντινές λέξεις είναι και στην πραγματικότητα πολύ κοντινές. Από την άλλη, το δικό μας μοντέλο έχει πολύ χαμηλές επιδόσεις που οφείλεται στο γεγονός ότι τα embeddings προέκυψαν από πολύ μικρό corpus. Τα Google Vectors από την άλλη έχουν ένα τεράστιο corpus από πίσω με αποτέλεσμα και να έχει τεράστιο λεξικό αλλά και οι σημασιολογικά κοντινές λέξεις να έχει και παρόμοια αναπαράσταση.
__ε)__ Αντίστοιχα με το myModel τώρα θα εκπαιδεύσουμε ένα Logistic Regression ταξινομητή με το μοντέλο που προέκυψε από τα Google Vectors.
```
# Initialize training set
X_train = np.zeros((len(X_train_raw), 300))
for row, sample in enumerate(X_train_raw):
words_included = 0
# Tokenize current review
sample_toks = preproc_tok(sample)
for tok in sample_toks:
# For each token check if it has a w2v representation
# and if yes add it.
if tok in googleModel:
X_train[row] += googleModel[tok]
words_included += 1
# Get the mean value
X_train[row] = X_train[row]/words_included
# Initialize test set
X_test = np.zeros((len(X_test_raw), 300))
for row, sample in enumerate(X_test_raw):
words_included = 0
# Tokenize current review
sample_toks = preproc_tok(sample)
for tok in sample_toks:
# For each token check if it has a w2v representation
# and if yes add it.
if tok in googleModel:
X_test[row] += googleModel[tok]
words_included += 1
# Get the mean value
X_test[row] = X_test[row]/words_included
# Define the clasifier
clf = LogisticRegression()
# Train the model
clf.fit(X_train, Y_train)
# Compute error on training data.
print("Training error =", zero_one_loss(Y_train, clf.predict(X_train)))
# Compute error on test data
print("Test error =", zero_one_loss(Y_test, clf.predict(X_test)))
```
Όπως ήταν αναμενόμενο το error μειώθηκε κατά πολύ καθώς τώρα τα embeddings ήταν καλύτερα. Σε σύγκριση με το TF_IDF το error εδώ είναι λίγο μεγαλύτερο αλλά κερδίζουμε πολύ σε χώρο και χρόνο καθώς οι πίνακες με τα training και test data είναι πολύ πιο μικροί και πυκνοί.
__στ)__ Τώρα θα δημιουργήσουμε αναπαραστάσεις των κριτικών με χρήση σταθμισμένου μέσου των w2v
αναπαραστάσεων των λέξεων. Ως βάρη θα χρησιμοποιήσουμε τα TF-IDF βάρη των λέξεων.
```
# Get the vocabulary of the words in the training set
# that contains their tf-idf value.
tfidf_vectorizer = TfidfVectorizer(analyzer = preproc_tok)
X_train_temp = tfidf_vectorizer.fit_transform(X_train_raw)
voc = tfidf_vectorizer.vocabulary_
# Do the same as before but now, we multiply each represantation by a the tf-idf of the word.
# Initialize training set
X_train = np.zeros((len(X_train_raw), 300))
for row, sample in enumerate(X_train_raw):
# Tokenize current review
sample_toks = preproc_tok(sample)
for tok in sample_toks:
# For each token check if it has a w2v representation
# and if yes add it.
if tok in googleModel and tok in voc:
X_train[row] += googleModel[tok] * X_train_temp[row,voc[tok]]
# Get the vocabulary of the words in the training set
# that contains their tf-idf value.
tfidf_vectorizer = TfidfVectorizer(analyzer = preproc_tok)
X_test_temp = tfidf_vectorizer.fit_transform(X_test_raw)
voc = tfidf_vectorizer.vocabulary_
# Do the same as before but now, we multiply each represantation by a the tf-idf of the word.
# Initialize test set
X_test = np.zeros((len(X_test_raw), 300))
for row, sample in enumerate(X_test_raw):
# Tokenize current review
sample_toks = preproc_tok(sample)
for tok in sample_toks:
# For each token check if it has a w2v representation
# and if yes add it.
if tok in googleModel and tok in voc:
X_test[row] += googleModel[tok] * X_test_temp[row,voc[tok]]
```
__ζ)__ Επαναλαμβάνουμε την ταξινόμηση με τις νέες αναπαραστάσεις.
```
# Define the clasifier
clf = LogisticRegression()
# Train the model
clf.fit(X_train, Y_train)
# Compute error on training data.
print("Training error =", zero_one_loss(Y_train, clf.predict(X_train)))
# Compute error on test data
print("Test error =", zero_one_loss(Y_test, clf.predict(X_test)))
```
| true |
code
| 0.387835 | null | null | null | null |
|
# Implementing kmeans from scratch
```
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from IPython.display import clear_output
import time
k, n = 3, 2
X, y = make_blobs(n_samples=10, centers=k, n_features=n, random_state=0,
cluster_std=4)
y
fig, ax = plt.subplots(figsize=(6, 6), ncols=1)
ax.scatter(X[:,0], X[:,1], s=100, alpha=.5, c=y)
plt.tight_layout()
plt.show()
k = 3
centroids = np.random.rand(k, n)
centroids
from sklearn.metrics.pairwise import euclidean_distances
def update(M):
c = M.mean(axis=0)
rss = np.power(euclidean_distances(c.reshape(1, -1),
M)[0], 2).sum()
return c, rss
RSS = []
for iteration in range(20):
clear_output(wait=True)
D = euclidean_distances(centroids, X)
y_pred = D.T.argmin(axis=1)
markers = ['+', '^', 'o']
fig, ax = plt.subplots(figsize=(16, 8), ncols=2)
ax[0].scatter(X[:,0], X[:,1], s=100, alpha=1.0)
ax[1].scatter(X[:,0], X[:,1], c=y_pred, s=100, alpha=1.0)
for i, c in enumerate(centroids):
ax[0].scatter(c[0], c[1], s=100, marker=markers[i])
plt.tight_layout()
plt.show()
assignment = {}
for i, c in enumerate(centroids):
assignment[i] = []
for j, f in enumerate(y_pred):
if f == i:
assignment[i].append(X[j])
A = {}
for p, v in assignment.items():
A[p] = np.array(v)
irss = 0
for z, w in A.items():
if w.shape[0] == 0:
pass
else:
nc, rss = update(w)
irss += rss
centroids[z] = nc
RSS.append(irss)
time.sleep(1)
fig, ax = plt.subplots(figsize=(6, 6), ncols=1)
ax.plot(RSS)
plt.tight_layout()
plt.show()
y_pred
y
tp, fp, fn, tn = 0, 0, 0, 0
for i, c in enumerate(y_pred):
y_true_i = y[i]
for j, z in enumerate(y_pred[i+1:]):
s = j + i + 1
y_true_j = y[s]
if c == z:
if y_true_i == y_true_j:
tp += 1
else:
fp += 1
else:
if y_true_i == y_true_j:
fn += 1
else:
tn += 1
print(tp, fp, fn, tn)
tp / (tp + fp)
(tp + tn) / (tp + fp + fn + tn)
```
## Implementazione sklearn
```
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score
kmeans = KMeans(n_clusters=3)
aggc = AgglomerativeClustering(n_clusters=3)
y_pred_k = kmeans.fit_predict(X)
y_pred_a = aggc.fit_predict(X)
print(y_pred_k)
print(y_pred_a)
benchmark = {'Kmeans': KMeans(n_clusters=3),
'Agglomerative': AgglomerativeClustering(n_clusters=3)}
results = []
for name, alg in benchmark.items():
res = results.append(alg.fit_predict(X))
print(name, adjusted_rand_score(y, y_pred))
```
## Real example
```
data_file = 'data/fifa/players_20.csv'
P = pd.read_csv(data_file, index_col=0, usecols=range(77))
X = P[['height_cm', 'value_eur']]
fig, ax = plt.subplots(figsize=(6, 6), ncols=1)
ax.scatter(X.height_cm, X.value_eur)
plt.tight_layout()
plt.show()
kmeans = KMeans(n_clusters=5)
y = kmeans.fit_predict(X)
def select_points(X, y, cluster):
pos = [i for i, x in enumerate(y) if x == cluster]
return X.iloc[pos]
clusters = [select_points(X, y, c) for c in range(5)]
eur_values = np.array([x.value_eur.values for x in clusters], dtype='object')
h_values = np.array([x.height_cm.values for x in clusters], dtype='object')
fig, ax = plt.subplots(figsize=(16, 6), ncols=3, nrows=1)
ax[0].scatter(X.height_cm, X.value_eur, c=y)
ax[1].boxplot(h_values)
ax[1].set_xlabel('clusters')
ax[1].set_title('height')
ax[2].boxplot(eur_values)
ax[2].set_xlabel('clusters')
ax[2].set_title('eur_values')
plt.tight_layout()
plt.show()
```
## Scaling data
```
from sklearn.preprocessing import StandardScaler
Xs = pd.DataFrame(StandardScaler().fit_transform(X), index=X.index, columns=X.columns)
kmeans = KMeans(n_clusters=5)
y = kmeans.fit_predict(Xs)
clusters = [select_points(Xs, y, c) for c in range(5)]
eur_values = np.array([x.value_eur.values for x in clusters], dtype='object')
h_values = np.array([x.height_cm.values for x in clusters], dtype='object')
fig, ax = plt.subplots(figsize=(16, 6), ncols=3, nrows=1)
ax[0].scatter(Xs.height_cm, X.value_eur, c=y)
ax[1].boxplot(h_values)
ax[1].set_xlabel('clusters')
ax[1].set_title('height')
ax[2].boxplot(eur_values)
ax[2].set_xlabel('clusters')
ax[2].set_title('eur_values')
plt.tight_layout()
plt.show()
```
| true |
code
| 0.52543 | null | null | null | null |
|
# Using Variational Autoencoder and Deep Feature Loss to Generate Faces
From the "Using Variational Autoencoder to Generate Faces" example, we see that using VAE, we can generate realistic human faces, but the generated image is a little blury. Though, you can continue to tuning the hyper paramters or using more data to get a better result, in this example, we adopted the approach in [this paper](https://arxiv.org/abs/1610.00291). That is, instead of using pixel-by-pixel loss of between the original images and the generated images, we use the feature map generated by a pre-trained CNN network to define a feature perceptual loss. As you will see, the generated images will become more vivid.
```
from bigdl.nn.layer import *
from bigdl.nn.criterion import *
from bigdl.optim.optimizer import *
from bigdl.dataset import mnist
import datetime as dt
from bigdl.util.common import *
from glob import glob
import os
import scipy.misc
import numpy as np
from utils import *
image_size = 148
Z_DIM = 100
ENCODER_FILTER_NUM = 32
# we use the vgg16 model, it should work on other popular CNN models
# You can download them here (https://github.com/intel-analytics/analytics-zoo/tree/master/models
# download the data CelebA, and may repalce with your own data path
DATA_PATH = os.getenv("ANALYTICS_ZOO_HOME") + "/apps/variational-autoencoder/img_align_celeba"
VGG_PATH = os.getenv("ANALYTICS_ZOO_HOME")+"/apps/variational-autoencoder/analytics-zoo_vgg-16_imagenet_0.1.0.model"
init_engine()
```
## Define the Model
We are uing the same model as "Using Variational Autoencoder to Generate Faces" example.
```
def conv_bn_lrelu(in_channels, out_channles, kw=4, kh=4, sw=2, sh=2, pw=-1, ph=-1):
model = Sequential()
model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))
model.add(SpatialBatchNormalization(out_channles))
model.add(LeakyReLU(0.2))
return model
def upsample_conv_bn_lrelu(in_channels, out_channles, out_width, out_height, kw=3, kh=3, sw=1, sh=1, pw=-1, ph=-1):
model = Sequential()
model.add(ResizeBilinear(out_width, out_height))
model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))
model.add(SpatialBatchNormalization(out_channles))
model.add(LeakyReLU(0.2))
return model
def get_encoder_cnn():
input0 = Input()
#CONV
conv1 = conv_bn_lrelu(3, ENCODER_FILTER_NUM)(input0) # 32 * 32 * 32
conv2 = conv_bn_lrelu(ENCODER_FILTER_NUM, ENCODER_FILTER_NUM*2)(conv1) # 16 * 16 * 64
conv3 = conv_bn_lrelu(ENCODER_FILTER_NUM*2, ENCODER_FILTER_NUM*4)(conv2) # 8 * 8 * 128
conv4 = conv_bn_lrelu(ENCODER_FILTER_NUM*4, ENCODER_FILTER_NUM*8)(conv3) # 4 * 4 * 256
view = View([4*4*ENCODER_FILTER_NUM*8])(conv4)
# fully connected to generate mean and log-variance
mean = Linear(4*4*ENCODER_FILTER_NUM*8, Z_DIM)(view)
log_variance = Linear(4*4*ENCODER_FILTER_NUM*8, Z_DIM)(view)
model = Model([input0], [mean, log_variance])
return model
def get_decoder_cnn():
input0 = Input()
linear = Linear(Z_DIM, 4*4*ENCODER_FILTER_NUM*8)(input0)
reshape = Reshape([ENCODER_FILTER_NUM*8, 4, 4])(linear)
bn = SpatialBatchNormalization(ENCODER_FILTER_NUM*8)(reshape)
# upsampling
up1 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*8, ENCODER_FILTER_NUM*4, 8, 8)(bn) # 8 * 8 * 128
up2 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*4, ENCODER_FILTER_NUM*2, 16, 16)(up1) # 16 * 16 * 64
up3 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*2, ENCODER_FILTER_NUM, 32, 32)(up2) # 32 * 32 * 32
up4 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM, 3, 64, 64)(up3) # 64 * 64 * 3
output = Tanh()(up4)
model = Model([input0], [output])
return model
def get_autoencoder_cnn():
input0 = Input()
encoder = get_encoder_cnn()(input0)
sampler = GaussianSampler()(encoder)
decoder_model = get_decoder_cnn()
decoder = decoder_model(sampler)
model = Model([input0], [encoder, decoder])
return model, decoder_model
```
## Load the pre-trained CNN model
```
def get_vgg():
# we use the vgg16 model, it should work on other popular CNN models
# You can download them here (https://github.com/intel-analytics/analytics-zoo/tree/master/models)
vgg_whole = Model.from_jvalue(Model.loadModel(VGG_PATH).value)
# we only use one feature map here for the sake of simlicity and efficiency
# You can and other feature to the outputs to mix high-level and low-level
# feature to get higher quality images
outputs = [vgg_whole.node(name) for name in ["relu1_2"]]
inputs = [vgg_whole.node(name) for name in ["data"]]
outputs[0].remove_next_edges()
vgg_light = Model(inputs, outputs).freeze()
return vgg_light
vgg = get_vgg()
model, decoder = get_autoencoder_cnn()
```
## Load the Datasets
```
def get_data():
data_files = glob(os.path.join(DATA_PATH, "*.jpg"))
rdd_train_images = sc.parallelize(data_files[:100000]) \
.map(lambda path: get_image(path, image_size).transpose(2, 0, 1))
rdd_train_sample = rdd_train_images.map(lambda img: Sample.from_ndarray(img, [np.array(0.0), img]))
return rdd_train_sample
from pyspark import SparkContext
sc =SparkContext.getOrCreate()
train_data = get_data()
```
## Define the Training Objective
```
criterion = ParallelCriterion()
criterion.add(KLDCriterion(), 0.005) # You may want to twick this parameter
criterion.add(TransformerCriterion(MSECriterion(), vgg, vgg), 1.0)
```
## Define the Optimizer
```
batch_size = 64
# Create an Optimizer
optimizer = Optimizer(
model=model,
training_rdd=train_data,
criterion=criterion,
optim_method=Adam(0.0005),
end_trigger=MaxEpoch(1),
batch_size=batch_size)
app_name='vae-'+dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='/tmp/vae',
app_name=app_name)
optimizer.set_train_summary(train_summary)
print ("saving logs to ",app_name)
```
## Spin Up the Training
This could take a while. It took about 6 hours on a desktop with a intel i7-6700 cpu and 40GB java heap memory. You can reduce the training time by using less data (some changes in the "Load the Dataset" section), but the performce may not as good.
```
redire_spark_logs()
show_bigdl_info_logs()
def gen_image_row():
decoder.evaluate()
return np.column_stack([decoder.forward(np.random.randn(1, Z_DIM)).reshape(3, 64,64).transpose(1, 2, 0) for s in range(8)])
def gen_image():
return inverse_transform(np.row_stack([gen_image_row() for i in range(8)]))
for i in range(1, 6):
optimizer.set_end_when(MaxEpoch(i))
trained_model = optimizer.optimize()
image = gen_image()
if not os.path.exists("./images"):
os.makedirs("./images")
if not os.path.exists("./models"):
os.makedirs("./models")
# you may change the following directory accordingly and make sure the directory
# you are writing to exists
scipy.misc.imsave("./images/image_vgg_%s.png" % i , image)
decoder.saveModel("./models/decoder_vgg_%s.model" % i, over_write = True)
import matplotlib
matplotlib.use('Agg')
%pylab inline
import numpy as np
import datetime as dt
import matplotlib.pyplot as plt
loss = np.array(train_summary.read_scalar("Loss"))
plt.figure(figsize = (12,12))
plt.plot(loss[:,0],loss[:,1],label='loss')
plt.xlim(0,loss.shape[0]+10)
plt.grid(True)
plt.title("loss")
```
## Random Sample Some Images
```
from matplotlib.pyplot import imshow
img = gen_image()
imshow(img)
```
| true |
code
| 0.804761 | null | null | null | null |
|
# Adadelta --- 从0开始
我们在[Adagrad](adagrad-scratch.md)里提到,由于学习率分母上的变量$\mathbf{s}$一直在累加按元素平方的梯度,每个元素的学习率在迭代过程中一直在降低或不变。所以在有些问题下,当学习率在迭代早期降得较快时且当前解依然不理想时,Adagrad在迭代后期可能较难找到一个有用的解。我们在[RMSProp](rmsprop-scratch.md)介绍了应对这一问题的一种方法:对梯度按元素平方使用指数加权移动平均而不是累加。
事实上,Adadelta也是一种应对这个问题的方法。有意思的是,它没有学习率参数。
## Adadelta算法
Adadelta算法也像RMSProp一样,使用了一个梯度按元素平方的指数加权移动平均变量$\mathbf{s}$,并将其中每个元素初始化为0。在每次迭代中,首先计算[小批量梯度](gd-sgd-scratch.md) $\mathbf{g}$,然后对该梯度按元素平方后做指数加权移动平均并计算$\mathbf{s}$:
$$\mathbf{s} := \rho \mathbf{s} + (1 - \rho) \mathbf{g} \odot \mathbf{g} $$
然后我们计算当前需要更新的参数的变化量:
$$ \mathbf{g}^\prime = \frac{\sqrt{\Delta\mathbf{x} + \epsilon}}{\sqrt{\mathbf{s} + \epsilon}} \odot \mathbf{g} $$
其中$\epsilon$是为了维持数值稳定性而添加的常数,例如$10^{-5}$。和Adagrad一样,模型参数中每个元素都分别拥有自己的学习率。其中$\Delta\mathbf{x}$初始化为零张量,并做如下$\mathbf{g}^\prime$按元素平方的指数加权移动平均:
$$\Delta\mathbf{x} := \rho \Delta\mathbf{x} + (1 - \rho) \mathbf{g}^\prime \odot \mathbf{g}^\prime $$
同样地,最后的参数迭代步骤与小批量随机梯度下降类似。只是这里梯度前的学习率已经被调整过了:
$$\mathbf{x} := \mathbf{x} - \mathbf{g}^\prime $$
## Adadelta的实现
Adadelta的实现很简单。我们只需要把上面的数学公式翻译成代码。
```
# Adadalta
def adadelta(params, sqrs, deltas, rho, batch_size):
eps_stable = 1e-5
for param, sqr, delta in zip(params, sqrs, deltas):
g = param.grad / batch_size
sqr[:] = rho * sqr + (1. - rho) * nd.square(g)
cur_delta = nd.sqrt(delta + eps_stable) / nd.sqrt(sqr + eps_stable) * g
delta[:] = rho * delta + (1. - rho) * cur_delta * cur_delta
param[:] -= cur_delta
```
## 实验
实验中,我们以线性回归为例。其中真实参数`w`为[2, -3.4],`b`为4.2。我们把算法中基于指数加权移动平均的变量初始化为和参数形状相同的零张量。
```
from mxnet import ndarray as nd
import mxnet as mx
from mxnet import autograd
from mxnet import gluon
import random
mx.random.seed(1)
random.seed(1)
# 生成数据集。
num_inputs = 2
num_examples = 1000
true_w = [2, -3.4]
true_b = 4.2
X = nd.random_normal(scale=1, shape=(num_examples, num_inputs))
y = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b
y += .01 * nd.random_normal(scale=1, shape=y.shape)
dataset = gluon.data.ArrayDataset(X, y)
# 构造迭代器。
import random
def data_iter(batch_size):
idx = list(range(num_examples))
random.shuffle(idx)
for batch_i, i in enumerate(range(0, num_examples, batch_size)):
j = nd.array(idx[i: min(i + batch_size, num_examples)])
yield batch_i, X.take(j), y.take(j)
# 初始化模型参数。
def init_params():
w = nd.random_normal(scale=1, shape=(num_inputs, 1))
b = nd.zeros(shape=(1,))
params = [w, b]
sqrs = []
deltas = []
for param in params:
param.attach_grad()
# 把算法中基于指数加权移动平均的变量初始化为和参数形状相同的零张量。
sqrs.append(param.zeros_like())
deltas.append(param.zeros_like())
return params, sqrs, deltas
# 线性回归模型。
def net(X, w, b):
return nd.dot(X, w) + b
# 损失函数。
def square_loss(yhat, y):
return (yhat - y.reshape(yhat.shape)) ** 2 / 2
```
接下来定义训练函数。当epoch大于2时(epoch从1开始计数),学习率以自乘0.1的方式自我衰减。训练函数的period参数说明,每次采样过该数目的数据点后,记录当前目标函数值用于作图。例如,当period和batch_size都为10时,每次迭代后均会记录目标函数值。
```
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['figure.dpi']= 120
import matplotlib.pyplot as plt
import numpy as np
def train(batch_size, rho, epochs, period):
assert period >= batch_size and period % batch_size == 0
[w, b], sqrs, deltas = init_params()
total_loss = [np.mean(square_loss(net(X, w, b), y).asnumpy())]
# 注意epoch从1开始计数。
for epoch in range(1, epochs + 1):
for batch_i, data, label in data_iter(batch_size):
with autograd.record():
output = net(data, w, b)
loss = square_loss(output, label)
loss.backward()
adadelta([w, b], sqrs, deltas, rho, batch_size)
if batch_i * batch_size % period == 0:
total_loss.append(np.mean(square_loss(net(X, w, b), y).asnumpy()))
print("Batch size %d, Epoch %d, loss %.4e" %
(batch_size, epoch, total_loss[-1]))
print('w:', np.reshape(w.asnumpy(), (1, -1)),
'b:', b.asnumpy()[0], '\n')
x_axis = np.linspace(0, epochs, len(total_loss), endpoint=True)
plt.semilogy(x_axis, total_loss)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
```
使用Adadelta,最终学到的参数值与真实值较接近。
```
train(batch_size=10, rho=0.9999, epochs=3, period=10)
```
## 结论
* Adadelta没有学习率参数。
## 练习
* Adadelta为什么不需要设置学习率参数?它被什么代替了?
**吐槽和讨论欢迎点**[这里](https://discuss.gluon.ai/t/topic/2277)
| true |
code
| 0.751922 | null | null | null | null |
|
# 18DCE097 Muskaan Pirani
**Project title: Weather Forecast using LSTM**
1. Main aim is to reduce RMSE values for accurate predictions.
2. We have taken dataset from Kaggle to predict the temperature of a particular place.
* Train RMSE: 1.39 RMSE
* Test RMSE: 1.38 RMSE
```
import numpy
import matplotlib.pyplot as plt
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Bidirectional, GRU
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 0]
dataX.append(a)
dataY.append(dataset[i + look_back, 0])
return numpy.array(dataX), numpy.array(dataY)
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
dataframe = read_csv('/content/farm_temperature_data.csv', usecols=[1])
dataset = dataframe.values
dataset = dataset.astype('float32')
dataframe.head()
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.8)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(64, input_shape=(1, look_back), return_sequences=True))
model.add(LSTM(16, input_shape=(1, look_back), return_sequences=True))
model.add(LSTM(4, input_shape=(1, look_back), return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(1))
# # create and fit the BiLSTM network
# model = Sequential()
# model.add(Bidirectional(LSTM(64, input_shape=(1, look_back), return_sequences=True)))
# model.add(Bidirectional(LSTM(16, input_shape=(1, look_back), return_sequences=True)))
# model.add(Bidirectional(LSTM(4, input_shape=(1, look_back), return_sequences=False)))
# model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(trainX, trainY, epochs=20, batch_size=1, verbose=2)
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = numpy.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = numpy.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
plt.figure(figsize=(20,10))
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
# plot baseline and predictions
plt.title("Weather Forecast")
plt.xlabel("Days")
plt.ylabel("Temperature (Celcius)")
plt.plot(scaler.inverse_transform(dataset), label="Actual")
plt.plot(trainPredictPlot, label="Prediction (Train)")
plt.plot(testPredictPlot, label="Prediction (Test)")
plt.legend(loc="upper right")
plt.show()
```
| true |
code
| 0.722166 | null | null | null | null |
|
In this notebook, we introduce survival analysis and we show application examples using both R and Python. We will compare the two programming languages, and leverage Plotly's Python and R APIs to convert static graphics into interactive `plotly` objects.
[Plotly](https://plotly.com) is a platform for making interactive graphs with R, Python, MATLAB, and Excel. You can make graphs and analyze data on Plotly’s free public cloud. For collaboration and sensitive data, you can run Plotly [on your own servers](https://plotly.com/product/enterprise/).
For a more in-depth theoretical background in survival analysis, please refer to these sources:
- [Lecture Notes by John Fox](http://socserv.mcmaster.ca/jfox/Courses/soc761/survival-analysis.pdf)
- [Wikipedia article](http://en.wikipedia.org/wiki/Survival_analysis)
- [Presentation by Kristin Sainani](www.pitt.edu/~super4/33011-34001/33051-33061.ppt)
- [Lecture Notes by Germán Rodríguez](http://data.princeton.edu/wws509/notes/c7.pdf)
Need help converting Plotly graphs from R or Python?
- [R](https://plotly.com/r/user-guide/)
- [Python](https://plotly.com/python/matplotlib-to-plotly-tutorial/)
For this code to run on your machine, you will need several R and Python packages installed.
- Running `sudo pip install <package_name>` from your terminal will install a Python package.
- Running `install.packages("<package_name>")` in your R console will install an R package.
You will also need to create an account with [Plotly](https://plotly.com/feed/) to receive your API key.
```
# You can also install packages from within IPython!
# Install Python Packages
!pip install lifelines
!pip install rpy2
!pip install plotly
!pip install pandas
# Load extension that let us use magic function `%R`
%load_ext rpy2.ipython
# Install R packages
%R install.packages("devtools")
%R devtools::install_github("ropensci/plotly")
%R install.packages("OIsurv")
```
## Introduction
[Survival analysis](http://en.wikipedia.org/wiki/Survival_analysis) is a set of statistical methods for analyzing the occurrence of events over time. It is also used to determine the relationship of co-variates to the time-to-events, and accurately compare time-to-event between two or more groups. For example:
- Time to death in biological systems.
- Failure time in mechanical systems.
- How long can we expect a user to be on a website / service?
- Time to recovery for lung cancer treatment.
The statistical term 'survival analysis' is analogous to 'reliability theory' in engineering, 'duration analysis' in economics, and 'event history analysis' in sociology.
The two key functions in survival analysis are the *survival function* and the *hazard function*.
The **survival function**, conventionally denoted by $S$, is the probability that the event (say, death) has not occurred yet:
$$S(t) = Pr(T > t),$$
where $T$ denotes the time of death and $Pr$ the probability. Since $S$ is a probability, $0\leq S(t)\leq1$. Survival times are non-negative ($T \geq 0$) and, generally, $S(0) = 1$.
The **hazard function** $h(t)$ is the event (death) rate at time $t$, conditional on survival until $t$ (i.e., $T \geq t$):
\begin{align*}
h(t) &= \lim_{\Delta t \to 0} Pr(t \leq T \leq t + \Delta t \, | \, T \geq t) \\
&= \lim_{\Delta t \to 0} \frac{Pr(t \leq T \leq t + \Delta t)}{S(t)} = \frac{p(t)}{S(t)},
\end{align*}
where $p$ denotes the probability density function.
In practice, we do not get to observe the actual survival function of a population; we must use the observed data to estimate it. A popular estimate for the survival function $S(t)$ is the [Kaplan–Meier estimate](http://en.wikipedia.org/wiki/Kaplan–Meier_estimator):
\begin{align*}
\hat{S}(t) &= \prod_{t_i \leq t} \frac{n_i − d_i}{n_i}\,,
\end{align*}
where $d_i$ is the number of events (deaths) observed at time $t_i$ and $n_i$ is the number of subjects at risk observed at time $t_i$.
## Censoring
Censoring is a type of missing data problem common in survival analysis. Other popular comparison methods, such as linear regression and t-tests do not accommodate for censoring. This makes survival analysis attractive for data from randomized clinical studies.
In an ideal scenario, both the birth and death rates of a patient is known, which means the lifetime is known.
**Right censoring** occurs when the 'death' is unknown, but it is after some known date. e.g. The 'death' occurs after the end of the study, or there was no follow-up with the patient.
**Left censoring** occurs when the lifetime is known to be less than a certain duration. e.g. Unknown time of initial infection exposure when first meeting with a patient.
<hr>
For following analysis, we will use the [lifelines](https://github.com/CamDavidsonPilon/lifelines) library for python, and the [survival](http://cran.r-project.org/web/packages/survival/survival.pdf) package for R. We can use [rpy2](http://rpy.sourceforge.net) to execute R code in the same document as the python code.
```
# OIserve contains the survival package and sample datasets
%R library(OIsurv)
%R library(devtools)
%R library(plotly)
%R library(IRdisplay)
# Authenticate to plotly's api using your account
%R py <- plotly("rmdk", "0sn825k4r8")
# Load python libraries
import numpy as np
import pandas as pd
import lifelines as ll
# Plotting helpers
from IPython.display import HTML
%matplotlib inline
import matplotlib.pyplot as plt
import plotly.plotly as py
import plotly.tools as tls
from plotly.graph_objs import *
from pylab import rcParams
rcParams['figure.figsize']=10, 5
```
## Loading data into Python and R
We will be using the `tongue` dataset from the `KMsurv` package in R, then convert the data into a pandas dataframe under the same name.
This data frame contains the following columns:
- type: Tumor DNA profile (1=Aneuploid Tumor, 2=Diploid Tumor)
- time: Time to death or on-study time, weeks
- delta Death indicator (0=alive, 1=dead)
```
# Load in data
%R data(tongue)
# Pull data into python kernel
%Rpull tongue
# Convert into pandas dataframe
from rpy2.robjects import pandas2ri
tongue = pandas2ri.ri2py_dataframe(tongue)
```
We can now refer to `tongue` using both R and python.
```
%%R
summary(tongue)
tongue.describe()
```
We can even operate on R and Python within the same code cell.
```
%R print(mean(tongue$time))
print tongue['time'].mean()
```
In R we need to create a `Surv` object with the `Surv()` function. Most functions in the `survival` package apply methods to this object. For right-censored data, we need to pass two arguments to `Surv()`:
1. a vector of times
2. a vector indicating which times are observed and censored
```
%%R
attach(tongue)
tongue.surv <- Surv(time[type==1], delta[type==1])
tongue.surv
```
- The plus-signs identify observations that are right-censored.
# Estimating survival with Kaplan-Meier
### Using R
The simplest fit estimates a survival object against an intercept. However, the `survfit()` function has several optional arguments. For example, we can change the confidence interval using `conf.int` and `conf.type`.
See `help(survfit.formula)` for the comprehensive documentation.
```
%%R
surv.fit <- survfit(tongue.surv~1)
surv.fit
```
It is often helpful to call the `summary()` and `plot()` functions on this object.
```
%%R
summary(surv.fit)
%%R -h 400
plot(surv.fit, main='Kaplan-Meier estimate with 95% confidence bounds',
xlab='time', ylab='survival function')
```
Let's convert this plot into an interactive plotly object using [plotly](https://plotly.com) and [ggplot2](http://ggplot2.org).
First, we will use a helper ggplot function written by [Edwin Thoen](http://www.r-statistics.com/2013/07/creating-good-looking-survival-curves-the-ggsurv-function/) to plot pretty survival distributions in R.
```
%%R
ggsurv <- function(s, CI = 'def', plot.cens = T, surv.col = 'gg.def',
cens.col = 'red', lty.est = 1, lty.ci = 2,
cens.shape = 3, back.white = F, xlab = 'Time',
ylab = 'Survival', main = ''){
library(ggplot2)
strata <- ifelse(is.null(s$strata) ==T, 1, length(s$strata))
stopifnot(length(surv.col) == 1 | length(surv.col) == strata)
stopifnot(length(lty.est) == 1 | length(lty.est) == strata)
ggsurv.s <- function(s, CI = 'def', plot.cens = T, surv.col = 'gg.def',
cens.col = 'red', lty.est = 1, lty.ci = 2,
cens.shape = 3, back.white = F, xlab = 'Time',
ylab = 'Survival', main = ''){
dat <- data.frame(time = c(0, s$time),
surv = c(1, s$surv),
up = c(1, s$upper),
low = c(1, s$lower),
cens = c(0, s$n.censor))
dat.cens <- subset(dat, cens != 0)
col <- ifelse(surv.col == 'gg.def', 'black', surv.col)
pl <- ggplot(dat, aes(x = time, y = surv)) +
xlab(xlab) + ylab(ylab) + ggtitle(main) +
geom_step(col = col, lty = lty.est)
pl <- if(CI == T | CI == 'def') {
pl + geom_step(aes(y = up), color = col, lty = lty.ci) +
geom_step(aes(y = low), color = col, lty = lty.ci)
} else (pl)
pl <- if(plot.cens == T & length(dat.cens) > 0){
pl + geom_point(data = dat.cens, aes(y = surv), shape = cens.shape,
col = cens.col)
} else if (plot.cens == T & length(dat.cens) == 0){
stop ('There are no censored observations')
} else(pl)
pl <- if(back.white == T) {pl + theme_bw()
} else (pl)
pl
}
ggsurv.m <- function(s, CI = 'def', plot.cens = T, surv.col = 'gg.def',
cens.col = 'red', lty.est = 1, lty.ci = 2,
cens.shape = 3, back.white = F, xlab = 'Time',
ylab = 'Survival', main = '') {
n <- s$strata
groups <- factor(unlist(strsplit(names
(s$strata), '='))[seq(2, 2*strata, by = 2)])
gr.name <- unlist(strsplit(names(s$strata), '='))[1]
gr.df <- vector('list', strata)
ind <- vector('list', strata)
n.ind <- c(0,n); n.ind <- cumsum(n.ind)
for(i in 1:strata) ind[[i]] <- (n.ind[i]+1):n.ind[i+1]
for(i in 1:strata){
gr.df[[i]] <- data.frame(
time = c(0, s$time[ ind[[i]] ]),
surv = c(1, s$surv[ ind[[i]] ]),
up = c(1, s$upper[ ind[[i]] ]),
low = c(1, s$lower[ ind[[i]] ]),
cens = c(0, s$n.censor[ ind[[i]] ]),
group = rep(groups[i], n[i] + 1))
}
dat <- do.call(rbind, gr.df)
dat.cens <- subset(dat, cens != 0)
pl <- ggplot(dat, aes(x = time, y = surv, group = group)) +
xlab(xlab) + ylab(ylab) + ggtitle(main) +
geom_step(aes(col = group, lty = group))
col <- if(length(surv.col == 1)){
scale_colour_manual(name = gr.name, values = rep(surv.col, strata))
} else{
scale_colour_manual(name = gr.name, values = surv.col)
}
pl <- if(surv.col[1] != 'gg.def'){
pl + col
} else {pl + scale_colour_discrete(name = gr.name)}
line <- if(length(lty.est) == 1){
scale_linetype_manual(name = gr.name, values = rep(lty.est, strata))
} else {scale_linetype_manual(name = gr.name, values = lty.est)}
pl <- pl + line
pl <- if(CI == T) {
if(length(surv.col) > 1 && length(lty.est) > 1){
stop('Either surv.col or lty.est should be of length 1 in order
to plot 95% CI with multiple strata')
}else if((length(surv.col) > 1 | surv.col == 'gg.def')[1]){
pl + geom_step(aes(y = up, color = group), lty = lty.ci) +
geom_step(aes(y = low, color = group), lty = lty.ci)
} else{pl + geom_step(aes(y = up, lty = group), col = surv.col) +
geom_step(aes(y = low,lty = group), col = surv.col)}
} else {pl}
pl <- if(plot.cens == T & length(dat.cens) > 0){
pl + geom_point(data = dat.cens, aes(y = surv), shape = cens.shape,
col = cens.col)
} else if (plot.cens == T & length(dat.cens) == 0){
stop ('There are no censored observations')
} else(pl)
pl <- if(back.white == T) {pl + theme_bw()
} else (pl)
pl
}
pl <- if(strata == 1) {ggsurv.s(s, CI , plot.cens, surv.col ,
cens.col, lty.est, lty.ci,
cens.shape, back.white, xlab,
ylab, main)
} else {ggsurv.m(s, CI, plot.cens, surv.col ,
cens.col, lty.est, lty.ci,
cens.shape, back.white, xlab,
ylab, main)}
pl
}
```
Voila!
```
%%R -h 400
p <- ggsurv(surv.fit) + theme_bw()
p
```
We have to use a workaround to render an interactive plotly object by using an iframe in the ipython kernel. This is a bit easier if you are working in an R kernel.
```
%%R
# Create the iframe HTML
plot.ly <- function(url) {
# Set width and height from options or default square
w <- "750"
h <- "600"
html <- paste("<center><iframe height=\"", h, "\" id=\"igraph\" scrolling=\"no\" seamless=\"seamless\"\n\t\t\t\tsrc=\"",
url, "\" width=\"", w, "\" frameBorder=\"0\"></iframe></center>", sep="")
return(html)
}
%R p <- plot.ly("https://plotly.com/~rmdk/111/survival-vs-time/")
# pass object to python kernel
%R -o p
# Render HTML
HTML(p[0])
```
The `y axis` represents the probability a patient is still alive at time $t$ weeks. We see a steep drop off within the first 100 weeks, and then observe the curve flattening. The dotted lines represent the 95% confidence intervals.
### Using Python
We will now replicate the above steps using python. Above, we have already specified a variable `tongues` that holds the data in a pandas dataframe.
```
from lifelines.estimation import KaplanMeierFitter
kmf = KaplanMeierFitter()
```
The method takes the same parameters as it's R counterpart, a time vector and a vector indicating which observations are observed or censored. The model fitting sequence is similar to the [scikit-learn](http://scikit-learn.org/stable/) api.
```
f = tongue.type==1
T = tongue[f]['time']
C = tongue[f]['delta']
kmf.fit(T, event_observed=C)
```
To get a plot with the confidence intervals, we simply can call `plot()` on our `kmf` object.
```
kmf.plot(title='Tumor DNA Profile 1')
```
Now we can convert this plot to an interactive [Plotly](https://plotly.com) object. However, we will have to augment the legend and filled area manually. Once we create a helper function, the process is simple.
Please see the Plotly Python [user guide](https://plotly.com/python/overview/#in-%5B37%5D) for more insight on how to update plot parameters.
> Don't forget you can also easily edit the chart properties using the Plotly GUI interface by clicking the "Play with this data!" link below the chart.
```
p = kmf.plot(ci_force_lines=True, title='Tumor DNA Profile 1 (95% CI)')
# Collect the plot object
kmf1 = plt.gcf()
def pyplot(fig, ci=True, legend=True):
# Convert mpl fig obj to plotly fig obj, resize to plotly's default
py_fig = tls.mpl_to_plotly(fig, resize=True)
# Add fill property to lower limit line
if ci == True:
style1 = dict(fill='tonexty')
# apply style
py_fig['data'][2].update(style1)
# Change color scheme to black
py_fig['data'].update(dict(line=Line(color='black')))
# change the default line type to 'step'
py_fig['data'].update(dict(line=Line(shape='hv')))
# Delete misplaced legend annotations
py_fig['layout'].pop('annotations', None)
if legend == True:
# Add legend, place it at the top right corner of the plot
py_fig['layout'].update(
showlegend=True,
legend=Legend(
x=1.05,
y=1
)
)
# Send updated figure object to Plotly, show result in notebook
return py.iplot(py_fig)
pyplot(kmf1, legend=False)
```
<hr>
# Multiple Types
### Using R
Many times there are different groups contained in a single dataset. These may represent categories such as treatment groups, different species, or different manufacturing techniques. The `type` variable in the `tongues` dataset describes a patients DNA profile. Below we define a Kaplan-Meier estimate for each of these groups in R and Python.
```
%%R
surv.fit2 <- survfit( Surv(time, delta) ~ type)
p <- ggsurv(surv.fit2) +
ggtitle('Lifespans of different tumor DNA profile') + theme_bw()
p
```
Convert to a Plotly object.
```
#%R ggplotly(plt)
%R p <- plot.ly("https://plotly.com/~rmdk/173/lifespans-of-different-tumor-dna-profile/")
# pass object to python kernel
%R -o p
# Render HTML
HTML(p[0])
```
### Using Python
```
f2 = tongue.type==2
T2 = tongue[f2]['time']
C2 = tongue[f2]['delta']
ax = plt.subplot(111)
kmf.fit(T, event_observed=C, label=['Type 1 DNA'])
kmf.survival_function_.plot(ax=ax)
kmf.fit(T2, event_observed=C2, label=['Type 2 DNA'])
kmf.survival_function_.plot(ax=ax)
plt.title('Lifespans of different tumor DNA profile')
kmf2 = plt.gcf()
```
Convert to a Plotly object.
```
pyplot(kmf2, ci=False)
```
<hr>
# Testing for Difference
It looks like DNA Type 2 is potentially more deadly, or more difficult to treat compared to Type 1. However, the difference between these survival curves still does not seem dramatic. It will be useful to perform a statistical test on the different DNA profiles to see if their survival rates are significantly different.
Python's *lifelines* contains methods in `lifelines.statistics`, and the R package `survival` uses a function `survdiff()`. Both functions return a p-value from a chi-squared distribution.
It turns out these two DNA types do not have significantly different survival rates.
### Using R
```
%%R
survdiff(Surv(time, delta) ~ type)
```
### Using Python
```
from lifelines.statistics import logrank_test
summary_= logrank_test(T, T2, C, C2, alpha=99)
print summary_
```
<hr>
# Estimating Hazard Rates
### Using R
To estimate the hazard function, we compute the cumulative hazard function using the [Nelson-Aalen estimator](), defined as:
$$\hat{\Lambda} (t) = \sum_{t_i \leq t} \frac{d_i}{n_i}$$
where $d_i$ is the number of deaths at time $t_i$ and $n_i$ is the number of susceptible individuals. Both R and Python modules use the same estimator. However, in R we will use the `-log` of the Fleming and Harrington estimator, which is equivalent to the Nelson-Aalen.
```
%%R
haz <- Surv(time[type==1], delta[type==1])
haz.fit <- summary(survfit(haz ~ 1), type='fh')
x <- c(haz.fit$time, 250)
y <- c(-log(haz.fit$surv), 1.474)
cum.haz <- data.frame(time=x, cumulative.hazard=y)
p <- ggplot(cum.haz, aes(time, cumulative.hazard)) + geom_step() + theme_bw() +
ggtitle('Nelson-Aalen Estimate')
p
%R p <- plot.ly("https://plotly.com/~rmdk/185/cumulativehazard-vs-time/")
# pass object to python kernel
%R -o p
# Render HTML
HTML(p[0])
```
### Using Python
```
from lifelines.estimation import NelsonAalenFitter
naf = NelsonAalenFitter()
naf.fit(T, event_observed=C)
naf.plot(title='Nelson-Aalen Estimate')
naf.plot(ci_force_lines=True, title='Nelson-Aalen Estimate')
py_p = plt.gcf()
pyplot(py_p, legend=False)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install publisher --upgrade
import publisher
publisher.publish(
'survival_analysis.ipynb', 'ipython-notebooks/survival-analysis-r-vs-python/',
'Survival Analysis with Plotly: R vs Python',
'An introduction to survival analysis with Plotly graphs using R, Python, and IPython notebooks',
name='Survival Analysis with Plotly')
```
| true |
code
| 0.516961 | null | null | null | null |
|
### Part4 Variant genotyping from whole genome graphs
In this part, we constructed whole genome graphs for Brown Swiss population, by augmenting ~14.1 M autosomal variants identified from 82 Brown Swiss to the Bovine UCD1.2 Hereford reference.
We then mapped 10 samples (not used for simulation) to this whole genome graph.
We then compared with mapping with linear genome using bwa or vg (empty graphs, only backbone without variations).
```
library(tidyverse)
library(magrittr)
```
### Comparison between unique and perfect mapping
Since reads were not simulated, we could not assess the mapping correctnes. Then, we followed previous approach in Novak et al. (2017), Prit et al. (2018) to calculate the reads that map perfectly (edit distance 0 without clipping) and reads that map uniquely, meaning that there is only single map location, or considerably high MQ (MQ=60) in case of multi-mapping
```
datunper <- read.table("../result/datuniqperf.tsv",header=TRUE)
head(datunper)
#since the data is per chromosome, then we combined across
datunper_sum <- datunper %>% group_by(anims,mapper) %>% summarise(perfect=sum(perfect)*100/sum(mapped),
uniq=sum(uniq)*100/sum(mapped))
options(repr.plot.width=8, repr.plot.height=8)
datunper_sum %<>% mutate(Mapping=case_when(mapper=="bwa"~"Linear (BWA)",
mapper=="vg_linear"~"Linear (VG)",
mapper=="vg_graph"~ "Graph (VG)"))
ggplot(datunper_sum,aes(x=uniq,y=perfect,col=Mapping,shape=Mapping)) +
geom_point(size=5,stroke=1)+
scale_color_manual(values=c("#E69F00", "#56B4E9", "#009E73"))+
scale_shape_manual(values=c(1,2,3))+
theme_bw()+
labs(x="Unique alignment (%)",y="Perfect alignment (%)",fill="Alignment")+
coord_cartesian(xlim = c(80,85))+
theme(text=element_text(size=18),
axis.title = element_text(face="bold"),
legend.position = "bottom")
```
### Quantify the difference across mapping scenarios
```
## The largest improvement is in the perfect mapping to the paths in the graphs
## We need to quantify this
datperf <- datunper_sum %>% select(anims,perfect,mapper) %>% pivot_wider(names_from = mapper,values_from = perfect) %>%
mutate(dif=vg_graph-bwa)
cat("Maximum improvement in perfect mapping in the graph alignment from linear BWA")
max(datperf$dif)
cat("Minimum improvement in perfect mapping in the graph alignment from linear BWA")
min(datperf$dif)
cat("Mean improvement in perfect mapping in the graph alignment from linear BWA")
mean(datperf$dif)
## However we noticed that the unique mapping is decreased (but very small) in graph alignments
datuniq <- datunper_sum %>% select(anims,uniq,mapper) %>% pivot_wider(names_from = mapper,values_from = uniq) %>%
mutate(dif=vg_graph-bwa)
cat("Minimum decreased in uniq mapping in the graph alignment from linear BWA")
max(datuniq$dif)
cat("Maximum decreased in uniq mapping in the graph alignment from linear BWA")
min(datuniq$dif)
cat("Mean decreased in uniq mapping in the graph alignment from linear BWA")
mean(datuniq$dif)
```
### Comparison of the genotypes discovered from linear vs graph alignments
We then surjected the graph alignment to the corresponding linear coordinates.
We then used the samtools multi-sample calling to call variants.
Finally, we compared with the matched SNP array to calculate concordance statistics as below.

```
## Statistics of concordance for samtools
## Mode indicate the mapping mode, bwa, graph, or vg(linear)
## Fil indicate the filtered or raw genotypes
datsam <- read.table("../result/samtools_concordance_all.tsv",header=TRUE) %>% select(-prog)
head(datsam)
## Since the statistics calculated based on each animals,
## We take mean and sd to report the performance of each caller
datsam %>% group_by(mode) %>% summarise(m_concor=mean(concor),
sd_concor=sd(concor),
m_recall=mean(recal),
sd_recall=sd(recal),
m_discre=mean(discre),
sd_discre=sd(discre),
m_precision=mean(precision),
sd_precision=sd(precision)) %>% as.data.frame()
```
Almost no difference among tools, we can plot it to see the pattern more clear
### Plot of the genotype concordance across sequencing depth
We test whether there is any difference across sequencing coverage between graphs and linear alignment.
```
options(warn=-1)
datcov <- read.table("../result/anims_coverage.tsv",header=FALSE)
colnames(datcov) <- c("anims","coverage")
datsamall <- datsam %>% left_join(datcov,by=c("anims"))
head(datsamall)
datfil <- datsamall %>% filter(! str_detect(mode,"_fil"))
datfil %<>% mutate(Mapping=case_when(mode=="bwa"~"Linear(BWA)",
mode=="graph"~"Graph(VG)",
mode=="linear"~"Linear(VG)"))
ggplot(datfil,aes(x=as.double(as.character(coverage)),y=concor,col=Mapping,shape=Mapping))+
geom_point(size=5,stroke=1)+
scale_y_continuous(breaks=seq(90,100,1),limits = c(96,100))+
scale_colour_manual(values=c("#E69F00", "#56B4E9", "#009E73","red"))+
scale_shape_manual(values=c(1,2,3))+
theme_bw()+
theme(text = element_text(size=18),
axis.title=element_text(face="bold"),
legend.position = "bottom")+
labs(x="Sequencing coverage",y="Genotype concordance")
```
### Plot relation between precision and recall of the array genotypes
We see no noticeable difference across sequencing coverage. We could also look into the relation between precision-recall in different samples.
```
ggplot(datfil,aes(x=precision,y=recal,shape=Mapping,col=Mapping))+
geom_point(size=5,stroke=1)+
theme_bw()+
theme(legend.position = "bottom",
text = element_text(size=18),
axis.title=element_text(face="bold"))+
scale_colour_manual(values=c("#E69F00", "#56B4E9", "#009E73"))+
scale_shape_manual(values=c(1,2,3))+
labs(x="Precision(%)",y="Recall(%)")
```
### Genotyping concordance for variants discovered from GATK and Graphtyper
We additionally discover and genotype variants using GATK and Graphtyper using pipeline we established in our previius paper. We want to see whether we see any difference using different variant callers.
```
datgatk <- read.table("../result/gatk4_concordance_all.tsv",header=TRUE) %>% select(-prog)
head(datgatk)
datgatk %>% group_by(mode) %>% summarise(m_concor=mean(concor),
sd_concor=sd(concor),
m_recall=mean(recal),
sd_recall=sd(recal),
m_discre=mean(discre),
sd_discre=sd(discre),
m_precision=mean(precision),
sd_precision=sd(precision)) %>% as.data.frame()
```
Again we see small difference, even the concordance in graph alignments become slightly lower, when variants called with GATK.
How're about genotypes from Graphtyper?
```
datgraph <- read.table("../result/graphtyper_concordance_all.tsv",header=TRUE)
head(datgraph)
datgraph %>% group_by(mode,prog) %>% summarise(m_concor=mean(concor),
sd_concor=sd(concor),
m_recall=mean(recal),
sd_recall=sd(recal),
m_discre=mean(discre),
sd_discre=sd(discre),
m_precision=mean(precision),
sd_precision=sd(precision)) %>% as.data.frame()
```
Again we also see the same pattern, interestingly we observed that concordance from *Graphtyper* is higher than from *Samtools* or *GATK*.
```
sessionInfo()
```
| true |
code
| 0.634741 | null | null | null | null |
|
The datasets used here are taken from [this](https://github.com/Nilabhra/kolkata_nlp_workshop_2019) repository.
```
import pandas as pd
train = pd.read_csv('https://raw.githubusercontent.com/Nilabhra/kolkata_nlp_workshop_2019/master/data/train.csv')
validation = pd.read_csv('https://raw.githubusercontent.com/Nilabhra/kolkata_nlp_workshop_2019/master/data/valid.csv')
test = pd.read_csv('https://raw.githubusercontent.com/Nilabhra/kolkata_nlp_workshop_2019/master/data/test.csv')
train.shape, validation.shape, test.shape
train.head()
validation.head()
test.head()
train['text'].loc[0]
```
### Removing digits for the text
```
from string import digits
def remove_digits(s):
remove_digits = str.maketrans('', '', digits)
res = s.translate(remove_digits)
return res
train['text'] = train['text'].apply(remove_digits)
validation['text'] = validation['text'].apply(remove_digits)
```
### Bag of words representation
```
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(stop_words=None, lowercase=True,
ngram_range=(1, 1), min_df=2, binary=True)
train_features = vectorizer.fit_transform(train['text'])
train_labels = train['class']
valid_features = vectorizer.transform(validation['text'])
valid_labels = validation['class']
```
### Label encode the classes
```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train_labels = le.fit_transform(train_labels)
valid_labels = le.transform(valid_labels)
```
### Model building and compilation
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dropout, Dense
model = keras.Sequential()
model.add(Dropout(rate=0.2, input_shape=train_features.shape[1:]))
for _ in range(2):
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
# Define an EarlyStopping callback
es_cb = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
```
### We are ready to train the model and validate
```
model.fit(train_features,
train_labels,
epochs=15,
batch_size=512,
validation_data=(valid_features, valid_labels),
callbacks=[es_cb],
verbose=1)
```
### How good is the model?
```
test['text'] = test['text'].apply(remove_digits)
test_features = vectorizer.transform(test['text'])
test_labels = le.transform(test['class'])
results = model.evaluate(test_features, test_labels)
print("Accuracy: {0:.2f}%".format(results[1]*100.))
```
### Combining the training and validation sets and retraining the model
```
data = pd.concat((train, validation), axis=0)
vectorizer = CountVectorizer(stop_words=None, lowercase=True,
ngram_range=(1, 1), min_df=2)
features = vectorizer.fit_transform(data['text'])
labels = le.fit_transform(data['class'])
test_features = vectorizer.transform(test['text'])
test_labels = le.transform(test['class'])
model = keras.Sequential()
model.add(Dropout(rate=0.2, input_shape=features.shape[1:]))
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(rate=0.2))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
model.fit(features,
labels,
epochs=15,
batch_size=512,
validation_data=(test_features, test_labels),
callbacks=[es_cb],
verbose=1)
```
> We will use this model for serving.
### Creating `sklearn` pipeline for deployment
For this we will have to wrap the `tf-keras` model into a `scikit-learn` compatible model class. Then we can use that as a part of a `scikit-learn` pipeline. Let's start by defining a `create_model()` method which would be required for the _scikit-learn model wrapping_ part.
```
# Defined this method in a separate .py file
# to resolve Runtime errors
from ModelCreate import create_model
from keras.wrappers.scikit_learn import KerasClassifier
# Same epoch and same batch size
model = KerasClassifier(build_fn=create_model, epochs=15, batch_size=512, verbose=0)
# Construct the pipeline
from sklearn.pipeline import Pipeline
pipeline = Pipeline([('feature_transformer', vectorizer),
('classifier', model)])
# Fit the pipeline
pipeline.fit(data['text'], labels)
# Use the pipeline to make inferences
le.inverse_transform(pipeline.predict([remove_digits('I had a very bad experience you know.')]))
# Ready to serialize/pickle the model
from sklearn.externals import joblib
# Courtesy: https://bit.ly/2IwQKSS
# Save the Keras model first
pipeline.named_steps['classifier'].model.save('model/keras_model.h5')
# This hack allows us to save the sklearn pipeline
pipeline.named_steps['classifier'].model = None
# Finally, save the pipeline
joblib.dump(pipeline, 'model/sklearn_pipeline.pkl')
# Load the pipeline first
pipeline = joblib.load('model/sklearn_pipeline.pkl')
# Then, load the Keras model
from keras.models import load_model
from keras.utils import CustomObjectScope
from keras.initializers import glorot_uniform
with CustomObjectScope({'GlorotUniform': glorot_uniform()}):
pipeline.named_steps['classifier'].model = load_model('model/keras_model.h5')
# Start making inference
le.inverse_transform(pipeline.predict([remove_digits('I had a very bad experience you know.')]))[0]
```
| true |
code
| 0.657291 | null | null | null | null |
|
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/51_cartoee_projections.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) and [cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html#installing) if needed. Keep in mind that cartopy can be challenging to install. If you are unable to install cartopy on your computer, you can try Google Colab with this the [notebook example](https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/cartoee_colab.ipynb).
See below the commands to install cartopy and geemap using conda/mamba:
```
conda create -n carto python=3.8
conda activate carto
conda install mamba -c conda-forge
mamba install cartopy scipy -c conda-forge
mamba install geemap -c conda-forge
jupyter notebook
```
```
# !pip install cartopy scipy
# !pip install geemap
```
# Working with projections in cartoee
`cartoee` is a lightweight module to aid in creatig publication quality maps from Earth Engine processing results without having to download data. The `cartoee` package does this by requesting png images from EE results (which are usually good enough for visualization) and `cartopy` is used to create the plots. Utility functions are available to create plot aethetics such as gridlines or color bars. **The notebook and the geemap cartoee module ([cartoee.py](https://geemap.org/cartoee)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.**
```
import ee
import geemap
from geemap import cartoee
import cartopy.crs as ccrs
%pylab inline
geemap.ee_initialize()
```
## Plotting an image on a map
Here we are going to show another example of creating a map with EE results. We will use global sea surface temperature data for Jan-Mar 2018.
```
# get an earth engine image of ocean data for Jan-Mar 2018
ocean = (
ee.ImageCollection('NASA/OCEANDATA/MODIS-Terra/L3SMI')
.filter(ee.Filter.date('2018-01-01', '2018-03-01'))
.median()
.select(["sst"], ["SST"])
)
# set parameters for plotting
# will plot the Sea Surface Temp with specific range and colormap
visualization = {'bands':"SST", 'min':-2, 'max':30}
# specify region to focus on
bbox = [-180, -88, 180, 88]
fig = plt.figure(figsize=(15,10))
# plot the result with cartoee using a PlateCarre projection (default)
ax = cartoee.get_map(ocean, cmap='plasma', vis_params=visualization, region=bbox)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma')
ax.set_title(label = 'Sea Surface Temperature', fontsize = 15)
ax.coastlines()
plt.show()
```
### Mapping with different projections
You can specify what ever projection is available within `cartopy` to display the results from Earth Engine. Here are a couple examples of global and regions maps using the sea surface temperature example. Please refer to the [`cartopy` projection documentation](https://scitools.org.uk/cartopy/docs/latest/crs/projections.html) for more examples with different projections.
```
fig = plt.figure(figsize=(15,10))
# create a new Mollweide projection centered on the Pacific
projection = ccrs.Mollweide(central_longitude=-180)
# plot the result with cartoee using the Mollweide projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax,vis_params=visualization, loc='bottom', cmap='plasma',
orientation='horizontal')
ax.set_title("Mollweide projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new Goode homolosine projection centered on the Pacific
projection = ccrs.Robinson(central_longitude=-180)
# plot the result with cartoee using the Goode homolosine projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='bottom', cmap='plasma',
orientation='horizontal')
ax.set_title("Robinson projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new Goode homolosine projection centered on the Pacific
projection = ccrs.InterruptedGoodeHomolosine(central_longitude=-180)
# plot the result with cartoee using the Goode homolosine projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='bottom', cmap='plasma',
orientation='horizontal')
ax.set_title("Goode homolosine projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new orographic projection focused on the Pacific
projection = ccrs.EqualEarth(central_longitude=-180)
# plot the result with cartoee using the orographic projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma',
orientation='vertical')
ax.set_title("Equal Earth projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new orographic projection focused on the Pacific
projection = ccrs.Orthographic(-130,-10)
# plot the result with cartoee using the orographic projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma',
orientation='vertical')
ax.set_title("Orographic projection")
ax.coastlines()
plt.show()
```
### Warping artifacts
Often times global projections are not needed so we use specific projection for the map that provides the best view for the geographic region of interest. When we use these, sometimes image warping effects occur. This is because `cartoee` only requests data for region of interest and when mapping with `cartopy` the pixels get warped to fit the view extent as best as possible. Consider the following example where we want to map SST over the south pole:
```
fig = plt.figure(figsize=(15, 10))
# Create a new region to focus on
spole = [-180, -88, 180,0]
projection = ccrs.SouthPolarStereo()
# plot the result with cartoee focusing on the south pole
ax = cartoee.get_map(ocean, cmap='plasma', vis_params=visualization, region=spole, proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma')
ax.coastlines()
ax.set_title('The South Pole')
plt.show()
```
As you can see from the result there are warping effects on the plotted image. There is really no way of getting aound this (other than requesting a larger extent of data which may not always be the case).
So, what we can do is set the extent of the map to a more realistic view after plotting the image as in the following example:
```
fig = plt.figure(figsize=(15,10))
# plot the result with cartoee focusing on the south pole
ax = cartoee.get_map(ocean, cmap='plasma', vis_params=visualization, region=spole, proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma')
ax.coastlines()
ax.set_title('The South Pole')
# get bounding box coordinates of a zoom area
zoom = spole
zoom[-1] = -20
# convert bbox coordinate from [W,S,E,N] to [W,E,S,N] as matplotlib expects
zoom_extent = cartoee.bbox_to_extent(zoom)
# set the extent of the map to the zoom area
ax.set_extent(zoom_extent,ccrs.PlateCarree())
plt.show()
```
| true |
code
| 0.691719 | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.