repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | content (string, 335-154k chars)
---|---|---|---|
DJMedhaug/code_guild
|
wk0/notebooks/challenges/primes/.ipynb_checkpoints/primes_challenge-checkpoint.ipynb
|
mit
|
def list_primes(n):
# TODO: Implement me
pass
"""
Explanation: <small><i>This notebook was prepared by Thunder Shiviah. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement list_primes(n), which returns a list of primes up to n (inclusive).
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Does list_primes do anything else?
No
Test Cases
list_primes(1) -> [] # 1 is not prime.
list_primes(2) -> [2]
list_primes(12) -> [2, 3, 5, 7, 11]
Algorithm
Primes are numbers which are only divisible by 1 and themselves.
5 is a prime since it can only be divided by itself and 1.
9 is not a prime since it can be divided by 3 (3*3 = 9).
1 is not a prime for reasons that only mathematicians care about.
To check if a number is prime, we can implement a basic algorithm: check whether the given number is divisible by any number from 2 up to (but not including) itself (note: you really only need to test divisors up to the square root of the number, but that optimization doesn't matter for this assignment).
Code
End of explanation
"""
# %load test_list_primes.py
from nose.tools import assert_equal
class Test_list_primes(object):
def test_list_primes(self):
assert_equal(list_primes(1), [])
assert_equal(list_primes(2), [2])
assert_equal(list_primes(7), [2, 3, 5, 7])
assert_equal(list_primes(9), list_primes(7))
print('Success: test_list_primes')
def main():
test = Test_list_primes()
test.test_list_primes()
if __name__ == '__main__':
main()
"""
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
"""
|
cfobel/colonists
|
colonists/notebooks/Colonists map data structures.ipynb
|
gpl-3.0
|
# ## Create hex grid ##
hex_grid = HexGrid(8, 17, .165, 1.75)
np.random.seed(2)
# ## Set up board on grid ##
# - Assign region (land, port, sea) and terrain type (clay, sheep, ore, wheat, wood,
# desert, clay port, sheep port, ore port, wheat port, wood port, 3:1 port, sea)
# to each hex.
df_hexes = get_hexes(hex_grid.size)
# - Shuffle hexes within regions.
shuffle_regions(df_hexes, inplace=True)
# - Mark nodes based on the corresponding trade rules to apply. If the nodes do
# not correspond to any special trading privilege, assign a value of `NaN`.
df_nodes = mark_port_nodes(hex_grid.df_nodes, hex_grid.df_hex_paths,
hex_grid.df_hex_links, df_hexes)
# - Assign an ordered 0-based index to each hex within each region.
assign_region_hex_indexes(df_hexes, inplace=True)
# ### Assign collect index (i.e., dice number) to each *land* hex. ###
#
# - Select spiral direction and starting position for collect index assignment.
clockwise = np.random.randint(2)
shift = np.random.randint(0, 6)
hex_roll_order = get_hex_roll_order(shift=shift, clockwise=clockwise)
# - Assign collect index (i.e., dice number) to each *land* hex.
assign_collect_index(df_hexes, hex_roll_order, inplace=True)
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 2, figsize=(10, 10))
# Color each hex according to *region* type, label each hex by index.
plot_hexes(df_nodes, df_hexes, hex_grid.df_hex_paths, colorby='region', axis=axes[0])
# Color each hex according to *terrain* type, label each hex by index.
plot_hexes(df_nodes, df_hexes, hex_grid.df_hex_paths, colorby='terrain', axis=axes[1])
fig, axis = plt.subplots(figsize=(8, 10))
plot_hexes(df_nodes, df_hexes, hex_grid.df_hex_paths, colorby='terrain',
axis=axis, labelby='collect_index')
axis.set_axis_off()
selected = hex_grid.df_nodes.loc[[60]]
axis.plot(selected.x.values, selected.y.values, marker='s',
markersize=12, linestyle='none')
pass
"""
Explanation: Initial hex layout
Arrange land hexes in random order.
Arrange port hexes in random order, interleaved with sea hexes
(one sea hex between each pair of port hexes).
End of explanation
"""
selected = 60
df_node_contents = get_empty_node_contents(hex_grid.df_nodes)
df_result = place_camp(hex_grid.df_nodes, hex_grid.df_edges, df_hexes,
hex_grid.df_hex_links, selected, df_node_contents)
df_node_contents.loc[selected]
df_result.loc[selected]
"""
Explanation: Place a camp [done]
Place a camp at a selected node, subject to the following conditions:
Selected node cannot already be occupied by a camp or village.
Any immediate neighbour (i.e., only a single edge away) of the selected
node cannot already be occupied by a camp or village.
End of explanation
"""
|
moagstar/puzzles
|
Array/Pascal's Triangle.ipynb
|
mit
|
import sys; sys.path.append('../..')
from puzzles import leet_puzzle
leet_puzzle('pascals-triangle')
"""
Explanation: Pascal's Triangle
End of explanation
"""
def pascals_triangle(k):
prev_row = None
for r in xrange(k+1):
row = [None] * r
for c in xrange(r):
if c == 0 or c == r-1:
row[c] = 1
else:
row[c] = prev_row[c] + prev_row[c-1]
prev_row = row
return row
for k in xrange(1, 10):
print pascals_triangle(k)
%%timeit
pascals_triangle(40)
"""
Explanation: Naive Solution
Given a piece of paper and 10 minutes, this was the naive solution I came up with. It is horribly inefficient, however, since it calculates all previous rows in order to get the current row; the time complexity is O((n² + n)/2) and the space complexity is O(2n). It does give the expected output.
End of explanation
"""
def pascals_triangle_2(k):
k = k - 1
row = [1]
# only need to calculate half of the row, since the triangle is
# symmetric
for n in xrange(k / 2):
row.append(row[n] * (k - n) / (n + 1))
# middle element is repeated only for odd values of k
r = list(reversed(row))
r = r[1:] if k % 2 == 0 else r
return row + r
for k in xrange(1, 10):
print pascals_triangle_2(k)
%%timeit
pascals_triangle_2(40)
"""
Explanation: Optimised Solution
However, it is possible to do this in O(n) space and O(n/2) time complexity:
End of explanation
"""
pascals_triangle_2(50)
"""
Explanation: Corner Cases
There are a number of corner cases which should be considered, such as passing in a negative integer or a string, but the typical corner case with Pascal's triangle is integer overflow when dealing with large numbers. In Python, however, integers have arbitrary precision and expand as required, for example:
End of explanation
"""
import sys
result2 = pascals_triangle_2(sys.maxint - 1)
"""
Explanation: With very large numbers in a python solution you will eventually run out of memory, for example:
End of explanation
"""
|
SylvainCorlay/bqplot
|
examples/Interactions/Mark Interactions.ipynb
|
apache-2.0
|
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(20)
y_data = np.random.randn(20)
scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'],
interactions={'click': 'select'},
selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},
unselected_style={'opacity': 0.5})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[scatter_chart], axes=[ax_x, ax_y])
scatter_chart.selected
"""
Explanation: Scatter Chart
Scatter Chart Selections
Click a point on the Scatter plot to select it. Now, run the cell below to check the selection. After you've done this, try holding the Ctrl key (or the Command key on a Mac) and clicking another point. Clicking the background will reset the selection.
End of explanation
"""
scatter_chart.selected = [1, 2, 3]
"""
Explanation: Alternatively, the selected attribute can be set directly on the Python side (try running the cell below):
End of explanation
"""
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(20)
y_data = np.random.randn(20)
dd = Dropdown(options=['First', 'Second', 'Third', 'Fourth'])
scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'],
names=np.arange(100, 200), names_unique=False, display_names=False, display_legend=True,
labels=['Blue'])
ins = Button(icon='fa-legal')
scatter_chart.tooltip = ins
line = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'])
scatter_chart2 = Scatter(x=x_data, y=np.random.randn(20),
scales= {'x': x_sc, 'y': y_sc}, colors=['orangered'],
tooltip=dd, names=np.arange(100, 200), names_unique=False, display_names=False,
display_legend=True, labels=['Red'])
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
fig = Figure(marks=[scatter_chart, scatter_chart2, line], axes=[ax_x, ax_y])
fig
def print_event(self, target):
print(target)
# Adding call back to scatter events
# print custom mssg on hover and background click of Blue Scatter
scatter_chart.on_hover(print_event)
scatter_chart.on_background_click(print_event)
# print custom mssg on click of an element or legend of Red Scatter
scatter_chart2.on_element_click(print_event)
scatter_chart2.on_legend_click(print_event)
line.on_element_click(print_event)
# Changing interaction from hover to click for tooltip
scatter_chart.interactions = {'click': 'tooltip'}
# Adding figure as tooltip
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(10)
y_data = np.random.randn(10)
lc = Lines(x=x_data, y=y_data, scales={'x': x_sc, 'y':y_sc})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
tooltip_fig = Figure(marks=[lc], axes=[ax_x, ax_y], layout=Layout(min_width='600px'))
scatter_chart.tooltip = tooltip_fig
"""
Explanation: Scatter Chart Interactions and Tooltips
End of explanation
"""
i = ImageIpy.from_file(os.path.abspath('../data_files/trees.jpg'))
bqi = Image(image=i, scales={'x': x_sc, 'y': y_sc}, x=(0, 10), y=(-1, 1))
fig_image = Figure(marks=[bqi], axes=[ax_x, ax_y])
fig_image
bqi.on_element_click(print_event)
"""
Explanation: Image
For images, on_element_click returns the location of the mouse click.
End of explanation
"""
# Adding default tooltip to Line Chart
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(100)
y_data = np.random.randn(3, 100)
def_tt = Tooltip(fields=['name', 'index'], formats=['', '.2f'], labels=['id', 'line_num'])
line_chart = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc},
tooltip=def_tt, display_legend=True, labels=["line 1", "line 2", "line 3"] )
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[line_chart], axes=[ax_x, ax_y])
# Adding call back to print event when legend or the line is clicked
line_chart.on_legend_click(print_event)
line_chart.on_element_click(print_event)
"""
Explanation: Line Chart
End of explanation
"""
# Adding interaction to select bar on click for Bar Chart
x_sc = OrdinalScale()
y_sc = LinearScale()
x_data = np.arange(10)
y_data = np.random.randn(2, 10)
bar_chart = Bars(x=x_data, y=[y_data[0, :].tolist(), y_data[1, :].tolist()], scales= {'x': x_sc, 'y': y_sc},
interactions={'click': 'select'},
selected_style={'stroke': 'orange', 'fill': 'red'},
labels=['Level 1', 'Level 2'],
display_legend=True)
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[bar_chart], axes=[ax_x, ax_y])
# Adding a tooltip on hover in addition to select on click
def_tt = Tooltip(fields=['x', 'y'], formats=['', '.2f'])
bar_chart.tooltip=def_tt
bar_chart.interactions = {
'legend_hover': 'highlight_axes',
'hover': 'tooltip',
'click': 'select',
}
# Changing tooltip to be on click
bar_chart.interactions = {'click': 'tooltip'}
# Call back on legend being clicked
bar_chart.type='grouped'
bar_chart.on_legend_click(print_event)
"""
Explanation: Bar Chart
End of explanation
"""
# Adding tooltip for Histogram
x_sc = LinearScale()
y_sc = LinearScale()
sample_data = np.random.randn(100)
def_tt = Tooltip(formats=['', '.2f'], fields=['count', 'midpoint'])
hist = Hist(sample=sample_data, scales= {'sample': x_sc, 'count': y_sc},
tooltip=def_tt, display_legend=True, labels=['Test Hist'], select_bars=True)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[hist], axes=[ax_x, ax_y])
# Changing tooltip to be displayed on click
hist.interactions = {'click': 'tooltip'}
# Changing tooltip to be on click of legend
hist.interactions = {'legend_click': 'tooltip'}
"""
Explanation: Histogram
End of explanation
"""
pie_data = np.abs(np.random.randn(10))
sc = ColorScale(scheme='Reds')
tooltip_widget = Tooltip(fields=['size', 'index', 'color'], formats=['0.2f', '', '0.2f'])
pie = Pie(sizes=pie_data, scales={'color': sc}, color=np.random.randn(10),
tooltip=tooltip_widget, interactions = {'click': 'tooltip'}, selected_style={'fill': 'red'})
pie.selected_style = {"opacity": "1", "stroke": "white", "stroke-width": "2"}
pie.unselected_style = {"opacity": "0.2"}
Figure(marks=[pie])
# Changing interaction to select on click and tooltip on hover
pie.interactions = {'click': 'select', 'hover': 'tooltip'}
"""
Explanation: Pie Chart
Set up a pie chart with click to show the tooltip.
End of explanation
"""
|
tbenthompson/tectosaur
|
examples/notebooks/fullspace_qd_run.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
import tectosaur.mesh.mesh_gen
import tectosaur as tct
import tectosaur.qd as qd
qd.configure(
gpu_idx = 0, # Which GPU to use if there are multiple. Best to leave as 0.
fast_plot = True, # Let's make fast, inexpensive figures. Set to false for higher resolution plots with latex fonts.
)
plt.style.use('default')
"""
Explanation: Quasidynamic earthquake simulation
In this example, we will simulate the evolution of a square fault in an elastic full space. The fault will be loaded by plate motion, and the relationship between slip rate and traction on the fault will be governed by rate-and-state friction.
First, let's import our tools!
End of explanation
"""
n_fault = 25
L = 1.0
W = 1.0
m = tectosaur.mesh.mesh_gen.make_rect(n_fault, n_fault, [
[-L, 0, W], [-L, 0, -W],
[L, 0, -W], [L, 0, W]
])
n_tris = m[1].shape[0]
"""
Explanation: Next, we'll create a square triangulated fault mesh:
End of explanation
"""
qd_cfg = dict(
# Material properties
sm = 3e10, # Shear modulus (Pa)
pr = 0.25, # Poisson ratio
density = 2670, # rock density (kg/m^3)
# Frictional properties
Dc = 0.000002, # state evolution length scale (m)
f0 = 0.6, # baseline coefficient of friction
V0 = 1e-6, # when V = V0, f = f0, V is (m/s)
a = np.ones(n_tris * 3) * 0.010, # The rate-state a parameter
b = np.ones(n_tris * 3) * 0.015, # The rate-state b parameter
# Boundary conditions
plate_rate = 1e-9, # (m/s), equivalent to ~31.5 mm/yr
# This is only necessary because this is a full space model and there's no concept of depth or gravity
additional_normal_stress = 50e6,
# numerical preferences
timestep_tol = 1e-4, # error tolerance for the RK45 time stepper
tectosaur_cfg = dict(
quad_coincident_order = 6, # Order of coincident quadrature
quad_edgeadj_order = 6, # Order of edge adjacent element quadrature
quad_vertadj_order = 6, # Order of vertex adjacent element quadrature
quad_near_order = 5, # Order of nearfield, but not touching, element quadrature
quad_near_threshold = 2.5, # The distance threshold for defining near vs farfield
quad_far_order = 2, # Order of farfield quadrature
quad_mass_order = 4, # Order of gaussian quadrature used for the mass operator
float_type = np.float32, # 32 bit or 64 bit floating point?
use_fmm = False, # Should we use the FMM for the farfield?
fmm_mac = 2.5, # The FMM P2P vs M2P distance criteria
pts_per_cell = 100, # The maximum number of points per cell in the FMM tree?
fmm_order = 2, # The order of the FMM solid harmonic approximation
log_level = 'INFO' # The logging level to display, use DEBUG for lots of info, INFO for less, and WARNING for almost none.
)
)
"""
Explanation: Now, let's define all the relevant material and numerical parameters needed:
End of explanation
"""
model = qd.FullspaceModel(m, qd_cfg)
"""
Explanation: The model object is the main implementation of the QD equations. We create a FullspaceModel. There is also a TopoModel for use when we want to have a free surface, possibly with non-planar topography.
End of explanation
"""
qd.print_length_scales(model)
"""
Explanation: There are two relevant length scales for QD simulations:
$h^*$ -- the minimum rupture patch length scale. It's good to know how this length scale compares to the size of the whole mesh. If the length scale is longer than the largest distance in the mesh, we do not expect to see ruptures in the model, and after some initial slow slip events the model should relax to a constant rate of creep despite the velocity-weakening friction parameters. On the other hand, if the length scale is smaller than the largest distance in the mesh, we should expect to see ruptures.
$L_b$ -- the cohesive zone length scale. It's nice to check before running a model to make sure that we are resolving the cohesive zone. For best accuracy, the mesh length scale should be less than half the cohesive zone. However, the results with a mesh length scale approximately equal to the cohesive zone length scale will be adequate. These constraints are looser than the requirements from previous methods (Lapusta and Liu 2009) because we are using a more accurate SGBEM implementation with sub-element linear variation in slip/traction.
End of explanation
"""
init_state = np.full(model.m.n_tris('fault') * 3, 0.7)
init_slip = np.zeros(model.m.n_tris('fault') * 9)
init_conditions = np.concatenate((init_slip, init_state))
"""
Explanation: We will use an initial state variable value of 0.7 everywhere on the fault and an initial slip of 0. As a result, the initial stress state is zero everywhere.
End of explanation
"""
print(init_conditions.shape, 3 * 4 * n_tris)
"""
Explanation: Next, we create the Integrator object. This object encapsulates the logic that runs the Runge-Kutta time stepper and stores the results to a folder. By default, every 100 time steps, a file with the latest results will be output to a folder prefixed with data. If data0 already exists, data1 will be used and so on. To create an Integrator, we provide a model object that describes the frictional and elastic behavior of the model we're dealing with. We also provide the initial conditions, a big vector consisting of all the slip degrees of freedom followed by all the frictional state degrees of freedom. Because there are three basis functions per triangle, and three components of slip and one component of state per basis function, we have 3 * 4 * n_tris total degrees of freedom.
End of explanation
"""
integrator = qd.Integrator(model, (0, init_conditions))
"""
Explanation: The most important part of the Integrator object are the two lists h_t and h_y. Each element in h_y is an array with the same shape as init_conditions containing the value of each degree of freedom at that time step.
End of explanation
"""
from IPython.display import clear_output
def display_fnc(integrator):
print(len(integrator.h_t))
print(integrator.h_t[-1])
if len(integrator.h_t) % 10 == 0:
clear_output(wait = True)
t = integrator.h_t[-1]
y = integrator.h_y[-1]
integrator.model.display(t, y)
integrator.integrate(
n_steps = 1000,
display_interval = 1,
display_fnc = display_fnc
)
"""
Explanation: Finally, we will start the simulation. We'll run for 1000 time steps and plot every 10th time step.
At each time step, many plots will be produced. The columns are the x, y and z components of each field. The rows are:
1. The slip field.
2. The velocity field.
3. The traction field.
4. The frictional state variable.
End of explanation
"""
import numpy as np
mesh, qd_cfg, init_conditions = np.load('data0/initial_data.npy')
print(mesh)
print(qd_cfg)
print(init_conditions)
"""
Explanation: Finally, a quick description of how the data is stored. Everything from a given simulation is stored in the same data# folder (for example, data0). First, there is a file, initial_data.npy, that stores a tuple consisting of (mesh, qd_cfg, init_conditions). This is essentially all the information needed to restart the simulation from the beginning.
End of explanation
"""
ts, ys = np.load('data0/300.npy')
print(len(ts))
print(len(ys), ys[0].shape)
print(ts[:5])
print(ys[:5])
"""
Explanation: Then, the time history and slip/state history are stored every 100 time steps in a file named after the last time step (e.g. data0/300.npy).
End of explanation
"""
|
tensorflow/tfx
|
docs/tutorials/model_analysis/tfma_basic.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
# Upgrade pip to the latest, and install TFMA.
!pip install -U pip
!pip install tensorflow-model-analysis
"""
Explanation: Note: You can run this example right now in a Jupyter-style notebook, no setup required! Just click "Run in Google Colab"
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/model_analysis/tfma_basic.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/blob/master/docs/tutorials/model_analysis/tfma_basic.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/model_analysis/tfma_basic.ipynb">
<img width=32px src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table></div>
TensorFlow Model Analysis
An Example of a Key Component of TensorFlow Extended (TFX)
TensorFlow Model Analysis (TFMA) is a library for performing model evaluation across different slices of data. TFMA performs its computations in a distributed manner over large amounts of data using Apache Beam.
This example colab notebook illustrates how TFMA can be used to investigate and visualize the performance of a model with respect to characteristics of the dataset. We'll use a model that we trained previously, and now you get to play with the results! The model we trained was for the Chicago Taxi Example, which uses the Taxi Trips dataset released by the City of Chicago. Explore the full dataset in the BigQuery UI.
As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about <a target='_blank' href='https://developers.google.com/machine-learning/fairness-overview/'>ML fairness</a>.
Note: In order to understand TFMA and how it works with Apache Beam, you'll need to know a little bit about Apache Beam itself. The <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/'>Beam Programming Guide</a> is a great place to start.
The columns in the dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
Install Jupyter Extensions
Note: If running in a local Jupyter notebook, then these Jupyter extensions must be installed in the environment before running Jupyter.
bash
jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter nbextension install --py --symlink tensorflow_model_analysis --sys-prefix
jupyter nbextension enable --py tensorflow_model_analysis --sys-prefix
Install TensorFlow Model Analysis (TFMA)
This will pull in all the dependencies, and will take a minute.
End of explanation
"""
# This setup was tested with TF 2.5 and TFMA 0.31 (using colab), but it should
# also work with the latest release.
import sys
# Confirm that we're using Python 3
assert sys.version_info.major==3, 'This notebook must be run using Python 3.'
import tensorflow as tf
print('TF version: {}'.format(tf.__version__))
import apache_beam as beam
print('Beam version: {}'.format(beam.__version__))
import tensorflow_model_analysis as tfma
print('TFMA version: {}'.format(tfma.__version__))
"""
Explanation: Now you must restart the runtime before running the cells below.
End of explanation
"""
# Download the tar file from GCP and extract it
import io, os, tempfile
TAR_NAME = 'saved_models-2.2'
BASE_DIR = tempfile.mkdtemp()
DATA_DIR = os.path.join(BASE_DIR, TAR_NAME, 'data')
MODELS_DIR = os.path.join(BASE_DIR, TAR_NAME, 'models')
SCHEMA = os.path.join(BASE_DIR, TAR_NAME, 'schema.pbtxt')
OUTPUT_DIR = os.path.join(BASE_DIR, 'output')
!curl -O https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/{TAR_NAME}.tar
!tar xf {TAR_NAME}.tar
!mv {TAR_NAME} {BASE_DIR}
!rm {TAR_NAME}.tar
print("Here's what we downloaded:")
!ls -R {BASE_DIR}
"""
Explanation: NOTE: The output above should be clear of errors before proceeding. Re-run the install if you are still seeing errors. Also, make sure to restart the runtime/kernel before moving to the next step.
Load The Files
We'll download a tar file that has everything we need. That includes:
Training and evaluation datasets
Data schema
Training and serving saved models (keras and estimator) and eval saved models (estimator).
End of explanation
"""
import tensorflow as tf
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow.core.example import example_pb2
schema = schema_pb2.Schema()
contents = file_io.read_file_to_string(SCHEMA)
schema = text_format.Parse(contents, schema)
"""
Explanation: Parse the Schema
Among the things we downloaded was a schema for our data that was created by TensorFlow Data Validation. Let's parse that now so that we can use it with TFMA.
End of explanation
"""
import csv
datafile = os.path.join(DATA_DIR, 'eval', 'data.csv')
reader = csv.DictReader(open(datafile, 'r'))
examples = []
for line in reader:
example = example_pb2.Example()
for feature in schema.feature:
key = feature.name
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = (
[float(line[key])] if len(line[key]) > 0 else [])
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = (
[int(line[key])] if len(line[key]) > 0 else [])
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = (
[line[key].encode('utf8')] if len(line[key]) > 0 else [])
# Add a new column 'big_tipper' that indicates if tips was > 20% of the fare.
# TODO(b/157064428): Remove after label transformation is supported for Keras.
big_tipper = float(line['tips']) > float(line['fare']) * 0.2
example.features.feature['big_tipper'].float_list.value[:] = [big_tipper]
examples.append(example)
tfrecord_file = os.path.join(BASE_DIR, 'train_data.rio')
with tf.io.TFRecordWriter(tfrecord_file) as writer:
for example in examples:
writer.write(example.SerializeToString())
!ls {tfrecord_file}
"""
Explanation: Use the Schema to Create TFRecords
We need to give TFMA access to our dataset, so let's create a TFRecords file. We can use our schema to create it, since it gives us the correct type for each feature.
End of explanation
"""
import tensorflow_model_analysis as tfma
# Setup tfma.EvalConfig settings
keras_eval_config = text_format.Parse("""
## Model information
model_specs {
# For keras (and serving models) we need to add a `label_key`.
label_key: "big_tipper"
}
## Post training metric information. These will be merged with any built-in
## metrics from training.
metrics_specs {
metrics { class_name: "ExampleCount" }
metrics { class_name: "BinaryAccuracy" }
metrics { class_name: "BinaryCrossentropy" }
metrics { class_name: "AUC" }
metrics { class_name: "AUCPrecisionRecall" }
metrics { class_name: "Precision" }
metrics { class_name: "Recall" }
metrics { class_name: "MeanLabel" }
metrics { class_name: "MeanPrediction" }
metrics { class_name: "Calibration" }
metrics { class_name: "CalibrationPlot" }
metrics { class_name: "ConfusionMatrixPlot" }
# ... add additional metrics and plots ...
}
## Slicing information
slicing_specs {} # overall slice
slicing_specs {
feature_keys: ["trip_start_hour"]
}
slicing_specs {
feature_keys: ["trip_start_day"]
}
slicing_specs {
feature_values: {
key: "trip_start_month"
value: "1"
}
}
slicing_specs {
feature_keys: ["trip_start_hour", "trip_start_day"]
}
""", tfma.EvalConfig())
# Create a tfma.EvalSharedModel that points at our keras model.
keras_model_path = os.path.join(MODELS_DIR, 'keras', '2')
keras_eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=keras_model_path,
eval_config=keras_eval_config)
keras_output_path = os.path.join(OUTPUT_DIR, 'keras')
# Run TFMA
keras_eval_result = tfma.run_model_analysis(
eval_shared_model=keras_eval_shared_model,
eval_config=keras_eval_config,
data_location=tfrecord_file,
output_path=keras_output_path)
"""
Explanation: Setup and Run TFMA
TFMA supports a number of different model types including TF keras models, models based on generic TF2 signature APIs, as well as TF estimator based models. The get_started guide has the full list of model types supported and any restrictions. For this example we are going to show how to configure a keras based model as well as an estimator based model that was saved as an EvalSavedModel. See the FAQ for examples of other configurations.
TFMA provides support for calculating metrics that were used at training time (i.e. built-in metrics) as well as metrics defined after the model was saved as part of the TFMA configuration settings. For our keras setup we will demonstrate adding our metrics and plots manually as part of our configuration (see the metrics guide for information on the metrics and plots that are supported). For the estimator setup we will use the built-in metrics that were saved with the model. Our setups also include a number of slicing specs which are discussed in more detail in the following sections.
After creating a tfma.EvalConfig and tfma.EvalSharedModel we can then run TFMA using tfma.run_model_analysis. This will create a tfma.EvalResult which we can use later for rendering our metrics and plots.
Keras
End of explanation
"""
import tensorflow_model_analysis as tfma
# Setup tfma.EvalConfig settings
estimator_eval_config = text_format.Parse("""
## Model information
model_specs {
# To use EvalSavedModel set `signature_name` to "eval".
signature_name: "eval"
}
## Post training metric information. These will be merged with any built-in
## metrics from training.
metrics_specs {
metrics { class_name: "ConfusionMatrixPlot" }
# ... add additional metrics and plots ...
}
## Slicing information
slicing_specs {} # overall slice
slicing_specs {
feature_keys: ["trip_start_hour"]
}
slicing_specs {
feature_keys: ["trip_start_day"]
}
slicing_specs {
feature_values: {
key: "trip_start_month"
value: "1"
}
}
slicing_specs {
feature_keys: ["trip_start_hour", "trip_start_day"]
}
""", tfma.EvalConfig())
# Create a tfma.EvalSharedModel that points at our eval saved model.
estimator_base_model_path = os.path.join(
MODELS_DIR, 'estimator', 'eval_model_dir')
estimator_model_path = os.path.join(
estimator_base_model_path, os.listdir(estimator_base_model_path)[0])
estimator_eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=estimator_model_path,
eval_config=estimator_eval_config)
estimator_output_path = os.path.join(OUTPUT_DIR, 'estimator')
# Run TFMA
estimator_eval_result = tfma.run_model_analysis(
eval_shared_model=estimator_eval_shared_model,
eval_config=estimator_eval_config,
data_location=tfrecord_file,
output_path=estimator_output_path)
"""
Explanation: Estimator
End of explanation
"""
eval_result = keras_eval_result
# eval_result = estimator_eval_result
"""
Explanation: Visualizing Metrics and Plots
Now that we've run the evaluation, let's take a look at our visualizations using TFMA. For the following examples, we will visualize the results from running the evaluation on the keras model. To view the estimator-based model, update eval_result to point at our estimator_eval_result variable.
End of explanation
"""
tfma.view.render_slicing_metrics(eval_result, slicing_column='trip_start_hour')
"""
Explanation: Rendering Metrics
To view metrics you use tfma.view.render_slicing_metrics
By default the views will display the Overall slice. To view a particular slice you can either use the name of the column (by setting slicing_column) or provide a tfma.SlicingSpec.
The metrics visualization supports the following interactions:
Click and drag to pan
Scroll to zoom
Right click to reset the view
Hover over the desired data point to see more details.
Select from four different types of views using the selections at the bottom.
For example, we'll be setting slicing_column to look at the trip_start_hour feature from our previous slicing_specs.
End of explanation
"""
tfma.view.render_slicing_metrics(eval_result, slicing_column='trip_start_day')
"""
Explanation: Slices Overview
The default visualization is the Slices Overview when the number of slices is small. It shows the values of metrics for each slice. Since we've selected trip_start_hour above, it's showing us metrics like accuracy and AUC for each hour, which allows us to look for issues that are specific to some hours and not others.
In the visualization above:
Try sorting the feature column, which is our trip_start_hour feature, by clicking on the column header
Try sorting by precision, and notice that the precision for some of the hours with examples is 0, which may indicate a problem
The chart also allows us to select and display different metrics in our slices.
Try selecting different metrics from the "Show" menu
Try selecting recall in the "Show" menu, and notice that the recall for some of the hours with examples is 0, which may indicate a problem
It is also possible to set a threshold to filter out slices with smaller numbers of examples, or "weights". You can type a minimum number of examples, or use the slider.
Metrics Histogram
This view also supports a Metrics Histogram as an alternative visualization, which is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Columns can be sorted by clicking on the column header. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can also be used to remove outliers in the visualization and the metrics tables. Click the gear icon to switch to a logarithmic scale instead of a linear scale.
Try selecting "Metrics Histogram" in the Visualization menu
More Slices
Our initial tfma.EvalConfig created a whole list of slicing_specs, which we can visualize by updating slice information passed to tfma.view.render_slicing_metrics. Here we'll select the trip_start_day slice (days of the week). Try changing the trip_start_day to trip_start_month and rendering again to examine different slices.
End of explanation
"""
tfma.view.render_slicing_metrics(
eval_result,
slicing_spec=tfma.SlicingSpec(
feature_keys=['trip_start_hour', 'trip_start_day']))
"""
Explanation: TFMA also supports creating feature crosses to analyze combinations of features. Our original settings created a cross trip_start_hour and trip_start_day:
End of explanation
"""
tfma.view.render_slicing_metrics(
eval_result,
slicing_spec=tfma.SlicingSpec(
feature_keys=['trip_start_day'], feature_values={'trip_start_hour': '12'}))
"""
Explanation: Crossing the two columns creates a lot of combinations! Let's narrow down our cross to only look at trips that start at noon. Then let's select binary_accuracy from the visualization:
End of explanation
"""
tfma.view.render_plot(
eval_result,
tfma.SlicingSpec(feature_values={'trip_start_hour': '1'}))
"""
Explanation: Rendering Plots
Any plots that were added to the tfma.EvalConfig as post training metric_specs can be displayed using tfma.view.render_plot.
As with metrics, plots can be viewed by slice. Unlike metrics, only plots for a particular slice value can be displayed, so the tfma.SlicingSpec must be used and it must specify both a slice feature name and value. If no slice is provided then the plots for the Overall slice are used.
In the example below we are displaying the CalibrationPlot and ConfusionMatrixPlot plots that were computed for the trip_start_hour:1 slice.
End of explanation
"""
# Note this re-uses the EvalConfig from the keras setup.
# Run eval on each saved model
output_paths = []
for i in range(3):
# Create a tfma.EvalSharedModel that points at our saved model.
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=os.path.join(MODELS_DIR, 'keras', str(i)),
eval_config=keras_eval_config)
output_path = os.path.join(OUTPUT_DIR, 'time_series', str(i))
output_paths.append(output_path)
# Run TFMA
tfma.run_model_analysis(eval_shared_model=eval_shared_model,
eval_config=keras_eval_config,
data_location=tfrecord_file,
output_path=output_path)
"""
Explanation: Tracking Model Performance Over Time
Your training dataset will be used for training your model, and will hopefully be representative of your test dataset and the data that will be sent to your model in production. However, while the data in inference requests may remain the same as your training data, in many cases it will start to change enough so that the performance of your model will change.
That means that you need to monitor and measure your model's performance on an ongoing basis, so that you can be aware of and react to changes. Let's take a look at how TFMA can help.
Let's load 3 different model runs and use TFMA to see how they compare using render_time_series.
End of explanation
"""
eval_results_from_disk = tfma.load_eval_results(output_paths[:2])
tfma.view.render_time_series(eval_results_from_disk)
"""
Explanation: First, we'll imagine that we've trained and deployed our model yesterday, and now we want to see how it's doing on the new data coming in today. The visualization will start by displaying AUC. From the UI you can:
Add other metrics using the "Add metric series" menu.
Close unwanted graphs by clicking on x
Hover over data points (the ends of line segments in the graph) to get more details
Note: In the metric series charts the X axis is the model directory name of the model run that you're examining. These names themselves are not meaningful.
End of explanation
"""
eval_results_from_disk = tfma.load_eval_results(output_paths)
tfma.view.render_time_series(eval_results_from_disk)
"""
Explanation: Now we'll imagine that another day has passed and we want to see how it's doing on the new data coming in today, compared to the previous two days:
End of explanation
"""
# Setup tfma.EvalConfig setting
eval_config_with_thresholds = text_format.Parse("""
## Model information
model_specs {
name: "candidate"
# For keras we need to add a `label_key`.
label_key: "big_tipper"
}
model_specs {
name: "baseline"
# For keras we need to add a `label_key`.
label_key: "big_tipper"
is_baseline: true
}
## Post training metric information
metrics_specs {
metrics { class_name: "ExampleCount" }
metrics { class_name: "BinaryAccuracy" }
metrics { class_name: "BinaryCrossentropy" }
metrics {
class_name: "AUC"
threshold {
# Ensure that AUC is always > 0.9
value_threshold {
lower_bound { value: 0.9 }
}
# Ensure that AUC does not drop by more than a small epsilon
# e.g. (candidate - baseline) > -1e-10 or candidate > baseline - 1e-10
change_threshold {
direction: HIGHER_IS_BETTER
absolute { value: -1e-10 }
}
}
}
metrics { class_name: "AUCPrecisionRecall" }
metrics { class_name: "Precision" }
metrics { class_name: "Recall" }
metrics { class_name: "MeanLabel" }
metrics { class_name: "MeanPrediction" }
metrics { class_name: "Calibration" }
metrics { class_name: "CalibrationPlot" }
metrics { class_name: "ConfusionMatrixPlot" }
# ... add additional metrics and plots ...
}
## Slicing information
slicing_specs {} # overall slice
slicing_specs {
feature_keys: ["trip_start_hour"]
}
slicing_specs {
feature_keys: ["trip_start_day"]
}
slicing_specs {
feature_keys: ["trip_start_month"]
}
slicing_specs {
feature_keys: ["trip_start_hour", "trip_start_day"]
}
""", tfma.EvalConfig())
# Create tfma.EvalSharedModels that point at our keras models.
candidate_model_path = os.path.join(MODELS_DIR, 'keras', '2')
baseline_model_path = os.path.join(MODELS_DIR, 'keras', '1')
eval_shared_models = [
tfma.default_eval_shared_model(
model_name=tfma.CANDIDATE_KEY,
eval_saved_model_path=candidate_model_path,
eval_config=eval_config_with_thresholds),
tfma.default_eval_shared_model(
model_name=tfma.BASELINE_KEY,
eval_saved_model_path=baseline_model_path,
eval_config=eval_config_with_thresholds),
]
validation_output_path = os.path.join(OUTPUT_DIR, 'validation')
# Run TFMA
eval_result_with_validation = tfma.run_model_analysis(
eval_shared_models,
eval_config=eval_config_with_thresholds,
data_location=tfrecord_file,
output_path=validation_output_path)
"""
Explanation: Model Validation
TFMA can be configured to evaluate multiple models at the same time. Typically this is done to compare a new model against a baseline (such as the currently serving model) to determine what the performance differences in metrics (e.g. AUC, etc.) are relative to the baseline. When thresholds are configured, TFMA will produce a tfma.ValidationResult record indicating whether the performance matches expectations.
Let's re-configure our keras evaluation to compare two models: a candidate and a baseline. We will also validate the candidate's performance against the baseline by setting a tfma.MetricThreshold on the AUC metric.
End of explanation
"""
tfma.view.render_time_series(eval_result_with_validation)
"""
Explanation: When running evaluations with one or more models against a baseline, TFMA automatically adds diff metrics for all the metrics computed during the evaluation. These metrics are named after the corresponding metric but with _diff appended to the metric name.
Let's take a look at the metrics produced by our run:
End of explanation
"""
validation_result = tfma.load_validation_result(validation_output_path)
print(validation_result.validation_ok)
"""
Explanation: Now let's look at the output from our validation checks. To view the validation results we use tfma.load_validation_result. For our example, the validation fails because AUC is below the threshold.
End of explanation
"""
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright © 2020 The TensorFlow Authors.
End of explanation
"""
|
samuelsinayoko/kaggle-housing-prices
|
prepare_data.ipynb
|
mit
|
from scipy.stats.mstats import mode
import pandas as pd
import numpy as np
import time
from sklearn.preprocessing import LabelEncoder
"""
Read Data
"""
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
target = train['SalePrice']
train = train.drop(['SalePrice'],axis=1)
trainlen = train.shape[0]
"""
Explanation: Clean the data
First, import packages for data cleaning and read the data
End of explanation
"""
df1 = train.head()
df2 = test.head()
pd.concat([df1, df2], axis=0, ignore_index=True)
alldata = pd.concat([train, test], axis=0, join='outer', ignore_index=True)
alldata = alldata.drop(['Id','Utilities'], axis=1)
alldata.dtypes
"""
Explanation: Combine the train and test set for cleaning
End of explanation
"""
alldata.ix[:,(alldata.dtypes=='int64') & (alldata.columns != 'MSSubClass')]=alldata.ix[:,(alldata.dtypes=='int64') & (alldata.columns!='MSSubClass')].astype('float64')
alldata.head(20)
"""
Explanation: Convert all ints to floats for XGBoost
End of explanation
"""
train = alldata.ix[0:trainlen-1, :]
test = alldata.ix[trainlen:alldata.shape[0],:]
test.to_csv('data/test_prepared_light.csv', index=False)
train.to_csv('data/train_prepared_light.csv', index=False)
"""
Explanation: Save lightly prepared data (no encoding)
End of explanation
"""
fMedlist=['LotFrontage']
fArealist=['MasVnrArea','TotalBsmtSF','BsmtFinSF1','BsmtFinSF2','BsmtUnfSF','BsmtFullBath', 'BsmtHalfBath','MasVnrArea','Fireplaces','GarageArea','GarageYrBlt','GarageCars']
for i in fArealist:
alldata.ix[pd.isnull(alldata.ix[:,i]),i] = 0
for i in fMedlist:
alldata.ix[pd.isnull(alldata.ix[:,i]),i] = np.nanmedian(alldata.ix[:,i])
"""
Explanation: Deal with the NA values in the variables: some are set to 0 and some to the median, based on the descriptions in the accompanying text file
End of explanation
"""
alldata.head(20)
"""
Explanation: Transforming Data
Use integers to encode categorical data.
End of explanation
"""
le = LabelEncoder()
nacount_category = np.array(alldata.columns[((alldata.dtypes=='int64') | (alldata.dtypes=='object')) & (pd.isnull(alldata).sum()>0)])
category = np.array(alldata.columns[((alldata.dtypes=='int64') | (alldata.dtypes=='object'))])
Bsmtset = set(['BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2'])
MasVnrset = set(['MasVnrType'])
Garageset = set(['GarageType','GarageYrBlt','GarageFinish','GarageQual','GarageCond'])
Fireplaceset = set(['FireplaceQu'])
Poolset = set(['PoolQC'])
NAset = set(['Fence','MiscFeature','Alley'])
# Put 0 and null values in the same category
for i in nacount_category:
if i in Bsmtset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['TotalBsmtSF']==0), i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]), i] = alldata.ix[:,i].value_counts().index[0]
elif i in MasVnrset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['MasVnrArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Garageset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['GarageArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Fireplaceset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['Fireplaces']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in Poolset:
alldata.ix[pd.isnull(alldata.ix[:,i]) & (alldata['PoolArea']==0),i]='Empty'
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
elif i in NAset:
alldata.ix[pd.isnull(alldata.ix[:,i]),i]='Empty'
else:
alldata.ix[pd.isnull(alldata.ix[:,i]),i]=alldata.ix[:,i].value_counts().index[0]
for i in category:
alldata.ix[:,i]=le.fit_transform(alldata.ix[:,i])
train = alldata.ix[0:trainlen-1, :]
test = alldata.ix[trainlen:alldata.shape[0],:]
alldata.head()
"""
Explanation: Encode categorical data
End of explanation
"""
train.to_csv('data/train_prepared.csv')
test.to_csv('data/test_prepared.csv')
train.head()
target.to_csv('data/train_target.csv', header='SalePrice', index=False)
"""
Explanation: Export data
End of explanation
"""
|
rsignell-usgs/notebook
|
WMS/wms_sample.ipynb
|
mit
|
%matplotlib inline
from owslib.wms import WebMapService
#We just need a WMS url from one TDS dataset...
serverurl ='http://thredds.ucar.edu/thredds/wms/grib/NCEP/NAM/CONUS_12km/best'
wms = WebMapService( serverurl, version='1.1.1')
"""
Explanation: Exploring Web Map Service (WMS)
WMS and OWSLib
Getting some information about the service
Getting the basic information we need to perform a GetMap request
More on GetMap request
TDS-ncWMS styles and extensions
WMS and basemap
1. WMS and OWSLib
WMS is the Open Geospatial Consortium (OGC) standard interface for requesting georeferenced images through HTTP.
OWSLib is part of geopython, a GitHub organization comprised of Python projects related to geospatial.
OWSLib is a Python package for client programming with OGC Web Services (OWS) developed by Tom Kralidis.
OWSLib supports several OGC standards: WFS, WCS, SOS...and of course WMS 1.1.1. More.
It does not come installed with Canopy but is available in the community packages.
Installation with enpkg:
enpkg OWSLib
current version (07/09/2013) --> 0.4.0-1
2. Getting some information about the service
We will use OWSLib package and in particular the owslib.wms module.
Within the TDS context, if WMS is enabled and set up in the catalogs, each dataset has a WMS url.
End of explanation
"""
#This is general information, common to all datasets in a TDS server
operations =[ op.name for op in wms.operations ]
print 'Available operations: '
print operations
print 'General information (common to all datasets):'
print wms.identification.type
print wms.identification.abstract
print wms.identification.keywords
print wms.identification.version
print wms.identification.title
"""
Explanation: The WebMapService object gets all the information available about the service through a GetCapabilities request:
End of explanation
"""
#Listing all available layers...
layers = list(wms.contents)
for l in layers:
print 'Layer title: '+wms[l].title +', name:'+wms[l].name
"""
Explanation: Bounding boxes, styles and dimensions are specific to each layer.
Each variable in a dataset translates into a layer in the WMS service.
Besides, the server creates virtual layers if it finds vector components in CF-1 or GRIB conventions.
End of explanation
"""
#Values common to all GetMap requests: formats and http methods:
print wms.getOperationByName('GetMap').formatOptions
print wms.getOperationByName('GetMap').methods
#Let's choose: 'wind @ Isobaric surface' (the value in the parameter must be name of the layer)
wind = wms['wind @ Isobaric surface']
#What is its bounding box?
print wind.boundingBox
#available CRS
print wind.crsOptions
# --> NOT ALL THE AVAILABLE CRS OPTIONS ARE LISTED
#Function that saves the layer as an image
def saveLayerAsImage(layer, inname):
out = open(inname, 'wb')
out.write(layer.read())
out.close()
#let's get the image...
img_wind = wms.getmap( layers=[wind.name], #only takes one layer
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(512, 512),
format='image/png'
)
#Save it..
saveLayerAsImage(img_wind, 'test_wind.png')
#Display the image we've just saved...
from IPython.core.display import Image
Image(filename='test_wind.png')
"""
Explanation: 3. Getting the basic information we need to perform a GetMap request
All the information clients need is available in the capabilities document, which is stored in the WebMapService object.
TDS-WMS only supports GetMap requests on one layer (variable).
We need to choose our layer, bounding box, spatial reference system (SRS), size and format of the image.
End of explanation
"""
#Times are available in the timepositions property of the layer
times= [time.strip() for time in wind.timepositions]
print times
#We can choose any of the available times and make a request for it with the parameter time
#If no time is provided the default in TDS is the closest available time to the current time
img_wind = wms.getmap( layers=[wind.name],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[len(times)-1]
)
saveLayerAsImage(img_wind, 'test_wind.png')
Image(filename='test_wind.png')
#We can also specify a time interval to get an animated gif
#Format must be image/gif
img_wind = wms.getmap( layers=[wind.name],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/gif',
time= times[len(times)-4]+'/'+times[len(times)-1]
)
#Image(url='http://python.org/images/python-logo.gif')
#saveLayerAsImage(img_wind, 'test_anim_wind.gif')
Image(url=img_wind.url)
"""
Explanation: 4. More on GetMap requests
Handling time and vertical dimensions
Changing styles
Changing the spatial reference system (SRS)
Handling time and vertical dimensions
Getting available times for a layer:
End of explanation
"""
#Next version of OWSLib will support this...
#elevations = [el.strip() for el in wind.elevations]
#print elevations
#In the meantime...
def find_elevations_for_layer(wms, layer_name):
"""
parses the wms capabilities document searching
the elevation dimension for the layer
"""
#Get all the layers
levels =None;
layers = wms._capabilities.findall(".//Layer")
layer_tag = None
for el in layers:
name = el.find("Name")
if name is not None and name.text.strip() == layer_name:
layer_tag = el
break
if layer_tag is not None:
elevation_tag = layer_tag.find("Extent[@name='elevation']")
if elevation_tag is not None:
levels = elevation_tag.text.strip().split(',')
return levels;
elevations = find_elevations_for_layer(wms, wind.name)
print elevations
#now we can change our vertical level with the parameter elevation
#If no elevation parameter is provided the default is the first vertical level in the dimension.
img_wind = wms.getmap( layers=['wind @ Isobaric surface'], #only takes one layer
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0],
elevation=elevations[len(elevations)-1 ]
)
saveLayerAsImage(img_wind, 'test_wind.png')
Image(filename='test_wind.png')
"""
Explanation: Getting the available vertical levels:
OWSLib does not support vertical levels, meaning the layer objects do not have a property "elevations" with the vertical levels. So, we need a little extra work to get the available vertical levels for a layer
End of explanation
"""
#available styles:
#print wind.styles
#Change the style of our layer
img_wind = wms.getmap( layers=[wind.name], #only takes one layer
styles=['barb/rainbow'], #one style per layer
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0]
)
saveLayerAsImage(img_wind, 'test_wind_barb.png')
Image(filename='test_wind_barb.png')
"""
Explanation: Changing styles
We can specify the style (any from the available styles for a layer) in the param styles
End of explanation
"""
#Reproject the bounding box to a global mercator (EPSG:3875, projection used by Google Maps, OSM...) using pyproj
from mpl_toolkits.basemap import pyproj
epsg = '3857'
psproj = pyproj.Proj(init="epsg:%s" % epsg)
xmin, ymin = psproj(wind.boundingBox[0], wind.boundingBox[1])
xmax, ymax = psproj(wind.boundingBox[2], wind.boundingBox[3])
img_wind = wms.getmap( layers=[wind.name],
srs='EPSG:'+ epsg,
bbox=(xmin, ymin, xmax, ymax),
size=(600, 600),
format='image/png',
time= times[0]
)
saveLayerAsImage(img_wind, 'test_wind_3857.png')
Image(filename='test_wind_3857.png')
"""
Explanation: Changing the spatial reference system (SRS)
We can reproject to any of the available SRS.
End of explanation
"""
temp =wms['Temperature_isobaric']
img_temp = wms.getmap( layers=[temp.name],
styles=['boxfill/rainbow'],
srs='EPSG:4326',
bbox=(temp.boundingBox[0],temp.boundingBox[1], temp.boundingBox[2], temp.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0]
)
saveLayerAsImage(img_temp, 'test_temp.png')
Image(filename='test_temp.png')
"""
Explanation: Cool, we already know how to make GetMap requests. Let's change our layer...
End of explanation
"""
img_temp = wms.getmap( layers=[temp.name],
styles=['boxfill/rainbow'],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0],
colorscalerange='250,320'
)
saveLayerAsImage(img_temp, 'test_temp.png')
Image(filename='test_temp.png')
"""
Explanation: ...well not that cool.
5. TDS-ncWMS styles and extensions
ncWMS/THREDDS provides some non-standard WMS parameters that allow clients some control on the styling.
Change the scale range:
Default is -50,50. Parameter colorscalerange allows us to use a different scale
End of explanation
"""
colorscalerange='290,310'
img_temp = wms.getmap( layers=[temp.name],
styles=['boxfill/rainbow'],
srs='EPSG:4326',
bbox=(wind.boundingBox[0],wind.boundingBox[1], wind.boundingBox[2], wind.boundingBox[3]),
size=(600, 600),
format='image/png',
time= times[0],
colorscalerange=colorscalerange,
abovemaxcolor='transparent',
belowmincolor='transparent'
)
saveLayerAsImage(img_temp, 'test_temp.png')
Image(filename='test_temp.png')
"""
Explanation: The abovemaxcolor and belowmincolor params give us control over how values outside the color scale range are displayed.
Valid values for these params are: extend (use the highest/lowest value of the palette for values larger/smaller than the maximum/minimum), transparent, and a color in 0xRRGGBB format.
End of explanation
"""
params ={'request': 'GetLegendGraphic',
'colorbaronly':'False', #want the text in the legend
'layer':temp.name,
'colorscalerange':colorscalerange}
legendUrl=serverurl+'?REQUEST={request:s}&COLORBARONLY={colorbaronly:s}&LAYER={layer:s}&COLORSCALERANGE={colorscalerange:s}'.format(**params)
Image(url=legendUrl)
"""
Explanation: The GetLegendGraphic request gives us a legend for the map, but the request is not supported by OWSLib.
End of explanation
"""
import os
import urllib2
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnnotationBbox, OffsetImage
from matplotlib._png import read_png
m = Basemap(llcrnrlon=temp.boundingBox[0], llcrnrlat=temp.boundingBox[1],
urcrnrlon=temp.boundingBox[2], urcrnrlat=temp.boundingBox[3]+5.0,
resolution='l',epsg=4326)
plt.figure(1, figsize=(16,12))
plt.title(temp.title +' '+times[0] )
m.wmsimage(serverurl,xpixels=600, ypixels=600, verbose=False,
layers=[temp.name],
styles=['boxfill/rainbow'],
time= times[0],
colorscalerange=colorscalerange,
abovemaxcolor='extend',
belowmincolor='transparent'
)
m.drawcoastlines(linewidth=0.25)
#Annotating the map with the legend
#Save the legend as image
cwd = os.getcwd()
legend = urllib2.urlopen(legendUrl)
saveLayerAsImage(legend, 'legend_temp.png')
#read the image as an array
arr = read_png('legend_temp.png')
imagebox = OffsetImage(arr, zoom=0.7)
xy =[ temp.boundingBox[2], temp.boundingBox[1] ]
#Gets the current axis
ax = plt.gca()
#Creates the annotation
ab = AnnotationBbox(imagebox, xy,
xybox=(-46.,100.),
xycoords='data',
boxcoords="offset points",
pad=0.)
#Adds the legend image as an AnnotationBbox to the map
ax.add_artist(ab)
plt.show()
"""
Explanation: 6. WMS and basemap
We can use basemap to overlay the layer with a coastline...
End of explanation
"""
|
karlstroetmann/Formal-Languages
|
Python/Top-Down-Parser.ipynb
|
gpl-2.0
|
import re
"""
Explanation: A Recursive Parser for Arithmetic Expressions
In this notebook we implement a simple recursive descent parser for arithmetic expressions.
This parser will implement the following grammar:
$$
\begin{eqnarray}
\mathrm{expr} & \rightarrow & \mathrm{product}\;\;\mathrm{exprRest} \\[0.2cm]
\mathrm{exprRest} & \rightarrow & \texttt{'+'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid        & \texttt{'-'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid        & \varepsilon \\[0.2cm]
\mathrm{product} & \rightarrow & \mathrm{factor}\;\;\mathrm{productRest} \\[0.2cm]
\mathrm{productRest} & \rightarrow & \texttt{'*'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \texttt{'/'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \varepsilon \\[0.2cm]
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
                & \mid        & \texttt{NUMBER}
\end{eqnarray}
$$
Implementing a Scanner
We implement a scanner with the help of the module re.
End of explanation
"""
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
lexSpec = r'''([ \t]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ number ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
else:
result += [ f'ERROR({error})']
return result
tokenize('1 + (2 + @ 34 - 2**0)/7')
"""
Explanation: The function tokenize receives a string s as argument and returns a list of tokens.
The string s is supposed to represent an arithmetical expression.
Note:
1. We need to set the flag re.VERBOSE in our call of the function findall
below because otherwise we are not able to format the regular expression lexSpec the way
we have done it.
2. The regular expression lexSpec contains 5 parenthesized groups. Therefore,
findall returns a list of 5-tuples where the 5 components correspond to the 5
groups of the regular expression.
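As a small illustration (not part of the parser itself), this is roughly what those 5-tuples look like for a short input:
```python
import re

spec = r'''([ \t]+)        | # blanks and tabs
           ([1-9][0-9]*|0) | # number
           ([()])          | # parentheses
           ([-+*/])        | # arithmetical operators
           (.)               # unrecognized character
        '''
# Every match is a 5-tuple; only the group that actually matched is non-empty.
print(re.findall(spec, '1+2', re.VERBOSE))
# [('', '1', '', '', ''), ('', '', '', '+', ''), ('', '2', '', '', '')]
```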
End of explanation
"""
def parse(s):
TL = tokenize(s)
result, Rest = parseExpr(TL)
assert Rest == [], f'Parse Error: could not parse {TL}'
return result
"""
Explanation: Implementing the Recursive Descent Parser
The function parse takes a string s as input and parses this string according to the recursive grammar
shown above. The function returns the floating point number that results from evaluating the expression given in s.
End of explanation
"""
def parseExpr(TL):
product, Rest = parseProduct(TL)
return parseExprRest(product, Rest)
"""
Explanation: The function parseExpr implements the following grammar rule:
$$ \mathrm{expr} \rightarrow \;\mathrm{product}\;\;\mathrm{exprRest} $$
It takes a token list TL as its input and returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed during the parse process.
End of explanation
"""
def parseExprRest(Sum, TL):
if TL == []:
return Sum, []
elif TL[0] == '+':
product, Rest = parseProduct(TL[1:])
return parseExprRest(Sum + product, Rest)
elif TL[0] == '-':
product, Rest = parseProduct(TL[1:])
return parseExprRest(Sum - product, Rest)
else:
return Sum, TL
"""
Explanation: The function parseExprRest implements the following grammar rules:
$$
\begin{eqnarray}
\mathrm{exprRest} & \rightarrow & \texttt{'+'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid        & \texttt{'-'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid        & \varepsilon
\end{eqnarray}
$$
It takes two arguments:
- Sum is the value that has already been parsed,
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed during the parse process.
End of explanation
"""
def parseProduct(TL):
factor, Rest = parseFactor(TL)
return parseProductRest(factor, Rest)
"""
Explanation: The function parseProduct implements the following grammar rule:
$$ \mathrm{product} \rightarrow \;\mathrm{factor}\;\;\mathrm{productRest} $$
It takes one argument:
- TL is the list of tokens that need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse a product.
End of explanation
"""
def parseProductRest(product, TL):
if TL == []:
return product, []
elif TL[0] == '*':
factor, Rest = parseFactor(TL[1:])
return parseProductRest(product * factor, Rest)
elif TL[0] == '/':
factor, Rest = parseFactor(TL[1:])
return parseProductRest(product / factor, Rest)
else:
return product, TL
"""
Explanation: The function parseProductRest implements the following grammar rules:
$$
\begin{eqnarray}
\mathrm{productRest} & \rightarrow & \texttt{'*'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \texttt{'/'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \varepsilon
\end{eqnarray}
$$
It takes two arguments:
- product is the value that has already been parsed,
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse the rest of a product.
End of explanation
"""
def parseFactor(TL):
if TL[0] == '(':
expr, Rest = parseExpr(TL[1:])
assert Rest[0] == ')', 'Parse Error: expected ")"'
return expr, Rest[1:]
else:
return float(TL[0]), TL[1:]
"""
Explanation: The function parseFactor implements the following grammar rules:
$$
\begin{eqnarray}
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
                & \mid        & \texttt{NUMBER}
\end{eqnarray}
$$
It takes one argument:
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse a factor.
End of explanation
"""
def test(s):
r1 = parse(s)
r2 = eval(s)
assert r1 == r2
return r1
test('11+22*(33-44)/(5-10*5/(4-3))')
test('0*11+22*(33-44)/(5-10*5/(4-3))')
"""
Explanation: Testing
End of explanation
"""
|
parkerzf/kaggle-expedia
|
notebooks/time_based_anlaysis.ipynb
|
bsd-3-clause
|
daily_stats[['count_click', 'count_booking_train', 'count_booking_test']].sum()/1000
print 'booking ratio for train set: ', daily_stats.count_booking_train.sum() * 1.0 \
/ (daily_stats.count_click.sum() + daily_stats.count_booking_train.sum())
print 'daily booking in train set: ', daily_stats.count_booking_train.sum() * 1.0 \
/ len(daily_stats[daily_stats.count_booking_train != 0])
print 'daily click in train set: ', daily_stats.count_click.sum() * 1.0 \
/ len(daily_stats[daily_stats.count_click != 0])
print 'daily booking in test set: ', daily_stats.count_booking_test.sum() * 1.0 \
/ len(daily_stats[daily_stats.count_booking_test != 0])
"""
Explanation: Overall Summary
End of explanation
"""
monthly_number_stats_booking_train = (daily_stats.groupby(("year", "month"))["count_booking_train"].sum()/1000)
monthly_number_stats_click_train = (daily_stats.groupby(("year", "month"))["count_click"].sum()/1000)
monthly_number_stats_booking_test = (daily_stats.groupby(("year", "month"))["count_booking_test"].sum()/1000)
fig = monthly_number_stats_booking_train.plot(kind='bar', alpha=0.5, figsize=(14, 8))
monthly_number_stats_click_train.plot(kind='bar', alpha=0.3, color = 'r', figsize=(14, 8))
monthly_number_stats_booking_test.plot(kind='bar', alpha=0.5, color = 'y', figsize=(14, 8))
fig.legend()
fig.set_title("Total Booking per Month")
fig.set_ylabel("Thousands of Bookings/Clicks")
fig.set_xlabel("(Year , Month)" )
"""
Explanation: Monthly stats
End of explanation
"""
import locale, calendar
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.tight_layout()
fig.set_size_inches(18.5,5.5)
dow = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats.index.dayofweek)
dow_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))
sns.boxplot(daily_stats.count_booking/1000, groupby=dow, order=dow_order, ax=axes[0])
axes[0].set_title("Total number of bookings by Week day")
axes[0].set_ylabel("Nubmer of bookings (Thousands)")
dow_clicks = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats[daily_stats.count_click!=0].index.dayofweek)
dow_clicks_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))
sns.boxplot(daily_stats[daily_stats.count_click!=0].count_click/1000., groupby=dow_clicks, order=dow_clicks_order, ax=axes[1])
axes[1].set_title("Total number of clicks by Week day")
axes[1].set_ylabel("Nubmer of clicks (Thousands)")
"""
Explanation: Daily stats -- weekdays
End of explanation
"""
table = 'public.srch_ci_daily_stats'
daily_stats_ci = get_dataframe(
'''select * from %s where year between 2013 and 2016''' % table
)
daily_stats_ci.index = pd.to_datetime(daily_stats_ci.year*10000 + daily_stats_ci.month*100 + daily_stats_ci.day, format='%Y%m%d')
table = 'public.srch_co_daily_stats'
daily_stats_co = get_dataframe(
'''select * from %s where year between 2013 and 2016''' % table
)
daily_stats_co.index = pd.to_datetime(daily_stats_co.year*10000 + daily_stats_co.month*100 + daily_stats_co.day, format='%Y%m%d')
monthly_number_stats_ci_booking_train = (daily_stats_ci.groupby(("year", "month"))["count_booking_train"].sum()/1000)
monthly_number_stats_ci_click_train = (daily_stats_ci.groupby(("year", "month"))["count_click"].sum()/1000)
monthly_number_stats_ci_booking_test = (daily_stats_ci.groupby(("year", "month"))["count_booking_test"].sum()/1000)
monthly_number_stats_co_booking_train = (daily_stats_co.groupby(("year", "month"))["count_booking_train"].sum()/1000)
monthly_number_stats_co_click_train = (daily_stats_co.groupby(("year", "month"))["count_click"].sum()/1000)
monthly_number_stats_co_booking_test = (daily_stats_co.groupby(("year", "month"))["count_booking_test"].sum()/1000)
fig = monthly_number_stats_ci_booking_train.plot(kind='bar', alpha=0.5, figsize=(14, 8))
monthly_number_stats_ci_click_train.plot(kind='bar', alpha=0.3, color = 'r', figsize=(14, 8))
monthly_number_stats_ci_booking_test.plot(kind='bar', alpha=0.5, color = 'y', figsize=(14, 8))
fig.legend()
fig.set_title("Total Booking per Month (Checkin)")
fig.set_ylabel("Thousands of Bookings/Clicks")
fig.set_xlabel("(Year , Month)" )
fig = monthly_number_stats_co_booking_train.plot(kind='bar', alpha=0.5, figsize=(14, 8))
monthly_number_stats_co_click_train.plot(kind='bar', alpha=0.3, color = 'r', figsize=(14, 8))
monthly_number_stats_co_booking_test.plot(kind='bar', alpha=0.5, color = 'y', figsize=(14, 8))
fig.legend()
fig.set_title("Total Booking per Month (Checkout)")
fig.set_ylabel("Thousands of Bookings/Clicks")
fig.set_xlabel("(Year , Month)" )
"""
Explanation: There is a weekly pattern in booking time: high from Monday to Thursday, lower on Friday and over the weekend.
Monthly stats (Checkin and Checkout)
End of explanation
"""
import locale, calendar
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.tight_layout()
fig.set_size_inches(18.5,5.5)
dow = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_ci.index.dayofweek)
dow_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))
sns.boxplot(daily_stats_ci.count_booking/1000, groupby=dow, order=dow_order, ax=axes[0])
axes[0].set_title("Total number of bookings by Week day (Checkin)")
axes[0].set_ylabel("Nubmer of bookings (Thousands)")
dow_clicks = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_ci[daily_stats_ci.count_click!=0].index.dayofweek)
dow_clicks_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))
sns.boxplot(daily_stats_ci[daily_stats_ci.count_click!=0].count_click/1000., groupby=dow_clicks, order=dow_clicks_order, ax=axes[1])
axes[1].set_title("Total number of clicks by Week day(Checkin)")
axes[1].set_ylabel("Nubmer of clicks (Thousands)")
import locale, calendar
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.tight_layout()
fig.set_size_inches(18.5,5.5)
dow = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_co.index.dayofweek)
dow_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))
sns.boxplot(daily_stats_co.count_booking/1000, groupby=dow, order=dow_order, ax=axes[0])
axes[0].set_title("Total number of bookings by Week day (Checkout)")
axes[0].set_ylabel("Nubmer of bookings (Thousands)")
dow_clicks = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_co[daily_stats_co.count_click!=0].index.dayofweek)
dow_clicks_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))
sns.boxplot(daily_stats_co[daily_stats_co.count_click!=0].count_click/1000., groupby=dow_clicks, order=dow_clicks_order, ax=axes[1])
axes[1].set_title("Total number of clicks by Week day(Checkout)")
axes[1].set_ylabel("Nubmer of clicks (Thousands)")
"""
Explanation: Daily stats -- weekdays (Checkin and Checkout)
End of explanation
"""
|
xesscorp/myhdlpeek
|
examples/peeker_options.ipynb
|
mit
|
from myhdl import *
from myhdlpeek import Peeker
def adder_bit(a, b, c_in, sum_, c_out):
'''Single bit adder.'''
@always_comb
def adder_logic():
sum_.next = a ^ b ^ c_in
c_out.next = (a & b) | (a & c_in) | (b & c_in)
# Add some peekers to monitor the inputs and outputs.
Peeker(a, 'a')
Peeker(b, 'b')
Peeker(c_in, 'c_in')
Peeker(sum_, 'sum')
Peeker(c_out, 'c_out')
return adder_logic
def adder(a, b, sum_):
'''Connect single-bit adders to create a complete adder.'''
c = [Signal(bool(0)) for _ in range(len(a)+1)] # Carry signals between stages.
s = [Signal(bool(0)) for _ in range(len(a))] # Sum bit for each stage.
stages = [] # Storage for adder bit instances.
# Create the adder bits and connect them together.
for i in range(len(a)):
stages.append( adder_bit(a=a(i), b=b(i), sum_=s[i], c_in=c[i], c_out=c[i+1]) )
# Concatenate the sum bits and send them out on the sum_ output.
@always_comb
def make_sum():
sum_.next = ConcatSignal(*reversed(s))
return instances() # Return all the adder stage instances.
# Create signals for interfacing to the adder.
a, b, sum_ = [Signal(intbv(0,0,8)) for _ in range(3)]
# Clear-out any existing peeker stuff before instantiating the adder.
Peeker.clear()
# Instantiate the adder.
add_1 = adder(a=a, b=b, sum_=sum_)
# Create some more peekers to monitor the top-level buses.
Peeker(a, 'a_bus')
Peeker(b, 'b_bus')
Peeker(sum_, 'sum_bus')
# Create a testbench generator that applies random inputs to the adder.
from random import randrange
def test(duration):
for _ in range(duration):
a.next, b.next = randrange(0, a.max), randrange(0, a.max)
yield delay(1)
# Simulate the adder, testbench and peekers.
Simulation(add_1, test(8), *Peeker.instances()).run()
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Peeker-Options" data-toc-modified-id="Peeker-Options-1"><span class="toc-item-num">1 </span>Peeker Options</a></span><ul class="toc-item"><li><span><a href="#Selecting-Waveforms-to-Display" data-toc-modified-id="Selecting-Waveforms-to-Display-1.1"><span class="toc-item-num">1.1 </span>Selecting Waveforms to Display</a></span></li><li><span><a href="#Spacing-the-Waveforms" data-toc-modified-id="Spacing-the-Waveforms-1.2"><span class="toc-item-num">1.2 </span>Spacing the Waveforms</a></span></li><li><span><a href="#Specifying-a-Time-Window" data-toc-modified-id="Specifying-a-Time-Window-1.3"><span class="toc-item-num">1.3 </span>Specifying a Time Window</a></span></li><li><span><a href="#Showing-Cycle-Times" data-toc-modified-id="Showing-Cycle-Times-1.4"><span class="toc-item-num">1.4 </span>Showing Cycle Times</a></span></li><li><span><a href="#Adding-Titles-and-Captions" data-toc-modified-id="Adding-Titles-and-Captions-1.5"><span class="toc-item-num">1.5 </span>Adding Titles and Captions</a></span></li><li><span><a href="#Setting-the-Display-Size" data-toc-modified-id="Setting-the-Display-Size-1.6"><span class="toc-item-num">1.6 </span>Setting the Display Size</a></span></li><li><span><a href="#Skinning-It" data-toc-modified-id="Skinning-It-1.7"><span class="toc-item-num">1.7 </span>Skinning It</a></span></li><li><span><a href="#Accessing-the-WaveJSON-Data" data-toc-modified-id="Accessing-the-WaveJSON-Data-1.8"><span class="toc-item-num">1.8 </span>Accessing the WaveJSON Data</a></span></li></ul></li></ul></div>
Peeker Options
Once the simulation has run and all the peekers have gathered their signal traces, you can show the waveforms in various ways.
The options that affect the waveform display will be illustrated using the hierarchical adder example shown below.
End of explanation
"""
Peeker.show_waveforms('a_bus', 'b_bus', 'sum_bus', 'sum[2]', 'sum[1]', 'sum[0]')
"""
Explanation: Selecting Waveforms to Display
By default, to_wavedrom shows all the captured waveforms. But you can also specify a subset of the waveforms for display:
End of explanation
"""
Peeker.show_waveforms('a_bus b_bus sum_bus sum[2] sum[1] sum[0]')
"""
Explanation: If you don't like typing all those quotation marks, you can place multiple, space-separated peeker names inside a string:
End of explanation
"""
Peeker.show_waveforms('a_bus b_bus | sum_bus sum[2] sum[1] sum[0]')
"""
Explanation: Spacing the Waveforms
If you want to place some space between the waveforms, just insert a string that doesn't match any peeker's name (I like to use a | character for that):
End of explanation
"""
signals = 'a_bus b_bus | sum_bus sum[2] sum[1] sum[0]'
Peeker.show_waveforms(signals, start_time=5, stop_time=15)
"""
Explanation: Specifying a Time Window
To show only a segment of the waveforms, use the start_time and stop_time options:
End of explanation
"""
Peeker.show_waveforms(signals, start_time=5, stop_time=15, tock=True)
"""
Explanation: Showing Cycle Times
If it's hard to tell the interval covered by the waveforms, you can turn on the display of cycle times using the tick or tock options:
End of explanation
"""
Peeker.show_waveforms(signals, start_time=5, stop_time=15, tock=True,
title='Multi-Bit, Hierarchical Adder', caption='It really works!')
"""
Explanation: Adding Titles and Captions
You can also add a title and caption to your work:
End of explanation
"""
Peeker.show_waveforms(signals, start_time=5, stop_time=15, tock=True,
                      title='Multi-Bit, Hierarchical Adder', caption='It really works!', width=8)
"""
Explanation: Setting the Display Size
Possibly you have less screen real estate available. The width option lets you reduce the entire waveform display:
End of explanation
"""
Peeker.clear_traces()
Simulation(add_1, test(100), *Peeker.instances()).run()
Peeker.to_wavedrom(signals, width=4000)
# Peeker.clear_traces()
"""
Explanation: Sometimes you'll have a long simulation that creates an unreadable display because it's squeezed
into the width of the page. You can restore legibility by setting width wider than the page
and then using the scroll bars to view the waveforms:
End of explanation
"""
Simulation(add_1, test(8), *Peeker.instances()).run()
Peeker.to_wavedrom(signals, skin='narrow')
"""
Explanation: Skinning It
The skin option lets you choose the set of graphic elements that are used to draw the waveforms.
Currently, the only allowed values are default and narrow.
End of explanation
"""
wavejson = Peeker.to_wavejson(signals)
wavejson
"""
Explanation: Accessing the WaveJSON Data
Finally, you might want to get hold of the WaveJSON data directly to get more control over the waveform display. This is done with the to_wavejson() method:
End of explanation
"""
from myhdlpeek import wavejson_to_wavedrom
wavejson_to_wavedrom(wavejson)
"""
Explanation: After you manipulate the WaveJSON data, you can display it using the wavejson_to_wavedrom() function:
End of explanation
"""
|
halflings/bio-data-workshop
|
notebook.ipynb
|
apache-2.0
|
# The dataset doesn't contain a header containing column names
# so we generate them ourselves.
feature_columns = ['feature_{}'.format(i) for i in range(1, 31)]
columns = ['id', 'diagnosis'] + feature_columns
# Reading the data from a local copy (the UCI URL above is kept for reference)
#DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data'
DATA_PATH = 'wdbc.data'
df = pd.read_csv(DATA_PATH, header=None, names=columns, index_col='id')
df['diagnosis'] = df['diagnosis'].astype('category')
df.sample(10)
"""
Explanation: "Introduction to machine learning" Workshop
https://github.com/halflings/bio-data-analysis-workshop
Welcome to the machine learning workshop!
What's machine learning ?
Machine Learning is about building (statistical) models that improve as they learn from existing data. These models can solve a variety of issues, but the most common are supervised learning problems for classification and prediction tasks (other tasks include: clustering, generating data, ...)
Machine learning can be divided into two main categories: supervised learning and unsupervised learning.
Supervised Learning
In Supervised Learning, we have a dataset consisting of both features and the expected output values (labels).
The task is to train a model that is able to predict the label of an object
given its features. For example, predicting if someone has the flu based on physiological measurements.
Some more complicated examples are:
given a photograph of a person, identify the person in the photo.
given the list of songs a user listened to, recommend a song they would like
(we call this a recommender system, something we're actively working on at Spotify).
In the case of classification, the labels are discrete (usually strings or a limited number of integers). For example: identifying which species is on a photograph.
In the case of regression, the labels are continuous (usually floats or vectors of floats). For example: predicting the weight of a person based on her diet.
Unsupervised Learning
In Unsupervised Learning, there data points are not labeled and the task is usually to detect some fundamental structure present in the data: data points that could be grouped together, correlated dimensions, etc.
Examples of unsupervised learning tasks are:
clustering politicians based on their voting history
finding topics discussed in the aftonbladet
decomposing an electrical signal into the sub-signals that compose it
What will we do in this workshop?
We'll focus on supervised learning with a classification task.
We're going to use data donated by the University of California, Irvine, about cancer diagnostics.
The data consists of 32 columns:
ID number
Diagnosis (M = malignant, B = benign)
Columns 3 to 32: ten real-valued features are computed for each cell nucleus:
radius (mean of distances from center to points on the perimeter)
texture (standard deviation of gray-scale values)
perimeter
area
smoothness (local variation in radius lengths)
compactness (perimeter^2 / area - 1.0)
concavity (severity of concave portions of the contour)
concave points (number of concave portions of the contour)
symmetry
fractal dimension ("coastline approximation" - 1)
Part I: Data loading and exploration
* 1) Loading the data
We'll try to load the data directly from the UCI website. Theirs servers have been in a pretty bad shape recently, returning a lot of server errors, so we also have a local copy of the data.
We'll use pandas to load this data:
End of explanation
"""
_ = df.hist(bins=15, figsize=(24, 16))
df['diagnosis'].value_counts()
"""
Explanation: Well, that was easier than expected, right?
* 2) Exploring the data
Randomly feeding your data into the newest and most "hip" model ("RSNMIN: Recurrent Stochastic Neural Machine Implemented in Node.js") is the worst thing you could do at this point.
You should first try to explore your data and get a feel for how it's distributed, whether any values are missing, whether there are any special correlations, etc. Only after having all this information will you be able to choose the right model(s) and pre-processing to use.
Let's start by looking at how our data is distributed:
End of explanation
"""
from pandas.tools.plotting import scatter_matrix
label_colors = ['b' if d == 'B' else 'r' for d in df['diagnosis']]
_ = scatter_matrix(df[['feature_1', 'feature_2', 'feature_3', 'feature_20']], c=label_colors, diagonal='kde', s=25)
"""
Explanation: Most of the features seem to follow a Gaussian distribution. Perfect! This is usually how you'd like your data to be distributed.
Uniformly distributed data and greatly skewed distributions can be painful to deal with, as they might not provide as much information to your models.
What we looked at just now are univariate distributions: how each variable is distributed independently of the others. Let's now look at some multivariate distributions, or to be more precise, bivariate distributions.
End of explanation
"""
correlation_matrix = np.corrcoef(df[feature_columns].values.T)
plt.matshow(np.abs(correlation_matrix))
plt.title('Pearson correlation matrix')
"""
Explanation: We have 3 observations to make here:
Correlated variables: It's pretty obvious that feature_1 and feature_3 are highly correlated.
Like we saw in the previous plot, feature_20 is highly skewed towards lower values, with some outliers in a higher range.
The other variables are not particularly correlated.
Correlated variables can have a big impact on the performance of our models. Some models are highly sensitive to them and work best with independent variables (like Naive Bayes) while others won't be affected as much by these correlations (like Logistic Regression, thanks to the regularization term that will eliminate variables that do not contribute anything new).
To get a more formal view of how our variables are correlated, we can calculate a "correlation" matrix using metrics like the Pearson correlation: the higher the absolute value of this metric is, the more the variables are correlated.
End of explanation
"""
import itertools
strongly_correlated_couples = [(feature_columns[i], feature_columns[j]) for i in range(30) for j in range(i+1, 30)
if abs(correlation_matrix[i, j]) >= 0.98]
strongly_correlated_features = list(set(feature for f_couple in strongly_correlated_couples for feature in f_couple))
_ = scatter_matrix(df[strongly_correlated_features], c=label_colors, s=25)
"""
Explanation: From looking at these matrices, we can see that a strong correlation exists between some features of the first molecule and the equivalent features in the third one. We can confirm this by extracting strongly correlated features and visualizing them:
End of explanation
"""
all_features = df[feature_columns].values
labels = df['diagnosis'].values.to_dense()
all_features
labels[:6]
"""
Explanation: Part II: The actual machine learning
(aka, building models!)
Model choice and pre-processing
Now that we have a better idea of what our data looks like, we can start the modelling part.
Depending on what models we use and the distribution of the data, it can be a good idea to do some "feature engineering" and do some pre-processing on the features.
Some common pre-processing operations:
- Normalizing the data: Centering the component around 0 with a standard deviation of 1.
- Scaling the data: Making sure all values of the component are comprised between certain minimum and maximum values.
- Feature selection: It might be good to drop some features if they have a negative impact on the performance of the model.
- Dimensionality reduction: If we have a very high number of features, including many that are highly correlated, it's a good idea to reduce the number of components by keeping the maximum amount of information.
Some models perform better if the data is normalized (Support Vector Machines, some Neural Networks), others are sensitive to correlated features (Naive Bayes). Depending on the model you choose, some pre-processing steps might improve or worsen performance. So choose wisely!
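As a rough sketch (not part of the original workshop; X_train and X_test here stand in for the train/test splits we create a bit later), the first two operations look like this with scikit-learn:
```python
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Normalizing: center each feature around 0 with unit standard deviation.
# Fit on the training split only, then reuse the same statistics on the test split.
scaler = StandardScaler().fit(X_train)
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)

# Scaling: squeeze each feature into a fixed [0, 1] range instead.
minmax = MinMaxScaler(feature_range=(0, 1)).fit(X_train)
X_train_scaled = minmax.transform(X_train)
```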
Evaluation
This is usually seen as the last step you take to check everything is working correctly once you've built your model... but really, it's more of a feedback loop than a sequential process!
Many models require hyper-parameters which you have to tune based on the performance of your model. It is (sometimes) possible to choose a sensible value for a hyper-parameter based on the nature of the data, but it's most likely that you'll just have to try a large number of values until you find those that work best for your data... and you need to do model evaluation for that!
Which library to use?
The clear leader in this field is scikit-learn: this is the most popular machine learning library in Python and is by itself enough of a reason to use Python instead of something like Java. It's under active developement, contains tons of models and tools to do feature engineering / pre-processing / visualize data. It's great!
1) A first simple model
It's often good to start with the simplest model (Occam's razor!), so let's do a simple Logistic Regression: this model is usually used for binary classification tasks but can be extended to multi-class classification. scikit-learn makes it super easy to use a model.
Let's first look at the features and labels we'll use with our model:
End of explanation
"""
from sklearn.linear_model import LogisticRegressionCV
from sklearn.cross_validation import train_test_split
all_features = df[feature_columns].values
labels = df['diagnosis'].values.to_dense()
model = LogisticRegressionCV()
X_train, X_test, y_train, y_test = train_test_split(all_features, labels, train_size=0.66, random_state=42)
model.fit(X_train, y_train)
model.predict(X_test)
y_test
"""
Explanation: Great!
Let's now instantiate a LogisticRegressionCV model that we'll feed our feature vectors and labels to:
End of explanation
"""
from sklearn.cross_validation import cross_val_score
cross_val_score(model, all_features, labels, cv=8, scoring='accuracy').mean()
"""
Explanation: Awesome!
What we've done here is:
Initialize the LogisticRegressionCV class (if you want to use a different model, just initialize a different class)
Split the data into training and test data. This is very important: if you train and test your models on the same data, you'll tend to optimize towards an unrealistic case. What you're trying to do is classify data you've never seen before.
Predict the class labels of the test set.
That's cool and all, but how can we know if our model performs well? Well, scikit-learn has a set of tools specifically dedicated to this task, and they're pretty easy to use:
End of explanation
"""
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV
from sklearn.cross_validation import StratifiedKFold
svc = SVC(kernel="linear", C=1)
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(labels, 2),
scoring='accuracy')
rfecv.fit(all_features, labels)
print("Optimal number of features : %d" % rfecv.n_features_)
# Plot number of features VS. cross-validation scores
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (nb of correct classifications)")
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.show()
"""
Explanation: Every value is the metric we chose (accuracy in this case) for every split of the data.
It's important to choose the right metric for evaluation, because depending on what you're trying to do you'll want to optimize for different things.
For instance, maybe it's OK to make a False Positive (predicting a benign cancer as being malignant), but it's super dangerous to do False Negatives (predicting a malignant cancer as being benign). And maybe you're doing fraud detection and you want to minimize the number of False Positives because every one of them costs you a lot of money to investigate.
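As a quick sketch (not part of the original workshop), you can ask cross_val_score to optimize for a different metric; for example, recall on the malignant class penalizes false negatives much more directly than plain accuracy does:
```python
from sklearn.metrics import make_scorer, recall_score

# Labels are the strings 'B'/'M', so we build a scorer that treats 'M' as the positive class.
malignant_recall = make_scorer(recall_score, pos_label='M')
print(cross_val_score(model, all_features, labels, cv=8, scoring=malignant_recall).mean())
```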
Now that we know how to evaluate a model, let's try to look at something a bit more complex: how to pick the right variables for our model?
2) Feature selection
There's many ways to do feature selection: based on the variance of the data, based on its contribution to the model, etc.
We'll use "recursive feature elimination" to see pick the right features for our model, this time a Support Vector Machine:
End of explanation
"""
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegressionCV
# PCA projections
pca = PCA(n_components=10)
proj_features = pca.fit_transform(all_features)
model = LogisticRegressionCV()
print("Accuracy using all ({}) features:".format(all_features.shape[1]))
print(cross_val_score(model, all_features, labels, cv=5).mean())
print("Accuracy using only {} features:".format(proj_features.shape[1]))
print(cross_val_score(model, proj_features, labels, cv=5).mean())
"""
Explanation: Pretty easy, right?
It can be surprising that more features can mean worse performance. There are many possible reasons, but one might be that those features are too noisy and cause the model to over-fit: fitting specific training data points instead of generalizing to be usable on any data point.
3) Dimensionality reduction
Some models work best when features are uncorrelated, and sometimes you just have too many features and training your model takes too much time.
For both these cases, dimensionality reduction can be useful. This is how we call all methods used to generate a new, smaller, set of features.
One of the most popular methods is PCA (Principal Component Analysis). It uses the covariance of your variables to build a new vector space (generally with fewer components than the original space) where all dimensions are independent, and onto which feature vectors can be projected while losing the minimum amount of information.
End of explanation
"""
from sklearn.linear_model import LinearRegression
regr_features = ['feature_1', 'feature_3']
other_features = [f for f in feature_columns if not f in regr_features]
model = LinearRegression()
cross_val_score(model, df[other_features], df[regr_features], cv=8, scoring='mean_squared_error').mean()
"""
Explanation: But wait... what about regression?
Everything we described above works more or less the same way for regression problems!
Let's try to find the value of two variables based on the remaining variables:
End of explanation
"""
# Initialize a new PandasPDB object
# and fetch the PDB file from rcsb.org
from biopandas.pdb import PandasPDB
ppdb = PandasPDB().fetch_pdb('3eiy')
ppdb.df['ATOM'].head()
"""
Explanation: That's about it!
Things this notebook doesn't talk about:
Unsupervised learning
Specificities of regression vs classification
Specificities of every model
Families of models (linear models, ensemble models, etc.)
More advanced feature engineering "tricks"
...
Let's talk about them now! Time for questions.
Annex (fresh out the oven!) - Pandas for protein data (PDB files)
Just last week, Sebastian Raschka released a Python library made for bioscientists called biopandas. This library lets you easily load protein data stored in the popular PDB (Protein Data Bank) format.
End of explanation
"""
|
santosjorge/cufflinks
|
Cufflinks Tutorial - Colors.ipynb
|
mit
|
import cufflinks as cf
"""
Explanation: Cufflinks Colors
Cufflinks also provides a wide set of tools for color managements; including color conversion across multiple spectrums and color table generation.
End of explanation
"""
# The colors module includes a pre-defined set of commonly used colors
cf.colors.cnames
# HEX to RGB
cf.colors.hex_to_rgb('red')
# RGB to HEX
cf.colors.rgb_to_hex('rgb(219, 64, 82)')
# RGB or HEX to RGBA (transparency)
cf.colors.to_rgba('#3780bf',.5), cf.colors.to_rgba('rgb(219, 64, 82)',.4)
# RGBA to RGB (flatten transparency)
# By default it assumes that the transparency color is *white*; however, this can also be passed as a parameter.
cf.colors.rgba_to_rgb('rgba(219, 64, 82, 0.4)','white')
"""
Explanation: Colors can be represented as strings:
HEX "#db4052"
RGB "rgb(219, 64, 82)"
RGBA "rgba(219, 64, 82, 1.0)"
Color Conversions
End of explanation
"""
# Cufflinks.colors.normalize will always return a hex value for all types of colors
colors=['#f08','rgb(240, 178, 185)','rgba(219, 64, 82, 0.4)','green']
[cf.colors.normalize(c) for c in colors]
"""
Explanation: Normalization
End of explanation
"""
# 10 different tones of pink
cf.colors.color_range('pink',10)
"""
Explanation: Color Ranges
A range of colors can be generated using a base color and varying the saturation.
End of explanation
"""
# Displaying a table of defined colors (list)
colors=['#f08', 'rgb(240, 178, 185)', 'blue' , '#32ab60']
cf.colors.color_table(colors)
# Generating 15 shades of orange
cf.colors.color_table('orange',15)
"""
Explanation: Color Tables
This function is meant to be used in an iPython Notebook.
It generates an HTML table to display either a defined list of colors or to automatically generate a range of colors.
End of explanation
"""
# Create a generator using 3 defined base colors
colors=['green','orange','blue']
gen=cf.colors.colorgen(colors)
outputColors=[next(gen) for _ in range(15)]
cf.colors.color_table(outputColors)
# Create a generator with default set of colors
gen=cf.colors.colorgen()
outputColors=[next(gen) for _ in range(15)]
cf.colors.color_table(outputColors)
# We can see all available scales with
cf.get_scales()
# Other color scales can be also seen here
cf.colors.scales()
colorscale=cf.colors.get_scales('accent')
cf.colors.color_table(colorscale)
"""
Explanation: Color Generators
A color generator can be used to produce shades of colors iteratively, for example when plotting N time series, so that the colors used are as distinctive as possible.
End of explanation
"""
|
agussman/aws_name_similarity
|
aws_name_similarity.ipynb
|
mit
|
from itertools import combinations
import jellyfish
from scipy.cluster import hierarchy
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Setup
$ mkvirtualenv aws_name_similarity
$ pip install --upgrade pip
$ pip install jellyfish jupyter scipy matplotlib
$ jupyter notebook
End of explanation
"""
# Strings to compare
strs = [u"MARTHA", u"MARHTA", u"DWAYNE", u"DUANE", u"DIXON", u"DICKSONX"]
# Calculating Jaro similarity and converting to distance (use Jaro-Winkler below)
jaro_dists = [1 - jellyfish.jaro_distance(x,y) for x,y in combinations(strs, 2)]
jaro_dists
# Plot it
ytdist = np.array(jaro_dists)
Z = hierarchy.linkage(ytdist, 'single')
plt.figure()
hierarchy.set_link_color_palette(['m', 'c', 'y', 'k'])
dn = hierarchy.dendrogram(Z, above_threshold_color='#bcbddc', orientation='left', labels=strs)
hierarchy.set_link_color_palette(None) # reset to default after use
plt.show()
"""
Explanation: Testing it out
End of explanation
"""
# I copied these from the AWS console. If anyone knows the AWS API endpoint to get an equivalent list, let me know!
strs = [
u"API Gateway",
u"Application Discovery Service",
u"AppStream",
u"AppStream 2.0",
u"Athena",
u"AWS IoT",
u"Certificate Manager",
u"CloudFormation",
u"CloudFront",
u"CloudSearch",
u"CloudTrail",
u"CloudWatch",
u"CodeBuild",
u"CodeCommit",
u"CodeDeploy",
u"CodePipeline",
u"Cognito",
u"Compliance Reports",
u"Config",
u"Data Pipeline",
u"Device Farm",
u"Direct Connect",
u"Directory Service",
u"DMS",
u"DynamoDB",
u"EC2",
u"EC2 Container Service",
u"Elastic Beanstalk",
u"Elastic File System",
u"Elastic Transcoder",
u"ElastiCache",
u"Elasticsearch Service",
u"EMR",
u"GameLift",
u"Glacier",
u"IAM",
u"Inspector",
u"Kinesis",
u"Lambda",
u"Lex",
u"Lightsail",
u"Machine Learning",
u"Managed Services",
u"Mobile Analytics",
u"Mobile Hub",
u"OpsWorks",
u"Pinpoint",
u"Polly",
u"QuickSight",
u"RDS",
u"Redshift",
u"Rekognition",
u"Route 53",
u"S3",
u"Server Migration",
u"Service Catalog",
u"SES",
u"Snowball",
u"SNS",
u"SQS",
u"Step Functions",
u"Storage Gateway",
u"SWF",
u"Trusted Advisor",
u"VPC",
u"WAF & Shield",
u"WorkDocs",
u"WorkMail",
u"WorkSpaces"
]
# Calculate similarity and convert to distance
jaro_dists = [1 - jellyfish.jaro_winkler(x,y) for x,y in combinations(strs, 2)]
ytdist = np.array(jaro_dists)
Z = hierarchy.linkage(ytdist, 'single')
plt.figure(figsize=(6, 10), facecolor='white')
# The colors don't mean anything; anything below the color_threshold uses one of these colors
plt.suptitle('Jaro-Winkler Similarity of AWS Service Names', y=.94, fontsize=16)
plt.title('github.com/agussman | T:@percontate', fontsize=10)
hierarchy.set_link_color_palette(['g', 'r', 'm', 'c'])
dn = hierarchy.dendrogram(Z, color_threshold=0.25, above_threshold_color='#bcbddc', orientation='left', labels=strs)
hierarchy.set_link_color_palette(None) # reset to default after use
plt.show()
"""
Explanation: With real AWS service names
End of explanation
"""
|
nslatysheva/data_science_blogging
|
expanding_ML_toolkit/expanding_toolkit.ipynb
|
gpl-3.0
|
import wget
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/wine/winequality-red.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=";")
# Using a lambda function to bin quality scores
dataset['quality_is_high'] = dataset.quality.apply(lambda x: 1 if x >= 6 else 0)
# Convert the dataframe to a numpy array and split the
# data into an input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-2].astype(float)
y = npArray[:,-1]
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
"""
Explanation: Expanding your machine learning toolkit: Randomized search, computational budgets, and new algorithms
Introduction
Previously, we wrote about some common trade-offs in machine learning and the importance of tuning models to your specific dataset. We demonstrated how to tune a random forest classifier using grid search, and how cross-validation can help avoid overfitting when tuning hyperparameters (HPs).
In this follow-up post, you'll beef up your machine learning toolbox by trying out some new, broadly-applicable tools. You'll learn a different strategy for traversing hyperparameter space - randomized search - and how to use it to tune two other classification algorithms - a support vector machine and a regularized logistic regression classifier.
We'll keep working with the wine dataset, which contains chemical characteristics of wines of varying quality. As before, our goal is to try to predict a wine's quality from these features.
Here are the things we'll cover in this blog post: how randomized search works and how it compares to grid search, how to tune a support vector machine (SVM) classifier, and how to tune a regularized logistic regression classifier.
In the next blog post, you will learn how to take these three different tuned machine learning algorithms and combine them to build an aggregate model ensemble. Building ensembles often leads to improved model performance and generalizability. Stay tuned!
Loading and train/test splitting the dataset
You start off by collecting the dataset. We have covered the data loading, preprocessing, and train/test splitting previously, so we won't repeat ourselves here. Also check out this post on using plotly to create exploratory, interactive graphics of the wine dataset features.
You can fetch and format the data as follows:
End of explanation
"""
from scipy.stats import uniform
from scipy.stats import norm
from sklearn.grid_search import RandomizedSearchCV
from sklearn import metrics
# Designate distributions to sample hyperparameters from
n_estimators = np.random.uniform(70, 80, 5).astype(int)
max_features = np.random.normal(6, 3, 5).astype(int)
# Check max_features>0 & max_features<=total number of features
max_features[max_features <= 0] = 1
max_features[max_features > X.shape[1]] = X.shape[1]
hyperparameters = {'n_estimators': list(n_estimators),
'max_features': list(max_features)}
print (hyperparameters)
"""
Explanation: Introducing randomized search: comparison with grid search
You have already built a random forest classifier and tuned it using grid search to predict wine quality (here). Grid search is quite commonly used, and is essentially just a method that exhaustively tries out all combinations of manually prespecified HP values and reports the best option (i.e. the one leading to the highest test accuracy). The benefit of this approach is that you thoroughly test out various combinations, but this is of course very computationally expensive. For grid search to be tractable, you often have to restrict the number of combinations, which can severely limit how well you explore hyperparameter space and lead to you overlooking regions where accuracy would be highest.
Another way to search through hyperparameter space to find optima is via randomized search. In randomized search, you sample HP values a certain number of times from some distribution which you prespecify in advance. So unlike grid search, in which you specify particular numbers to combinatorially try out, you instead specify distributions that cover the HP space you want to explore. For example, you might specify a standard normal distribution over an HP if you think reasonable values are roughly centered around 0, or a uniform distribution over some range if you think values within that range are about as likely to be "good". In randomized search, you also specify a n_iter parameter, which acts as a computational budget, controlling how many different parameter settings are tried out in total.
We can visually summarize the grid search (grey boxes) and randomized search (purple boxes) strategies like so:
Here, both approaches are constrained by the same computational budget - they can only try out 9 different HP settings (i.e. certain values for HP1 and HP2). Randomized search tries out HP values from two normal distributions (purple bell curves), repeating the process 9 times and thus getting 9 different values of both HP1 and HP2. Most combinations fall into the meaty portion of the normal distributions, but occasionally the tails are sampled from as well - this means you have at least some chance of trying out distant regions that could potentially strike gold (i.e. the hypothetically optimal HP space leading to high accuracy, bottom right).
Meanwhile, grid search tries out 3 values each of HP1 and HP2. Of course, these values do not have to be as close to each other as we have drawn (and one could indeed hit the gold space with grid search), but the idea is that since you are constrained to trying out all combinations of prespecified HP values, this intrinsically limits how much of the HP space can be explored. Specifically, here randomized search has searched a space that is 16 times bigger (we drew a 3x3 box for the grid search and a 12x12 box for the larger grid). The n_iter argument controlling the number of HP combinations to try out gives you access to a tradeoff between computational resources invested and the HP space you can explore.
Check out this paper outlining the efficiency of randomized search compared to grid search, especially in high-dimensional HP spaces. You can imagine that if you already have 12x12=144 possible combinations of 2 HPs, adding another HP increases the number of possibilities to search through to 12x12x12=1728. This becomes very demanding very quickly and randomized search is the only feasible practical approach. However intuitively, were computational resources and patience infinite, grid search would become the better choice.
Using randomized search
Scikit makes using randomized search easy with RandomizedSearchCV. You can feed distributions of HPs to the RandomizedSearchCV object in two (fairly similar) ways:
1. You can either define distributions over HPs, without immediately sampling from them, and pass these distributions to RandomizedSearchCV, which will proceed to sample n_iter number of times with replacement from the distributions to generate candidate HP combinations.
2. You can sample from distributions immediately and pass a list of possible HP values to RandomizedSearchCV, and it will sample from these possible values n_iter number of times without replacement.
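For comparison, a minimal sketch of the first approach (passing scipy.stats distribution objects straight to RandomizedSearchCV, using the same older sklearn.grid_search import as in this post) might look roughly like this:
```python
from scipy.stats import randint as sp_randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.grid_search import RandomizedSearchCV

# The distributions are sampled lazily by RandomizedSearchCV itself,
# n_iter times with replacement.
param_dist = {'n_estimators': sp_randint(70, 81),            # integers in [70, 80]
              'max_features': sp_randint(1, X.shape[1] + 1)}
random_search = RandomizedSearchCV(RandomForestClassifier(),
                                   param_distributions=param_dist, n_iter=20)
random_search.fit(XTrain, yTrain)
```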
Both approaches lead to the same outcome and you will be using the second one here as it allows you to have a peek at the HP values that were sampled beforehand. Here is what this looks like with random forests:
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
# Run randomized search
randomCV = RandomizedSearchCV(RandomForestClassifier(), param_distributions=hyperparameters, n_iter=20)
randomCV.fit(XTrain, yTrain)
# Identify optimal hyperparameter values
best_n_estim = randomCV.best_params_['n_estimators']
best_max_features = randomCV.best_params_['max_features']
print("The best performing n_estimators value is: {:5d}".format(best_n_estim))
print("The best performing max_features value is: {:5d}".format(best_max_features))
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from randomCV.best_estimator_
rf = RandomForestClassifier(n_estimators=best_n_estim,
max_features=best_max_features)
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
print (metrics.classification_report(yTest, rf_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions), 3))
"""
Explanation: You then run the random search:
End of explanation
"""
# Create default rf
rf = RandomForestClassifier()
print(rf.get_params)
# Fit and predict with default rf
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
print (metrics.classification_report(yTest, rf_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions),3))
"""
Explanation: Let's compare this performance to the default random forest:
End of explanation
"""
from sklearn import svm
default_SVC = svm.SVC()
print ("Default SVC parameters are: \n{}".format(default_SVC.get_params))
"""
Explanation: Looks like the default performance is slightly lower, which is generally what you might expect. Either grid search or randomized search is a good option for tuning random forests.
Let's look at how to tune the two other predictors.
Tuning a support vector machine
Let's train the second algorithm, a support vector machine (SVM) classifier, to do the same wine quality prediction task. A great introduction to the theory behind SVMs can be found in Chapter 9 of the Introduction to Statistical Learning book or in this nice blog post. Briefly, SVMs search for separating hyperplanes in the feature space which best divide the different classes in your dataset. If you had 2 features, SVMs would search for the best dividing line; if you had 3 features, SVMs search for the best dividing 2d plane, etc. Crucially, SVMs can construct complex, non-linear decision boundaries between classes by making use of a process called kernelling, which projects the data into a higher-dimensional space and facilitates the identification of a good boundary.
SVMs can use different types of kernel functions, like linear, polynomial, Gaussian or radial kernels, to throw the data into a different space. Let's use the popular radial basis function kernel (RBF kernel). In the case of RBF SVMs, the hyperparameters to tune include:
gamma - it controls how influential a single observation can be when being selected as a support vector in the model. Low values for gamma lead to large influence of individual observations and high values to less influence.
C - it controls the 'softness' of the classification boundary margin and hence the bias-variance tradeoff of the model. Lower values for C will draw smoother decision boundaries (less flexible), whereas higher values will give more rugged boundaries that can fit the training data better (more flexible)
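As a rough, illustrative sketch (not part of the original post), you can see the bias-variance effect of C by comparing cross-validated accuracy at a very soft and a very hard margin:
```python
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score

# A very soft margin (small C) versus a much harder one (large C);
# the randomized search below looks for a good value in between.
for C_value in (0.01, 100.0):
    scores = cross_val_score(SVC(kernel='rbf', C=C_value), XTrain, yTrain, cv=5)
    print("C = {:>6}: mean CV accuracy = {:.3f}".format(C_value, scores.mean()))
```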
Examine the default HP settings and performance:
End of explanation
"""
from sklearn.svm import SVC
# Create, fit, and test default SVM
rbfSVM = SVC(kernel='rbf')
rbfSVM.fit(XTrain, yTrain)
svm_predictions = rbfSVM.predict(XTest)
print (metrics.classification_report(yTest, svm_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, svm_predictions),3))
"""
Explanation: Using the default settings, here is the SVM's performance:
End of explanation
"""
# Designate distributions to sample hyperparameters from
np.random.seed(123)
g_range = np.random.uniform(0.0, 0.3, 5).astype(float)
C_range = np.random.normal(1, 0.1, 5).astype(float)
# Check that gamma>0 and C>0
C_range[C_range < 0] = 0.0001
hyperparameters = {'gamma': list(g_range),
'C': list(C_range)}
print (hyperparameters)
"""
Explanation: Now use randomized search to try to improve on this accuracy. First, define distributions you want to sample HP values from and create a dictionary of possible values:
End of explanation
"""
# Run randomized search
randomCV = RandomizedSearchCV(SVC(kernel='rbf'), param_distributions=hyperparameters, n_iter=20)
randomCV.fit(XTrain, yTrain)
# Identify optimal hyperparameter values
best_gamma = randomCV.best_params_['gamma']
best_C = randomCV.best_params_['C']
print("The best performing gamma value is: {:5.2f}".format(best_gamma))
print("The best performing C value is: {:5.2f}".format(best_C))
"""
Explanation: Now pass this dictionary to param_distributions argument of RandomizedSearchCV:
End of explanation
"""
print (randomCV.grid_scores_[0:5])
# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', C=best_C, gamma=best_gamma)
rbfSVM.fit(XTrain, yTrain)
svm_predictions = rbfSVM.predict(XTest)
print (metrics.classification_report(yTest, svm_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, svm_predictions),4))
"""
Explanation: We can examine the scores of e.g. the first 5 tested HP combinations:
End of explanation
"""
# Tuning a regularized logistic regression model
from sklearn.linear_model import LogisticRegression
# Examine defaults
default_lr = LogisticRegression()
print ("Default logistic regression parameters are: {}".format(default_lr.get_params))
# Train model and output predictions
classifier_logistic = LogisticRegression()
classifier_logistic_fit = classifier_logistic.fit(XTrain, yTrain)
logistic_predictions = classifier_logistic_fit.predict(XTest)
print(metrics.classification_report(yTest, logistic_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, logistic_predictions),3))
"""
Explanation: Looks like we get similar accuracy as the default model in this case. This is fine, but in general you might think about doing things like casting the net wider to try out quite different HP values, adding new HPs to the tuning process, or trying a different learning algorithm.
Tuning a logistic regression classifier
The final model you'll tune and apply to predict wine quality is a logistic regression classifier (LogisticRegression). This is a type of regression model which is used for predicting binary outcomes (like good wine/not good wine). Logistic regression fits a sigmoidal (S-shaped) curve through the data, but can be viewed as just a transformed version of linear regression - a straight line predicting the log odds of data points being in one of the two classes. A nice explanation of logistic regression can be found here.
One topic you will often encounter in machine learning is regularization, which is a class of techniques to reduce overfitting. The idea behind regularization is that you do not only want to maximize a model's fit to your data, since this is susceptible to overfitting. Regularization techniques try to cut down on overfitting by penalizing models, for example if they use too many parameters, or if they assign coefficients or weights that are "too big". Regularization means that models have to learn from the data under a series of constraints, which often leads to robust representations of the data.
You can adjust just how much regularization you want by adjusting regularization hyperparameters, and since this is something you might want to do often, scikit-learn comes with some pre-built models that can very efficiently fit data for a range of regularization hyperparameter values. This is the case for regularized linear regression models like Lasso regression and ridge regression, which use l1 and l2 penalties, respectively, to shrink the size of the regression coefficients. These scikit modules offer a shortcut to performing cross-validated selection of the regularization hyperparameter.
But you can also optimize how much regularization you want yourself, while at the same time tuning other hyperparameters (like the choice between l1 and l2 penalty), in the same manner as you've been doing.
Let's examine default HP settings and performance for a logistic regression model:
End of explanation
"""
# Specify HP distributions
penalty = ["l1", "l2"]
np.random.seed(123)
C_range = np.random.normal(1, 0.2, 10).astype(float)
# Check that C>0
C_range[C_range < 0] = 0.0001
hyperparameters = {'penalty': penalty,
'C': C_range}
print (hyperparameters)
"""
Explanation: Now to optimise the HPs:
End of explanation
"""
# Randomized search using cross-validation
randomCV = RandomizedSearchCV(LogisticRegression(solver='liblinear'), param_distributions=hyperparameters, cv=20)  # liblinear supports both the l1 and l2 penalties being tuned
randomCV.fit(XTrain, yTrain)
best_penalty = randomCV.best_params_['penalty']
best_C = randomCV.best_params_['C']
print ("The best performing penalty is: {}".format(best_penalty))
print ("The best performing C value is: {:5.2f}".format(best_C))
"""
Explanation: And feeding these values into RandomizedSearchCV:
End of explanation
"""
# Train model and output predictions
classifier_logistic = LogisticRegression(penalty=best_penalty, C=best_C, solver='liblinear')
classifier_logistic_fit = classifier_logistic.fit(XTrain, yTrain)
logistic_predictions = classifier_logistic_fit.predict(XTest)
print(metrics.classification_report(yTest, logistic_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, logistic_predictions),3))
"""
Explanation: We can now use these values to train a new, hopefully better model:
End of explanation
"""
|
egentry/dwarf_photo-z
|
dwarfz/catalog_only_classifier/classifier_comparison.ipynb
|
mit
|
# give access to importing dwarfz
import os, sys
dwarfz_package_dir = os.getcwd().split("dwarfz")[0]
if dwarfz_package_dir not in sys.path:
sys.path.insert(0, dwarfz_package_dir)
import dwarfz
# back to regular import statements
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(context="poster", style="ticks", font_scale=1.4)
import numpy as np
import pandas as pd
from scipy.special import expit
import pathlib
import matplotlib as mpl
mpl.rcParams['savefig.dpi'] = 80
mpl.rcParams['figure.dpi'] = 80
mpl.rcParams['figure.figsize'] = 2*np.array((8,6))
mpl.rcParams['figure.facecolor'] = "white"
COSMOS_filename = pathlib.Path(dwarfz.data_dir_default) / "COSMOS_reference.sqlite"
COSMOS = dwarfz.datasets.COSMOS(COSMOS_filename)
HSC_filename = pathlib.Path(dwarfz.data_dir_default) / "HSC_COSMOS_median_forced.sqlite3"
HSC = dwarfz.datasets.HSC(HSC_filename)
matches_filename = pathlib.Path(dwarfz.data_dir_default) / "matches.sqlite3"
matches_df = dwarfz.matching.Matches.load_from_filename(matches_filename)
combined = matches_df[matches_df.match].copy()
combined["ra"] = COSMOS.df.loc[combined.index].ra
combined["dec"] = COSMOS.df.loc[combined.index].dec
combined["photo_z"] = COSMOS.df.loc[combined.index].photo_z
combined["log_mass"] = COSMOS.df.loc[combined.index].mass_med
photometry_cols = [
"gcmodel_flux","gcmodel_flux_err","gcmodel_flux_flags", "gcmodel_mag",
"rcmodel_flux","rcmodel_flux_err","rcmodel_flux_flags", "rcmodel_mag",
"icmodel_flux","icmodel_flux_err","icmodel_flux_flags", "icmodel_mag",
"zcmodel_flux","zcmodel_flux_err","zcmodel_flux_flags", "zcmodel_mag",
"ycmodel_flux","ycmodel_flux_err","ycmodel_flux_flags", "ycmodel_mag",
]
for col in photometry_cols:
combined[col] = HSC.df.loc[combined.catalog_2_ids][col].values
"""
Explanation: What do I want?
Match COSMOS galaxies to HSC objects
Remove non-matched HSC objects
Create a classification based on COSMOS features
(z < 0.15) and ($10^8 < M_\mathrm{stellar} < 10^9 \, M_\odot$)
Use that COSMOS classification to create an HSC training set
Run Random Forest & Logistic Regression on that data set (pure HSC features! no COSMOS features)
Create visual diagnostics
I'll test two basic classifiers: a RandomForest classifier and a Logistic Regression classifier. I'll also try a very simplistic weak learner: a straight i-band magnitude cut.
For my training data, I started by getting objects and labels from COSMOS. For input features, I then matched those COSMOS galaxies to their nearest HSC counterpart. I then used HSC i-band magnitude, along with HSC g-r, r-i, i-z, z-y colors. Finally, I augment it with some HSC photo-z information (FRANKEN-Z).
In this notebook I'll look at the full decision curves for the classifiers, in hopes of better understanding my results.
Code
End of explanation
"""
combined["g_minus_r"] = combined.gcmodel_mag - combined.rcmodel_mag
combined["r_minus_i"] = combined.rcmodel_mag - combined.icmodel_mag
combined["i_minus_z"] = combined.icmodel_mag - combined.zcmodel_mag
combined["z_minus_y"] = combined.zcmodel_mag - combined.ycmodel_mag
"""
Explanation: Turn magnitudes into colors
End of explanation
"""
mask = np.isfinite(combined["g_minus_r"]) & np.isfinite(combined["r_minus_i"]) \
& np.isfinite(combined["i_minus_z"]) & np.isfinite(combined["z_minus_y"]) \
& np.isfinite(combined["icmodel_mag"]) \
& (~combined.gcmodel_flux_flags) & (~combined.rcmodel_flux_flags) \
& (~combined.icmodel_flux_flags) & (~combined.zcmodel_flux_flags) \
& (~combined.ycmodel_flux_flags)
combined = combined[mask]
"""
Explanation: Filter out bad data
End of explanation
"""
df_frankenz = pd.read_sql_table("photo_z",
"sqlite:///{}".format(
pathlib.Path(dwarfz.data_dir_default)
/ "HSC_matched_to_FRANKENZ.sqlite"),
index_col="object_id")
df_frankenz.head()
combined = combined.join(df_frankenz[["photoz_best", "photoz_risk_best"]],
on="catalog_2_ids")
"""
Explanation: Get FRANKENZ photo-z's
End of explanation
"""
low_z = (combined.photo_z < .15)
low_mass = (combined.log_mass > 8) & (combined.log_mass < 9)
combined["low_z_low_mass"] = (low_z & low_mass)
combined.low_z_low_mass.mean()
combined.low_z_low_mass.sum()
combined.shape
"""
Explanation: Create classification labels
End of explanation
"""
features = combined.loc[:,["g_minus_r", "r_minus_i", "i_minus_z", "z_minus_y",
"icmodel_mag",
"photoz_best",
"photoz_risk_best" # The risk of photoz_best being outside of the range z_true +- 0.15(1+z_true). It ranges from 0 (safe) to 1(risky)
]]
target = combined.loc[:,["low_z_low_mass"]]
target.mean()
COSMOS_field_area = 2 # sq. degree
N_COSMOS_total = HSC.df.shape[0]
N_COSMOS_good = combined.shape[0]
true_dwarf_density = target.sum().values[0] / COSMOS_field_area
print("true dwarf density: {:.2f} / sq. deg.".format(true_dwarf_density))
"""
Explanation: Build Classifiers
End of explanation
"""
testing_fraction = .1
np.random.seed(0)
shuffled_indices = np.random.permutation(target.index.values)
N_testing_indices = int(testing_fraction*shuffled_indices.size)
testing_set_indices = shuffled_indices[:N_testing_indices]
training_set_indices = shuffled_indices[N_testing_indices:]
features_train = features.loc[training_set_indices]
features_test = features.loc[testing_set_indices]
target_train = target.loc[training_set_indices]
target_test = target.loc[testing_set_indices]
true_dwarf = target_test.values.flatten()
true_non_dwarf = ~target_test.values.flatten()
# def get_classification_characteristics(target_prob, threshold_prob, verbose=False):
# target_prediction = (target_prob > threshold_prob)
# prediction_dwarf = target_prediction
# prediction_non_dwarf = ~target_prediction
# completeness = (true_dwarf & prediction_dwarf).sum() / true_dwarf.sum()
# purity = (true_dwarf & prediction_dwarf).sum() / prediction_dwarf.sum()
# sample_size_reduction = prediction_dwarf.size / prediction_dwarf.sum()
# true_positives = np.sum(true_dwarf & prediction_dwarf)
# false_positives = np.sum(true_non_dwarf & prediction_dwarf)
# true_negatives = np.sum(true_non_dwarf & prediction_non_dwarf)
# false_negatives = np.sum(true_dwarf & prediction_non_dwarf)
# true_positive_rate = true_positives / true_dwarf.sum()
# false_positive_rate = false_positives / true_non_dwarf.sum()
# objects_per_sq_deg = N_COSMOS_good / COSMOS_field_area / sample_size_reduction
# if verbose:
# print("completeness: ", completeness)
# print("purity: ", purity)
# print("sample_size_reduction: ", sample_size_reduction)
# print("true positive rate: ", true_positive_rate)
# print("false positive rate: ", false_positive_rate)
# print("objects per sq deg: ", objects_per_sq_deg)
# return {
# "completeness": completeness,
# "purity": purity,
# "sample_size_reduction": sample_size_reduction,
# "threshold_prob": threshold_prob,
# "true_positive_rate": true_positive_rate,
# "false_positive_rate": false_positive_rate,
# "objects_per_sq_deg" : objects_per_sq_deg,
# }
"""
Explanation: Partition training and testing sets
End of explanation
"""
color_RF = "g"
color_LR = "b"
color_MC = "r"
label_RF = "Random Forest"
label_LR = "Logistic Regression"
label_MC = "Magnitude Cut"
linewidth = 4
"""
Explanation: Shared Plotting Parameters
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
n_folds_default = 10
def get_cross_validation_matrix(classifier, seed=0, folds=n_folds_default, only_i_mag=False):
np.random.seed(seed)
testing_fraction = 1/folds
shuffled_indices = np.random.permutation(target.index.values)
all_indices_set = set(shuffled_indices)
results = {HSC_id: []
for HSC_id in combined.loc[shuffled_indices].catalog_2_ids.drop_duplicates().values}
for fold in range(folds):
print("\rfold: {} / {}".format(fold+1, folds), end="", flush=True)
if fold == folds-1:
testing_set_indices = shuffled_indices[fold*N_testing_indices:]
else:
testing_set_indices = shuffled_indices[fold*N_testing_indices:(fold+1)*N_testing_indices]
training_set_indices = np.array(list(all_indices_set - set(testing_set_indices)))
features_train = features.loc[training_set_indices]
features_test = features.loc[testing_set_indices]
if only_i_mag:
features_train = features_train[["icmodel_mag"]]
features_test = features_test[["icmodel_mag"]]
target_train = target.loc[training_set_indices]
target_test = target.loc[testing_set_indices]
classifier.fit(features_train, target_train.values.flatten())
target_prob = classifier.predict_proba(features_test)[:,1]
for i, COSMOS_id in enumerate(testing_set_indices):
HSC_id = combined.loc[COSMOS_id].catalog_2_ids
results[HSC_id].append(target_prob[i])
return results
"""
Explanation: Cross-validation Wrapper
Note: this doesn't perfectly handle when the number of objects doesn't evenly divide by the number of folds. In practice an example might be held out multiple times, and I'll just throw those few examples away.
End of explanation
"""
classifier_i_mag = LogisticRegression(class_weight=None,
solver="lbfgs",
max_iter=300)
"""
Explanation: i-mag cut
End of explanation
"""
classifier_LR = LogisticRegression(class_weight=None,
solver="lbfgs",
max_iter=300)
"""
Explanation: Logistic Regression
End of explanation
"""
classifier_RF = RandomForestClassifier(n_estimators=1000,
n_jobs=4)
"""
Explanation: Random Forest
End of explanation
"""
hdf_file = pathlib.Path("results_cross-validated_all.hdf5")
overwrite = False
if (not hdf_file.is_file()) or overwrite:
results_RF = get_cross_validation_matrix(classifier_RF)
results_LR = get_cross_validation_matrix(classifier_LR)
results_i_mag = get_cross_validation_matrix(classifier_i_mag,
only_i_mag=True)
HSC_ids = list(sorted(results_LR.keys()))
HSC_ids = [HSC_id for HSC_id in HSC_ids
if len(results_LR[HSC_id])==1]
df_results = pd.DataFrame({
"HSC_id": HSC_ids,
"LR_prob": [results_LR[HSC_id][0] for HSC_id in HSC_ids],
"RF_prob": [results_RF[HSC_id][0] for HSC_id in HSC_ids],
"i_mag_prob": [results_i_mag[HSC_id][0] for HSC_id in HSC_ids],
"target": combined.set_index("catalog_2_ids").loc[HSC_ids].low_z_low_mass
})
df_results.to_hdf(hdf_file, key="results")
else:
df_results = pd.read_hdf(hdf_file)
df_results.head()
"""
Explanation: Save all model results
Note, I previously saved this data at ../data/galaxy_images_training/2017_09_26-dwarf_galaxy_scores.csv. That's the version I use in constructing my DNN training set. The file created below isn't likely to be significantly better/different, but it uses different random seeds and thus will have some small variations in the probabilities assigned to each galaxy.
End of explanation
"""
threshold_probs = expit(np.linspace(-9, 6))
threshold_probs = np.array([-1e-6, *threshold_probs, 1+1e-6])
def get_purities(key, df_results=df_results, threshold_probs=threshold_probs):
purities = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
purities[i] = df_tmp["target"][mask].mean()
return purities
def get_completenesses(key, df_results=df_results, threshold_probs=threshold_probs):
completenesses = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
df_tmp = df_tmp[df_tmp.target]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
completenesses[i] = mask.mean()
return completenesses
def get_selected_object_density(key, df_results=df_results, threshold_probs=threshold_probs):
"""per sq deg"""
object_density = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
object_density[i] = mask.sum()
return object_density / COSMOS_field_area
def get_FPRs(key, df_results=df_results, threshold_probs=threshold_probs):
FPRs = np.empty_like(threshold_probs)
df_tmp = df_results[[key, "target"]]
df_tmp = df_tmp[~df_tmp.target]
for i, threshold_prob in enumerate(threshold_probs):
mask = df_tmp[key] > threshold_prob
FPRs[i] = mask.mean()
return FPRs
purities_RF = get_purities("RF_prob")
completenesses_RF = get_completenesses("RF_prob")
TPR_RF = completenesses_RF
FPR_RF = get_FPRs("RF_prob")
object_density_RF = get_selected_object_density("RF_prob")
purities_LR = get_purities("LR_prob")
completenesses_LR = get_completenesses("LR_prob")
TPR_LR = completenesses_LR
FPR_LR = get_FPRs("LR_prob")
object_density_LR = get_selected_object_density("LR_prob")
purities_i_mag = get_purities("i_mag_prob")
completenesses_i_mag = get_completenesses("i_mag_prob")
TPR_i_mag = completenesses_i_mag
FPR_i_mag = get_FPRs("i_mag_prob")
object_density_i_mag = get_selected_object_density("i_mag_prob")
import sklearn
import sklearn.metrics
AUC_RF = sklearn.metrics.average_precision_score(df_results.target, df_results.RF_prob)
plt.plot(completenesses_RF, purities_RF,
marker="o", color=color_RF, label="Random Forest (AUC={:.2f})".format(AUC_RF),
linewidth=linewidth,
)
AUC_LR = sklearn.metrics.average_precision_score(df_results.target, df_results.LR_prob)
plt.plot(completenesses_LR, purities_LR,
marker="o", color=color_LR, label="Logistic Regression (AUC={:.2f})".format(AUC_LR),
linestyle="dashed",
linewidth=linewidth,
)
AUC_i_mag = sklearn.metrics.average_precision_score(df_results.target, df_results.i_mag_prob)
plt.plot(completenesses_i_mag, purities_i_mag,
marker="o", color=color_MC, label="$i$-band cut (AUC={:.2f})".format(AUC_i_mag),
linestyle="dotted",
linewidth=linewidth,
)
plt.xlabel("Completeness")
plt.ylabel("Purity")
plt.ylim(0,1)
leg = plt.legend(loc="best")
filename = "plots_for_thesis/purity-completeness-all"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
AUC_RF = sklearn.metrics.roc_auc_score(df_results.target, df_results.RF_prob)
plt.plot(FPR_RF, TPR_RF,
marker="o", color=color_RF, label="Random Forest (AUC={:.3f})".format(AUC_RF),
drawstyle="steps-post",
linewidth=linewidth,
)
AUC_LR = sklearn.metrics.roc_auc_score(df_results.target, df_results.LR_prob)
plt.plot(FPR_LR, TPR_LR,
marker="o", color=color_LR, label="Logistic Regression (AUC={:.3f})".format(AUC_LR),
linestyle="dashed",
drawstyle="steps-post",
linewidth=linewidth,
)
AUC_i_mag = sklearn.metrics.roc_auc_score(df_results.target, df_results.i_mag_prob)
plt.plot(FPR_i_mag, TPR_i_mag,
marker="o", color=color_MC, label="$i$-band cut (AUC={:.3f})".format(AUC_i_mag),
linestyle="dotted",
drawstyle="steps-post",
linewidth=linewidth,
)
plt.plot([0,1], [0,1 ], linestyle="dotted", color="k", label="Random guessing",
linewidth=linewidth,
)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
# plt.xlim(0,1)
# plt.ylim(0,1)
plt.legend(loc="best")
filename = "plots_for_thesis/ROC-all"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
f, (ax1, ax2) = plt.subplots(2, sharex=True)
f.subplots_adjust(hspace=0.1)
ax1.plot(object_density_RF, purities_RF,
marker="o", color=color_RF, label=label_RF,
linewidth=linewidth,
)
ax1.axvline(1e3,
color="black", linestyle="dashed", label="DNN Training Set Density")
ax1.set_ylabel("Purity")
ax1.set_xscale("log")
ax1.set_ylim(0,1)
ax2.plot(object_density_RF, completenesses_RF,
marker="o", color=color_RF, label=label_RF,
linewidth=linewidth,
)
ax2.axvline(1e3,
color="black", linestyle="dashed", label="DNN Training Set Density",
linewidth=linewidth,
)
ax2.set_xlabel("Number of Selected Objects per sq. deg.")
ax2.set_ylabel("Completeness")
ax2.set_xscale("log")
ax2.set_ylim(0,1)
ax2.legend(loc="best")
plt.tight_layout()
filename = "plots_for_thesis/purity-completeness-RF"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
theoretical_probs=np.linspace(0,1,num=11)
empirical_probs_RF = np.empty(theoretical_probs.size-1)
num_in_bin_RF = np.empty_like(empirical_probs_RF)
for i in range(theoretical_probs.size-1):
prob_lim_low = theoretical_probs[i]
prob_lim_high = theoretical_probs[i+1]
mask_RF = (df_results["RF_prob"] >= prob_lim_low) & (df_results["RF_prob"] < prob_lim_high)
empirical_probs_RF[i] = df_results["target"][mask_RF].mean()
num_in_bin_RF[i] = df_results["target"][mask_RF].size
f, (ax1, ax2) = plt.subplots(2, sharex=True,
gridspec_kw = {'height_ratios':[1, 3]},
)
ax1.plot(theoretical_probs, [num_in_bin_RF[0], *num_in_bin_RF],
drawstyle="steps", color=color_RF,
linewidth=linewidth,
)
ax1.set_yscale("log")
ax1.set_ylim(bottom=10**-.5, top=10**6.5)
ax1.yaxis.set_ticks([1e0, 1e3, 1e6])
ax1.set_ylabel("Number of \nGalaxies in Bin")
ax2.step(theoretical_probs, [empirical_probs_RF[0], *empirical_probs_RF],
linestyle="steps", color=color_RF, label=label_RF,
linewidth=linewidth,
)
ax2.fill_between(theoretical_probs, theoretical_probs-theoretical_probs[1], theoretical_probs,
step="pre", color="black", label="ideal", alpha=.2,
linewidth=linewidth,
)
plt.xlabel("Reported Probability")
plt.ylabel("Actual (Binned) Probability")
plt.legend(loc="best")
plt.xlim(0,1)
plt.ylim(0,1)
plt.tight_layout()
filename = "plots_for_thesis/probability-calibration-RF"
plt.tight_layout()
plt.savefig(filename + ".pdf")
plt.savefig(filename + ".png")
sklearn.metrics.log_loss(df_results.target, df_results.RF_prob)
# Soften the RF probabilities by blending in two pseudo-observations at 50/50 class balance
# (Laplace-style smoothing); this keeps the log-loss finite when every tree votes the same way.
n_trees = classifier_RF.n_estimators
n_pseudo_obs = 2
pseudo_obs_class_balance = 0.5
df_results["RF_prob_softened"] = (df_results["RF_prob"] * n_trees + n_pseudo_obs * pseudo_obs_class_balance) \
/ (n_trees + n_pseudo_obs)
sklearn.metrics.log_loss(df_results.target, df_results.RF_prob_softened)
sklearn.metrics.log_loss(df_results.target, df_results.LR_prob)
sklearn.metrics.log_loss(df_results.target, df_results.i_mag_prob)
"""
Explanation: Get Metrics from cross-validated results
End of explanation
"""
plt.hist(df_results["RF_prob"], bins=np.linspace(0,1), alpha=.5, color=color_RF, label=label_RF)
plt.hist(df_results["LR_prob"], bins=np.linspace(0,1), alpha=.5, color=color_LR, label=label_LR)
plt.yscale("log")
plt.xlabel("p(dwarf | model)")
plt.ylabel("Number of objects")
plt.legend(loc="best")
plt.hist(1-df_results["RF_prob"],
cumulative=True, alpha=0.9,
label="RF",
color=color_RF)
plt.hist(1-df_results["LR_prob"],
cumulative=True,
label="LR",
color=color_LR)
plt.ylim(bottom=1e-5)
plt.yscale("log")
plt.legend(loc="best")
plt.xlabel("1 - prob(dwarf)")
plt.ylabel("CDF")
"""
Explanation: Predicted Probabilities
End of explanation
"""
|
nwfpug/python-primer
|
notebooks/05-looping.ipynb
|
gpl-3.0
|
for num in range(10,20): #to iterate over the numbers 10 to 19
   for i in range(2,num): #to iterate over the potential factors of the number
      if num%i == 0: #to detect the first factor
         j=num//i #to calculate the second factor
         print('%d equals %d * %d' % (num,i,j))
         break #to move on to the next number in the outer for-loop
   else: # else part of the for-loop: runs only if no factor was found
      print(num, 'is a prime number')
i = 2
while(i < 100):
j = 2
while(j <= (i/j)):
if not(i%j):
break
j = j + 1
if (j > i/j) :
print i, " is prime"
i = i + 1
print "Good bye!"
count = 0
while (count < 9):
   print('The count is:', count)
   count = count + 1
print("Good bye!")
count = 0
while count < 5:
   print(count, " is less than 5")
   count = count + 1
else:
   print(count, " is not less than 5")
for i in range(2):
print(i)
else:
print('completed for-loop')
for i in range(2):
print(i)
break
else:
print('completed for-loop')
i = 0
while i < 2:
print(i)
i += 1
else:
print('in else')
i = 0
while i < 2:
print(i)
i += 1
break
else:
print('completed while-loop')
# don't run this code
# flag = 10
#while flag: print('Given flag is really true!')
#print("Good bye!")
"""
Explanation: Python supports an else clause associated with loop statements:
- If the else clause is used with a for loop, it is executed once the loop has exhausted the iterable, provided the loop was not ended by a break.
- If the else clause is used with a while loop, it is executed when the condition becomes false (again, only if the loop did not exit via break).
End of explanation
"""
a_list=[0,1,2,3,4,5]
try:
print('first element:', a_list[0])
except IndexError:
print('raised IndexError')
else:
print('no error in try-block')
try:
print('third element:', a_list[2])
except IndexError:
print('raised IndexError')
else:
print('no error in try-block')
"""
Explanation: Exception handling with lists
End of explanation
"""
|
ini-python-course/ss15
|
notebooks/List Comprehensions.ipynb
|
mit
|
V = [2**i for i in range(13)]
print(V)
S = set([x**2 for x in range(10)])
print(S)
M = set([x for x in S if x % 2 == 0])
print(M)
"""
Explanation: List comprehensions
In Python there is a special way to initialize lists (and dictionaries) called list comprehensions. For many lists that we are going to create, list comprehensions are the recommended way to do so without using loops (remember that loops are slow in Python).
Another reason to use list comprehensions is that they can be elegant and resemble the way we would define sets in mathematical notation. Consider for instance the examples from this tutorial:
$V = (1, 2, 4, 8, \dots, 2^{12})$
$S = \{x^2 : x \in \{0 \dots 9\}\}$
$M = \{x \mid x \in S \textrm{ and } x \textrm{ even}\}$
We can initialize $V$, $S$ and $M$ easily with list comprehensions:
End of explanation
"""
A = [i for i in range(1, 20) if i%2 == 0]
B = [i for i in range(1, 20) if i%2 == 1]
print(A)
print(B)
"""
Explanation: Complex example
In the introduction notebook there was an exercise that we tend to solve with loops and if-else statements as in other languages. The more pythonic way however would be with list comprehensions.
Create a list A with the even and a list B with the odd numbers between 0 and 20 (both exclusive)
End of explanation
"""
def is_divider_of_a(b):
return len([a for a in A if a%b==0]) > 0
print(is_divider_of_a(3))
"""
Explanation: Print all numbers of list B that divide a number in A
For a given number b we can check whether it divides at least one number in A like this:
End of explanation
"""
print([b for b in B if is_divider_of_a(b)])
"""
Explanation: This way the task above can be formulated as a list comprehension like this:
End of explanation
"""
import numpy as np
# your code here
# our solution
from solutions import *
decrypt_solution(solution_list_comprehensions, passphrase='foo')
"""
Explanation: Exercise
Project Euler is a website with many small mathematical challenges. Solve the first one with a list comprehension and np.sum():
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
If you like, you can make an account on their website and submit the solution.
End of explanation
"""
|
lknelson/text-analysis-2017
|
03-Pandas_and_DTM/01-DTM_DistinctiveWords.ipynb
|
bsd-3-clause
|
import pandas
#create a dataframe called "df"
df = pandas.read_csv("../Data/BDHSI2016_music_reviews.csv", sep = '\t', encoding = 'utf-8')
#view the dataframe
#The column "body" contains our text of interest.
df
#print the first review from the column 'body'
df.loc[0,'body']
"""
Explanation: The Document Term Matrix and Finding Distinctive Words
We have been dealing with texts as strings, or as lists of strings. Another way to represent text which opens up a variety of other possibilities for analysis is the Document Term Matrix (DTM).
The best Python library for this, along with the subsequent analyses we can perform on a DTM, is scikit-learn. It's a powerful library, and one you will continually return to as you advance in text analysis (and it looks great on your CV!). At its core, this library allows us to implement a variety of machine learning algorithms on our text.
Because scikit-learn is such a large and powerful library, the goal today is not to become experts, but instead to learn the basic functions in the library and gain an intuition about how you might use it to do text analysis. To give an overview, here are some of the things you can do using scikit-learn:
* word weighting
* feature extraction
* text classification / supervised machine learning
* L2 regression
* classification algorithms such as nearest neighbors, SVM, and random forest
* clustering / unsupervised machine learning
* k-means
* pca
* cosine similarity
* LDA
Today, we'll start with the Document Term Matrix (DTM). The DTM is the bread and butter of computational text analysis techniques, both simple and more sophisticated methods. In sum, the DTM vectorizes our text which allows us to do matrix manipulation on it. We'll see further uses of the DTM in future tutorials.
In this lesson we will use Python's scikit-learn package learn to make a document term matrix from a .csv Music Reviews dataset (collected from MetaCritic.com). We will visualize the DTM in a pandas dataframe. We will then use the DTM and a word weighting technique called tf-idf (term frequency inverse document frequency) to identify important and discriminating words within this dataset. The illustrating question: what words distinguish reviews of Rap albums, Indie Rock albums, and Jazz albums? Finally, we'll use the DTM to implement a difference of proportions calculation on two novels in our data folder.
Learning Goals
Understand the DTM and why it's important to text analysis
Learn how to create a DTM from a .csv file
Learn basic functionality of Python's package scikit-learn
Understand tf-idf scores, and word scores in general
Learn a simple way to identify distinctive words
Outline
<ol start="0">
<li>[The Pandas Dataframe: Music Reviews](#df)</li>
<li>[Explore the Data using Pandas](#stats)</li>
-Basic descriptive statistics
<li>[Creating the DTM: scikit-learn](#dtm)</li>
-CountVectorizer function
<li>[What can we do with a DTM?](#counting)</li>
<li>[Tf-idf scores](#tfidf)</li>
-TfidfVectorizer function
<li>[Identifying Distinctive Words 1](#distinct)</li>
-Application: Identify distinctive words by genre
</ol>
Key Terms
Document Term Matrix:
a mathematical matrix that describes the frequency of terms that occur in a collection of documents. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms.
TF-IDF Scores:
short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus.
Further Resources
This blog post goes through finding distinctive words using Python in more detail.
Paper: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict, Burt Monroe, Michael Colaresi, Kevin Quinn
<a id='df'></a>
0. The Pandas Dataframe: Music Reviews
First, we read a music reviews corpus, which is stored as a .csv file on our hard drive, into a Pandas dataframe. These data were collected from MetaCritic.com, and include all user reviews of albums from the start of the website through 2014.
End of explanation
"""
#Write your code here
"""
Explanation: <a id='stats'></a>
1. Explore the Data using Pandas
You folks are experts at this now. Write Python code using pandas to do the following exploration of the data:
What different genres are in the data?
Who are the reviewers?
Who are the artists?
What is the average score given?
What is the average score by genre? What is the genre with the highest average score?
End of explanation
"""
df['body'] = df['body'].apply(lambda x: ''.join([i for i in x if not i.isdigit()]))
"""
Explanation: <a id='dtm'></a>
2. Creating the DTM: scikit-learn
Ok, that's the summary of the metadata. Next, we turn to analyzing the text of the reviews. Remember, the text is stored in the 'body' column. First, a preprocessing step to remove numbers. To do this we will use a lambda function.
End of explanation
"""
#import the function CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
#Create our CountVectorizer object
countvec = CountVectorizer()
sklearn_dtm = countvec.fit_transform(df.body)
print(sklearn_dtm)
"""
Explanation: Our next step is to turn the text into a document term matrix using the scikit-learn function called CountVectorizer. There are two ways to do this. We can turn it into a sparse matrix type, which can be used within scikit-learn for further analyses. We do this by "fitting" the text to our CountVectorizer object, which calculates the full vocabulary in our corpus, and then "transforming" our text by counting the number of times each word occurs in each document. We combine these two steps by calling the fit_transform() function.
Let's first look at the documentation for CountVectorizer.
End of explanation
"""
print(countvec.get_feature_names()[:10])
"""
Explanation: How do we know what word corresponds to each number? We can access the words themselves through the CountVectorizer function get_feature_names().
End of explanation
"""
#we do the same as we did above, but convert it into a Pandas dataframe. Note this takes quite a bit more memory, so it will not be good for bigger data.
#don't understand this code? we'll go through it, but don't worry about understanding it.
dtm_df = pandas.DataFrame(countvec.fit_transform(df.body).toarray(), columns=countvec.get_feature_names(), index = df.index)
#view the dtm dataframe
dtm_df
"""
Explanation: This format is called Compressed Sparse Format. It saves a lot of memory to store the dtm in this format, but it is difficult for a human to read. To illustrate the techniques in this lesson we will first convert this matrix back to a Pandas dataframe, a format we're more familiar with.
Note: This is a case of do as I say, not as I do. As we continue we will rarely transform a DTM into a Pandas dataframe, because of memory issues. I'm doing it today so we can understand the intuition behind the DTM, word scores, and distinctive words.
End of explanation
"""
print(dtm_df.sum().sort_values(ascending=False))
##Ex: print the average number of times each word is used in a review
#Print this out sorted from highest to lowest.
"""
Explanation: <a id='counting'></a>
3. What can we do with a DTM?
We can do a number of calculations using a DTM. For a toy example, we can quickly identify the most frequent words (compare this to how many steps it took using NLTK).
End of explanation
"""
#import the function
from sklearn.feature_extraction.text import TfidfVectorizer
tfidfvec = TfidfVectorizer()
#create the dtm, but with cells weighted by the tf-idf score.
dtm_tfidf_df = pandas.DataFrame(tfidfvec.fit_transform(df.body).toarray(), columns=tfidfvec.get_feature_names(), index = df.index)
#view results
dtm_tfidf_df
"""
Explanation: Question: What does this tell us about our data?
What else does the DTM enable? Because it is in the format of a matrix, we can perform any matrix algebra or vector manipulation on it, which enables some pretty exciting things (think vector space and Euclidean geometry). But what do we lose when we represent text in this format?
Today, we will use variations on the DTM to find distinctive words in this dataset.
<a id='tfidf'></a>
4. Tf-idf scores
How to find content words in a corpus is a long-standing question in text analysis. We have seen a few ways of doing this: removing stop words and identifying and counting only nouns, verbs, and adjectives. Today, we'll learn one more simple approach to this: word scores. The idea behind word scores is to weight words not just by their frequency, but by their frequency in one document compared to their distribution across all documents. Words that are frequent, but are also used in every single document, will not be indicative of the content of that document. We want to instead identify frequent words that are unevenly distributed across the corpus.
One of the most popular ways to weight words (beyond frequency counts) is tf-idf scores. Offsetting the frequency of a word by its document frequency (the number of documents in which it appears) will in theory filter out common terms such as 'the', 'of', and 'and'; that is, what we have been calling stop words.
More precisely, the inverse document frequency is calculated as such:
number_of_documents / number_documents_with_term
so:
tfidf_word1 = word1_frequency_document1 * (number_of_documents / number_document_with_word1)
You can, and often should, normalize the numerator:
tfidf_word1 = (word1_frequency_document1 / word_count_document1) * (number_of_documents / number_document_with_word1)
We can calculate this manually, but scikit-learn has a built-in function to do so. We'll use it, but a challenge for you: use Pandas to calculate this manually.
To do so, we simply do the same thing we did above with CountVectorizer, but instead we use the function TfidfVectorizer.
End of explanation
"""
print(dtm_tfidf_df.max().sort_values(ascending=False)[:20])
"""
Explanation: Let's look at the 20 words with highest tf-idf weights.
End of explanation
"""
#Copy our tfidf df to a new df to add genre
dtm_tfidf_df_genre = dtm_tfidf_df.copy()
#add a 'GENRE' column to our tfidf df
dtm_tfidf_df_genre['GENRE'] = df['genre']
#Question: Why is 'GENRE' in caps?
dtm_tfidf_df_genre
"""
Explanation: Ok! We have successfully identified content words, without removing stop words and without part-of-speech tagging. What else do you notice about this list?
<a id='distinct'></a>
5. Identifying Distinctive Words
What can we do with this? These scores are best used when you want to identify distinctive words for individual documents, or groups of documents, compared to other groups or the corpus as a whole. To illustrate this, let's compare three genres and identify the most distinctive words by genre.
First we add the genre of the document into our dtm weighted by tf-idf scores, and then compare genres.
End of explanation
"""
#pull out the reviews for three genres, Rap, Alternative/Indie Rock, and Jazz
dtm_rap = dtm_tfidf_df_genre[dtm_tfidf_df_genre['GENRE']=="Rap"]
dtm_indie = dtm_tfidf_df_genre[dtm_tfidf_df_genre['GENRE']=="Alternative/Indie Rock"]
dtm_jazz = dtm_tfidf_df_genre[dtm_tfidf_df_genre['GENRE']=="Jazz"]
#print the words with the highest tf-idf scores for each genre
print("Rap Words")
print(dtm_rap.max(numeric_only=True).sort_values(ascending=False)[0:20])
print()
print("Indie Words")
print(dtm_indie.max(numeric_only=True).sort_values(ascending=False)[0:20])
print()
print("Jazz Words")
print(dtm_jazz.max(numeric_only=True).sort_values(ascending=False)[0:20])
"""
Explanation: Now lets compare the words with the highest tf-idf weight for each genre.
Note: there are other ways to do this. Challenge: what is a different approach to identifying rows from a certain genre in our dtm?
End of explanation
"""
##Write your code here
"""
Explanation: There we go! A method of identifying content words, and distinctive words based on groups of texts. You notice there are some proper nouns in there. How might we remove those if we're not interested in them?
Ex: Compare the distinctive words for two artists in the data
Note: the artists should have a number of reviews, so check your frequency counts to identify artists.
HINT: Copy and paste the above code and modify it as needed.
End of explanation
"""
|
ihmeuw/dismod_mr
|
examples/cross_walks.ipynb
|
agpl-3.0
|
import numpy as np, pandas as pd, dismod_mr
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
# set a random seed to ensure reproducible simulation results
np.random.seed(123456)
# simulate data
n = 20
data = dict(age=np.random.randint(0, 10, size=n)*10,
year=np.random.randint(1990, 2010, size=n))
data = pd.DataFrame(data)
# in the simulation, some data will be measured, and some will be self-reported
# the self-reported data values will be shifted by N(-.25, .1) in log-space
data['self_report'] = np.random.randint(2, size=n)
data['beta_self_report'] = np.random.normal(-.25, .1, size=n)
data['true_value'] = (.1 + .001 * data.age) + np.random.normal(0., .01, size=n)
data['self_report_value'] = data.true_value * np.exp(data.beta_self_report)
# let's see the difference between the (simulated) true value and self-report value
sns.lmplot('true_value', 'self_report_value', data, aspect=1.5);
# to really see what is going on, it is better to look at residuals of some sort
data['log_residual'] = np.log(data.true_value) - np.log(data.self_report_value)
sns.residplot('true_value', 'log_residual', data, lowess=True);
data['data_type'] = 'p'
data['age_start'] = data.age
data['age_end'] = data.age+10
# for prettier display, include jittered age near midpoint of age interval
data['jittered_age'] = .5*(data.age_start + data.age_end) + np.random.normal(size=n)
# keep things simple, no spatial random effects, no sex effect
data['area'] = 'all'
data['sex'] = 'total'
# quantification of uncertainty that says these numbers are believed to be quite precise
data['standard_error'] = -99
data['upper_ci'] = np.nan
data['lower_ci'] = np.nan
data['effective_sample_size'] = 1.e8
def new_model(data):
# build the dismod_mr model
dm = dismod_mr.data.ModelData()
# set simple model parameters, for decent, fast computation
dm.set_knots('p', [0,100])
dm.set_level_bounds('p', lower=0, upper=1)
dm.set_level_value('p', age_before=0, age_after=100, value=0)
dm.set_heterogeneity('p', value='Slightly')
dm.set_effect_prior('p', cov='x_sex', value=dict(dist='Constant', mu=0))
# copy data into model
dm.input_data = data.copy()
return dm
"""
Explanation: Cross-walking and DisMod-MR
This notebook provides some examples of cross-walking in descriptive epidemiological meta-regression. In this context, cross-walking refers to mapping between two different, but related, quantities that have been measured. For example, it is more accurate to directly measure blood pressure to test for hypertension, but it is cheaper to simply ask respondents if they have ever been told by a medical professional that they have high blood pressure.
It is possible to cross-walk inside DisMod-MR, but if sufficient information is available, cross-walking outside of DisMod-MR can provide more flexibility to you, the analyst.
This notebook uses simulated data to compare simple approaches for cross-walking inside and outside of DisMod-MR.
End of explanation
"""
t = pd.melt(data, id_vars=['jittered_age'], value_vars=['true_value', 'self_report_value'])
sns.lmplot('jittered_age', 'value', t, hue='variable', aspect=1.5);
"""
Explanation: Here is what things would look like if we had all measured values (true_value) or all self-reported values (self_report_value):
End of explanation
"""
plt.figure(figsize=(9,6))
data['value'] = data.true_value
dismod_mr.plot.data_bars(data[data.self_report==0], color=dismod_mr.plot.colors[0], label='Measured')
data['value'] = data.self_report_value
dismod_mr.plot.data_bars(data[data.self_report==1], color=dismod_mr.plot.colors[1], label='Self-report')
plt.legend(loc='upper left');
"""
Explanation: But often the dataset we find has no overlap, like this:
End of explanation
"""
data['value'] = data.true_value
dm = new_model(data)
dm.setup_model('p', rate_model='neg_binom')
%time dm.fit(how='mcmc', iter=2000, burn=1000, thin=1)
dm.plot()
"""
Explanation: If we had access to the true value, we would just fit that, and get nice results.
End of explanation
"""
# simulate 20% portion of data with both measured and self-reported values
data['measured_both'] = np.random.randint(5, size=n) == 0
log_diff = np.log(data.true_value[data.measured_both]) - np.log(data.self_report_value[data.measured_both])
sns.distplot(log_diff, rug=True);
# use exp of mean of log(difference) to cross-walk self-reported data before building DisMod-MR model
data['cross_walked'] = data.self_report_value * np.exp(log_diff.mean())
data['value'] = np.where(data.self_report==1, data.cross_walked, data.true_value)
dm = new_model(data)
dm.setup_model('p', rate_model='neg_binom')
%time dm.fit(how='mcmc', iter=2000, burn=1000, thin=1)
dm.plot()
"""
Explanation: If we have both the directly measured value and the self-reported value for many data points, we can do a cross-walk outside of DisMod-MR and adjust all of the data that has only self-reported measurements before building the DisMod-MR model:
End of explanation
"""
data['value'] = np.where(data.self_report==1, data.self_report_value, data.true_value)
data['x_self_report'] = data.self_report
dm = new_model(data)
dm.set_effect_prior('p', 'x_self_report',
dict(dist='Constant', mu=-log_diff.mean()))
dm.setup_model('p', rate_model='neg_binom')
%time dm.fit(how='mcmc', iter=2000, burn=1000, thin=1)
dm.plot()
"""
Explanation: With this simple model of a cross-walk, it would be equivalent to include the mean of the log-differences as an informative prior on the self-report effect coefficient, which can be more convenient than shifting the data. Note the sign of the effect prior:
End of explanation
"""
data['value'] = np.where(data.self_report==1, data.self_report_value, data.true_value)
data['x_self_report'] = data.self_report
dm = new_model(data)
dm.setup_model('p', rate_model='neg_binom')
%time dm.fit(how='mcmc', iter=2000, burn=1000, thin=1)
dm.plot()
data['value'] = data.true_value
dismod_mr.plot.data_bars(data[data.x_self_report==0], color=dismod_mr.plot.colors[0], label='Measured')
data['value'] = data.self_report_value
dismod_mr.plot.data_bars(data[data.x_self_report==1], color=dismod_mr.plot.colors[1], label='Self-report')
plt.legend(loc='upper left');
sns.distplot(dm.vars['p']['beta'][0].trace())
"""
Explanation: Doing the cross-walk completely outside of DisMod-MR and adjusting the data before building the model does, however, allow for more complicated cross-walk modeling, such as different cross-walk factors based on age or other relevant demographic variables; a rough sketch of this idea appears below. (Of course, this requires more cross-walk-able data, and is a topic that we will not explore further here.)
If we have little or no studies that have reported both outcomes, we can cross-walk with DisMod-MR fixed effects:
End of explanation
"""
data['value'] = np.where(data.self_report==1, data.self_report_value, data.true_value)
dm = new_model(data)
dm.set_effect_prior('p', 'x_self_report',
dict(dist='TruncatedNormal', mu=-log_diff.mean(), sigma=log_diff.std(), lower=-.25, upper=-.2))
dm.setup_model('p', rate_model='neg_binom')
%time dm.fit(how='mcmc', iter=2000, burn=1000, thin=1)
dm.plot()
data['value'] = data.true_value
dismod_mr.plot.data_bars(data[data.x_self_report==0], color=dismod_mr.plot.colors[0], label='Measured')
data['value'] = data.self_report_value
dismod_mr.plot.data_bars(data[data.x_self_report==1], color=dismod_mr.plot.colors[1], label='Self-report')
plt.legend(loc='upper left');
sns.distplot(dm.vars['p']['beta'][0].trace())
!date
"""
Explanation: It is sometimes helpful to include informative priors on the effect coefficients, for example if it is known that the self-report outcome is always less than the direct measurement.
DisMod-MR has three distributions that may be used as an informative prior on an effect coefficient.
This can also be used to combine a cross-walk outside of DisMod-MR with a tune-up inside of the model:
End of explanation
"""
|
lmcinnes/hdbscan
|
notebooks/How HDBSCAN Works.ipynb
|
bsd-3-clause
|
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.datasets as data
%matplotlib inline
sns.set_context('poster')
sns.set_style('white')
sns.set_color_codes()
plot_kwds = {'alpha' : 0.5, 's' : 80, 'linewidths':0}
"""
Explanation: How HDBSCAN Works
HDBSCAN is a clustering algorithm developed by Campello, Moulavi, and Sander. It extends DBSCAN by converting it into a hierarchical clustering algorithm, and then using a technique to extract a flat clustering based on the stability of clusters. The goal of this notebook is to give you an overview of how the algorithm works and the motivations behind it. In contrast to the HDBSCAN paper I'm going to describe it without reference to DBSCAN. Instead I'm going to explain how I like to think about the algorithm, which aligns more closely with Robust Single Linkage with flat cluster extraction on top of it.
Before we get started we'll load up most of the libraries we'll need in the background, and set up our plotting (because I believe the best way to understand what is going on is to actually see it working in pictures).
End of explanation
"""
moons, _ = data.make_moons(n_samples=50, noise=0.05)
blobs, _ = data.make_blobs(n_samples=50, centers=[(-0.75,2.25), (1.0, 2.0)], cluster_std=0.25)
test_data = np.vstack([moons, blobs])
plt.scatter(test_data.T[0], test_data.T[1], color='b', **plot_kwds)
"""
Explanation: The next thing we'll need is some data. To make for an illustrative example we'll need the data size to be fairly small so we can see what is going on. It will also be useful to have several clusters, preferably of different kinds. Fortunately sklearn has facilities for generating sample clustering data so I'll make use of that and make a dataset of one hundred data points.
End of explanation
"""
import hdbscan
clusterer = hdbscan.HDBSCAN(min_cluster_size=5, gen_min_span_tree=True)
clusterer.fit(test_data)
"""
Explanation: Now, the best way to explain HDBSCAN is actually just to use it and then go through the steps that occurred along the way, teasing out what is happening at each step. So let's load up the hdbscan library and get to work.
End of explanation
"""
clusterer.minimum_spanning_tree_.plot(edge_cmap='viridis',
edge_alpha=0.6,
node_size=80,
edge_linewidth=2)
"""
Explanation: So now that we have clustered the data -- what actually happened? We can break it out into a series of steps
Transform the space according to the density/sparsity.
Build the minimum spanning tree of the distance weighted graph.
Construct a cluster hierarchy of connected components.
Condense the cluster hierarchy based on minimum cluster size.
Extract the stable clusters from the condensed tree.
Transform the space
To find clusters we want to find the islands of higher density amid a sea of sparser noise -- and the assumption of noise is important: real data is messy and has outliers, corrupt data, and noise. The core of the clustering algorithm is single linkage clustering, and it can be quite sensitive to noise: a single noise data point in the wrong place can act as a bridge between islands, gluing them together. Obviously we want our algorithm to be robust against noise so we need to find a way to help 'lower the sea level' before running a single linkage algorithm.
How can we characterize 'sea' and 'land' without doing a clustering? As long as we can get an estimate of density we can consider lower density points as the 'sea'. The goal here is not to perfectly distinguish 'sea' from 'land' -- this is an initial step in clustering, not the output -- just to make our clustering core a little more robust to noise. So given an identification of 'sea' we want to lower the sea level. For practical purposes that means making 'sea' points more distant from each other and from the 'land'.
That's just the intuition however. How does it work in practice? We need a very inexpensive estimate of density, and the simplest is the distance to the kth nearest neighbor. If we have the distance matrix for our data (which we will need imminently anyway) we can simply read that off; alternatively if our metric is supported (and dimension is low) this is the sort of query that kd-trees are good for. Let's formalise this and (following the DBSCAN, LOF, and HDBSCAN literature) call it the core distance defined for parameter k for a point x and denote as $\mathrm{core}_k(x)$. Now we need a way to spread apart points with low density (correspondingly high core distance). The simple way to do this is to define a new distance metric between points which we will call (again following the literature) the mutual reachability distance. We define mutual reachability distance as follows:
<center>$d_{\mathrm{mreach-}k}(a,b) = \max \{\mathrm{core}_k(a), \mathrm{core}_k(b), d(a,b) \}$</center>
where $d(a,b)$ is the original metric distance between a and b. Under this metric dense points (with low core distance) remain the same distance from each other but sparser points are pushed away to be at least their core distance away from any other point. This effectively 'lowers the sea level' spreading sparse 'sea' points out, while leaving 'land' untouched. The caveat here is that obviously this is dependent upon the choice of k; larger k values interpret more points as being in the 'sea'. All of this is a little easier to understand with a picture, so let's use a k value of five. Then for a given point we can draw a circle for the core distance as the circle that touches the sixth nearest neighbor (counting the point itself), like so:
<img src="distance1.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480>
Pick another point and we can do the same thing, this time with a different set of neighbors (one of them even being the first point we picked out).
<img src="distance2.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480>
And we can do that a third time for good measure, with another set of six nearest neighbors and another circle with slightly different radius again.
<img src="distance3.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480>
Now if we want to know the mutual reachability distance between the blue and green points we can start by drawing in an arrow giving the distance between green and blue:
<img src="distance4.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480>
This passes through the blue circle, but not the green circle -- the core distance for green is larger than the distance between blue and green. Thus we need to mark the mutual reachability distance between blue and green as larger -- equal to the radius of the green circle (easiest to picture if we base one end at the green point).
<img src="distance4a.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480>
On the other hand the mutual reachability distance from red to green is simply the distance from red to green since that distance is greater than either core distance (i.e. the distance arrow passes through both circles).
<img src="distance5.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480>
In general there is underlying theory to demonstrate that mutual reachability distance as a transform works well in allowing single linkage clustering to more closely approximate the hierarchy of level sets of whatever true density distribution our points were sampled from.
Build the minimum spanning tree
Now that we have a new mutual reachability metric on the data we want start finding the islands on dense data. Of course dense areas are relative, and different islands may have different densities. Conceptually what we will do is the following: consider the data as a weighted graph with the data points as vertices and an edge between any two points with weight equal to the mutual reachability distance of those points.
Now consider a threshold value, starting high, and steadily being lowered. Drop any edges with weight above that threshold. As we drop edges we will start to disconnect the graph into connected components. Eventually we will have a hierarchy of connected components (from completely connected to completely disconnected) at varying threshold levels.
In practice this is very expensive: there are $n^2$ edges and we don't want to have to run a connected components algorithm that many times. The right thing to do is to find a minimal set of edges such that dropping any edge from the set causes a disconnection of components. But we need more, we need this set to be such that there is no lower weight edge that could connect the components. Fortunately graph theory furnishes us with just such a thing: the minimum spanning tree of the graph.
We can build the minimum spanning tree very efficiently via Prim's algorithm -- we build the tree one edge at a time, always adding the lowest weight edge that connects the current tree to a vertex not yet in the tree. You can see the tree HDBSCAN constructed below; note that this is the minimum spanning tree for mutual reachability distance which is different from the pure distance in the graph. In this case we had a k value of 5.
In the case that the data lives in a metric space we can use even faster methods, such as Dual Tree Boruvka to build the minimal spanning tree.
End of explanation
"""
clusterer.single_linkage_tree_.plot(cmap='viridis', colorbar=True)
"""
Explanation: Build the cluster hierarchy
Given the minimal spanning tree, the next step is to convert that into the hierarchy of connected components. This is most easily done in the reverse order: sort the edges of the tree by distance (in increasing order) and then iterate through, creating a new merged cluster for each edge. The only difficult part here is to identify the two clusters each edge will join together, but this is easy enough via a union-find data structure. We can view the result as a dendrogram as we see below:
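For reference, the union-find structure mentioned above boils down to something like this minimal sketch (a real implementation would typically also track cluster sizes and assign fresh labels to merged clusters):
```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Path halving: point visited nodes closer to the root as we search
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
        return ra
```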
End of explanation
"""
clusterer.condensed_tree_.plot()
"""
Explanation: This brings us to the point where robust single linkage stops. We want more though; a cluster hierarchy is good, but we really want a set of flat clusters. We could do that by drawing a horizontal line through the above diagram and selecting the clusters that it cuts through. This is in practice what DBSCAN effectively does (declaring any singleton clusters at the cut level as noise). The question is, how do we know where to draw that line? DBSCAN simply leaves that as a (very unintuitive) parameter. Worse, we really want to deal with variable density clusters and any choice of cut line is a choice of mutual reachability distance to cut at, and hence a single fixed density level. Ideally we want to be able to cut the tree at different places to select our clusters. This is where the next steps of HDBSCAN begin and create the difference from robust single linkage.
Condense the cluster tree
The first step in cluster extraction is condensing down the large and complicated cluster hierarchy into a smaller tree with a little more data attached to each node. As you can see in the hierarchy above it is often the case that a cluster split is one or two points splitting off from a cluster; and that is the key point -- rather than seeing it as a cluster splitting into two new clusters we want to view it as a single persistent cluster that is 'losing points'. To make this concrete we need a notion of minimum cluster size which we take as a parameter to HDBSCAN. Once we have a value for minimum cluster size we can now walk through the hierarchy and at each split ask if one of the new clusters created by the split has fewer points than the minimum cluster size. If it is the case that we have fewer points than the minimum cluster size we declare it to be 'points falling out of a cluster' and have the larger cluster retain the cluster identity of the parent, marking down which points 'fell out of the cluster' and at what distance value that happened. If on the other hand the split is into two clusters each at least as large as the minimum cluster size then we consider that a true cluster split and let that split persist in the tree. After walking through the whole hierarchy and doing this we end up with a much smaller tree with a small number of nodes, each of which has data about how the size of the cluster at that node decreases over varying distance. We can visualize this as a dendrogram similar to the one above -- again we can have the width of the line represent the number of points in the cluster. This time, however, that width varies over the length of the line as points fall out of the cluster. For our data using a minimum cluster size of 5 the result looks like this:
End of explanation
"""
clusterer.condensed_tree_.plot(select_clusters=True, selection_palette=sns.color_palette())
"""
Explanation: This is much easier to look at and deal with, particularly in as simple a clustering problem as our current test dataset. However we still need to pick out clusters to use as a flat clustering. Looking at the plot above should give you some ideas about how one might go about doing this.
Extract the clusters
Intuitively we want to choose clusters that persist and have a longer lifetime; short-lived clusters are ultimately probably merely artifacts of the single linkage approach. Looking at the previous plot we could say that we want to choose those clusters that have the greatest area of ink in the plot. To make a flat clustering we will need to add a further requirement that, if you select a cluster, then you cannot select any cluster that is a descendant of it. And in fact that intuitive notion of what should be done is exactly what HDBSCAN does. Of course we need to formalise things to make it a concrete algorithm.
First we need a different measure than distance to consider the persistence of clusters; instead we will use $\lambda = \frac{1}{\mathrm{distance}}$. For a given cluster we can then define values $\lambda_{\mathrm{birth}}$ and $\lambda_{\mathrm{death}}$ to be the lambda value when the cluster split off and became its own cluster, and the lambda value (if any) when the cluster split into smaller clusters respectively. In turn, for a given cluster, for each point p in that cluster we can define the value $\lambda_p$ as the lambda value at which that point 'fell out of the cluster', which is a value somewhere between $\lambda_{\mathrm{birth}}$ and $\lambda_{\mathrm{death}}$ since the point either falls out of the cluster at some point in the cluster's lifetime, or leaves the cluster when the cluster splits into two smaller clusters. Now, for each cluster compute the stability as
$\sum_{p \in \mathrm{cluster}} (\lambda_p - \lambda_{\mathrm{birth}})$.
Declare all leaf nodes to be selected clusters. Now work up through the tree (the reverse topological sort order). If the sum of the stabilities of the child clusters is greater than the stability of the cluster then we set the cluster stability to be the sum of the child stabilities. If, on the other hand, the cluster's stability is greater than the sum of its children then we declare the cluster to be a selected cluster, and unselect all its descendants. Once we reach the root node we call the current set of selected clusters our flat clustering and return that.
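A compact sketch of that bottom-up selection rule, assuming the condensed tree has already been summarised as a children mapping and per-cluster stability values:
```python
def select_clusters(children, stability, root):
    # children: dict cluster -> list of child clusters (empty list for leaves)
    # stability: dict cluster -> stability computed with the formula above
    def visit(node):
        if not children[node]:
            return stability[node], {node}
        child_total, child_selection = 0.0, set()
        for child in children[node]:
            s, sel = visit(child)
            child_total += s
            child_selection |= sel
        if stability[node] > child_total:
            # The parent wins: select it and unselect all of its descendants
            return stability[node], {node}
        # The children win: propagate their summed stability upwards
        return child_total, child_selection

    return visit(root)[1]
```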
Okay, that was wordy and complicated, but it really is simply performing our 'select the clusters in the plot with the largest total ink area' subject to descendant constraints that we explained earlier. We can select the clusters in the condensed tree dendrogram via this algorithm, and you get what you expect:
End of explanation
"""
palette = sns.color_palette()
cluster_colors = [sns.desaturate(palette[col], sat)
if col >= 0 else (0.5, 0.5, 0.5) for col, sat in
zip(clusterer.labels_, clusterer.probabilities_)]
plt.scatter(test_data.T[0], test_data.T[1], c=cluster_colors, **plot_kwds)
"""
Explanation: Now that we have the clusters it is a simple enough matter to turn that into cluster labelling as per the sklearn API. Any point not in a selected cluster is simply a noise point (and assigned the label -1). We can do a little more though: for each cluster we have the $\lambda_p$ for each point p in that cluster; if we simply normalize those values (so they range from zero to one) then we have a measure of the strength of cluster membership for each point in the cluster. The hdbscan library returns this as a probabilities_ attribute of the clusterer object. Thus, with labels and membership strengths in hand we can make the standard plot, choosing a color for points based on cluster label, and desaturating that color according to the strength of membership (and making unclustered points pure gray).
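Per cluster, that normalisation is essentially the following (a sketch, not the library's exact code):
```python
import numpy as np

def membership_strengths(lambda_p):
    # lambda_p: the lambda values of the points belonging to one selected cluster
    lam = np.asarray(lambda_p, dtype=float)
    return lam / lam.max() if lam.max() > 0 else np.ones_like(lam)
```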
End of explanation
"""
|
antongrin/EasyMig
|
EasyMig_v3.ipynb
|
apache-2.0
|
# -*- coding: utf-8 -*-
"""
Created on Fri Feb 12 13:21:45 2016
@author: GrinevskiyAS
"""
from __future__ import division
import numpy as np
from numpy import sin,cos,tan,pi,sqrt
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
%matplotlib inline
font = {'family': 'Arial', 'weight': 'normal', 'size':14}
mpl.rc('font', **font)
"""
Explanation: OK, to begin we need to import some standard Python modules
End of explanation
"""
#This would be the size of each grid cell (X is the spatial coordinate, T is two-way time)
xstep=10
tstep=10
#size of the whole grid
xmax = 301
tmax = 201
#that's the arrays of x and t
xarray=np.arange(0, xmax, xstep)
tarray=np.arange(0, tmax, tstep)
#now finally we create a 2D array img, which is all zeros for now; later we will add some amplitudes to it
img=np.zeros((len(xarray), len(tarray)))
"""
Explanation: First, let us set up the working area.
End of explanation
"""
plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
"""
Explanation: Let's show our all-zero image
End of explanation
"""
class Hyperbola:
def __init__(self, xarray, tarray, x0, v, t0):
        ###input parameters define a diffractor's position (x0,t0), the P-wave velocity of the homogeneous subsurface, and the x- and t-arrays to compute traveltimes on.
###
self.x=xarray
self.x0=x0
self.t0=t0
self.v=v
#compute traveltimes
self.t=sqrt(t0**2 + (2*(xarray-x0)/v)**2)
#obtain some grid parameters
xstep=xarray[1]-xarray[0]
tbegin=tarray[0]
tend=tarray[-1]
tstep=tarray[1]-tarray[0]
#delete t's and x's for samples where t exceeds maxt
self.x=self.x[ (self.t>=tbegin) & (self.t <= tend) ]
self.t=self.t[ (self.t>=tbegin) & (self.t <= tend) ]
self.imgind=((self.x-xarray[0])/xstep).astype(int)
#compute amplitudes' fading according to geometrical spreading
self.amp = 1/(self.t/self.t0)
self.grid_resample(xarray, tarray)
def grid_resample(self, xarray, tarray):
# that's a function that computes at which 'cells' of image should we place the hyperbola
tend=tarray[-1]
tstep=tarray[1]-tarray[0]
self.xind=((self.x-xarray[0])/xstep).astype(int) #X cells numbers
self.tind=np.round((self.t-tarray[0])/tstep).astype(int) #T cells numbers
self.tind=self.tind[self.tind*tstep<=tarray[-1]] #delete T's exceeding max.T
self.tgrid=tarray[self.tind] # get 'gridded' T-values
self.coord=np.vstack((self.xind,tarray[self.tind]))
def add_to_img(self, img, wavelet):
# puts the hyperbola into the right cells of image with a given wavelet
maxind=np.size(img,1)
wavlen=np.floor(len(wavelet)/2).astype(int)
self.imgind=self.imgind[self.tind < maxind-1]
self.tind=self.tind[self.tind < maxind-1]
ind_begin=self.tind-wavlen
for i,sample in enumerate(wavelet):
img[self.imgind,ind_begin+i]=img[self.imgind,ind_begin+i]+sample
return img
"""
Explanation: What we are now going to do is create a class named Hyperbola
Each object of this class is capable of computing traveltimes to a certain subsurface point (diffractor) and plotting this point response on a grid
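Concretely, for a diffractor at (x0, t0) in a medium with P-wave velocity v, the two-way traveltime recorded at surface position x follows the diffraction hyperbola t(x) = sqrt(t0^2 + (2*(x - x0)/v)^2), which is exactly what the constructor below evaluates on the x-grid.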
End of explanation
"""
Hyp_test = Hyperbola(xarray, tarray, x0 = 100, t0 = 30, v = 2)
#Create a figure and add axes to it
fgr_test1 = plt.figure(figsize=(7,5), facecolor='w')
ax_test1 = fgr_test1.add_subplot(111)
#Now plot Hyp_test's parameters: X vs T
ax_test1.plot(Hyp_test.x, Hyp_test.t, 'r', lw = 2)
#and their 'gridded' equivalents
ax_test1.plot(Hyp_test.x, Hyp_test.tgrid, ls='none', marker='o', ms=6, mfc=[0,0.5,1],mec='none')
#Some commands to add gridlines, change the directon of T axis and move x axis to top
ax_test1.set_ylim(tarray[-1],tarray[0])
ax_test1.xaxis.set_ticks_position('top')
ax_test1.grid(True, alpha = 0.1, ls='-',lw=.5)
ax_test1.set_xlabel('X, m')
ax_test1.set_ylabel('T, ms')
ax_test1.xaxis.set_label_position('top')
plt.show()
"""
Explanation: For testing purposes, let's create an object named Hyp_test and view its parameters
End of explanation
"""
|
bert9bert/statsmodels
|
examples/notebooks/statespace_arma_0.ipynb
|
bsd-3-clause
|
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.graphics.api import qqplot
"""
Explanation: Autoregressive Moving Average (ARMA): Sunspots data
This notebook replicates the existing ARMA notebook using the statsmodels.tsa.statespace.SARIMAX class rather than the statsmodels.tsa.ARMA class.
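For reference, the corresponding fits in the original ARMA-based notebook use the older interface, roughly as follows (API of older statsmodels releases that still ship tsa.ARMA; shown here only for comparison):
```python
# Classic interface replaced by SARIMAX in this notebook
arma_mod20 = sm.tsa.ARMA(dta, (2, 0)).fit(disp=False)
arma_mod30 = sm.tsa.ARMA(dta, (3, 0)).fit(disp=False)
```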
End of explanation
"""
print(sm.datasets.sunspots.NOTE)
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del dta["YEAR"]
dta.plot(figsize=(12,4));
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)
arma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)
print(arma_mod20.params)
arma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
"""
Explanation: Sunspots Data
End of explanation
"""
sm.stats.durbin_watson(arma_mod30.resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
ax = plt.plot(arma_mod30.resid)
resid = arma_mod30.resid
stats.normaltest(resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
r,q,p = sm.tsa.acf(resid, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
"""
Explanation: Does our model obey the theory? (For instance, a Durbin-Watson statistic close to 2 indicates little remaining autocorrelation in the residuals.)
End of explanation
"""
predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True)
fig, ax = plt.subplots(figsize=(12, 8))
dta.loc['1950':].plot(ax=ax)
predict_sunspots.plot(ax=ax, style='r');
def mean_forecast_err(y, yhat):
return y.sub(yhat).mean()
mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
"""
Explanation: The Ljung-Box Q statistics and their p-values in the table above indicate a lack of fit.
In-sample dynamic prediction. How well does our model do?
End of explanation
"""
|
BrentDorsey/pipeline
|
gpu.ml/notebooks/03a_Train_Model_GPU.ipynb
|
apache-2.0
|
import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
"""
Explanation: Train Model with GPU (and CPU*)
CPU is still used to store variables that we are learning (W and b). This allows the GPU to focus on compute vs. storage.
End of explanation
"""
tf.reset_default_graph()
"""
Explanation: Reset TensorFlow Graph
Useful in Jupyter Notebooks
End of explanation
"""
config = tf.ConfigProto(
log_device_placement=True,
)
config.gpu_options.allow_growth=True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
print(config)
sess = tf.Session(config=config)
print(sess)
"""
Explanation: Create TensorFlow Session
End of explanation
"""
from datetime import datetime
version = int(datetime.now().strftime("%s"))
"""
Explanation: Generate Model Version (current timestamp)
End of explanation
"""
num_samples = 100000
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
with tf.device("/gpu:0"):
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/gpu:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
"""
Explanation: Load Model Training and Test/Validation Data
End of explanation
"""
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
"""
Explanation: Randomly Initialize Variables (Weights and Bias)
The goal is to learn more accurate Weights and Bias during training.
End of explanation
"""
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_train, y_train)
"""
Explanation: View Accuracy of Pre-Training, Initial Random Variables
We want this to be close to 0, but it's relatively far away. This is why we train!
End of explanation
"""
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/test' % version,
graph=tf.get_default_graph())
"""
Explanation: Setup Loss Summary Operations for Tensorboard
End of explanation
"""
%%time
with tf.device("/gpu:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-gpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
"""
Explanation: Train Model
End of explanation
"""
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/gpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_gpu.pb' % optimize_me_parent_path
print(unoptimized_model_graph_path)
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
"""
Explanation: View Loss Summaries in Tensorboard
Navigate to the Scalars and Graphs tab at this URL:
http://[ip-address]:6006
Save Graph For Optimization
We will use this later.
End of explanation
"""
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/gpu/unoptimized_gpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/gpu/unoptimized_gpu.pb'
output_dot='/root/notebooks/unoptimized_gpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/unoptimized_gpu.dot \
-o /root/notebooks/unoptimized_gpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/unoptimized_gpu.png', width=1024, height=768)
"""
Explanation: Show Graph
End of explanation
"""
|
machinelearningnanodegree/stanford-cs231
|
solutions/vijendra/assignment1/knn.ipynb
|
mit
|
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
"""
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
"""
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
"""
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in an Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
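If you want a mental model before opening that file, a straightforward unvectorized version could look like the sketch below; note that the assignment's real method reads the training data from self.X_train stored by train, whereas here it is passed in explicitly:
```python
import numpy as np

def compute_distances_two_loops_sketch(X_test, X_train):
    # dists[i, j] = Euclidean distance between the i-th test and the j-th train example
    num_test, num_train = X_test.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i, j] = np.sqrt(np.sum((X_test[i] - X_train[j]) ** 2))
    return dists
```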
End of explanation
"""
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer:
A row or column stands out only when it is much darker or much brighter than its surroundings.
For rows: a dark row means that test image is very similar to many training images (for example, images with similar backgrounds -- aeroplanes and birds both tend to contain a lot of blue sky). A bright row means the test image is dissimilar to most training images (for example, an image whose background differs from most of the training set, or where the object itself is very small).
For columns, the same logic applies to a single training image: a dark column means that training image is similar to most test images, and a bright column means it is dissimilar to most of them.
End of explanation
"""
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
"""
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
#print dists_one
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
"""
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
"""
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train,num_folds)
y_train_folds = np.array_split(y_train,num_folds)
###############################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
##create a empty list for all k choices
for k in k_choices:
k_to_accuracies[k] = []
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
print "evaluating for value k =",k
for i in range(num_folds):
X_train_cv = np.vstack(X_train_folds[0:i]+X_train_folds[i+1:])
y_train_cv = np.hstack(y_train_folds[0:i] + y_train_folds[i+1:])
X_test_cv = X_train_folds[i]
y_test_cv = y_train_folds[i]
classifier.train(X_train_cv,y_train_cv)
dists_cv = classifier.compute_distances_no_loops(X_test_cv)
y_pred_data = classifier.predict_labels(dists_cv,k)
num_of_correct = np.sum(y_test_cv == y_pred_data)
accuracy = (float)(num_of_correct)/y_test_cv.shape[0]
k_to_accuracies[k].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation
"""
|
bouhlelma/smt
|
tutorial/SMT_MixedInteger_application.ipynb
|
bsd-3-clause
|
%matplotlib inline
from math import exp
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import norm
from scipy.optimize import minimize
import scipy
import six
from smt.applications import EGO
from smt.surrogate_models import KRG
from smt.sampling_methods import FullFactorial
from smt.sampling_methods import LHS
from sklearn import gaussian_process
from sklearn.gaussian_process.kernels import Matern, WhiteKernel, ConstantKernel
import matplotlib.font_manager
from smt.applications.mixed_integer import MixedIntegerSurrogateModel
import warnings
warnings.filterwarnings("ignore")
from smt.applications.mixed_integer import (
FLOAT,
INT,
ENUM,
MixedIntegerSamplingMethod,
cast_to_mixed_integer, unfold_with_enum_mask
)
"""
Explanation: <div class="jumbotron text-left"><b>
This tutorial describes how to perform a mixed optimization using the SMT toolbox. The idea is to use a Bayesian Optimization (EGO method) to solve an unconstrained optimization problem with mixed variables.
</b></div>
October 2020
Paul Saves and Nathalie BARTOLI (ONERA/DTIS/M2CI)
<p class="alert alert-success" style="padding:1em">
To use SMT models, please follow this link : https://github.com/SMTorg/SMT/blob/master/README.md. The documentation is available here: http://smt.readthedocs.io/en/latest/
</p>
The reference paper is available
here https://www.sciencedirect.com/science/article/pii/S0965997818309360?via%3Dihub
or as a preprint: http://mdolab.engin.umich.edu/content/python-surrogate-modeling-framework-derivatives
For mixed integer with continuous relaxation, the reference paper is available here https://www.sciencedirect.com/science/article/pii/S0925231219315619
### Mixed Integer EGO
For mixed integer EGO, the model is the continuous one. The discrete variables being relaxed continuously
End of explanation
"""
def PlotEgo(criterion, xdoe, bounds,npt,n_iter=12,xtypes=None) :
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe,xtypes=xtypes, xlimits=bounds,n_start=20,n_max_optim=35,enable_tunneling=False, surrogate=KRG(print_global=False))
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=f)
print("Minimum in x={:.0f} with f(x)={:.10f}".format(int(x_opt), float(y_opt)))
x_plot = np.atleast_2d(np.linspace(bounds[0][0], bounds[0][1], 9*(npt-1)+1)).T
fig = plt.figure(figsize=[15, 15])
for i in range(n_iter):
k = n_doe + i
x_data_k = x_data[0:k]
y_data_k = y_data[0:k]
#if check list, not already evaluated
y_data[k]=f(x_data[k][:, np.newaxis])
ego.gpr.set_training_values(x_data_k, y_data_k)
ego.gpr.train()
y_gp_plot = ego.gpr.predict_values(x_plot)
y_gp_plot_var = ego.gpr.predict_variances(x_plot)
y_ei_plot = ego.EI(x_plot,y_data_k)
ax = fig.add_subplot((n_iter + 1) // 2, 2, i + 1)
ax1 = ax.twinx()
ei, = ax1.plot(x_plot, y_ei_plot, color="red")
true_fun = ax.scatter(Xsol, Ysol,color='k',marker='d')
data, = ax.plot(
x_data_k, y_data_k, linestyle="", marker="o", color="orange"
)
if i < n_iter - 1:
opt, = ax.plot(
x_data[k], y_data[k], linestyle="", marker="*", color="r"
)
print(x_data[k], y_data[k])
gp, = ax.plot(x_plot, y_gp_plot, linestyle="--", color="g")
sig_plus = y_gp_plot + 3 * np.sqrt(y_gp_plot_var)
sig_moins = y_gp_plot - 3 * np.sqrt(y_gp_plot_var)
un_gp = ax.fill_between(
x_plot.T[0], sig_plus.T[0], sig_moins.T[0], alpha=0.3, color="g"
)
lines = [true_fun, data, gp, un_gp, opt, ei]
fig.suptitle("EGO optimization of a set of points")
fig.subplots_adjust(hspace=0.4, wspace=0.4, top=0.8)
ax.set_title("iteration {}".format(i + 1))
fig.legend(
lines,
[
"set of points",
"Given data points",
"Kriging prediction",
"Kriging 99% confidence interval",
"Next point to evaluate",
"Expected improvment function",
],
)
plt.show()
"""
Explanation: Definition of the plot function
End of explanation
"""
#definition of the 1D function
def f(X) :
x= X[:, 0]
if (np.abs(np.linalg.norm(np.floor(x))-np.linalg.norm(x))< 0.000001):
y = (x - 3.5) * np.sin((x - 3.5) / (np.pi))
else :
print("error")
return y
#to plot the function
bounds = np.array([[0, 25]])
npt=26
Xsol = np.linspace(bounds[0][0],bounds[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("Min of the DOE: ",np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
#to run the optimization process
n_iter = 8
xdoe = np.atleast_2d([0,10]).T
n_doe = xdoe.size
xtypes=[INT]
criterion = "EI" #'EI' or 'SBO' or 'UCB'
PlotEgo(criterion,xdoe,bounds,npt,n_iter,xtypes=xtypes)
"""
Explanation: Local minimum trap: 1D function
The 1D function to optimize is described by:
- 1 discrete variable $\in [0, 25]$
End of explanation
"""
def f(X) :
x= X[:, 0]
y = -np.square(x-25)/220+0.25*(np.sin((x - 3.5) * np.sin((x - 3.5) / (np.pi)))+np.cos(x**2))
return -y
#to plot the function
xlimits = np.array([[0, 60]])
npt=61
Xsol = np.linspace(xlimits[0][0],xlimits[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("min of the DOE: ", np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
#to run the optimization process
n_iter = 10
n_doe=2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(2)
xtypes=[INT]
criterion = "EI" #'EI' or 'SBO' or 'UCB'
PlotEgo(criterion,xdoe,xlimits,npt,n_iter,xtypes)
"""
Explanation: On this 1D test case, 4 iterations are required to find the global minimum, evaluated at iteration 5.
## 1D function with noisy values
The 1D function to optimize is described by:
- 1 discrete variable $\in [0, 60]$
End of explanation
"""
#definition of the 2D function
#the first variable is an integer one and the second one is a continuous one
import math
def f(X) :
x1 = X[:,0]
x2 = X[:,1]
PI = math.pi #3.14159265358979323846
a = 1
b = 5.1/(4*np.power(PI,2))
c = 5/PI
r = 6
s = 10
t = 1/(8*PI)
y= a*(x2 - b*x1**2 + c*x1 -r)**2 + s*(1-t)*np.cos(x1) + s
return y
#to define and compute the doe
xtypes = [INT, FLOAT]
xlimits = np.array([[-5.0, 10.0],[0.0,15.0]])
n_doe=20
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xt = sampling(n_doe)
yt = f(xt)
#to build the mixed surrogate model
sm = MixedIntegerSurrogateModel(xtypes=xtypes, xlimits=xlimits, surrogate=KRG())
sm.set_training_values(xt, yt)
sm.train()
num = 100
x = np.linspace(-5.0,10., 100)
y = np.linspace(0,15., 100)
xv, yv = np.meshgrid(x, y)
x_plot= np.array([np.ravel(xv), np.ravel(yv)]).T
y_plot = f(np.floor(x_plot))
fig = plt.figure(figsize=[14, 7])
y_gp_plot = sm.predict_values(x_plot)
y_gp_plot_sd = np.sqrt(sm.predict_variances(x_plot))
l=y_gp_plot-3*y_gp_plot_sd
h=y_gp_plot+3*y_gp_plot_sd
ax = fig.add_subplot(1, 3, 1, projection='3d')
ax1 = fig.add_subplot(1, 3, 2, projection='3d')
ax2 = fig.add_subplot(1, 3,3)
ii=-100
ax.view_init(elev=15., azim=ii)
ax1.view_init(elev=15., azim=ii)
true_fun = ax.plot_surface(xv, yv, y_plot.reshape((100, 100)), label ='true_function',color='g')
data3 = ax2.scatter(xt.T[0],xt.T[1],s=60,marker="o",color="orange")
gp1 = ax1.plot_surface(xv, yv, l.reshape((100, 100)), color="b")
gp2 = ax1.plot_surface(xv, yv, h.reshape((100, 100)), color="r")
gp3 = ax2.contour(xv, yv, y_gp_plot.reshape((100, 100)), color="k", levels=[0,1,2,5,10,20,30,40,50,60])
fig.suptitle("Mixed Branin function surrogate")
ax.set_title("True model")
ax1.set_title("surrogate model, DOE of size {}".format(n_doe))
ax2.set_title("surrogate mean response")
"""
Explanation: On this noisy case, it took 7 iterations to capture the shape of the curve, but it then took additional time to explore the "random" noise around the minimum.
2D mixed branin function
The 2D function to optimize is described by:
- 1 discrete variable $\in [-5, 10]$
- 1 continuous variable $\in [0., 15.]$
End of explanation
"""
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 0]
# enum 1
c1 = X[:, 1]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 2]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 3]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [FLOAT, (ENUM, 3), (ENUM, 2), INT]
xlimits = np.array([[-5, 5], ["blue", "red", "green"], ["large", "small"], [0, 2]])
criterion = "EI" #'EI' or 'SBO' or 'UCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
"""
Explanation: On the left, we have the real model in green.
In the middle we have the mean surrogate $+3\times \mbox{ standard deviation}$ (red) and the mean surrogate $-3\times \mbox{ standard deviation}$ (blue) in order to represent an approximation of the $99\%$ confidence interval.
On the right, the contour plot of the mean surrogate is given, where the orange points mark the evaluated points (DOE).
4D mixed test case
The 4D function to optimize is described by:
- 1 continuous variable $\in [-5, 5]$
- 1 categorical variable with 3 labels $["blue", "red", "green"]$
- 1 categorical variable with 2 labels $ ["large", "small"]$
- 1 discrete variable $\in [0, 2]$
End of explanation
"""
#to give the initial doe in the initial space
print('Initial DOE in the initial space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (cast_to_mixed_integer(xtypes, xlimits, xdoe[i]))),'\n')
#to give the initial doe in the relaxed space
print('Initial DOE in the unfold space (or relaxed space): ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (unfold_with_enum_mask(xtypes, xdoe[i]))),'\n')
#to print the used DOE
print('Initial DOE in the fold space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), xdoe[i]),'\n')
"""
Explanation: Manipulate the DOE
End of explanation
"""
|
laurentperrinet/Khoei_2017_PLoSCB
|
notebooks/figure_3_FLE.ipynb
|
mit
|
%%writefile experiment_fle.py
import MotionParticlesFLE as mp
gen_dot = mp.generate_dot
import numpy as np
import os
from default_param import *
image = {}
experiment = 'FLE'
do_sim = False
do_sim = True
for stimulus_tag, im_arg in zip(stim_labels, stim_args):
# generating the movie
image[stimulus_tag] = {}
image[stimulus_tag]['args'] = im_arg
image[stimulus_tag]['im'] = gen_dot(N_X=N_X, N_Y=N_Y, N_frame=N_frame, **image[stimulus_tag]['args'])
mp.anim_save(image[stimulus_tag]['im'], os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-anim'))
image[stimulus_tag]['result'] = {}
if do_sim:
# running PX and MBP with 2 different latencies
for D_x, D_V, v_prior, label in zip([mp.D_x, PBP_D_x], [mp.D_V, PBP_D_V], [mp.v_prior, PBP_prior], ['MBP', 'PBP']):
figname = os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label)
image[stimulus_tag]['result'][label] = {}
image[stimulus_tag]['args'].update(D_V=D_V, D_x=D_x, v_prior=v_prior)
_ = mp.figure_image_variable(
figname,
N_X, N_Y, N_frame, gen_dot, order=None,
do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=image[stimulus_tag]['args'], latency=latencies)
try:
for latency in latencies:
matname = mp.make_figname(figname, {'latency': latency}).replace(mp.figpath, mp.matpath) + '.npy'
image[stimulus_tag]['result'][label][latency] = np.load(matname)
except:
print('no result yet for ', matname)
%run experiment_fle.py
print (image[stimulus_tag]['result'].keys())
print (image[stimulus_tag]['result']['MBP'].keys())
"""
Explanation: FLE
In this script the CONDENSATION is done for rightward and leftward motion of a dot stimulus, at different levels of noise, and also for the flashing stimuli needed to simulate flash-initiated and flash-terminated FLEs.
The aim is to generate (Berry et al 99)'s figure 2: the shifting RF position in the direction of motion.
running the standard FLE
End of explanation
"""
fig, ax = plt.subplots(1, 1, figsize=(fig_width, fig_width/phi))
__temp = np.zeros((N_X, N_frame, 3))
for i, stimulus_tag in enumerate(['flash', 'dot']):
__temp[:, :, i] = image[stimulus_tag]['im'][:, N_Y//2, :]
ax.imshow(.5*__temp+.5, interpolation='nearest', aspect='auto', vmin=-1., vmax=1.)
fig = plt.figure(figsize=(fig_width, fig_width /phi))
a = fig.add_subplot(1, 1, 1)
a.plot(image['flash']['im'][N_X//2, N_Y//2, :], 'o')
fig, ax = plt.subplots(1, 1, figsize=(fig_width, fig_width/phi))
__temp = np.zeros((N_X, N_frame, 3))
for i, stimulus_tag in enumerate(['flash', 'dot']):
white = mp.whitening(image[stimulus_tag]['im'])
white /= np.abs(white).max()
__temp[:, :, i] = white[:, N_Y//2, :]
ax.imshow(.5*__temp+.5, interpolation='nearest', aspect='auto', vmin=-1., vmax=1.)
fig = plt.figure(figsize=(fig_width, fig_width /phi))
a = fig.add_subplot(1, 1, 1)
a.plot(__temp[N_X//2, :, 0], 'o')
"""
Explanation: x-t plot of the stimulation
End of explanation
"""
v_hist = {}
time = np.linspace(0, 1, N_frame)
for i, stimulus_tag in enumerate(stim_labels):
particles = image[stimulus_tag]['result']['MBP'][10].copy()
v_hist[stimulus_tag], x_edges, y_edges = mp.spatial_readout(particles, N_quant_X=N_quant_Y, N_quant_Y=1, display=False)
v_hist[stimulus_tag] /= np.sum(v_hist[stimulus_tag], axis=0)
_, x_edges, y_edges = mp.spatial_readout(particles, N_quant_X=N_quant_Y, N_quant_Y=N_quant_Y, display=False)
x_middle = .5*(x_edges[1:] + x_edges[:-1])
print('Position of the middle of the bins =', x_middle)
y_middle = .5*(y_edges[1:] + y_edges[:-1])
#fig, axs = plt.subplots(1, 2, figsize=(2*fig_width, fig_width /2))
for i, stimulus_tag in enumerate(stim_labels):
particles = image[stimulus_tag]['result']['MBP'][10].copy()
fig, axs = mp.spatial_readout(particles, N_quant_X=N_quant_Y, N_quant_Y=1)#, fig=fig, a=axs[i])
average_pos = np.sum(v_hist[stimulus_tag]*x_middle[:, np.newaxis], axis=0)
axs[0].plot(time, average_pos, lw=4, ls='--')
"""
Explanation: x-t histograms of the results
First showing the histogram of detected positions for both stimuli:
End of explanation
"""
plt.matshow(v_hist['flash']*x_middle[:, np.newaxis])
average_pos = np.sum(v_hist[stimulus_tag]*x_middle[:, np.newaxis], axis=0)
average_pos
plt.matshow(-v_hist[stimulus_tag]*np.log(v_hist[stimulus_tag]))
plt.colorbar();
stimulus_tag = 'flash'
entropy = np.sum(-v_hist[stimulus_tag]*np.log(v_hist[stimulus_tag]), axis=0)
#print(average_std)
ind_min = np.argmin(entropy)
print('Flash occurs at frame ', ind_min)
fig = plt.figure(figsize=(fig_width, fig_width /phi))
a = fig.add_subplot(1, 1, 1)
a.plot(entropy, 'o')
_ = a.plot([ind_min], [entropy[ind_min]], 'r*')
stimulus_tag = 'flash'
average_pos = np.sum(v_hist[stimulus_tag]*x_middle[:, np.newaxis], axis=0)
average_std = np.sqrt(np.sum(v_hist[stimulus_tag]*(x_middle[:, np.newaxis]-average_pos[np.newaxis, :])**2, axis=0))
time_ind_max_flash = np.argmin(average_std)
print('Flash occurs at frame ', ind_min)
fig = plt.figure(figsize=(fig_width, fig_width /phi))
a = fig.add_subplot(1, 1, 1)
a.plot(average_std, 'o')
_ = a.plot([time_ind_max_flash], [average_std[time_ind_max_flash]], 'r*')
"""
Explanation: Find the moment of maximum position precision for the flash:
End of explanation
"""
fig, axA = plt.subplots(1, 1, figsize=(fig_width/1.618, fig_width/1.618))
N_frame_av = 0
v_hist_max = np.zeros((N_quant_X, N_quant_Y, 3))
#time_ind_max_flash = 60
for i, (stimulus_tag, cmap, color) in enumerate(zip(stim_labels, [plt.cm.Greens, plt.cm.Reds], ['g', 'r'])):
#if stimulus_tag == 'flash':
# average_pos = np.sum(v_hist[stimulus_tag]*x_middle[:, np.newaxis], axis=0)
# average_std = np.sum(v_hist[stimulus_tag]*(x_middle[:, np.newaxis]-average_pos[np.newaxis, :])**2, axis=0)
# time_ind_max_flash = np.argmin(average_std)
particles = image[stimulus_tag]['result']['MBP'][10].copy()
for i_frame in range(time_ind_max_flash-N_frame_av, time_ind_max_flash+N_frame_av+1):
#print(i_frame)
v_hist_max_av, x_edges, y_edges = mp.spatial_readout(particles[:, :, i_frame],
N_quant_X=N_quant_Y, N_quant_Y=N_quant_Y, display=False)
#v_hist_max_av /= np.sum(v_hist_max_av, axis=0)
v_hist_max[:, :, i] += v_hist_max_av/(2*N_frame_av+1)
#v_hist_max[:, :, i] -= v_hist_max[:, :, i].min() # HACK
#a.pcolor(x_edges, y_edges, v_hist_max[:, :, 0], vmin=0., vmax=v_hist_max[:, :, 0].max(), cmap=plt.cm.winter, edgecolor='k', alpha=.3)
c= axA.pcolormesh(x_edges, y_edges, v_hist_max[:, :, i].T,
vmin=0., vmax=v_hist_max[:, :, i].max(), cmap=cmap)#, edgecolor=(1, 1, 1, 1.), alpha=.8)#*i)#(1-i))
c._is_stroked = False
average_xpos_max = np.sum(v_hist_max[:, :, i].sum(axis=1)*x_middle, axis=0)
average_xvar_max = np.sum(v_hist_max[:, :, i].sum(axis=1)*(x_middle[:]-average_xpos_max)**2, axis=0)
average_ypos_max = np.sum(v_hist_max[:, :, i].sum(axis=0)*y_middle, axis=0)
average_yvar_max = np.sum(v_hist_max[:, :, i].sum(axis=0)*(y_middle[:]-average_ypos_max)**2, axis=0)
print('average_xpos_max', average_xpos_max, 'average_ypos_max', average_ypos_max)
print('average_xvar_max', average_xvar_max, 'average_yvar_max', average_yvar_max)
circle = axA.add_artist(plt.Circle((average_xpos_max, average_ypos_max),
dot_size, #np.sqrt(average_xstd_max+average_ystd_max)/20,
alpha=.6, facecolor=color, linewidth=3))
if stimulus_tag == 'dot':
circle = axA.add_artist(plt.Circle((average_xpos_max, average_ypos_max),
np.sqrt(average_xvar_max+average_yvar_max),
alpha=.8, facecolor='none', edgecolor=color, linewidth=3, linestyle='--'))
axA.set_xlabel('Horizontal Space (a.u.)')
axA.set_ylabel('Vertical Space (a.u.)')
#axA.set_ylabel('Probability')
_ = axA.axis([-mp.width/2, mp.width/2, -mp.width/2, mp.width/2])
plt.tight_layout()
for ext in FORMATS: fig.savefig(os.path.join('../figures/', 'FLE-A' + ext))
"""
Explanation: Now showing the spatial histogram for both stimuli at this moment:
End of explanation
"""
N_frame_av = 2
fig, axs = plt.subplots(2*N_frame_av+1, 2, figsize=(fig_width/2.618, fig_width/2.618))
for ax in axs.ravel():
ax.axis(c='b', lw=2, axisbg='w')
ax.set_facecolor('w')
print (stim_labels)
for j, (label, dlabel) in enumerate(zip(['PBP', 'MBP'], ['PBP', 'dMBP'])):
for k, (stimulus_tag, cmap, color) in enumerate(zip(stim_labels, [plt.cm.Greens, plt.cm.Reds], ['g', 'r'])):
particles = image[stimulus_tag]['result'][label][10].copy()
v_hist[stimulus_tag], x_edges, y_edges = mp.spatial_readout(particles, N_quant_X=N_quant_Y, N_quant_Y=1, display=False)
v_hist[stimulus_tag] /= np.sum(v_hist[stimulus_tag], axis=0)
if stimulus_tag == 'flash':
average_pos = np.sum(v_hist[stimulus_tag]*x_middle[:, np.newaxis], axis=0)
average_std = np.sum(v_hist[stimulus_tag]*(x_middle[:, np.newaxis]-average_pos[np.newaxis, :])**2, axis=0)
time_ind_max_flash = np.argmin(average_std)
#print(ind_max_flash)
pos_mean_flash = average_pos[time_ind_max_flash]
for k, (stimulus_tag, cmap, color) in enumerate(zip(stim_labels, [plt.cm.Greens, plt.cm.Reds], ['g', 'r'])):
particles = image[stimulus_tag]['result'][label][10]
for i, i_frame in enumerate(range(time_ind_max_flash-N_frame_av, time_ind_max_flash+N_frame_av+1)):
v_hist_, x_edges, y_edges = mp.spatial_readout(particles[:, :, i_frame],
N_quant_X=N_quant_X, N_quant_Y=1, display=False)
axs[i][j].plot(x_middle, v_hist_, c=color)#, alpha=.2+.8*(i_frame-ind_max_flash)/(2*N_frame_av+1))
axs[i][j].plot([pos_mean_flash, pos_mean_flash], [0, v_hist_.max()], ls='--', lw=2, c='k')
average_pos_ = np.sum(v_hist[stimulus_tag]*x_middle[:, np.newaxis], axis=0)
axs[i][j].plot([average_pos_[i_frame], average_pos_[i_frame]], [0, v_hist_.max()], ls='--', lw=2, c=color)
if i < time_ind_max_flash+N_frame_av:
plt.setp(axs[i][j], xticks=[], yticks=[])
axs[i][0].set_ylabel('i{:+d}'.format(i_frame-time_ind_max_flash))
axs[2][0].plot([0], [0], lw=2, c=color, label=stimulus_tag)
#if j is 0: axs[2][0].legend()
axs[0][j].set_title(dlabel)
axs[i][j].set_xlabel('Space (a.u.)')
plt.tight_layout()
fig.subplots_adjust(hspace = .01)
for ext in FORMATS: fig.savefig(os.path.join('../figures/', 'FLE-B' + ext))
"""
Explanation: panel temporal evolution
End of explanation
"""
speeds = np.linspace(3.75, 1., 11, endpoint=True)
speeds
%%writefile experiment_speed.py
import MotionParticlesFLE as mp
gen_dot = mp.generate_dot
import numpy as np
import os
from default_param import *
image_speed = {}
experiment = 'speed'
speeds = np.linspace(3.75, 1., 11, endpoint=True)
# make such that the dot crosses the middle of the screen at t=.5 while making the same distance
X_0s = -speeds
dot_starts = .5 - .4 / speeds
flash_durations = .8 / speeds
stimulus_tag = stim_labels[0] # 'dot'
im_arg = stim_args[0]
# generating the movie
image_speed[stimulus_tag] = {}
image_speed[stimulus_tag]['args'] = {'Y_0':0, 'im_noise':mp.im_noise, 'dot_size':dot_size}
image_speed[stimulus_tag]['im'] = gen_dot(N_X=N_X, N_Y=N_Y, N_frame=N_frame, **image_speed[stimulus_tag]['args'])
image_speed[stimulus_tag]['result'] = {}
# running PX and MBP with 2 different latencies
for D_x, D_V, v_prior, label in zip([mp.D_x, PBP_D_x], [mp.D_V, PBP_D_V], [mp.v_prior, PBP_prior], ['MBP', 'PBP']):
figname = os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label)
image_speed[stimulus_tag]['result'][label] = {}
image_speed[stimulus_tag]['args'].update(D_V=D_V, D_x=D_x, v_prior=v_prior)
kwargs_variable = mp.figure_image_variable(
figname,
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=image_speed[stimulus_tag]['args'],
V_X=speeds, X_0=X_0s, flash_start=dot_starts, flash_duration=flash_durations)
for new_kwargs in kwargs_variable:
try:
matname = mp.make_figname(figname, new_kwargs).replace(mp.figpath, mp.matpath) + '.npy'
image_speed[stimulus_tag]['result'][label][new_kwargs['V_X']] = np.load(matname)
except:
print('no result yet for ', matname)
%run experiment_speed.py
#time_ind_max_flash = 60
fig, axC = plt.subplots(1, 1, figsize=(fig_width/2.618, fig_width/3.618))
axC.axis(c='b', lw=2, axisbg='w')
axC.set_facecolor('w')
#print(time_ind_max_flash)
v_hist_max = np.zeros((N_quant_X, N_quant_Y, 3))
for i, (label, color) in enumerate(zip(['PBP', 'MBP'], ['w', 'k'])):
#for i, (label, color) in enumerate(zip(['MBP'], [ 'g'])):
stimulus_tag = stim_labels[0] # 'dot'
#print(stimulus_tag, x_middle)
spatial_lead = np.zeros((len(speeds), mp.N_trials, 2*N_frame_av+1))
spatial_lead_std = np.zeros((len(speeds), mp.N_trials, 2*N_frame_av+1))
for i_V, V_X in enumerate(speeds):
#dot_pos = np.zeros(mp.N_trials)
for i_trial in range(mp.N_trials):
particles = image_speed[stimulus_tag]['result'][label][V_X][:, (i_trial*mp.N_particles):((i_trial+1)*mp.N_particles), :].copy()
for i, i_frame in enumerate(range(time_ind_max_flash-N_frame_av, time_ind_max_flash+N_frame_av+1)):
v_hist_, x_edges, y_edges = mp.spatial_readout(particles[:, :, i_frame], N_quant_X=N_quant_Y, N_quant_Y=1, display=False)
v_hist_ /= np.sum(v_hist_)
v_hist_ = v_hist_.ravel()
x_middle = .5*(x_edges[1:] + x_edges[:-1])
spatial_lead[i_V, i_trial, i] = np.sum(v_hist_*x_middle)
spatial_lead_std[i_V, i_trial, i] = np.sqrt(np.sum(v_hist_*(x_middle-spatial_lead[i_V, i_trial, i])**2))
#print (v_hist_.min(), v_hist_.max(), v_hist_.shape, V_X, average_pos)
#dot_pos += x_middle[np.argmax(v_hist_)] / (2*N_frame_av+1)
#spatial_lead[i_V, i_trial] = dot_pos
#axC.errorbar(speeds, spatial_lead.mean(axis=1), yerr=spatial_lead.std(axis=1), c=color, fmt='o', ecolor='k')
axC.errorbar(speeds, spatial_lead.mean(axis=(1, 2)), yerr=spatial_lead_std.mean(axis=(1, 2)),
marker='o', c='k', markerfacecolor=color, fmt='o', ecolor='k', label=label)
#axC.plot(speeds, spatial_lead.mean(axis=1), 'o', c=color)
slope = .25 # TODO: fix to rather 2 units * 100ms / 1s
axC.plot(speeds, speeds*slope, c='k', lw=2, ls='--')
axC.set_xlabel('Speed (a. u.)')
axC.set_ylabel('Spatial lead (a.u.)')
axC.set_xlim((0.7, speeds.max()*1.15))
axC.set_ylim((-0.15, 1.1))
axC.legend(loc='best', bbox_to_anchor=(0., .6, .6, .1),
ncol=1, frameon=False)
axC.spines['top'].set_color('none')
axC.spines['right'].set_color('none')
axC.spines['left'].set_smart_bounds(True)
axC.spines['bottom'].set_smart_bounds(True)
axC.xaxis.set_ticks_position('bottom')
axC.yaxis.set_ticks_position('left')
plt.locator_params(axis = 'x', nbins = 5)
plt.locator_params(axis = 'y', nbins = 5)
plt.tight_layout()
for ext in FORMATS: fig.savefig(os.path.join('../figures/', 'FLE-C' + ext))
"""
Explanation: panel for speed
End of explanation
"""
import tikzmagic
!ls -l ../figures/FLE.* ../figures/FLE-*
%%tikz -e ../figures/FLE.pdf
\draw[white, fill=white] (0.\textwidth,0) rectangle (1.03\textwidth, .65\textwidth) ;
\draw [anchor=south west] (0, 0) node {\includegraphics[width=.618\textwidth]{../figures/FLE-A.pdf}};
\draw [anchor=south west] (.618\textwidth, .25\textwidth) node {\includegraphics[width=.382\textwidth]{../figures/FLE-B.pdf}};
\draw [anchor=south west] (.618\textwidth, 0) node {\includegraphics[width=.375\textwidth]{../figures/FLE-C.pdf}};
\begin{scope}[font=\bf\sffamily\large]
\draw [anchor=west,fill=white] (0, .618\textwidth) node {A};
\draw [anchor=west,fill=white] (.618\textwidth, .618\textwidth) node {B};
\draw [anchor=west,fill=white] (.618\textwidth, .275\textwidth) node {C};
\end{scope}
!convert -density 300 ../figures/FLE.pdf ../figures/FLE.jpg
!convert -density 600 ../figures/FLE.pdf ../figures/FLE.png
!convert -density 600 -resize 5400 -units pixelsperinch -flatten -compress lzw -depth 8 ../figures/FLE.pdf ../figures/FLE.tiff
from IPython.display import SVG, Image
Image('../figures/FLE.jpg')
!echo "width=" ; convert ../figures/FLE.tiff -format "%[fx:w]" info:
!echo ", \nheight=" ; convert ../figures/FLE.tiff -format "%[fx:h]" info:
!echo ", \nunit=" ; convert ../figures/FLE.tiff -format "%U" info:
!identify ../figures/FLE.tiff
"""
Explanation: Montage of the subplots
End of explanation
"""
!git pull
#!git add ../figures/*mp4
#!git add ../figures/FLE-A* ../figures/FLE-B* ../figures/FLE-C*
!git status
!git commit -m' figure FLE : relaunching nb ' ../khoei17fle.tex ../notebooks/figure_3_FLE.ipynb experiment_speed.py experiment_fle.py ../figures/FLE.* ../figures/FLE-A* ../figures/FLE-B* ../figures/FLE-C*
!git push
"""
Explanation: version control
End of explanation
"""
|
Sebbenbear/notebooks
|
Natural Language Processing.ipynb
|
apache-2.0
|
text6.concordance("swallow")
text6.similar("Soldier")
text6.common_contexts(["oh", "very"])
text6.dispersion_plot(["swallow", "European", "it", "oh", "very"])
len(text6)
sorted(set(text6))
"""
Explanation: Search text with context
End of explanation
"""
len(set(text6)) / len(text6)
text6.count("Allo")
sentence = ['Melby', 'is', 'the', 'best']
text5[16715:16735]
FreqDist(text6)
"""
Explanation: Measure the lexical richness of the text ~ 12.8% of the tokens are unique in this text
End of explanation
"""
V = set(text1)
long_words = [w for w in V if len(w) > 16]
sorted(long_words)
"""
Explanation: "the set of all w such that w is an element of V (the vocabulary) and w has property P".
{w | w ∈ V & P(w)}
[w for w in V if p(w)]
End of explanation
"""
#list(bigrams(['more', 'is', 'said', 'than', 'done']))
text6.collocations()
fdist = FreqDist(len(w) for w in text6)
fdist.most_common()
fdist.plot(cumulative=True)
[len(w) for w in text6]
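# A minimal sketch of what bigrams() actually returns, assuming nltk is installed and that
# text6 from nltk.book is already loaded as in the cells above; the FreqDist call simply
# counts the most common word pairs in the Monty Python text.
from nltk import bigrams, FreqDist
list(bigrams(['more', 'is', 'said', 'than', 'done']))
# -> [('more', 'is'), ('is', 'said'), ('said', 'than'), ('than', 'done')]
FreqDist(bigrams(text6)).most_common(5)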
"""
Explanation: A collocation is a sequence of words that occur together unusually often. Thus red wine is a collocation, whereas the wine is not. A characteristic of collocations is that they are resistant to substitution with words that have similar senses; for example, maroon wine sounds definitely odd.
To get a handle on collocations, we start off by extracting from a text a list of word pairs, also known as bigrams. This is easily accomplished with the function bigrams():
End of explanation
"""
|
seg/2016-ml-contest
|
JLOWE/JLowe_NN.ipynb
|
apache-2.0
|
import numpy as np
np.random.seed(1000)
import warnings
warnings.filterwarnings("ignore")
import time as tm
import pandas as pd
from scipy.signal import medfilt
from keras.models import Sequential
from keras.constraints import maxnorm
from keras.layers import Dense, Dropout
from keras.utils import np_utils
from sklearn.metrics import f1_score, confusion_matrix
from sklearn.model_selection import LeaveOneGroupOut
from sklearn import preprocessing
#Cross Val of final model
from sklearn.model_selection import cross_val_score, StratifiedKFold
from keras.wrappers.scikit_learn import KerasClassifier
training_data = pd.read_csv('../training_data.csv')
blind_data = pd.read_csv('../nofacies_data.csv')
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5], [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
"""
Explanation: Facies classification using Machine Learning
Joshua Lowe
https://uk.linkedin.com/in/jlowegeo
This notebook contains my submission to the SEG Machine Learning contest 2016/17.
I have implemented code to train a Neural Network and predict facies in a well from a variety of wireline logs.
I have used bits of code from the original tutorial by Brendon Hall and from PA_Team, where I have used the 'blind well test' implemented by using leaveonegroupout.
Thanks to all the different teams for their submissions, as I have been able to learn a lot of skills around implementing machine learning algorithms in Python.
End of explanation
"""
X = training_data.drop(['Formation', 'Well Name', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
X_blind = blind_data.drop(['Formation', 'Well Name'], axis=1).values
wells = training_data["Well Name"].values
"""
Explanation: Sorting the data and dropping unwanted columns from the training and test data
Leave the depth in as a predictor - can the NN recognise depth trends? - Other teams have gone much further and have taken into account a predictor's relationship/change with depth.
End of explanation
"""
scaler = preprocessing.RobustScaler().fit(X)
X_scaled = scaler.transform(X)
"""
Explanation: Scaling predictors in the data.
End of explanation
"""
def DNN():
# Model
model = Sequential()
model.add(Dense(205, input_dim=8, activation='relu',W_constraint=maxnorm(5)))
model.add(Dropout(0.1))
model.add(Dense(69, activation='relu',W_constraint=maxnorm(5)))
model.add(Dropout(0.1))
model.add(Dense(69, activation='relu'))
model.add(Dense(9, activation='softmax'))
# Compilation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
"""
Explanation: Defining the neural network model
End of explanation
"""
logo = LeaveOneGroupOut()
t0 = tm.time()
f1s_ls = []
acc_ls = []
adj_ls = []
for train, test in logo.split(X_scaled, y, groups=wells):
well_name = wells[test[0]]
X_tr = X_scaled[train]
X_te = X_scaled[test]
#convert y array into categories matrix
classes = 9
y_tr = np_utils.to_categorical(y[train], classes)
# Method initialization
NN = DNN()
# Training
NN.fit(X_tr, y_tr, nb_epoch=15, batch_size=5, verbose=0)
# Predict
y_hat = NN.predict_classes(X_te, verbose=0)
y_hat = medfilt(y_hat, kernel_size=7)
try:
f1s = f1_score(y[test], y_hat, average="weighted", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
f1s = 0
try:
conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
acc = accuracy(conf) # similar to f1 micro
except:
acc = 0
try:
acc_adj = accuracy_adjacent(conf, adjacent_facies)
except:
acc_adj = 0
f1s_ls += [f1s]
acc_ls += [acc]
adj_ls += [acc_adj]
    print("{:>20s} f1_weighted:{:.3f} | acc:{:.3f} | acc_adj:{:.3f}".format(well_name, f1s, acc, acc_adj))
t1 = tm.time()
print("Avg F1", np.average(f1s_ls)*100, "Avg Acc", np.average(acc_ls)*100, "Avg Adj", np.average(adj_ls)*100)
print("Blind Well Test Run Time:",'{:f}'.format((t1-t0)), "seconds")
"""
Explanation: Cross Validation using a 'Blind Well Test'. Code adapted from PA_Team submission
End of explanation
"""
#Another robustness test of the model using statified K fold
X_train = X_scaled
Y_train = np_utils.to_categorical(y, classes)
t2 = tm.time()
estimator = KerasClassifier(build_fn=DNN, nb_epoch=15, batch_size=5, verbose=0)
skf = StratifiedKFold(n_splits=5, shuffle=True)
results_dnn = cross_val_score(estimator, X_train, Y_train, cv= skf.get_n_splits(X_train, Y_train))
print (results_dnn)
t3 = tm.time()
print("Cross Validation Run Time:",'{:f}'.format((t3-t2)), "seconds")
"""
Explanation: Cross Validation using stratified K-fold
End of explanation
"""
NN = DNN()
NN.fit(X_train, Y_train, nb_epoch=15, batch_size=5, verbose=0)
y_predicted = NN.predict_classes(X_train, verbose=0)
y_predicted = medfilt(y_predicted, kernel_size=7)
f1s = f1_score(y, y_predicted, average="weighted")
Avgf1s = np.average(f1s_ls)*100
print ("f1 training error: ", '{:f}'.format(f1s))
print ("f1 test error: ", '{:f}'.format(Avgf1s))
"""
Explanation: Final Model which uses all the training data
By using all the training data I may be potentially increasing the variance of the model but I believe it’s best to use all the data in the model as the data available is limited.
End of explanation
"""
x_blind = scaler.transform(X_blind)
y_blind = NN.predict_classes(x_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=7)
blind_data["Facies"] = y_blind + 1 # return the original value (1-9)
blind_data.to_csv("J_Lowe_Submission.csv")
"""
Explanation: My variance is high and my bias is too low.
I haven’t found the optimum bias-variance trade off. --> Back to the drawing board.
Predicting the lithologies in the unknown test wells
End of explanation
"""
|
surprisoh/crowdfunding_prediction
|
4. Before Funding.ipynb
|
mit
|
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score
from sklearn.cross_validation import KFold
from sklearn.cross_validation import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
import statsmodels.api as sm
import patsy
# Data Load
wadiz_df_original = pd.read_csv('wadiz_df_0329_1.csv', index_col=0)
user_comment = pd.read_csv('user_data_all_0329.csv', index_col=0)
provider_comment = pd.read_csv('provider_data_all_0329.csv', index_col=0)
wadiz_df = pd.read_csv('wadiz_provider_analysis_0329.csv', index_col=0)
provider_comment_grammar = pd.read_csv('comment_analysis.csv', index_col=0)
# drop rows where the grammar level is null
wadiz_df = wadiz_df[wadiz_df['provider_grammar_level'].notnull()]
# parse date_duration into an integer
wadiz_df['date_duration'] = wadiz_df['date_duration'].apply(lambda x: int(x[:-24]))
"""
Explanation: Before Funding
Predict success/failure using the information that is available before funding starts
Can the final funding amount (result) be predicted? (OLS)
Predict success/failure (RandomForest, KNN, GaussianNB)
End of explanation
"""
x_static = pd.DataFrame([wadiz_df[u'target'],
wadiz_df['date_duration'],wadiz_df [u'category_label_art/photo/exhibit'],
wadiz_df['category_label_book/movie'], wadiz_df['category_label_education'],
wadiz_df['category_label_environment'], wadiz_df['category_label_figure/webtoon'],
wadiz_df['category_label_game/comics'], wadiz_df['category_label_life/fashion'],
wadiz_df['category_label_music/concert'], wadiz_df['category_label_share/public'],
wadiz_df['category_label_sports'], wadiz_df['category_label_tech/design'],
wadiz_df['category_label_travel'], wadiz_df['category_label_busan'],
wadiz_df['category_label_chungbuk'], wadiz_df[u'category_label_chungnam'],
wadiz_df['category_label_deagu'], wadiz_df['category_label_deajeon'],
wadiz_df['category_label_gwangju'], wadiz_df['category_label_incheon'],
wadiz_df['category_label_jeju'], wadiz_df['category_label_jeonbuk'],
wadiz_df['category_label_jeonnam'], wadiz_df['category_label_kangwon'],
wadiz_df['category_label_kyungbuk'], wadiz_df['category_label_kyungki'],
wadiz_df['category_label_kyungnam'], wadiz_df['category_label_sejong'],
wadiz_df['category_label_seoul'], wadiz_df['category_label_ulsan'], wadiz_df['provider_grammar_level']]).T
y = wadiz_df['result']
static_reg_model = sm.OLS(y, x_static).fit()
static_reg_model.summary()
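# A minimal sketch, using a hypothetical toy frame, of how categorical columns such as the
# project category and area can be one-hot encoded with pandas; the 'category_label_*' dummy
# columns used in the regression above are assumed to have been built in this way earlier in
# the notebook.
import pandas as pd
toy = pd.DataFrame({'category': ['music/concert', 'tech/design', 'music/concert'],
                    'area': ['seoul', 'busan', 'jeju']})
pd.get_dummies(toy, prefix={'category': 'category_label', 'area': 'area_label'})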
"""
Explanation: 1. Regression
A. OLS
Attributes : Duration, Category, Area, Grammar_level, Target (the funding goal amount)
Predict the final funding amount using these five variables
Model : OLS
Category, Area -> OneHotEncoding
End of explanation
"""
# drop the category and area dummies
x_static_2 = pd.DataFrame([wadiz_df[u'target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level']]).T
static_reg_model_2 = sm.OLS(y, x_static_2).fit()
static_reg_model_2.summary()
"""
Explanation: R^2 is very low because there are too many categories and many features that have no effect on the OLS model
Re-run the analysis after removing the category variables
End of explanation
"""
# drop the grammar_level variable
x_static_3 = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration']]).T
static_reg_model_3 = sm.OLS(y, x_static_3).fit()
static_reg_model_3.summary()
"""
Explanation: There are too many categories relative to the number of samples, so removing the category variables raises the R^2 value.
However, the p-value of grammar_level is > 0.05, so it is rejected at the 95% confidence level
End of explanation
"""
# using only date_duration
x_static_4 = pd.DataFrame([wadiz_df['date_duration']]).T
static_reg_model_4 = sm.OLS(y, x_static_4).fit()
static_reg_model_4.summary()
# using only target
x_static_5 = pd.DataFrame([wadiz_df['target']]).T
static_reg_model_5 = sm.OLS(y, x_static_5).fit()
static_reg_model_5.summary()
"""
Explanation: R^2 drops slightly, but target and date_duration both contribute to the model for predicting result
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 8))
sm.graphics.influence_plot(static_reg_model_3, plot_alpha=0.3, ax=ax)
plt.show()
outlier_index = static_reg_model_3.outlier_test()[static_reg_model_3.outlier_test().ix[:, -1].abs() < 0.01].index
# outlier test results
static_reg_model_3.outlier_test()[static_reg_model_3.outlier_test().ix[:, -1] < 0.01]
wadiz_outlier = wadiz_df.loc[outlier_index]
wadiz_no_outlier = wadiz_df.drop(outlier_index)
# outlier projects
wadiz_outlier.iloc[:, :6]
wadiz_outlier['funding_rate']
"""
Explanation: In the end, the OLS model using only the two variables Target and Date_duration is the best model.
B. Outlier Test
Outliers that have too strong an influence on the results distort the model, so they should be removed.
Run the outlier test using model_3 (target, date_duration)
End of explanation
"""
# OLS
x_static_3_no_outlier = pd.DataFrame([wadiz_no_outlier['target'], wadiz_no_outlier['date_duration']]).T
y_no_outlier = wadiz_no_outlier['result']
no_outlier_model = sm.OLS(y_no_outlier, x_static_3_no_outlier).fit()
no_outlier_model.summary()
"""
Explanation: The outliers are projects whose funding_rate is either very high or very low
End of explanation
"""
reg_predict = no_outlier_model.predict(x_static_3_no_outlier)
plt.figure(figsize = (15,10));
plt.plot(x_static_3_no_outlier, reg_predict, 'v', markersize=10, markeredgewidth=1
, markeredgecolor='r', markerfacecolor='None', label = 'prediction');
plt.plot(x_static_3_no_outlier, y_no_outlier, 'o', markersize=10, markeredgewidth=1
, markeredgecolor='g', markerfacecolor='None', label = 'real', );
plt.legend(fontsize=20);
plt.xlabel('Target', fontsize=20);
plt.ylabel('Result', fontsize=20);
plt.xlim(-5000000, 42000000);
plt.ylim(-5000000, 40000000);
"""
Explanation: After removing the outliers, R^2 jumps up to 0.373
B. OLS Model prediction
Attributes : Target, Duration
End of explanation
"""
x_classification = pd.DataFrame([wadiz_no_outlier['target'], wadiz_no_outlier['date_duration'], wadiz_no_outlier['provider_grammar_level']]).T
y = wadiz_no_outlier['success']
"""
Explanation: #Result
- The most important features in the OLS model are Target and Date_duration.
- The coefficient of Date_duration is positive (+), confirming the assumption that a longer funding period leads to a larger funding amount.
- grammar_level has no effect in the model that predicts the total funding amount.
2. Classification
The regression results are quite poor -> then how would a model that predicts success/failure perform?
The analysis is run with the outliers identified in the regression model removed
Attributes = Target, Duration, Grammar_level
Class = Success(1), Fail(0)
Models = RandomForest, KNN, NaiveBayes
End of explanation
"""
re = RandomForestClassifier()
x_re_list_1 = []
y_re_list_1 = []
for i in range(1, 20):
re_1 = RandomForestClassifier(n_estimators=i)
score = cross_val_score(re_1, x_classification, y, cv=10).mean()
x_re_list_1.append(i)
y_re_list_1.append(score)
base_success_rate = round((y.value_counts()[1] / len(y)), 2)
figure = plt.figure(figsize=(10,8))
plt.plot(x_re_list_1, y_re_list_1, 'o--', c = 'r', label = 'Accuracy')
plt.axhline(base_success_rate, ls = '--', label = 'base_success_rate')
plt.legend(fontsize=15)
plt.xlabel('n_estimator', fontsize=15)
plt.ylabel('accuracy', fontsize=15)
print('base_success_rate :', round((y.value_counts()[1] / len(y))*100, 2), '%')
print('max_accuracy :', round(max(y_re_list_1)*100, 2), '%')
"""
Explanation: A. RandomForest
End of explanation
"""
from sklearn.grid_search import GridSearchCV
# helper function to report grid search results
from operator import itemgetter
def report(grid_scores, n_top=3):
top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]
for i, score in enumerate(top_scores):
print("Model with rank: {0}".format(i + 1))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
score.mean_validation_score,
np.std(score.cv_validation_scores)))
print("Parameters: {0}".format(score.parameters))
print("")
# set up the parameter grid
param_grid = {"max_depth": [5, 10, None],
"max_features": [1, 2, None],
"min_samples_split": [1, 3, 5],
"min_samples_leaf": [1, 3, 5, 10],
"n_estimators" : np.arange(3, 20)}
# run grid search
grid_search = GridSearchCV(re, param_grid=param_grid)
grid_search.fit(x_classification, y)
report(grid_search.grid_scores_)
"""
Explanation: The best accuracy obtained with RandomForest is 55.97%
B. Grid Search
End of explanation
"""
best_re = RandomForestClassifier(max_features=2, min_samples_split=1, n_estimators=15, max_depth=5, min_samples_leaf=10)
Stkfold = StratifiedKFold(y, n_folds=10)
print('Mean of Score :', np.mean(cross_val_score(best_re, x_classification, y, cv = Stkfold))),
print('Std of Score :', np.std(cross_val_score(best_re, x_classification, y, cv = Stkfold)))
best_re.fit(x_classification, y)
# feature importance
feature_importance = pd.DataFrame([best_re.feature_importances_], columns=x_classification.columns, index=['Importance']).T
feature_importance
"""
Explanation: Reset the parameters to the combination that scored highest in the grid search
C. Cross Validation
End of explanation
"""
knn = KNeighborsClassifier()
knn_model = knn.fit(x_classification, y)
Stkfold = StratifiedKFold(y, n_folds=10)
knn_model = knn.fit(x_classification, y)
knn_best_score_mean = cross_val_score(knn_model, x_classification, y, cv = 10).mean()
knn_best_score_std = cross_val_score(knn_model, x_classification, y, cv = 10).std()
print('KNN Best Score :', knn_best_score_mean),
print('KNN Best Score(Standard Error) :', knn_best_score_std)
"""
Explanation: #RandomForest results
The RandomForest score of about 57% looks like an improvement, but with a standard deviation of 6% the overall distribution does not move far from the 52% base line.
Looking at the feature importances, grammar_level turns out to be more important than target and date_duration, which were the most effective features in the OLS model
D. KNN
End of explanation
"""
# drop target
x_knn_1 = pd.DataFrame([wadiz_no_outlier['date_duration'], wadiz_no_outlier['provider_grammar_level']]).T
y_knn_1 = wadiz_no_outlier['success']
knn_model_1 = knn.fit(x_knn_1, y_knn_1)
knn_best_score_mean_1 = cross_val_score(knn_model_1, x_knn_1, y_knn_1, cv = 10).mean()
knn_best_score_std_1 = cross_val_score(knn_model_1, x_knn_1, y_knn_1, cv = 10).std()
print('KNN Best Score :', knn_best_score_mean_1),
print('KNN Best Score(Standard Error) :', knn_best_score_std_1)
# drop grammar_level
x_knn_2 = pd.DataFrame([wadiz_no_outlier['target'], wadiz_no_outlier['date_duration']]).T
y_knn_2 = wadiz_no_outlier['success']
knn_model_2 = knn.fit(x_knn_2, y_knn_2)
knn_best_score_mean_2 = cross_val_score(knn_model_2, x_knn_2, y_knn_2, cv = 10).mean()
knn_best_score_std_2 = cross_val_score(knn_model_2, x_knn_2, y_knn_2, cv = 10).std()
print('KNN Best Score :', knn_best_score_mean_2),
print('KNN Best Score(Standard Error) :', knn_best_score_std_2)
# drop date_duration
x_knn_3 = pd.DataFrame([wadiz_no_outlier['target'], wadiz_no_outlier['provider_grammar_level']]).T
y_knn_3 = wadiz_no_outlier['success']
knn_model_3 = knn.fit(x_knn_3, y_knn_3)
knn_best_score_mean_3 = cross_val_score(knn_model_3, x_knn_3, y_knn_3, cv = 10).mean()
knn_best_score_std_3 = cross_val_score(knn_model_3, x_knn_3, y_knn_3, cv = 10).std()
print('KNN Best Score :', knn_best_score_mean_3),
print('KNN Best Score(Standard Error) :', knn_best_score_std_3)
"""
Explanation: ??????? The score is far too low.
This means something has gone wrong.
Re-select the features and analyze again
End of explanation
"""
gnb = GaussianNB()
gnb_best_score_mean = cross_val_score(gnb, x_classification, y, cv=Stkfold).mean()
gnb_best_score_std = cross_val_score(gnb, x_classification, y, cv=Stkfold).std()
print('GNB Best Score :', gnb_best_score_mean),
print('GNB Best Score(Standard Error) :', gnb_best_score_std)
"""
Explanation: #KNN results
The highest score comes from using only grammar_level and target, but the accuracy is so low that the model is meaningless.
C. NaiveBayes
End of explanation
"""
|
hungiyang/StatisticalMethods
|
examples/XrayImage/Modeling.ipynb
|
gpl-2.0
|
from __future__ import print_function
import astropy.io.fits as pyfits
import astropy.visualization as viz
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
"""
Explanation: Forward Modeling the X-ray Image data
In this notebook, we'll take a closer look at the X-ray image data products, and build a simple, generative, forward model for the observed data.
End of explanation
"""
imfits = pyfits.open('a1835_xmm/P0098010101M2U009IMAGE_3000.FTZ')
im = imfits[0].data
"""
Explanation: The XMM Image Data
Recall that we downloaded some XMM data in the "First Look" notebook.
We downloaded three files, and just looked at one - the "science" image.
End of explanation
"""
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
"""
Explanation: im is the image, our observed data, presented after some "standard processing." The numbers in the pixels are counts (i.e. numbers of photoelectrons recorded by the CCD during the exposure).
We display the image on a log scale, which allows us to simultaneously see both the cluster of galaxies in the center, and the much fainter background and other sources in the field.
End of explanation
"""
pbfits = pyfits.open('a1835_xmm/P0098010101M2X000BKGMAP3000.FTZ')
pb = pbfits[0].data
exfits = pyfits.open('a1835_xmm/P0098010101M2U009EXPMAP3000.FTZ')
ex = exfits[0].data
"""
Explanation: A Model for the Cluster of Galaxies
We will use a common parametric model for the surface brightness of galaxy clusters: the azimuthally symmetric beta model:
$S(r) = S_0 \left[1.0 + \left(\frac{r}{r_c}\right)^2\right]^{-3\beta + 1/2}$,
where $r$ is projected distance from the cluster center.
The parameters of this model are:
$x_0$, the $x$ coordinate of the cluster center
$y_0$, the $y$ coordinate of the cluster center
$S_0$, the normalization, in surface brightness units
$r_c$, a radial scale (called the "core radius")
$\beta$, which determines the slope of the profile
Note that this model describes a 2D surface brightness distribution, since $r^2 = x^2 + y^2$
Let's draw a cartoon of this model on the whiteboard
Planning an Expected Counts Map
Our data are counts, i.e. the number of times a physical pixel in the camera was activated while pointing at the area of sky corresponding to a pixel in our image. We can think of different sky pixels as having different effective exposure times, as encoded by an exposure map, ex.
We expect to see counts due to a number of sources:
X-rays from the galaxy cluster
X-rays from other detected sources in the field
X-rays from unresolved sources (the Cosmic X-ray Background)
Diffuse X-rays from the Galactic halo and the local bubble (the local X-ray foreground)
Soft protons from the solar wind, cosmic rays, and other undesirables (the particle background)
Let's go through these in turn.
1. Counts from the Cluster
Since our data are counts in each pixel, our model needs to first predict the expected counts in each pixel. Physical models predict intensity (counts per second per pixel per unit effective area of the telescope). The spatial variation of the effective area relative to the aimpoint is one of the things accounted for in the exposure map, and we can leave the overall area to one side when fitting (although we would need it to turn our results into physically interesting conclusions about, e.g. the luminosity of the cluster).
Since the X-rays from the cluster are transformed according to the exposure map, the units of $S_0$ are counts/s/pixel, and the model prediction for the expected number of counts from the cluster is CL*ex, where CL is an image with pixel values computed from $S(r)$.
2-4. X-ray background model
The X-ray background will be "vignetted" in the same way as X-rays from the cluster. We can lump sources 2-4 together, to extend our model so that it is composed of a galaxy cluster, plus an X-ray background.
The simplest assumption we can make about the X-ray background is that it is spatially uniform, on average. The model must account for the varying effective exposure as a function of position, however. So the model prediction associated with this component is b*ex, where b is a single number with units of counts/s/pixel.
We can circumvent the problem of the other detected sources in the field by masking them out, leaving us with the assumption that any remaining counts are not due to the masked sources. This could be a source of systematic error, so we'll note it down for later.
5. Particle background model
The particle background represents a flux of particles that either do not traverse the telescope optics at all, or follow a different optical path than X-rays - so the exposure map (and its vignetting correction) does not apply.
Instead, we're given, from a black box, a prediction for the expected counts/pixel due to particles, so the extension to our model is simply to add this image, pb.
Full model
Combining these three components, the model (CL+b)*ex + pb gives us an expected number of counts/pixel across the field.
A Look at the Other XMM Products
The "exposure map" and the "particle background map" were supplied to us by the XMM reduction pipeline, along with the science image. Let's take a look at them now.
End of explanation
"""
plt.imshow(ex, cmap='gray', origin='lower');
"""
Explanation: The "Exposure Map"
The ex image is in units of seconds, and represents the effective exposure time at each pixel position.
This is actually the product of the exposure time that the detector was exposed for, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised.
Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
End of explanation
"""
plt.imshow(pb, cmap='gray', origin='lower');
"""
Explanation: The "Particle Background Map"
pb is not data at all, but rather a model for the expected counts/pixel in this specific observation due to the "quiescent particle background."
This map comes out of a blackbox in the processing pipeline. Even though there are surely uncertainties in it, we have no quantitative description of them to work with.
Note that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in pb.
End of explanation
"""
mask = np.loadtxt('a1835_xmm/M2ptsrc.txt')
for reg in mask:
# this is inefficient but effective
for i in np.round(reg[1]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
for j in np.round(reg[0]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
if (i-reg[1])**2 + (j-reg[0])**2 <= reg[2]**2:
ex[np.int(i-1), np.int(j-1)] = 0.0
"""
Explanation: Masking out the other sources
There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment.
A convenient way to do this is by setting the exposure map to zero in these locations - as if a set of tiny little shutters in front of each of those pixels had not been opened. "Not observed" is different from "observed zero counts."
Let's read in a text file encoding a list of circular regions in the image, and set the exposure map pixels within each of those regions in to zero.
End of explanation
"""
plt.imshow(ex, cmap='gray', origin='lower');
"""
Explanation: As a sanity check, let's have a look at the modified exposure map.
Compare the location of the "holes" to the science image above.
End of explanation
"""
# import cluster_pgm
# cluster_pgm.forward()
from IPython.display import Image
Image(filename="cluster_pgm_forward.png")
def beta_model_profile(r, S0, rc, beta):
'''
The fabled beta model, radial profile S(r)
'''
return S0 * (1.0 + (r/rc)**2)**(-3.0*beta + 0.5)
def beta_model_image(x, y, x0, y0, S0, rc, beta):
'''
Here, x and y are arrays ("meshgrids" or "ramps") containing x and y pixel numbers,
and the other arguments are galaxy cluster beta model parameters.
Returns a surface brightness image of the same shape as x and y.
'''
r = np.sqrt((x-x0)**2 + (y-y0)**2)
return beta_model_profile(r, S0, rc, beta)
def model_image(x, y, ex, pb, x0, y0, S0, rc, beta, b):
'''
Here, x, y, ex and pb are images, all of the same shape, and the other args are
cluster model and X-ray background parameters. ex is the (constant) exposure map
and pb is the (constant) particle background map.
'''
return (beta_model_image(x, y, x0, y0, S0, rc, beta) + b) * ex + pb
# Set up the ramp images, to enable fast array calculations:
nx,ny = ex.shape
x = np.outer(np.ones(ny),np.arange(nx))
y = np.outer(np.arange(ny),np.ones(nx))
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(x, cmap='gray', origin='lower')
ax[0].set_title('x')
fig.colorbar(left,ax=ax[0],shrink=0.9)
right = ax[1].imshow(y, cmap='gray', origin='lower')
ax[1].set_title('y')
fig.colorbar(right,ax=ax[1],shrink=0.9)
# Now choose parameters, compute model and plot, compared to data!
x0,y0 = 328,348 # The center of the image is 328,328
S0,b = 0.01,5e-7 # Cluster and background surface brightness, arbitrary units
beta = 2.0/3.0 # Canonical value is beta = 2/3
rc = 4 # Core radius, in pixels
# Realize the expected counts map for the model:
mu = model_image(x,y,ex,pb,x0,y0,S0,rc,beta,b)
# Draw a *sample image* from the Poisson sampling distribution:
mock = np.random.poisson(mu,mu.shape)
# The difference between the mock and the real data should be symmetrical noise if the model
# is a good match...
diff = im - mock
# Plot three panels:
fig,ax = plt.subplots(nrows=1, ncols=3)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(viz.scale_image(mock, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[0].set_title('Mock (log, rescaled)')
fig.colorbar(left,ax=ax[0],shrink=0.6)
center = ax[1].imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[1].set_title('Data (log, rescaled)')
fig.colorbar(center,ax=ax[1],shrink=0.6)
right = ax[2].imshow(diff, vmin=-40, vmax=40, cmap='gray', origin='lower')
ax[2].set_title('Difference (linear)')
fig.colorbar(right,ax=ax[2],shrink=0.6)
"""
Explanation: A Generative Model for the X-ray Image
All of the discussion above was in terms of predicting the expected number of counts in each pixel, $\mu_k$. This is not what we observe: we observe counts.
To be able to generate a mock dataset, we need to make an assumption about the form of the sampling distribution for the counts $N$ in each pixel, ${\rm Pr}(N_k|\mu_k)$.
Let's assume that this distribution is Poisson, since we expect X-ray photon arrivals to be "rare events."
${\rm Pr}(N_k|\mu_k) = \frac{{\rm e}^{-\mu_k} \mu_k^{N_k}}{N_k !}$
Here, $\mu_k(\theta)$ is the expected number of counts in the $k$th pixel:
$\mu_k(\theta) = \left( S(r_k;\theta) + b \right) \cdot$ ex + pb
Note that writing the sampling distribution like this contains the assumption that the pixels are independent (i.e., there is no cross-talk between the cuboids of silicon that make up the pixels in the CCD chip). (Also note that this assumption is different from the assumption that the expected numbers of counts are independent! They are explicitly not independent: we wrote down a model for a cluster surface brightness distribution that is potentially many pixels in diameter.)
At this point we can draw the PGM for a forward model of this dataset, using the exposure and particle background maps supplied, and some choices for the model parameters.
Then, we can go ahead and simulate some mock data and compare with the image we have.
End of explanation
"""
|
wenduowang/git_home
|
python/MSBA/intro/HW3/HW3_WenduoWang.ipynb
|
gpl-3.0
|
gold = pd.read_table("gold.txt", names=["url", "category"]).dropna()
labels = pd.read_table("labels.txt", names=["turk", "url", "category"]).dropna()
"""
Explanation: Question 1: Read in data
Read in the data from "gold.txt" and "labels.txt".
Since there are no headers in the files, names parameter should be set explicitly.
Duplicate records in both dataframes are kept, because repeated tests on the same url provide more precise information about the turks' discernibility
End of explanation
"""
url_list = gold["url"].unique()
labels_on_gold = labels[labels["url"].map(lambda s: s in url_list)]
labels_unknown = labels[labels["url"].map(lambda s: s not in url_list)]
"""
Explanation: Question 2: Split into two DataFrames
To determine if a url in labels is in gold, make a list of the unique urls in gold, and map a lambda expression over the url series in labels.
End of explanation
"""
rater_merged = pd.merge(
labels_on_gold,
gold,
left_on="url",
right_on="url",
suffixes=["_1", "_2"]
)
rater_merged["correct"] = rater_merged["category_1"] == rater_merged["category_2"]
rater_merged = rater_merged[["turk", "correct"]]
correct_counts = rater_merged.groupby("turk")["correct"].sum()
total_counts = rater_merged["turk"].value_counts()
avg_correctness = correct_counts/total_counts
rater_goodness = pd.DataFrame({"number_of_ratings": total_counts, "average_correctness": avg_correctness})
rater_goodness[:10]
"""
Explanation: Question 3: Compute accuracies of turks
Since the computation only involves urls in the "gold" set, the "labels_on_gold" dataframe is used instead of "labels"
Merge "labels_on_gold" with "gold" on url.
Create a new column correct in the new dataframe, and assign True where the "turk" rating is the same with the true rating.
Optional: drop the rating columns to reduce the size of the dataframe
groupby on turk, and sum up the True records on correct for each turk, the returned value is a series
value_counts on turk, a series of total rating numbers is returned.
Divide the previous two series to get the rating accuracy of each turk
Create a new dataframe "rater_goodness" with the total rating number series and rating accuracy series, index by default set as turk
End of explanation
"""
rater_goodness["odds"] = rater_goodness["average_correctness"].map(lambda x: x/(1.001-x))
rater_goodness[:20]
"""
Explanation: Question 4: Odds ratios
Use "map" function on average_correctness to get $\frac{average\ correctness}{1 - average\ correctness}$
By definition, when average_correctness = 1, the ratio should be assigned float("inf")
End of explanation
"""
rater_goodness[rater_goodness["number_of_ratings"]>=20].sort_values(by="average_correctness", ascending=False)[:10]
"""
Explanation: Question 5: Most accurate turks
Use rater_goodness["number of ratings"]>=20 to select turks who rated at least 20 times.
Sort the list by average_correctness in descending order.
.index.values is optional to return only turks, but for aesthetic reasons it is not applied.
End of explanation
"""
plot(rater_goodness['number_of_ratings'],
rater_goodness['average_correctness'],
marker='o',
color='blue',
linestyle='None')
xlabel('number of ratings')
ylabel('average correctness')
"""
Explanation: Question 6: Rating counts versus accuracy
Plotting average_correctness against number of ratings makes it easier to get a general idea of the relationship between the two variables. However, from the plot, it is difficult to identify a clear pattern.
End of explanation
"""
import statsmodels.api as sm
from patsy import dmatrices
y, X = dmatrices('average_correctness ~ number_of_ratings', data=rater_goodness, return_type='dataframe')
model = sm.OLS(y, X)
result = model.fit()
print result.summary()
"""
Explanation: To quantitatively measure the linear correlation between number of ratings and average correctness, linear regression is used to draw insights.
From the model summary, it is still difficult to establish reliable linear correlation between the two variables, since the coefficient of number of ratings is not significantly different from zero.
statsmodels and patsy modules are imported for linear regression
End of explanation
"""
top_25_cutpoint = labels_on_gold["turk"].value_counts().quantile(q=.75)
turk_list = labels_on_gold["turk"].value_counts()
mask_1 = labels_unknown["turk"].map(lambda s: turk_list[s]>=top_25_cutpoint if s in turk_list else False)
labels_bytop25 = labels_unknown[mask_1]
rater_goodness["turk"] = rater_goodness.index
odds_top25 = rater_goodness[rater_goodness["turk"].map(lambda s: turk_list[s]>=top_25_cutpoint if s in turk_list else False)]
overall_odds = pd.merge(labels_bytop25,
odds_top25,
left_on="turk",
right_on="turk",
how="left").dropna()
overall_odds.groupby(["url", "category"])[["odds"]].prod()[:10]
"""
Explanation: Question 7: Overall predicted odds
Define the cutpoint of top 25% turks in term of number of ratings using quantile(q=.75).
Make a list of "turk: number of ratings"
Make a mask to select records rated by top 25% turks using map function.
Select from the total "labels" data set the records rated by top 25% turks.
Merge this dataframe with "labels_unknown" dataframe on url and category, duplicates dropped.
Next merge the resulting dataframe with "rater_goodness" dataframe.
First create a new turk column in "rater_goodness" dataframe from the index
Only select the records rated by top 25% turks from "rater_goodness" dataframe
Merge the two dataframe on turk
Drop duplicates and missing values
groupby the resulting dataframe on url and category.
Apply prod() on odds to calculate overall odds by url and category.
here odds is the "overall odds" as defined in the assignment description
End of explanation
"""
overall_odds_df = overall_odds.groupby(["url", "category"])[["odds"]].prod().unstack("category").T.fillna(0)
url_rating = pd.DataFrame(overall_odds_df.idxmax())
url_rating["top category"] = url_rating[0].map(lambda s: s[1])
url_rating = url_rating.set_index(url_rating.index.values)
url_rating["top odds"] = overall_odds_df.max()
url_rating = url_rating[["top category", "top odds"]]
url_rating[:10]
"""
Explanation: Question 8: Predicted categories
Create a dataframe from the groupby object in the last question, containing url, category and overall odds.
Apply unstack to breakdown category from index to columns.
Transpose the dataframe and get idxmax() on all columns, i.e. url, returned value is a series with url as index and np.array ("odds", category) as values.
Create a dataframe using the returned series, and convert the np.array into a string column "top category" by selecting the second element.
Create a new "top odds" column for the dataframe by max() on the transposed dataframe in step 2.
End of explanation
"""
top_75_cutpoint = labels_on_gold["turk"].value_counts().quantile(q=.25)
mask_2 = labels_unknown["turk"].map(lambda s: turk_list[s]>=top_75_cutpoint if s in turk_list else False)
labels_bytop75 = labels_unknown[mask_2]
odds_top75 = rater_goodness[rater_goodness["turk"].map(lambda s: turk_list[s]>=top_75_cutpoint if s in turk_list else False)]
overall_odds_75 = pd.merge(labels_bytop75,
odds_top75,
left_on="turk",
right_on="turk",
how="left").dropna()
overall_odds_df_75 = overall_odds_75.groupby(["url", "category"])[["odds"]].prod().unstack("category").T.fillna(0)
url_rating_75 = pd.DataFrame(overall_odds_df_75.idxmax())
url_rating_75["result_75"] = url_rating_75[0].map(lambda s: s[1])
url_rating_75 = pd.DataFrame(url_rating_75["result_75"])
url_rating_75 = url_rating_75.set_index(url_rating_75.index.values)
url_rating_25 = pd.DataFrame({"result_25": url_rating["top category"]})
url_rating_merged = pd.merge(url_rating_25,
url_rating_75,
left_index=True,
right_index=True,
).dropna()
url_rating_crosstab = pd.crosstab(index=url_rating_merged["result_25"],
columns=url_rating_merged["result_75"]
)
url_rating_crosstab
"""
Explanation: Question 9: Predicted categories using more turks
Repeat Question\ 7 and Question\ 8 to create a dataframe where url are rated by top 75% turks.
> Here only the "top category" column is kept and named result_75
Take out top category column from the dataframe from Question 8 and rename it result_25, and make it a dataframe.
Merge the two dataframes on index.
Create a crosstab with the two columns as index and columns respectively.
From the crosstab it can be seen that the most errors are where the top 25% turks rated "G" but the top 75% turks rated "P" (836 occurrences), "G" versus "R" (285 occurrences), and "P" versus "G" (229 occurrences).
End of explanation
"""
|
adelavega/neurosynth-mfc
|
other/Create MFC mask.ipynb
|
mit
|
cortex = nib.load('cerbcort.nii.gz')
# Binarize
cortex = nib.Nifti1Image((cortex.get_data() > 0).astype('int'), cortex.get_header().get_best_affine())
niplt.plot_roi(cortex)
"""
Explanation: Here, I'm going to create the mask that defines MFC for further analysis.
First, I load a cerebral cortex probability map from the Harvard-Oxford atlas
End of explanation
"""
i, j, k = np.meshgrid(*map(np.arange, cortex.get_data().shape), indexing='ij')
# Maximum left and right X coordinates
X_l = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [-10, 0, 0])[0]
X_r = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [10, 0, 0])[0]
# Maximum Y and Z coordinates
Y = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [0, -22, 0])[1]
Z = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [0, 0, -32])[2]
"""
Explanation: Next, I use meshgrid to mask voxels based on location. Importantly, we use nibabel's affine to convert an MNI coordinate (e.g. 10, 0, 0), to voxel space.
End of explanation
"""
## Exclude lateral
cortex.get_data()[
np.where((i < X_r) |
(i > X_l))] = 0
# Exclude posterior
cortex.get_data()[
np.where(j < Y)] = 0
## Exclude ventral
cortex.get_data()[
np.where(k < Z)] = 0
# Binarize
cortex.get_data()[cortex.get_data() < 1] = 0
cortex.get_data()[cortex.get_data() >= 1] = 1
niplt.plot_roi(cortex)
"""
Explanation: Finally, we use the voxel space coordinates to mask the 30% Harvard-Oxford cortical mask, and binarize it
End of explanation
"""
|
akseshina/dl_course
|
seminar_12/homework/homework.ipynb
|
gpl-3.0
|
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('fashion-mnist/data/fashion', one_hot=True)
n_samples = mnist.train.num_examples
"""
Explanation: Variational Autoencoder
End of explanation
"""
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
config=tf.ConfigProto(gpu_options=gpu_options)
class VAE:
def __init__(self, network_architecture, transfer_fct=tf.nn.softplus,
learning_rate=0.001, batch_size=100):
self.network_architecture = network_architecture
self.transfer_fct = transfer_fct
self.learning_rate = learning_rate
self.batch_size = batch_size
self.x = tf.placeholder(tf.float32, [None, network_architecture["n_input"]])
self._create_network()
self._create_loss_optimizer()
init = tf.global_variables_initializer()
self.sess = tf.InteractiveSession()
self.sess.run(init)
def _create_network(self):
# Use recognition network to determine mean and
# (log) variance of Gaussian distribution in latent
# space
self.z_mean, self.z_log_sigma_sq = self._recognition_network()
# Draw one sample z from Gaussian distribution
n_z = self.network_architecture["n_z"]
# tip: use tf.random_normal
eps = tf.random_normal(shape=tf.shape(self.z_log_sigma_sq))
# z = mu + sigma*epsilon
self.z = tf.add(self.z_mean,
tf.multiply(tf.sqrt(tf.exp(self.z_log_sigma_sq)), eps))
# Use generator to determine mean of
# Bernoulli distribution of reconstructed input
self.x_reconstr_mean = self._generator_network()
def _recognition_network(self):
layer_1 = slim.fully_connected(self.x, self.network_architecture['n_hidden_recog_1'])
layer_2 = slim.fully_connected(layer_1, self.network_architecture['n_hidden_recog_2'])
z_mean = slim.fully_connected(layer_2, self.network_architecture['n_z'],
activation_fn=None)
z_log_sigma_sq = slim.fully_connected(layer_2, self.network_architecture['n_z'])
return z_mean, z_log_sigma_sq
def _generator_network(self):
layer_1 = slim.fully_connected(self.z, self.network_architecture['n_hidden_recog_1'])
layer_2 = slim.fully_connected(layer_1, self.network_architecture['n_hidden_recog_2'])
x_reconstr_mean = slim.fully_connected(layer_2, self.network_architecture['n_input'],
activation_fn=None)
return x_reconstr_mean
def _create_loss_optimizer(self):
reconstr_loss = tf.reduce_sum(tf.square(tf.subtract(self.x, self.x_reconstr_mean)), axis=1)
net_normal_distr = tf.distributions.Normal(loc=self.z_mean,
scale=tf.sqrt(tf.exp(self.z_log_sigma_sq)))
ideal_normal_distr = tf.distributions.Normal(loc=0., scale=1.)
latent_loss = tf.reduce_sum(tf.distributions.kl_divergence(net_normal_distr, ideal_normal_distr), axis=1)
self.cost = tf.reduce_mean(reconstr_loss + latent_loss) # average over batch
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.cost)
def partial_fit(self, X):
"""Train model based on mini-batch of input data.
Return cost of mini-batch.
"""
opt, cost = self.sess.run((self.optimizer, self.cost),
feed_dict={self.x: X})
return cost
def transform(self, X):
"""Transform data by mapping it into the latent space."""
# Note: This maps to mean of distribution, we could alternatively
# sample from Gaussian distribution
return self.sess.run(self.z_mean, feed_dict={self.x: X})
def generate(self, z_mu=None):
""" Generate data by sampling from latent space.
If z_mu is not None, data for this point in latent space is
generated. Otherwise, z_mu is drawn from prior in latent
space.
"""
if z_mu is None:
z_mu = np.random.normal(size=self.network_architecture["n_z"])
# Note: This maps to mean of distribution, we could alternatively
# sample from Gaussian distribution
return self.sess.run(self.x_reconstr_mean,
feed_dict={self.z: z_mu})
def reconstruct(self, X):
""" Use VAE to reconstruct given data. """
return self.sess.run(self.x_reconstr_mean,
feed_dict={self.x: X})
def train(network_architecture, learning_rate=0.001,
batch_size=1000, training_epochs=10, display_step=5):
vae = VAE(network_architecture,
learning_rate=learning_rate,
batch_size=batch_size)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(n_samples / batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, _ = mnist.train.next_batch(batch_size)
# Fit training using batch data
cost = vae.partial_fit(batch_xs)
# Compute average loss
avg_cost += cost / n_samples * batch_size
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1),
"cost=", "{:.9f}".format(avg_cost))
return vae
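# Side note, as a minimal numpy check: for a diagonal Gaussian q = N(mu, sigma^2) against the
# N(0, I) prior, the latent term computed above with tf.distributions.kl_divergence has the
# closed form -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2), summed over the latent dimensions.
import numpy as np

def kl_diag_gaussian_vs_standard_normal(mu, log_sigma_sq):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over the latent dimensions."""
    return -0.5 * np.sum(1.0 + log_sigma_sq - mu**2 - np.exp(log_sigma_sq), axis=-1)

# quick sanity check at an arbitrary point in latent space
kl_diag_gaussian_vs_standard_normal(np.array([[0.3, -1.2]]), np.array([[0.1, -0.5]]))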
"""
Explanation: Task: fill the gaps in VAE
End of explanation
"""
network_architecture = \
dict(n_hidden_recog_1=500, # 1st layer encoder neurons
n_hidden_recog_2=500, # 2nd layer encoder neurons
n_hidden_gener_1=500, # 1st layer decoder neurons
n_hidden_gener_2=500, # 2nd layer decoder neurons
n_input=784, # MNIST data input (img shape: 28*28)
n_z=20) # dimensionality of latent space
vae = train(network_architecture, training_epochs=128)
x_sample = mnist.test.next_batch(1000)[0]
x_reconstruct = vae.reconstruct(x_sample)
plt.figure(figsize=(8, 12))
for i in range(5):
plt.subplot(5, 2, 2*i + 1)
plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray")
plt.title("Test input")
plt.colorbar()
plt.subplot(5, 2, 2*i + 2)
plt.imshow(x_reconstruct[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray")
plt.title("Reconstruction")
plt.colorbar()
plt.tight_layout()
"""
Explanation: Visualize reconstruction quality
End of explanation
"""
network_architecture = \
dict(n_hidden_recog_1=500, # 1st layer encoder neurons
n_hidden_recog_2=500, # 2nd layer encoder neurons
n_hidden_gener_1=500, # 1st layer decoder neurons
n_hidden_gener_2=500, # 2nd layer decoder neurons
n_input=784, # MNIST data input (img shape: 28*28)
n_z=2) # dimensionality of latent space
vae_2d = train(network_architecture, training_epochs=128)
x_sample, y_sample = mnist.test.next_batch(5000)
z_mu = vae_2d.transform(x_sample)
plt.figure(figsize=(8, 6))
plt.scatter(z_mu[:, 0], z_mu[:, 1], c=np.argmax(y_sample, 1))
plt.colorbar()
plt.grid()
"""
Explanation: Illustrating latent space
Next, we train a VAE with a 2d latent space and illustrate how the encoder (the recognition network) encodes some of the labeled inputs (collapsing the Gaussian distribution in latent space to its mean). This gives us some insights into the structure of the learned manifold (latent space)
End of explanation
"""
nx = ny = 20
x_values = np.linspace(-3, 3, nx)
y_values = np.linspace(-3, 3, ny)
canvas = np.empty((28*ny, 28*nx))
for i, yi in enumerate(x_values):
for j, xi in enumerate(y_values):
z_mu = np.array([[xi, yi]]*vae.batch_size)
x_mean = vae_2d.generate(z_mu)
canvas[(nx-i-1)*28:(nx-i)*28, j*28:(j+1)*28] = x_mean[0].reshape(28, 28)
plt.figure(figsize=(8, 10))
Xi, Yi = np.meshgrid(x_values, y_values)
plt.imshow(canvas, origin="upper", cmap="gray")
plt.tight_layout()
"""
Explanation: An other way of getting insights into the latent space is to use the generator network to plot reconstrunctions at the positions in the latent space for which they have been generated:
End of explanation
"""
|
TomTranter/OpenPNM
|
examples/percolation/Part B - Invasion Percolation.ipynb
|
mit
|
import sys
import openpnm as op
import numpy as np
np.random.seed(10)
import matplotlib.pyplot as plt
import porespy as ps
from ipywidgets import interact, IntSlider
from openpnm.topotools import trim
%matplotlib inline
ws = op.Workspace()
ws.settings["loglevel"] = 40
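# A conceptual sketch of the invasion percolation loop itself, not OpenPNM's implementation:
# throats bordering the invading cluster sit in a priority queue keyed by capillary entry
# pressure and the cheapest one is always invaded next. The inputs (conns, entry_pressure,
# inlets, n_pores) are generic placeholders for a network's connectivity and throat properties.
import heapq

def invasion_percolation_sketch(conns, entry_pressure, inlets, n_pores):
    """conns: (Nt, 2) int array giving the two pores joined by each throat."""
    pore_seq = np.full(n_pores, -1, dtype=int)      # -1 means not yet invaded
    throat_seq = np.full(len(conns), -1, dtype=int)
    neighbors = [[] for _ in range(n_pores)]        # throats attached to each pore
    for t, (p1, p2) in enumerate(conns):
        neighbors[p1].append(t)
        neighbors[p2].append(t)
    heap = []
    for p in inlets:                                # inlet pores are invaded at step 0
        pore_seq[p] = 0
        for t in neighbors[p]:
            heapq.heappush(heap, (entry_pressure[t], t))
    step = 0
    while heap:
        _, t = heapq.heappop(heap)                  # cheapest currently accessible throat
        if throat_seq[t] >= 0:
            continue
        step += 1
        throat_seq[t] = step
        for p in conns[t]:                          # invade any uninvaded pore it reaches
            if pore_seq[p] < 0:
                pore_seq[p] = step
                for tn in neighbors[p]:
                    if throat_seq[tn] < 0:
                        heapq.heappush(heap, (entry_pressure[tn], tn))
    return pore_seq, throat_seq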
"""
Explanation: Part B: Invasion Percolation
The next percolation algorithm to be demonstrated is known as Invasion Percolation. Instead of identifying connected clusters and invading them all in one go, as Ordinary Percolation does, this algorithm progresses one invasion step at a time. This is a more dynamic process and better simulates scenarios where instead of controlling the pressure at the network boundaries something else such as mass flow rate is controlled as the pressure is allowed to fluctuate up and down in order to meet the lowest available entry pressure for the growing cluster(s).
End of explanation
"""
spacing=2.5e-5
net = op.network.Cubic([20, 20, 1], spacing=spacing)
geo = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
"""
Explanation: In order to also showcase some other network generation options we first start with a small 2D network with StickAndBall geometry.
End of explanation
"""
net.labels()
net.num_throats('surface')
trim(network=net, throats=net.throats('surface'))
h = net.check_network_health()
trim(network=net, pores=h['trim_pores'])
"""
Explanation: We then trim all the surface pores to obtain disctint sets of boundary edge pores.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
im = ps.io.openpnm_to_im(net, max_dim=1000)
im.shape
"""
Explanation: Then we use a function from our porespy package to generate a tomography style image of the abstract network providing the number of pixels in each dimension.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=(5, 5))
plt.imshow(im[25:-25, 25:-25, 25].T)
crop = im[25:-25, 25:-25, :]
"""
Explanation: This creates a 3D image but we can crop it to get the central slice in 2D for visualization.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
snow_out = ps.networks.snow(crop > 0, voxel_size=4e-7)
snow_out.regions.shape
"""
Explanation: Next the snow algorithm is used to do network extraction on the tomography style image. Of course if you have your own tomogrpahy image this can be used instead.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=(5, 5))
reg = snow_out.regions.astype(float) - 1
reg[reg == -1] = np.nan
region_slice = snow_out.regions[:, :, 28] - 1
mask = region_slice >= 0
plt.imshow(region_slice.T);
"""
Explanation: The SNOW algorithm provides a labelled region image containing the pore index. As zero is used for the background it is actually the pore index + 1 because python references arrays with first element as zero and we do not explicitly store the pore index.
End of explanation
"""
wrk = op.Workspace()
wrk.clear()
net = op.network.GenericNetwork()
net.update(snow_out)
geo = op.geometry.GenericGeometry(network=net, pores=net.Ps, throats=net.Ts)
"""
Explanation: Now our new network is extracted we can fill a network object with all the properties and begin simulation.
End of explanation
"""
def update_image(data):
data = data.astype(float)
out_im = np.ones(region_slice.shape, dtype=float)*-1
out_im[mask] = data[region_slice[mask]]
out_im[~mask] = np.nan
return out_im
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=(5, 5))
out = update_image(net['pore.diameter'])
plt.imshow(out.T);
"""
Explanation: A helper function is defined for plotting a particular data set.
End of explanation
"""
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, geometry=geo, phase=water)
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=[5, 5])
ax.hist(phys['throat.entry_pressure'], bins=10)
"""
Explanation: Again, stadard physics is used to define the capillary entry pressures. And these are shown as a histogram for all the throats in the network.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
alg_ip = op.algorithms.InvasionPercolation(network=net)
alg_ip.setup(phase=water)
alg_ip.set_inlets(pores=net.pores('left'))
alg_ip.run()
fig, ax = plt.subplots(figsize=(5, 5))
out = update_image(alg_ip['pore.invasion_sequence'])
plt.imshow(out.T);
def plot_invasion(seq):
data = alg_ip['pore.invasion_sequence'] < seq
fig, ax = plt.subplots(figsize=(5, 5))
out = update_image(data)
plt.imshow(out.T);
"""
Explanation: Next, the algorithm is defined and run with no arguments or outlets defined. This will proceed step by step assessing which pores are currently invaded (i.e. inlets first), which throats connect to an uninvaded pore and of these, which throat has the lowest capillary entry pressure for invasion. Invasion then proceeds along the path of least capillary resistance.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
max_seq = alg_ip['pore.invasion_sequence'].max()
interact(plot_invasion, seq=IntSlider(min=0, max=max_seq, step=1, value=200))
"""
Explanation: Using the slider below we can interactively plot the saturation at each invasion step (this works best using the left and right arrow keys).
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=(5, 5))
alg_ip.plot_intrusion_curve(fig)
plt.show()
"""
Explanation: As with Ordinary Percolation, we can plot a drainage or intrusion curve, but this time the capillary pressure is plotted from one step to the next as a continuous process with dynamic pressure boundary conditions, so it is allowed to both increase and decrease in order to meet the next lowest entry pressure for the invading cluster.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=(5, 5))
alg_op = op.algorithms.OrdinaryPercolation(network=net, phase=water)
alg_op.set_inlets(net.pores('left'))
alg_op.setup(pore_volume='pore.volume',
throat_volume='throat.volume')
alg_op.run(points=1000)
alg_op.plot_intrusion_curve(fig)
alg_ip.plot_intrusion_curve(fig)
plt.show()
"""
Explanation: We can compare the results of the two algorithms and see that the pressure envelope, i.e. the maximum pressure reached historically by the invasion process, is the same as the ordinary percolation value.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
alg_ip_t = op.algorithms.InvasionPercolation(network=net)
alg_ip_t.setup(phase=water)
alg_ip_t.set_inlets(pores=net.pores('left'))
alg_ip_t.run()
alg_ip_t.apply_trapping(outlets=net.pores(['boundary']))
fig, ax = plt.subplots(figsize=(5, 5))
out = update_image(alg_ip_t['pore.trapped'])
plt.imshow(out.T);
"""
Explanation: An additional feature of the algorithm is the ability to identify where the defending phase becomes trapped. Whether this happens in reality in fact depends on the connectivity of the defending phase and whether it can reside in the invaded pores as thin wetting films. If not, then the defending phase is completely pushed out of a pore when it is invaded, and it can become isolated and trapped when encircled by the invading phase. OpenPNM calculates this trapping as a post-process, employing some clever logic described by Masson (2016).
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=(5, 5))
alg_ip.plot_intrusion_curve(fig)
alg_ip_t.plot_intrusion_curve(fig)
plt.show()
"""
Explanation: Here a reasonable fraction of the pore space is not invaded due to trapping of the defending phase. Generally, this fraction will be lower in truly 3D networks, as pores have higher connectivity and there are more routes out of the network. Also, if the defending phase is considered to be wetting, film flow is typically assumed to allow the residual defending phase to escape. However, we can show the differences with and without trapping on one plot below.
End of explanation
"""
|
CivicKnowledge/metatab-packages
|
census.gov/census.gov-pums-20165/notebooks/Extract.ipynb
|
mit
|
rac1p_map = {
1: 'white',
2: 'black',
3: 'amind',
4: 'alaskanat',
5: 'aian',
6: 'asian',
7: 'nhopi',
8: 'other',
9: 'many'
}
pop['race'] = pop.rac1p.astype('category')
pop['race'] = pop.race.cat.rename_categories(rac1p_map)
# The raceeth variable is the race variable, but with 'white' replaced
# with 'hisp' for records that have both is_hisp and white set. So, for
# raceeth, 'white' means 'non-hispanic white'.
pop['is_hisp'] = pop.hisp != 1
pop['raceeth'] = pop['race'].mask(((pop.is_hisp == True) & (pop.race == 'white')), 'hisp')
pop[['rac1p','race','is_hisp','raceeth']].head()
pop[pop.raceeth == 'white'].agep.hist()
pop[pop.raceeth == 'hisp'].agep.hist()
"""
Explanation: Recode Race and Ethnicity
RAC1P
Recoded detailed race code
1 .White alone
2 .Black or African American alone
3 .American Indian alone
4 .Alaska Native alone
5 .American Indian and Alaska Native tribes specified; or American Indian or Alaska Native, not specified and no other races
6 .Asian alone
7 .Native Hawaiian and Other Pacific Islander alone
8 .Some Other Race alone
9 .Two or More Races
End of explanation
"""
ages = ['18-25 YEARS',
'26-29 YEARS',
'30-34 YEARS',
'35-39 YEARS',
'40-44 YEARS',
'45-49 YEARS',
'50-54 YEARS',
'55-59 YEARS',
'60-64 YEARS',
'65-69 YEARS',
'70-74 YEARS',
'75-79 YEARS',
'80-84 YEARS',
'85+ YEARS']
def extract_age(v):
if v.startswith('85'):
return pd.Interval(left=85, right=120, closed='both')
else:
l,h,_ = v.replace('-',' ').split()
return pd.Interval(left=int(l), right=int(h), closed='both')
age_ranges = [ (extract_age(v), v) for v in ages]
age_index = pd.IntervalIndex(list(ar[0] for ar in age_ranges))
pop['age_group'] = pd.cut(pop.agep,age_index).astype('category')
pop['age_group'].cat.rename_categories(dict(age_ranges), inplace=True)
pop[['agep','age_group']].head()
"""
Explanation: Recode Age
Age groups from CHIS:
18-25 YEARS 1906
26-29 YEARS 867
30-34 YEARS 1060
35-39 YEARS 1074
40-44 YEARS 1062
45-49 YEARS 1302
50-54 YEARS 1621
55-59 YEARS 1978
60-64 YEARS 2343
65-69 YEARS 2170
70-74 YEARS 1959
75-79 YEARS 1525
80-84 YEARS 1125
85+ YEARS 1161
End of explanation
"""
povlvls = ['0-99% FPL', '100-199% FPL', '200-299% FPL', '300% FPL AND ABOVE']
pov_index = pd.IntervalIndex(
[pd.Interval(left=0, right=99, closed='both'),
pd.Interval(left=100, right=199, closed='both'),
pd.Interval(left=200, right=299, closed='both'),
pd.Interval(left=300, right=501, closed='both')]
)
pop.povpip.describe()
pop['pov_group'] = pd.cut(pop.povpip,pov_index).astype('category')
pop['pov_group'].cat.rename_categories(dict(zip(pov_index, povlvls)), inplace=True)
pop[['povpip','pov_group']].head()
pop.groupby('puma').pwgtp5.sum().sum()
dfx = pop[cols+['age_group','pov_group','race','is_hisp','raceeth']+weight_cols]
dfx.head(20).T
len(dfx)
"""
Explanation: Recode Poverty Level
End of explanation
"""
def build_set(df, rep_no):
    # Collect (weight - 1) extra copies of each record, where the weight is the
    # replicate weight column 'pwgtp<rep_no>'; one copy of each record already
    # exists in df. Note that we append the row Series itself, not the
    # (index, Series) tuple yielded by iterrows(), so DataFrame.append works.
    new_rows = []
    for _, row in df.iterrows():
        repl = int(row.at['pwgtp' + str(rep_no)])
        if repl > 1:
            new_rows.extend([row] * (repl - 1))
    return new_rows
%time new_rows = build_set(dfx, 1)
%time t = dfx.copy().append(new_rows, ignore_index = True)
len(t)
t
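# A hedged alternative sketch: the same replicate-weight expansion can be done
# without an explicit Python loop by repeating each row according to its weight
# column ('pwgtp1' here, matching build_set(dfx, 1) above). Rows with weight < 1
# keep a single copy, as in the loop version.
rep_counts = dfx['pwgtp1'].clip(lower=1).astype(int)
t_fast = dfx.loc[dfx.index.repeat(rep_counts)].reset_index(drop=True)
len(t_fast)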
from publicdata import parse_app_url
url = parse_app_url('census://2015/5/CA/140/B17001')
dfc = url.geoframe()
dfc.plot()
# The puma files moved, so the publicdata package is wrong.
url = parse_app_url('shape+ftp://ftp2.census.gov/geo/tiger/TIGER2018/PUMA/tl_2018_06_puma10.zip')
pumas = url.get_resource().geoframe()
pumas.plot()
url = parse_app_url('census://2015/5/CA/county/B17001')
url.geo_url.shape_url
counties_pkg = mp.open_package('http://library.metatab.org/census.gov-counties-2017-2.csv')
counties = counties_pkg.resource('counties').geoframe()
sd = counties[counties.name == 'San Diego']
#import geopandas as gpd
#gpd.sjoin(pumas, sd)
"""
Explanation: Build the full population set
End of explanation
"""
|
ALEXKIRNAS/DataScience
|
Coursera/Machine-learning-data-analysis/Course 2/Week_01/PA_linreg_stochastic_grad_descent.ipynb
|
mit
|
def write_answer_to_file(answer, filename):
with open(filename, 'w') as f_out:
f_out.write(str(round(answer, 3)))
"""
Explanation: Linear regression and stochastic gradient descent
This assignment is based on the lecture materials on linear regression and gradient descent. You will predict a company's revenue as a function of the level of its advertising investments in TV, newspapers and radio.
You will learn how to:
solve the linear regression estimation problem
implement stochastic gradient descent to fit it
solve the linear regression problem analytically
Introduction
Linear regression is one of the most thoroughly studied machine learning methods; it predicts the value of a quantitative target as a linear combination of the other features, with the model weights as parameters. The optimal parameters of a linear regression (in the sense of minimizing some error functional) can be found analytically with the normal equation or numerically with optimization methods.
Linear regression uses a simple quality functional: the mean squared error. We will work with a dataset containing 3 features. To fit the model parameters (weights), the following problem is solved:
$$\Large \frac{1}{\ell}\sum_{i=1}^\ell{{((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}^2} \rightarrow \min_{w_0, w_1, w_2, w_3},$$
where $x_{i1}, x_{i2}, x_{i3}$ are the feature values of the $i$-th object, $y_i$ is the target value of the $i$-th object, and $\ell$ is the number of objects in the training set.
Gradient descent
The parameters $w_0, w_1, w_2, w_3$ that minimize the mean squared error can be found numerically with gradient descent.
The gradient step for the weights looks as follows:
$$\Large w_0 \leftarrow w_0 - \frac{2\eta}{\ell} \sum_{i=1}^\ell{{((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}}$$
$$\Large w_j \leftarrow w_j - \frac{2\eta}{\ell} \sum_{i=1}^\ell{{x_{ij}((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}},\ j \in {1,2,3}$$
Here $\eta$ is a parameter, the gradient descent step size.
Stochastic gradient descent
The problem with the gradient descent described above is that, on large datasets, computing the gradient over all available data at every step can be computationally very expensive.
In the stochastic variant of gradient descent, the weight updates are computed using only a single randomly chosen object of the training set:
$$\Large w_0 \leftarrow w_0 - \frac{2\eta}{\ell} {((w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}) - y_k)}$$
$$\Large w_j \leftarrow w_j - \frac{2\eta}{\ell} {x_{kj}((w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}) - y_k)},\ j \in {1,2,3},$$
where $k$ is a random index, $k \in {1, \ldots, \ell}$.
The normal equation
The vector of optimal weights $w$ can also be found analytically.
We want to find a weight vector $w$ such that the vector $y$ approximating the target is obtained by multiplying the matrix $X$ (consisting of all features of the training objects except the target) by the weight vector $w$. That is, the following matrix equation should hold:
$$\Large y = Xw$$
Multiplying both sides on the left by $X^T$, we get:
$$\Large X^Ty = X^TXw$$
This is convenient, because the matrix $X^TX$ is now square, and the solution (the vector $w$) can be written as:
$$\Large w = {(X^TX)}^{-1}X^Ty$$
The matrix ${(X^TX)}^{-1}X^T$ is the pseudoinverse of the matrix $X$. In NumPy such a matrix can be computed with the function numpy.linalg.pinv.
However, computing the pseudoinverse is a computationally expensive operation and is unstable when the determinant of the matrix $X$ is small (the multicollinearity problem).
In practice it is better to find the weight vector $w$ by solving the matrix equation
$$\Large X^TXw = X^Ty$$ This can be done with the function numpy.linalg.solve.
Still, in practice, gradient descent works faster for large matrices $X$, especially its stochastic version.
Instructions
First we will write a simple function for saving answers to a text file. The answers will be the numbers obtained while solving this assignment, rounded to 3 decimal places. After completing the assignment, the resulting files must be submitted via the form on the assignment page at Coursera.org.
End of explanation
"""
import pandas as pd
adver_data = pd.read_csv('advertising.csv')
"""
Explanation: 1. Load the data from the file advertising.csv into a pandas DataFrame object. Data source.
End of explanation
"""
adver_data.head(5)
adver_data.describe()
"""
Explanation: Look at the first 5 records and at the feature statistics in this dataset.
End of explanation
"""
import numpy as np
data = adver_data
X = np.array(data[['TV', 'Radio', 'Newspaper']])
y = np.array(data['Sales'])
X
"""
Explanation: Create NumPy arrays X from the columns TV, Radio and Newspaper, and y from the column Sales. Use the values attribute of the pandas DataFrame object.
End of explanation
"""
means, stds = np.mean(X, axis=0), np.std(X, axis=0)
X = (X - means) / stds
X
"""
Explanation: Scale the columns of the matrix X by subtracting from each value the mean of the corresponding column and dividing the result by the standard deviation. For definiteness, use the mean and std methods of NumPy arrays (the std implementation in Pandas may differ). Note that in numpy, calling .mean() without arguments returns the mean over all elements of the array, not per column as in pandas. To compute per column, you must specify the axis parameter.
End of explanation
"""
X = np.hstack((X, np.ones(len(X)).reshape(len(X), 1)))
X
"""
Explanation: Add a column of ones to the matrix X using the NumPy methods hstack, ones and reshape. The vector of ones is needed so that the intercept coefficient $w_0$ of the linear regression does not have to be handled separately.
End of explanation
"""
def mserror(y, y_pred):
return np.sum(np.square(y - y_pred)) / len(y)
"""
Explanation: 2. Implement the function mserror, the mean squared prediction error. It takes two arguments: the Series objects y (target values) and y_pred (predicted values). Do not use loops in this function, otherwise it will be computationally inefficient.
End of explanation
"""
answer1 = mserror(y, np.median(y))
print(answer1)
write_answer_to_file(answer1, '1.txt')
"""
Explanation: What is the mean squared error of predicting Sales if we always predict the median value of Sales over the original sample? Write the answer to the file '1.txt'.
End of explanation
"""
def normal_equation(X, y):
return np.dot(np.linalg.pinv(X), y)
norm_eq_weights = normal_equation(X, y)
print(norm_eq_weights)
"""
Explanation: 3. Implement the function normal_equation, which for given matrices (NumPy arrays) X and y computes the weight vector $w$ according to the normal equation of linear regression.
End of explanation
"""
answer2 = np.sum(np.array([0, 0, 0, 1]) * norm_eq_weights)
print(answer2)
write_answer_to_file(answer2, '2.txt')
"""
Explanation: What sales are predicted by the linear model with the weights found via the normal equation in the case of average advertising investments in TV, radio and newspapers (that is, with zero values of the scaled features TV, Radio and Newspaper)? Write the answer to the file '2.txt'.
End of explanation
"""
def linear_prediction(X, w):
return np.dot(X, w)
"""
Explanation: 4. Write the function linear_prediction, which takes as input a matrix X and a vector of linear model weights w, and returns the vector of predictions as a linear combination of the columns of X with weights w.
End of explanation
"""
answer3 = mserror(y, linear_prediction(X, norm_eq_weights))
print(answer3)
write_answer_to_file(answer3, '3.txt')
"""
Explanation: What is the mean squared error of predicting Sales with the linear model whose weights were found via the normal equation? Write the answer to the file '3.txt'.
End of explanation
"""
def stochastic_gradient_step(X, y, w, train_ind, eta=0.01):
x = X[train_ind] * (np.sum(X[train_ind] * w) - y[train_ind]) * (2 / X.shape[0])
grad0 = x[0]
grad1 = x[1]
grad2 = x[2]
grad3 = x[3]
return w - eta * np.array([grad0, grad1, grad2, grad3])
"""
Explanation: 5. Write the function stochastic_gradient_step, implementing one step of stochastic gradient descent for linear regression. The function should take a matrix X, vectors y and w, an integer train_ind (the index of the training-set object, i.e. the row of X, used to compute the weight update), and a number $\eta$ (eta), the gradient descent step size (eta=0.01 by default). The result is the vector of updated weights. Our implementation of the function is written explicitly for data with 3 features, but it is easy to modify it for any number of features; you can do this yourself (a generalized sketch is included after this explanation).
End of explanation
"""
def stochastic_gradient_descent(X, y, w_init, eta=1e-2, max_iter=1e4,
max_weight_dist=1e-8, seed=42, verbose=False):
    # Initialize the distance between weight vectors on consecutive
    # iterations with a large number.
weight_dist = np.inf
    # Initialize the weight vector
w = w_init
    # Errors at each iteration will be recorded here
errors = [mserror(y, linear_prediction(X, w))]
    # Iteration counter
iter_num = 0
    # We will generate pseudo-random numbers (the index of the object used to
    # update the weights); for reproducibility of this pseudo-random
    # sequence we use the seed.
np.random.seed(seed)
    # Main loop
while weight_dist > max_weight_dist and iter_num < max_iter:
        # generate a pseudo-random index
        # of a training-set object
random_ind = np.random.randint(X.shape[0])
        # Your code here
new_w = stochastic_gradient_step(X, y, w, random_ind, eta)
errors.append(mserror(y, linear_prediction(X, new_w)))
if verbose:
print (errors[-1])
weight_dist = np.sqrt(np.sum(np.square(new_w - w)))
w = new_w
iter_num += 1
return w, errors
"""
Explanation: 6. Write the function stochastic_gradient_descent, implementing stochastic gradient descent for linear regression. The function takes the following arguments:
- X - the matrix corresponding to the training set
- y - the vector of target values
- w_init - the vector of initial model weights
- eta - the gradient descent step size (0.01 by default)
- max_iter - the maximum number of gradient descent iterations (10000 by default)
- max_weight_dist - the maximum Euclidean distance between weight vectors on consecutive gradient descent iterations
at which the algorithm stops (1e-8 by default)
- seed - the number used for reproducibility of the generated pseudo-random numbers (42 by default)
- verbose - a flag for printing information (e.g. for debugging, False by default)
At every iteration, the current value of the mean squared error should be appended to a vector (list). The function should return the weight vector $w$ as well as the vector (list) of errors.
End of explanation
"""
%%time
stoch_grad_desc_weights, stoch_errors_by_iter = stochastic_gradient_descent(X, y, np.zeros(4), max_iter = 1e5,
verbose = False)
"""
Explanation: Run $10^5$ iterations of stochastic gradient descent. Set the initial weight vector w_init to all zeros. Leave the parameters eta and seed equal to their default values (eta=0.01, seed=42; this is important for checking the answers).
End of explanation
"""
%pylab inline
plot(range(50), stoch_errors_by_iter[:50])
xlabel('Iteration number')
ylabel('MSE')
"""
Explanation: Let's look at the error over the first 50 iterations of stochastic gradient descent. We can see that the error does not necessarily decrease at every iteration.
End of explanation
"""
%pylab inline
plot(range(len(stoch_errors_by_iter)), stoch_errors_by_iter)
xlabel('Iteration number')
ylabel('MSE')
"""
Explanation: Теперь посмотрим на зависимость ошибки от номера итерации для $10^5$ итераций стохастического градиентного спуска. Видим, что алгоритм сходится.
End of explanation
"""
stoch_grad_desc_weights
"""
Explanation: Let's look at the weight vector to which the method converged.
End of explanation
"""
stoch_errors_by_iter[-20:]
"""
Explanation: Let's look at the mean squared error over the final iterations.
End of explanation
"""
answer4 = mserror(y, linear_prediction(X, stoch_grad_desc_weights))
print(answer4)
write_answer_to_file(answer4, '4.txt')
"""
Explanation: What is the mean squared error of predicting Sales with the linear model whose weights were found with gradient descent? Write the answer to the file '4.txt'.
End of explanation
"""
|
harsh6292/machine-learning-nd
|
projects/customer_segments/customer_segments.ipynb
|
mit
|
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
"""
Explanation: Machine Learning Engineer Nanodegree
Unsupervised Learning
Project: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
"""
# Display a description of the dataset
display(data.describe())
"""
Explanation: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
End of explanation
"""
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [22, 165, 380]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
import seaborn as sns
sns.heatmap((samples-data.mean())/data.std(ddof=0), annot=True, cbar=False, square=True)
"""
Explanation: Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
End of explanation
"""
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop(['Delicatessen'], axis=1)
test_label = data['Delicatessen']
from sklearn.cross_validation import train_test_split
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, test_label, test_size=0.25, random_state=2)
from sklearn.tree import DecisionTreeClassifier
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeClassifier(random_state=2).fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print score
"""
Explanation: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.
Answer:
By looking at the data set above of 3 samples:
- The Sample 0 customer buys a lot of Fresh and Frozen products compared to the others. This might represent a big retailer.
- The Sample 1 customer buys Milk, Grocery and Detergents_Paper. This kind of customer will likely represent a restaurant.
- The Sample 2 customer buys Fresh products more than anything else. This one could also represent a market/retailer dealing mostly in fresh products.
Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
End of explanation
"""
corr = data.corr()
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask, 1)] = True
with sns.axes_style("white"):
ax = sns.heatmap(corr, mask=mask, square=True, annot=True,
cmap='RdBu', fmt='+.3f')
plt.xticks(rotation=45, ha='center')
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
"""
Explanation: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
Answer:
I attempted to predict 'Delicatessen'.
The prediction score given by the decision tree is 0.00909.
Since the score is so low, the remaining features cannot reconstruct 'Delicatessen'; dropping this feature would lose information needed to correctly predict customer behavior.
The Delicatessen feature is therefore an important part of the data and cannot be removed.
Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
End of explanation
"""
fig, axes = plt.subplots(2, 3)
axes = axes.flatten()
fig.set_size_inches(18, 6)
fig.suptitle('Distribution of Features')
for i, col in enumerate(data.columns):
feature = data[col]
sns.distplot(feature, label=col, ax=axes[i]).set(xlim=(-1000, 20000),)
axes[i].axvline(feature.mean(),linewidth=2, color='y')
axes[i].axvline(feature.median(),linewidth=1, color='r')
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
fig, axes = plt.subplots(2, 3)
axes = axes.flatten()
fig.set_size_inches(18, 6)
fig.suptitle('Distribution of Features for Log Data')
for i, col in enumerate(log_data.columns):
feature = log_data[col]
sns.distplot(feature, label=col, ax=axes[i])
axes[i].axvline(feature.mean(),linewidth=2, color='y')
axes[i].axvline(feature.median(),linewidth=1, color='r')
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# set plot style & color scheme
sns.set_style('ticks')
with sns.color_palette("Reds_r"):
# plot densities of log data
plt.figure(figsize=(8,4))
for col in data.columns:
sns.kdeplot(log_data[col], shade=True)
plt.legend(loc='best')
"""
Explanation: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?
Answer:
The pairs of features exhibiting some degree of correlation are [Milk, Grocery] and [Grocery, Detergents_Paper].
From the above data, the Delicatessen feature does not appear to be correlated with any other feature. When plotted against Delicatessen, the data points are bunched near the origin, so the distribution is not normal.
This is consistent with my conclusion about the relevance of the feature, since the scatter matrix did not show any correlation between Delicatessen and the other features.
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.
Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
- Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
End of explanation
"""
# Display the log-transformed sample data
display(log_samples)
"""
Explanation: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
End of explanation
"""
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25.0)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75.0)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3 - Q1) * 1.5
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65, 66, 75, 128, 154]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
"""
Explanation: Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
End of explanation
"""
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6).fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
"""
Explanation: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer:
There are some data points which are categorized as outliers in atleast 2 features.
The data points added to outliers list are: 65, 66, 75, 128, 154
These points should be removed from the dataset because these data points don't lie within normalized distribution range for two or more features. By removing these outliers, we can then correctly classify other data points.
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
"""
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
"""
Explanation: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.
Answer:
The variance in the data explained in total by the first and second principal components is 0.7068, or 70.68%.
The variance explained by the first four principal components is 0.9311, or 93.11%.
The first four dimensions represent customer spending as follows:
1. Dimension 1: The customer spends most on Detergents_Paper; a customer who spends more on Detergents_Paper also spends more on Milk and Grocery while spending less on Fresh and Frozen items. This dimension is inclined to represent a coffee/drinks restaurant.
2. Dimension 2: In this dimension all the weights are positive. The customer spends most on Fresh, Frozen and Delicatessen items, with less focus on Milk, Grocery and Detergents_Paper, although those items are still bought. This will likely represent a market.
3. Dimension 3: Here the customer spends most on Delicatessen and Frozen products and does not focus on buying Fresh products or Detergents_Paper. This most likely represents a deli restaurant.
4. Dimension 4: This dimension is dominated by Frozen. The customer buys more Frozen products and Detergents_Paper while keeping away from Delicatessen items and Fresh produce.
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
End of explanation
"""
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
"""
Explanation: Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
- Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
"""
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
"""
Explanation: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
End of explanation
"""
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
"""
Explanation: Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
End of explanation
"""
from sklearn.mixture import GMM
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = GMM(n_components=2).fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
from sklearn.metrics import silhouette_score
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)
print score
"""
Explanation: Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer:
Advantages of K-means:
1. It is simple to implement and requires less computation cost than compared to other clustering algorithms.
2. Since it uses Euclidean distance to identify clusters, it is faster than other clustering algorithms.
3. It produces tighter clusters than other algorithms (hard classification)
Reference:
- https://en.wikipedia.org/wiki/K-means_clustering
- http://playwidtech.blogspot.hk/2013/02/k-means-clustering-advantages-and.html
- http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/K-Means_Clustering_Overview.htm
Advantages of Gaussian Mixture Model:
1. With GMM, a data point is allowed to be loosely associated with one or more clusters based on the probability. The data point is not strictly associated with just one cluster.
2. Since a single data point can be associated with multiple clusters, GMM will avoid to create a cluster of a particular shape as opposed to K-means (soft classification).
In hard classification (as in K-means), a data point is assigned to, say, cluster A with 100% probability or belief that it belongs to cluster A. As the algorithm progresses, it might reverse its belief so that the same data point now belongs to another cluster, cluster B, but again with 100% probability that it belongs to this new cluster. In effect, at every iteration the cluster assignment is hard, in the sense that a data point either belongs to a cluster or it does not.
In soft classification (as in GMM), a data point is assigned to cluster A with some probability, say 90%. At the same time the data point also has a 10% chance of belonging to cluster B. In the next iteration, the algorithm recalculates the cluster centers and might lower that belief, so the data point now has an 80% chance of belonging to cluster A and a 20% chance of belonging to cluster B. GMM incorporates this degree of uncertainty into the algorithm.
References:
- http://scikit-learn.org/stable/modules/mixture.html#mixture
- https://www.quora.com/What-are-the-advantages-to-using-a-Gaussian-Mixture-Model-clustering-algorithm
- https://www.r-bloggers.com/k-means-clustering-is-not-a-free-lunch/
- https://www.quora.com/What-is-the-difference-between-K-means-and-the-mixture-model-of-Gaussian
- https://shapeofdata.wordpress.com/2013/07/30/k-means/
Based on the dataset and the PCA analysis, it looks like some of the data points (customers) don't really belong strictly to a particular group. The PCA Dimension 2 group is an example of this: customers buying Fresh, Frozen and Delicatessen can also be loosely placed in a group of customers buying more Frozen items or more Delicatessen items. For this reason, the Gaussian Mixture Model is more appropriate to use.
Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers using the algorithm's respective attribute and assign them to centers.
- Predict the cluster for each sample data point in pca_samples and assign them sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
- Assign the silhouette score to score and print the result.
End of explanation
"""
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
"""
Explanation: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer:
| Number of Clusters | Silhouette Score |
|---|---|
| 2 | 0.41181 |
| 3 | 0.37245 |
| 5 | 0.29544 |
| 7 | 0.32197 |
| 11 | 0.25546 |
| 15 | 0.22718 |
| 30 | 0.18454 |
| 130 | 0.12487 |
| 205 | 0.22035 |
| 435 | 0.29435 |
The number of clusters with best silhouette score is 2.
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
End of explanation
"""
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
import seaborn as sns
sns.heatmap((true_centers-data.mean())/data.std(ddof=1), annot=True, cbar=False, square=True)
"""
Explanation: Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
End of explanation
"""
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
"""
Explanation: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.
Answer:
Based on the cluster centers in the above figure and the statistical description:
- Segment 0 depicts a customer that buys more Fresh and Frozen items, with little spending on Grocery and Detergents_Paper.
- Segment 1 depicts a customer that buys more Milk, Grocery and Detergents_Paper, with less focus on Fresh and Frozen products.
Segment 1 would then represent a coffee/drink restaurant, and Segment 0 would represent a prepared-foods establishment such as an airline catering service.
Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
End of explanation
"""
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
"""
Explanation: Answer:
- For the sample point 0, Segment 1 best represents it.
- For the Sample point 1, Segment 1 best represents it.
- For the Sample point 2, Segment 0 best represents it.
Sample points 1 & 2 are predicted correctly. It looks like sample point 0 belongs more towards cluster 0, but is predicted to be in cluster 1.
Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer:
Segment 1 customers buys more of Milk, Grocery and Detergent and Paper while Segment 0 customers buys more of Fresh products and Frozen items.
The distributor can first select Segment 1 customers, since Milk and Grocery items need not be delivered every day and can work with a reduced number of delivery days. The distributor can then perform an A/B test by dividing the Segment 1 customers into two groups, where one group receives shipments 5 days a week and the other 3 days a week. Based on the responses of the two groups, if the group with 3-day delivery responds positively, the distributor can choose to switch the Segment 1 customers to 3-day shipments.
Similarly, customers in Segment 0 order more Fresh produce, which might make them unhappy about a 3-day schedule, as the produce will not be as fresh as with 5-day delivery.
However, not all customers in Segment 1 can be treated equally, since some customers may specifically require 5-day delivery to maintain fresh stock.
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
Answer:
By combining the customer segment label as a new feature with the 6 original features, the distributor can use a supervised learning algorithm such as decision trees or gradient boosting to train on the existing customers and predict a new customer's delivery schedule based on the results of the A/B testing done earlier.
The target variable would then be the delivery schedule for new customers.
Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
End of explanation
"""
|
arborh/tensorflow
|
tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
# TensorFlow is an open source machine learning library
!pip install tensorflow==2.0
import tensorflow as tf
# Numpy is a math library
import numpy as np
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# math is Python's math library
import math
"""
Explanation: Create and convert a TensorFlow model
This notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite. The model created in this notebook is used in the hello_world sample for TensorFlow Lite for Microcontrollers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Import dependencies
Our first task is to import the dependencies we need. Run the following cell to do so:
End of explanation
"""
# We'll generate this many sample datapoints
SAMPLES = 1000
# Set a "seed" value, so we get the same random numbers each time we run this
# notebook. Any number can be used here.
SEED = 1337
np.random.seed(SEED)
tf.random.set_seed(SEED)
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
"""
Explanation: Generate data
Deep learning networks learn to model patterns in underlying data. In this notebook, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
In a real world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to do this, we can demonstrate the basic principles of machine learning.
In the hello_world sample for TensorFlow Lite for Microcontrollers, we'll use this model to control LEDs that light up in a sequence.
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph:
End of explanation
"""
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
"""
Explanation: Add some noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph:
End of explanation
"""
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_validate, x_test = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_validate, y_test = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.legend()
plt.show()
"""
Explanation: Split our data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (where it is referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
To ensure we have data to use for evaluation, we'll set some aside before we begin training. We'll reserve 20% of our data for validation, and another 20% for testing. The remaining 60% will be used to train the model. This is a typical split used when training models.
The following code will split our data and then plot each set as a different color:
End of explanation
"""
# We'll use Keras to create a simple model architecture
from tensorflow.keras import layers
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
# Print a summary of the model's architecture
model_1.summary()
"""
Explanation: Design a model
We're going to build a model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression.
To achieve this, we're going to create a simple neural network. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 16 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
The code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we compile it, specifying parameters that determine how it will be trained:
End of explanation
"""
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
validation_data=(x_validate, y_validate))
"""
Explanation: Train the model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. In each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate, and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
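As a quick back-of-the-envelope sketch of what those parameters imply (the 600 comes from the 60% training split of the 1,000 generated points):
import math
train_samples = 600
batch_size = 16
updates_per_epoch = math.ceil(train_samples / batch_size)  # 38 weight updates per epoch
total_updates = updates_per_epoch * 1000                    # ~38,000 updates over 1,000 epochs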
The code in the following cell uses the x and y values from our training data to train the model. It runs for 1000 epochs, with 16 pieces of data in each batch. We also pass in some data to use for validation. As you will see when you run the cell, training can take a while to complete:
End of explanation
"""
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: Check the training metrics
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form:
End of explanation
"""
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: Look closer at the data
The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 50 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
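One hedged way to automate that stopping point (not used in this notebook) is a Keras EarlyStopping callback, which halts training once the validation loss has stopped improving for a chosen number of epochs:
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=50,
                                              restore_best_weights=True)
# model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
#             validation_data=(x_validate, y_validate), callbacks=[early_stop])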
To make the flatter part of the graph more readable, let's skip the first 100 epochs:
End of explanation
"""
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
"""
Explanation: Further metrics
From the plot, we can see that loss continues to reduce until around 600 epochs, at which point it is mostly stable. This means that there's probably no need to train our network for so long.
However, we can also see that the lowest loss values are around 0.155. This is relatively high. In addition, the validation loss values are consistently higher.
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers:
End of explanation
"""
# Use the model to make predictions from our training data
predictions = model_1.predict(x_train)
# Plot the training-data predictions alongside the actual test data
plt.clf()
plt.title('Training data predicted vs actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_train, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
"""
Explanation: This graph of mean absolute error gives us some further clues. We can see that predictions with our training data show consistently lower error than with our validation data, which means that the network has likely overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, around ~0.31, which means many of the model's predictions are at least 31% off. A 31% error means we are very far from accurately modelling the sine wave.
To get more insight into what is happening, we can plot our network's predictions for the training data against the expected values:
End of explanation
"""
model_2 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
# Show a summary of the model
model_2.summary()
"""
Explanation: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way. The predictions are highly linear, and only very roughly fit the data.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Change our model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with an additional layer of 16 neurons in the middle:
End of explanation
"""
history_2 = model_2.fit(x_train, y_train, epochs=600, batch_size=16,
validation_data=(x_validate, y_validate))
"""
Explanation: We'll now train the new model. To save time, we'll train for only 600 epochs:
End of explanation
"""
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 80
plt.clf()
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
"""
Explanation: Evaluate our new model
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above:
Epoch 600/600
600/600 [==============================] - 0s 143us/sample - loss: 0.0115 - mae: 0.0859 - val_loss: 0.0104 - val_mae: 0.0806
You can see that we've already got a huge improvement - validation loss has dropped from 0.17 to 0.01, and validation MAE has dropped from 0.36 to 0.08.
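If you prefer to read those final numbers programmatically rather than scrolling through the log, the history object already holds them (a small convenience sketch, not in the original notebook):
print('final val_loss: %.4f' % history_2.history['val_loss'][-1])
print('final val_mae:  %.4f' % history_2.history['val_mae'][-1])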
The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
End of explanation
"""
# Calculate and print the loss on our test dataset
loss = model_2.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
"""
Explanation: Great results! From these graphs, we can see two exciting things:
Metrics are better for validation than training, which means the network is not overfitting
The overall loss and MAE are much better than our previous network
The reason the metrics for validation are better than those for training (and not merely identical) is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.
This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
End of explanation
"""
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
tflite_model = converter.convert()
# Save the model to disk
open("sine_model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
# Indicate that we want to perform the default optimizations,
# which includes quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Define a generator function that provides our test data's x values
# as a representative dataset, and tell the converter to use it
def representative_dataset_generator():
for value in x_test:
# Each scalar value must be inside of a 2D array that is wrapped in a list
yield [np.array(value, dtype=np.float32, ndmin=2)]
converter.representative_dataset = representative_dataset_generator
# Convert the model
tflite_model = converter.convert()
# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model)
"""
Explanation: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line becomes almost straight when x is between 4 and 5. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend against overfitting.
However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Convert to TensorFlow Lite
We now have an acceptably accurate model in-memory. However, to use this with TensorFlow Lite for Microcontrollers, we'll need to convert it into the correct format and download it as a file. To do this, we'll use the TensorFlow Lite Converter. The converter outputs a file in a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called quantization. It reduces the precision of the model's weights, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
The TensorFlow Lite Converter can apply quantization while it converts the model. In the following cell, we'll convert the model twice—once with quantization, once without:
End of explanation
"""
# Instantiate an interpreter for each model
sine_model = tf.lite.Interpreter('sine_model.tflite')
sine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')
# Allocate memory for each model
sine_model.allocate_tensors()
sine_model_quantized.allocate_tensors()
# Get indexes of the input and output tensors
sine_model_input_index = sine_model.get_input_details()[0]["index"]
sine_model_output_index = sine_model.get_output_details()[0]["index"]
sine_model_quantized_input_index = sine_model_quantized.get_input_details()[0]["index"]
sine_model_quantized_output_index = sine_model_quantized.get_output_details()[0]["index"]
# Create arrays to store the results
sine_model_predictions = []
sine_model_quantized_predictions = []
# Run each model's interpreter for each value and store the results in arrays
for x_value in x_test:
# Create a 2D tensor wrapping the current x value
x_value_tensor = tf.convert_to_tensor([[x_value]], dtype=np.float32)
# Write the value to the input tensor
sine_model.set_tensor(sine_model_input_index, x_value_tensor)
# Run inference
sine_model.invoke()
# Read the prediction from the output tensor
sine_model_predictions.append(
sine_model.get_tensor(sine_model_output_index)[0])
# Do the same for the quantized model
sine_model_quantized.set_tensor(sine_model_quantized_input_index, x_value_tensor)
sine_model_quantized.invoke()
sine_model_quantized_predictions.append(
sine_model_quantized.get_tensor(sine_model_quantized_output_index)[0])
# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual')
plt.plot(x_test, predictions, 'ro', label='Original predictions')
plt.plot(x_test, sine_model_predictions, 'bx', label='Lite predictions')
plt.plot(x_test, sine_model_quantized_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
"""
Explanation: To create a quantized model that runs as efficiently as possible, we have to provide a "representative dataset"—a set of numbers that represent the full range of input values the dataset the model was trained on.
In the above cell, we can use our test dataset's x values as a representative dataset. We define a function, representative_dataset_generator(), that uses the yield operator to return them one by one.
Test the converted models
To prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:
End of explanation
"""
import os
basic_model_size = os.path.getsize("sine_model.tflite")
print("Basic model is %d bytes" % basic_model_size)
quantized_model_size = os.path.getsize("sine_model_quantized.tflite")
print("Quantized model is %d bytes" % quantized_model_size)
difference = basic_model_size - quantized_model_size
print("Difference is %d bytes" % difference)
"""
Explanation: We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be almost indistinguishable. This means that our quantized model is ready to use!
We can print the difference in file size:
End of explanation
"""
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
# Print the source file
!cat sine_model_quantized.cc
"""
Explanation: Our quantized model is 224 bytes smaller than the original version, which is great - but it's only a minor reduction in size. At around 2.4 kilobytes, this model is already so small that the weights make up a small proportion of the overall size, meaning quantization only has a small effect.
More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for most sophisticated models.
Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller!
Write to a C file
The final step in preparing our model for use with TensorFlow Lite for Microcontrollers is to convert it into a C source file. You can see an example of this format in hello_world/sine_model_data.cc.
To do so, we can use a command line utility named xxd. The following cell runs xxd on our quantized model and prints the output:
End of explanation
"""
|
csyhuang/hn2016_falwa
|
examples/nh2018_science/demo_script_for_nh2018.ipynb
|
mit
|
import numpy as np
from numpy import dtype
from math import pi
from netCDF4 import Dataset
import matplotlib.pyplot as plt
import datetime as dt
%matplotlib inline
from hn2016_falwa.oopinterface import QGField
import hn2016_falwa.utilities as utilities
import datetime as dt
"""
Explanation: Last updated on Apr 9, 2020
Demo script for the analyses done in Nakamura and Huang (2018, Science)
This is a complementary demo script that can be used to implement the local wave activity, flux, and flux convergence/divergence computations required for the analyses presented in Nakamura and Huang, Atmospheric Blocking as a Traffic Jam in the Jet Stream, Science (2018).
This notebook demonstrates how to compute local wave activity and all the flux terms in equations (2) and (3) of NH2018 with the updated functionality in the Python package hn2016_falwa. To run the script, please install the
package hn2016_falwa using
python setup.py develop
after cloning the GitHub repo.
The functionalities are enhanced and included in the class object QGField under hn2016_falwa.oopinterface. Please refer to the documentation (search QGField) or the end of this notebook for the input/methods this class provides.
Please raise an issue in the GitHub repo or contact Clare S. Y. Huang (csyhuang@uchicago.edu) if you have any questions or suggestions regarding the package.
End of explanation
"""
# --- Load the zonal wind and QGPV at 240hPa --- #
u_file = Dataset('2005-01-23_to_2005-01-30_u.nc', mode='r')
v_file = Dataset('2005-01-23_to_2005-01-30_v.nc', mode='r')
t_file = Dataset('2005-01-23_to_2005-01-30_t.nc', mode='r')
time_array = u_file.variables['time'][:]
time_units = u_file.variables['time'].units
time_calendar = u_file.variables['time'].calendar
ntimes = time_array.shape[0]
print('Dimension of time: {}'.format(time_array.size))
"""
Explanation: Load ERA-Interim reanalysis data retrieved from ECMWF server
The sample script download_example.py in this directory includes the code to retrieve the zonal wind field U, meridional
wind field V and temperature field T at various pressure levels. Given that you have an account on the ECMWF server and
have the ecmwfapi package installed, you can run the script to download the data from there:
python download_example.py
End of explanation
"""
xlon = u_file.variables['longitude'][:]
# latitude has to be in ascending order
ylat = u_file.variables['latitude'][:]
if np.diff(ylat)[0]<0:
print('Flip ylat.')
ylat = ylat[::-1]
# pressure level has to be in descending order (ascending height)
plev = u_file.variables['level'][:]
if np.diff(plev)[0]>0:
print('Flip plev.')
plev = plev[::-1]
nlon = xlon.size
nlat = ylat.size
nlev = plev.size
clat = np.cos(np.deg2rad(ylat)) # cosine latitude
p0 = 1000. # surface pressure [hPa]
kmax = 49 # number of grid points for vertical extrapolation (dimension of height)
dz = 1000. # differential height element
height = np.arange(0,kmax)*dz # pseudoheight [m]
dphi = np.diff(ylat)[0]*pi/180. # differential latitudinal element
dlambda = np.diff(xlon)[0]*pi/180. # differential longitudinal element
hh = 7000. # scale height
cp = 1004. # heat capacity of dry air
rr = 287. # gas constant
omega = 7.29e-5 # rotation rate of the earth
aa = 6.378e+6 # earth radius
prefactor = np.array([np.exp(-z/hh) for z in height[1:]]).sum() # integrated sum of density from the level
#just above the ground (z=1km) to aloft
npart = nlat # number of partitions to construct the equivalent latitude grids
maxits = 100000 # maximum number of iteration in the SOR solver to solve for reference state
tol = 1.e-5 # tolerance that define convergence of solution
rjac = 0.95 # spectral radius of the Jacobi iteration in the SOR solver.
jd = nlat//2+1 # (one plus) index of latitude grid point with value 0 deg
# This is to be input to fortran code. The index convention is different.
"""
Explanation: Load the dimension arrays
In this version, the QGField object takes only:
- latitude array in degrees, in ascending order, and
- pressure levels in hPa, in descending order (from ground to aloft).
End of explanation
"""
# === Outputing files ===
output_fname = '2005-01-23_to_2005-01-30_output.nc'
output_file = Dataset(output_fname, 'w')
output_file.createDimension('levelist',kmax)
output_file.createDimension('latitude',nlat)
output_file.createDimension('longitude',nlon)
output_file.createDimension('time',ntimes)
plevs = output_file.createVariable('levelist',dtype('float32').char,('levelist',)) # Define the coordinate variables
lats = output_file.createVariable('latitude',dtype('float32').char,('latitude',)) # Define the coordinate variables
lons = output_file.createVariable('longitude',dtype('float32').char,('longitude',))
times = output_file.createVariable('time',dtype('int').char,('time',))
plevs.units = 'hPa'
lats.units = 'degrees_north'
lons.units = 'degrees_east'
times.units = time_units
times.calendar = time_calendar
plevs[:] = p0 * np.exp(-height/hh)
lats[:] = ylat
lons[:] = xlon
times[:] = time_array
qgpv = output_file.createVariable('qgpv',dtype('float32').char,('time','levelist','latitude','longitude'))
qgpv.units = '1/s'
interpolated_u = output_file.createVariable('interpolated_u',dtype('float32').char,('time','levelist','latitude','longitude'))
interpolated_u.units = 'm/s'
interpolated_v = output_file.createVariable('interpolated_v',dtype('float32').char,('time','levelist','latitude','longitude'))
interpolated_v.units = 'm/s'
interpolated_theta = output_file.createVariable('interpolated_theta',dtype('float32').char,('time','levelist','latitude','longitude'))
interpolated_theta.units = 'K'
qref = output_file.createVariable('qref',dtype('float32').char,('time','levelist','latitude'))
qref.units = '1/s'
uref = output_file.createVariable('uref',dtype('float32').char,('time','levelist','latitude'))
uref.units = 'm/s'
ptref = output_file.createVariable('ptref',dtype('float32').char,('time','levelist','latitude'))
ptref.units = 'K'
lwa = output_file.createVariable('lwa',dtype('float32').char,('time','levelist','latitude','longitude'))
lwa.units = 'm/s'
adv_flux_f1 = output_file.createVariable('Zonal advective flux F1',dtype('float32').char,('time','latitude','longitude'))
adv_flux_f1.units = 'm**2/s**2'
adv_flux_f2 = output_file.createVariable('Zonal advective flux F2',dtype('float32').char,('time','latitude','longitude'))
adv_flux_f2.units = 'm**2/s**2'
adv_flux_f3 = output_file.createVariable('Zonal advective flux F3',dtype('float32').char,('time','latitude','longitude'))
adv_flux_f3.units = 'm**2/s**2'
adv_flux_conv = output_file.createVariable('Zonal advective flux Convergence -Div(F1+F2+F3)',dtype('float32').char,('time','latitude','longitude'))
adv_flux_conv.units = 'm/s**2'
divergence_eddy_momentum_flux = output_file.createVariable('Eddy Momentum Flux Divergence',dtype('float32').char,('time','latitude','longitude'))
divergence_eddy_momentum_flux.units = 'm/s**2'
meridional_heat_flux = output_file.createVariable('Low-level Meridional Heat Flux',dtype('float32').char,('time','latitude','longitude'))
meridional_heat_flux.units = 'm/s**2'
lwa_baro = output_file.createVariable('lwa_baro',dtype('float32').char,('time','latitude','longitude'))
lwa_baro.units = 'm/s'
u_baro = output_file.createVariable('u_baro',dtype('float32').char,('time','latitude','longitude'))
u_baro.units = 'm/s'
"""
Explanation: Create a netCDF file to store output
A netCDF file 2005-01-23_to_2005-01-30_output.nc with the same number of time steps as the input file is created to store all the computed quantities.
End of explanation
"""
tstamp = [dt.datetime(2005,1,23,0,0) + dt.timedelta(seconds=6*3600) * tt for tt in range(ntimes)]
plev_selected = 10 # selected pressure level to display
tstep_selected = 0
"""
Explanation: Set the level of pressure and the timestamp to display below
End of explanation
"""
for tstep in range(32): # or ntimes
uu = u_file.variables['u'][tstep, ::-1, ::-1, :].data
vv = v_file.variables['v'][tstep, ::-1, ::-1, :].data
tt = t_file.variables['t'][tstep, ::-1, ::-1, :].data
qgfield_object = QGField(xlon, ylat, plev, uu, vv, tt)
qgpv[tstep, :, :, :], interpolated_u[tstep, :, :, :], interpolated_v[tstep, :, :, :], \
interpolated_theta[tstep, :, :, :], static_stability = qgfield_object.interpolate_fields()
qref[tstep, :, :], uref[tstep, :, :], ptref[tstep, :, :] = \
qgfield_object.compute_reference_states(northern_hemisphere_results_only=False)
adv_flux_f1[tstep, :, :], \
adv_flux_f2[tstep, :, :], \
adv_flux_f3[tstep, :, :], \
adv_flux_conv[tstep, :, :], \
divergence_eddy_momentum_flux[tstep, :, :], \
meridional_heat_flux[tstep, :, :], \
lwa_baro[tstep, :, :], \
u_baro[tstep, :, :], \
lwa[tstep, :, :, :] \
= qgfield_object.compute_lwa_and_barotropic_fluxes(northern_hemisphere_results_only=False)
if tstep == tstep_selected:
# === Below demonstrate another way to access the computed variables ===
# 3D Variables that I would choose one pressure level to display
variables_3d = [
(qgfield_object.qgpv, 'Quasigeostrophic potential vorticity (QGPV)'),
(qgfield_object.lwa, 'Local wave activity (LWA)'),
(qgfield_object.interpolated_u, 'Interpolated zonal wind (u)'),
(qgfield_object.interpolated_v, 'Interpolated meridional wind (v)')]
# Reference states to be displayed on y-z plane
variables_yz = [
(qgfield_object.qref, 'Qref'),
(qgfield_object.uref, 'Uref'),
(qgfield_object.ptref, 'PTref')]
# Vertically averaged variables to be displayed on x-y plane
variables_xy = [
(qgfield_object.adv_flux_f1, 'Advective flux F1'),
(qgfield_object.adv_flux_f2, 'Advective flux F2'),
(qgfield_object.adv_flux_f3, 'Advective flux F3'),
(qgfield_object.convergence_zonal_advective_flux, 'Advective flux convergence -Div(F1+F2+F3)'),
(qgfield_object.divergence_eddy_momentum_flux, 'divergence_eddy_momentum_flux'),
(qgfield_object.meridional_heat_flux, 'meridional_heat_flux')
]
# Plot 240 hPa of 3D-variables
for variable, name in variables_3d:
plt.figure(figsize=(12,6))
plt.contourf(xlon, ylat[1:-1], variable[plev_selected, 1:-1, :], 50, cmap='jet')
if name=='Local wave activity (LWA)':
plt.axhline(y=0, c='w', lw=30)
plt.colorbar()
plt.ylabel('Latitude (deg)')
plt.xlabel('Longitude (deg)')
plt.title(name + ' at 240hPa | ' + str(tstamp[tstep]))
plt.show()
# Plot reference states
for variable, name in variables_yz:
plt.figure(figsize=(6,4))
plt.contourf(ylat[1:-1], height, variable[:, 1:-1], 50, cmap='jet')
plt.axvline(x=0, c='w', lw=2)
plt.xlabel('Latitude (deg)')
plt.ylabel('Pseudoheight (m)')
plt.colorbar()
plt.title(name + ' | ' + str(tstamp[tstep]))
plt.show()
# Plot barotropic (2D-)variables
for variable, name in variables_xy:
plt.figure(figsize=(12,6))
plt.contourf(xlon, ylat[1:-1], variable[1:-1, :], 50, cmap='jet')
plt.axhline(y=0, c='w', lw=30)
plt.ylabel('Latitude (deg)')
plt.xlabel('Longitude (deg)')
plt.colorbar()
plt.title(name + ' | ' + str(tstamp[tstep]))
plt.show()
print('tstep = {}/{}\n'.format(tstep, ntimes))
output_file.close()
print('Output {} timesteps of data to the file {}'.format(tstep + 1, output_fname))
"""
Explanation: Set names of the variables to display
Loop through the input file and store all the computed quantities in a netCDF file
End of explanation
"""
|
xiongzhenggang/xiongzhenggang.github.io
|
data-science/27-错误可视化.ipynb
|
gpl-3.0
|
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
x = np.linspace(0, 10, 50)
dy = 0.8
y = np.sin(x) + dy * np.random.randn(50)
# yerr gives the error (uncertainty) in y
plt.errorbar(x, y, yerr=dy, fmt='.k');
"""
Explanation: Visualizing Errors
For any scientific measurement, accurate accounting for errors is nearly as important as accurate reporting of the number itself, if not more important. For example, imagine that I am using some astronomical observations to estimate the Hubble constant, the local measurement of the expansion rate of the Universe. I know that the current literature suggests a value of around 71 (km/s)/Mpc, and I measure a value of 74 (km/s)/Mpc with my method. Are the values consistent? Given only this information, the correct answer is: there is no way to know.
Suppose I augment this information with reported uncertainties: the current literature suggests a value of around 71 ± 2.5 (km/s)/Mpc, and my method has measured a value of 74 ± 5 (km/s)/Mpc. Now are the values consistent? That is a question that can be answered quantitatively.
In the visualization of data and results, showing these errors effectively can make a plot convey much more complete information.
Basic errorbars
A basic errorbar can be created with a single Matplotlib function call:
End of explanation
"""
plt.errorbar(x, y, yerr=dy, fmt='o', color='black',
ecolor='lightgray', elinewidth=3, capsize=0);
"""
Explanation: Here fmt is a format code controlling the appearance of the lines and points, and it has the same syntax as the shorthand used in plt.plot, outlined in Simple Line Plots and Simple Scatter Plots.
In addition to these basic options, the errorbar function has many options to fine-tune the output. Using these additional options you can easily customize the aesthetics of your errorbar plot.
End of explanation
"""
# GaussianProcessRegressor implements Gaussian process regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
# define the model and draw some data
model = lambda x: x * np.sin(x)
xdata = np.array([1, 3, 5, 6, 8])
ydata = model(xdata)
#
kernel = 1.0 * RBF(length_scale=10.0, length_scale_bounds=(1e-2, 1e3)) \
+ WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e+1))
gp = GaussianProcessRegressor(kernel=kernel,
alpha=0.0)
# # Compute the Gaussian process fit
# gp = GaussianProcessRegressor(corr='cubic', theta0=1e-2, thetaL=1e-4, thetaU=1E-1,
# random_start=100)
gp.fit(xdata[:, np.newaxis], ydata)
xfit = np.linspace(0, 10, 1000)
# MSE,yfit = gp.predict(xfit[:, np.newaxis], return_cov=True)
y_mean, y_cov= gp.predict(xfit[:, np.newaxis], return_cov=True)
dyfit = 2 * np.sqrt(np.diag(y_cov)) # 2*sigma ~ 95% confidence region
"""
Explanation: In addition to these options, you can also specify horizontal errorbars (xerr), one-sided errorbars, and many other variants. For more information on the available options, refer to the docstring of plt.errorbar.
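As a quick hedged illustration of those variants (the error sizes here are made up, reusing the x, y, and dy defined above):
dx = 0.3
plt.errorbar(x, y, xerr=dx, fmt='.k')   # horizontal errorbars
plt.errorbar(x, y, yerr=[np.zeros_like(y), dy * np.ones_like(y)], fmt='.r')   # one-sided (upper) errorbars
plt.show()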
Continuous errors
In some situations it is desirable to show errorbars on continuous quantities. Though Matplotlib does not have a built-in convenience routine for this type of application, it is relatively easy to combine primitives like plt.plot and plt.fill_between for a useful result.
Here we will perform a simple Gaussian process regression using the Scikit-Learn API (see the scikit-learn docs). This is a method of fitting a very flexible non-parametric function to data, together with a continuous measure of the uncertainty. We will not delve into the details of Gaussian process regression at this point, but will focus instead on how to visualize such a continuous error measurement:
End of explanation
"""
# Visualize the result
plt.plot(xdata, ydata, 'or')
plt.plot(xfit, model(xfit), '-', color='gray')
plt.fill_between(xfit, y_mean - dyfit,
y_mean + dyfit,
alpha=0.5, color='gray')
plt.xlim(0, 10);
"""
Explanation: We now have xfit, y_mean, and dyfit, which sample the continuous fit to our data. We could pass these to the plt.errorbar function as above, but we don't really want to plot 1,000 points with 1,000 errorbars. Instead, we can use the plt.fill_between function with a light color to visualize this continuous error:
End of explanation
"""
|
Schiphol-Hub/schiphol-geo-notebooks
|
Creating_schiphol_map.ipynb
|
gpl-3.0
|
from arcgis.gis import *
from arcgis.viz import MapView
from IPython.display import display
"""
Explanation: Create a Schiphol map using Arcgis online and Jupyter notebook
Documentation for the beta Esri Arcgis Python API can be found here:
http://esri.github.io/arcgis-python-api/apidoc/html/index.html
End of explanation
"""
username = "Bierkaai"
password = "nen-HAy-d4C-yXQ"
gis_portal = GIS("http://www.arcgis.com", username, password)
"""
Explanation: Log in to the Arcgis Portal
Use your developers account. If you do not have one, you can register freely at http://developers.arcgis.com/sign-up
End of explanation
"""
# Create a map widget centered on Schiphol
map_shg = gis_portal.map("Schiphol")
# Search the portal for the IWF feature services and list them
content = gis_portal.content.search("IWF", "Feature Service")
for i, x in enumerate(content):
    print("{0} -> {1}".format(i, x.title))
# Add one of the feature services found above to the map
map_shg.add_layer(content[2])
# Center the map on Schiphol and display the widget
map_shg.center = gis_portal.tools.geocoder.find_best_match("Schiphol", 6)
map_shg
"""
Explanation: Get the Passenger Maps
End of explanation
"""
|
goyalsid/phageParser
|
demos/Spacer Length Analysis.ipynb
|
mit
|
%matplotlib inline
#Import packages
import requests
import json
import numpy as np
import random
import matplotlib.pyplot as plt
from matplotlib import mlab
import seaborn as sns
import pandas as pd
from scipy.stats import poisson
sns.set_palette("husl")
#Url of the phageParser API
apiurl = 'https://phageparser.herokuapp.com'
#Get the initial page listing the accessible objects and get the url for organisms (which contain the loci and spacers)
r=requests.get(apiurl)
spacerurl = r.json()['organisms']
#API is accessible by chunks of spacers that are in each page, get the total number of pages from meta field
r=requests.get(spacerurl)
last_page = r.json()['meta']['total_pages']
#Iterate through each page and store json response which only has length of spacers information
jbatches = []
for page in range(1,last_page):
#Exclude every field on spacer object other than length and move to a certain page
batch_url = spacerurl+'?page={}&exclude[]=*&include[]=loci.spacers.length'.format(page)
spacer_batch = requests.get(batch_url).json()
jbatches.append(spacer_batch)
#Get lengths of spacers per locus
org_spacer={}
for batch in jbatches:
for organism in batch['organisms']:
locusspacerlens = {}
if organism['loci'] == []:
continue
orgid = organism['loci'][0]['organism']
for locus in organism['loci']:
spacerlens = []
for spacer in locus['spacers']:
spacerlens.append(spacer['length'])
locusspacerlens[locus['id']]=np.array(spacerlens)
org_spacer[orgid] = locusspacerlens
#Get the global mean and variance
spacerbplengths = np.array([spacerlen for organism in org_spacer.values() for locusspacerlen in organism.values() for spacerlen in locusspacerlen]).flatten()
mu, sigma = spacerbplengths.mean(), spacerbplengths.std()
print("Calculated mean basepair length for spacers is {:.2f}+/-{:.2f}".format(mu,sigma))
"""
Explanation: phageParser - Analysis of Spacer Lengths
C.K. Yildirim (cemyildirim@fastmail.com)
The latest version of this IPython notebook demo is available at http://github.com/phageParser/phageParser
To run this notebook locally:
* git clone or download this repository
* Install Jupyter Notebook
* In a command prompt, type jupyter notebook - the notebook server will launch in your browser
* Navigate to the phageParser/demos folder and open the notebook
Introduction
This demo uses the REST API of phageParser to plot the distribution of spacer lengths.
In this case, the API is consumed using the requests library and the json responses are parsed for gathering
basepair length information of spacers.
End of explanation
"""
#Plot histogram of spacer lengths across all organisms
norm = False # set to False to show totals, True to show everything normalized to 1
plt.figure()
bins=range(5,100)
plt.hist(spacerbplengths,bins=bins,normed=norm,label='All organisms')
plt.yscale('log')
if norm == False:
plt.ylim(5*10**-1,10**5)
else:
plt.ylim(10**-6,10**0)
plt.xlim(10,100)
#Plot normal and poisson distribution of length
x=np.unique(spacerbplengths)
if norm == False:
y = mlab.normpdf(x, mu, sigma)*len(spacerbplengths)
y2 = poisson.pmf(x,mu)*len(spacerbplengths)
else:
y = mlab.normpdf(x, mu, sigma)
y2 = poisson.pmf(x,mu)
plt.plot(x, y, 'k--', linewidth=1.5, label='Normal distribution')
plt.plot(x, y2, 'r--',linewidth=1.5, label='Poissson distribution')
#Plot histogram for a single organism
for i in range(4):
org_id = random.choice(list(org_spacer.keys()))
orgspacerlens = np.concatenate(list(org_spacer[org_id].values()))
plt.hist(orgspacerlens,bins=bins, normed=norm)
plt.ylabel("Number of spacers")
plt.xlabel("Spacer length")
plt.legend();
"""
Explanation: Across the roughly ~3000 sequenced organisms that have what looks like a CRISPR locus, what is the distribution of CRISPR spacer lengths? The histogram below shows that spacer length is peaked at about 35 base pairs.
The standard deviation of spacer length is 4 base pairs, but the distribution has large tails - there are many more long spacers than would be expected if the lengths were normally distributed (black dashed line) or Poisson distributed (red dashed line).
Individual organisms (colours other than blue) have tighter distributions than the overall distribution.
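One hedged way to quantify the "large tails" claim is the excess kurtosis of the pooled length distribution (this reuses the spacerbplengths array computed above; a normal distribution has excess kurtosis 0, and a clearly positive value indicates heavier tails):
from scipy.stats import kurtosis
print("Excess kurtosis of spacer lengths: {:.1f}".format(kurtosis(spacerbplengths)))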
End of explanation
"""
#Calculate means and standard deviations of spacer length for all individual loci
means = []
stds = []
for org in org_spacer.values():
for arr in list(org.values()):
means.append(np.mean(arr))
stds.append(np.std(arr))
print("The mean of all individual locus standard deviations is "
+ str(round(np.mean(stds),2))
+ ", smaller than the spacer length standard deviations for all organisms combined.")
plt.figure()
plt.hist(stds,bins=range(0,30))
plt.xlabel("Standard deviation of spacer length within locus")
plt.ylabel("Number of loci")
plt.ylim(8*10**-1,10**4)
plt.yscale('log');
"""
Explanation: What the above plot suggests is that individual organisms and loci have narrow spacer length distributions but that the total distribution is quite broad.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4), dpi=100)
#Plot cumulative probability of data
sorted_data = np.sort(spacerbplengths)
ax.step(sorted_data, 1-np.arange(sorted_data.size)/sorted_data.size, label='Data')
#Plot normal distribution
x=np.unique(sorted_data)
y = mlab.normpdf(x, mu, sigma).cumsum()
y /= y[-1]
ax.plot(x, 1-y, 'k--', linewidth=0.5, label='Normal distribution')
#Format the figure and label
ax.set_yscale('log')
ax.grid(True)
ax.legend(loc='right')
ax.set_title('Cumulative step distribution of spacer lengths')
ax.set_xlabel("Spacer length (bps)")
ax.set_ylabel('Fraction of spacers at least this long')
plt.show()
#Pick a random organism to plot the histogram for each locus
org_id = random.choice(list(org_spacer.keys()))
org_id=594
locusspacerlens = org_spacer[org_id]
fig, ax = plt.subplots(figsize=(8,4),dpi=100)
bins=range(30,45,1)
#Plot histogram of spacer length frequency
for loc in locusspacerlens:
sns.distplot(locusspacerlens[loc], ax=ax, kde=False, norm_hist=True, bins=bins)
plt.xlim([30,45])
#format the figure and label
ax.set_title("Histogram of spacer basepair lengths for organism with id {}".format(org_id))
ax.set_xlabel("Spacer length (bps)")
ax.set_ylabel("Occurence of spacers")
plt.show()
"""
Explanation: The following cumulative version of the total spacer length histogram shows again the deviation from normal distribution at large spacer lengths.
End of explanation
"""
|
amueller/scipy-2017-sklearn
|
notebooks/10.Case_Study-Titanic_Survival.ipynb
|
cc0-1.0
|
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data.shape)
"""
Explanation: Case Study - Titanic Survival
Feature Extraction
Here we will talk about an important piece of machine learning: the extraction of
quantitative features from data. By the end of this section you will
Know how features are extracted from real-world data.
See an example of extracting numerical features from textual data
In addition, we will go over several basic tools within scikit-learn which can be used to accomplish the above tasks.
What Are Features?
Numerical Features
Recall that data in scikit-learn is expected to be in two-dimensional arrays, of size
n_samples $\times$ n_features.
Previously, we looked at the iris dataset, which has 150 samples and 4 features
End of explanation
"""
measurements = [
{'city': 'Dubai', 'temperature': 33.},
{'city': 'London', 'temperature': 12.},
{'city': 'San Francisco', 'temperature': 18.},
]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
vec
vec.fit_transform(measurements).toarray()
vec.get_feature_names()
"""
Explanation: These features are:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Numerical features such as these are pretty straightforward: each sample contains a list
of floating-point numbers corresponding to the features
Categorical Features
What if you have categorical features? For example, imagine there is data on the color of each
iris:
color in [red, blue, purple]
You might be tempted to assign numbers to these features, i.e. red=1, blue=2, purple=3
but in general this is a bad idea. Estimators tend to operate under the assumption that
numerical features lie on some continuous scale, so, for example, 1 and 2 are more alike
than 1 and 3, and this is often not the case for categorical features.
In fact, the example above is a subcategory of "categorical" features, namely, "nominal" features. Nominal features don't imply an order, whereas "ordinal" features are categorical features that do imply an order. An example of ordinal features would be T-shirt sizes, e.g., XL > L > M > S.
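For an ordinal feature, a simple integer mapping that respects the order is often appropriate (a minimal sketch with a hypothetical T-shirt-size column, not part of the iris data):
import pandas as pd
sizes = pd.Series(['M', 'XL', 'S', 'L'])
size_order = {'S': 0, 'M': 1, 'L': 2, 'XL': 3}
sizes_encoded = sizes.map(size_order)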
One work-around for parsing nominal features into a format that prevents the classification algorithm from asserting an order is the so-called one-hot encoding representation. Here, we give each category its own dimension.
The enriched iris feature set would hence be in this case:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
color=purple (1.0 or 0.0)
color=blue (1.0 or 0.0)
color=red (1.0 or 0.0)
Note that using many of these categorical features may result in data which is better
represented as a sparse matrix, as we'll see with the text classification example
below.
Using the DictVectorizer to encode categorical features
When the source data is encoded as a list of dicts where the values are either string names for categories or numerical values, you can use the DictVectorizer class to compute the boolean expansion of the categorical features while leaving the numerical features untouched:
End of explanation
"""
import os
import pandas as pd
titanic = pd.read_csv(os.path.join('datasets', 'titanic3.csv'))
print(titanic.columns)
"""
Explanation: Derived Features
Another common feature type are derived features, where some pre-processing step is
applied to the data to generate features that are somehow more informative. Derived
features may be based in feature extraction and dimensionality reduction (such as PCA or manifold learning),
may be linear or nonlinear combinations of features (such as in polynomial regression),
or may be some more sophisticated transform of the features.
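As a small sketch of one kind of derived feature, scikit-learn's PolynomialFeatures expands the original columns into all pairwise products and squares (shown here on the iris measurements loaded earlier; the choice of transformer is ours, not something this notebook prescribes):
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2, include_bias=False)
iris_poly = poly.fit_transform(iris.data)
print(iris.data.shape, '->', iris_poly.shape)  # 4 original features become 14 derived ones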
Combining Numerical and Categorical Features
As an example of how to work with both categorical and numerical data, we will perform survival predicition for the passengers of the HMS Titanic.
We will use a version of the Titanic (titanic3.xls) from here. We converted the .xls to .csv for easier manipulation but left the data is otherwise unchanged.
We need to read in all the lines from the (titanic3.csv) file, set aside the keys from the first line, and find our labels (who survived or died) and data (attributes of that person). Let's look at the keys and some corresponding example lines.
End of explanation
"""
titanic.head()
"""
Explanation: Here is a broad description of the keys and what they mean:
pclass Passenger Class
(1 = 1st; 2 = 2nd; 3 = 3rd)
survival Survival
(0 = No; 1 = Yes)
name Name
sex Sex
age Age
sibsp Number of Siblings/Spouses Aboard
parch Number of Parents/Children Aboard
ticket Ticket Number
fare Passenger Fare
cabin Cabin
embarked Port of Embarkation
(C = Cherbourg; Q = Queenstown; S = Southampton)
boat Lifeboat
body Body Identification Number
home.dest Home/Destination
In general, it looks like name, sex, cabin, embarked, boat, body, and homedest may be candidates for categorical features, while the rest appear to be numerical features. We can also look at the first couple of rows in the dataset to get a better understanding:
End of explanation
"""
labels = titanic.survived.values
features = titanic[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']]
features.head()
"""
Explanation: We clearly want to discard the "boat" and "body" columns for any classification into survived vs not survived as they already contain this information. The name is unique to each person (probably) and also non-informative. For a first try, we will use "pclass", "sex", "age", "sibsp", "parch", "fare" and "embarked" as our features:
End of explanation
"""
pd.get_dummies(features).head()
"""
Explanation: The data now contains only useful features, but they are not in a format that the machine learning algorithms can understand. We need to transform the strings "male" and "female" into binary variables that indicate the gender, and similarly for "embarked".
We can do that using the pandas get_dummies function:
End of explanation
"""
features_dummies = pd.get_dummies(features, columns=['pclass', 'sex', 'embarked'])
features_dummies.head(n=16)
data = features_dummies.values
import numpy as np
np.isnan(data).any()
"""
Explanation: This transformation successfully encoded the string columns. However, one might argue that the class is also a categorical variable. We can explicitly list the columns to encode using the columns parameter, and include pclass:
End of explanation
"""
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
train_data, test_data, train_labels, test_labels = train_test_split(
data, labels, random_state=0)
imp = Imputer()
imp.fit(train_data)
train_data_finite = imp.transform(train_data)
test_data_finite = imp.transform(test_data)
np.isnan(train_data_finite).any()
from sklearn.dummy import DummyClassifier
clf = DummyClassifier(strategy='most_frequent')
clf.fit(train_data_finite, train_labels)
print("Prediction accuracy: %f"
% clf.score(test_data_finite, test_labels))
"""
Explanation: With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. To see what the simplest baseline score looks like, we set up the simplest possible model with DummyClassifier.
End of explanation
"""
# %load solutions/10_titanic.py
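# A hedged sketch of one possible solution (the canonical solution lives in
# solutions/10_titanic.py): swap the dummy baseline for real classifiers trained
# on the same imputed feature matrix.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

lr = LogisticRegression()
lr.fit(train_data_finite, train_labels)
print("LogisticRegression score: %f" % lr.score(test_data_finite, test_labels))

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(train_data_finite, train_labels)
print("RandomForestClassifier score: %f" % rf.score(test_data_finite, test_labels))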
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Try executing the above classification, using LogisticRegression and RandomForestClassifier instead of DummyClassifier
</li>
<li>
Does selecting a different subset of features help?
</li>
</ul>
</div>
End of explanation
"""
|
TUW-GEO/pygeogrids
|
docs/examples/creating_and_working_with_grid_objects.ipynb
|
mit
|
import pygeogrids.grids as grids
import numpy as np
"""
Explanation: Basics
End of explanation
"""
# create the longitudes
lons = np.arange(-180 + 5, 180, 10)
print(lons)
lats = np.arange(90 - 5, -90, -10)
print(lats)
"""
Explanation: Let's create a simple regular 10x10 degree grid with grid points at the center of each 10x10 degree cell.
First by hand to understand what is going on underneath
End of explanation
"""
# create all the grid points by using the numpy.meshgrid function
longrid, latgrid = np.meshgrid(lons, lats)
"""
Explanation: These are just the dimensions, or "sides", of the array that defines all the grid points.
End of explanation
"""
manualgrid = grids.BasicGrid(longrid.flatten(), latgrid.flatten(), shape=(18, 36))
# Each point of the grid automatically got a grid point number
gpis, gridlons, gridlats = manualgrid.get_grid_points()
print(gpis[:10], gridlons[:10], gridlats[:10])
"""
Explanation: Now we can create a BasicGrid. We can also define the shape of the grid; the first element of the shape is the latitude (row) dimension and the second the longitude (column) dimension, here (18, 36).
End of explanation
"""
ngpi, distance = manualgrid.find_nearest_gpi(15.84, 28.76)
print(ngpi, distance)
# convert the gpi to longitude and latitude
print(manualgrid.gpi2lonlat(ngpi))
"""
Explanation: The grid point indices or numbers are useful when creating lookup tables between grids.
We can now use the manualgrid instance to find the nearest gpi to any longitude and latitude
End of explanation
"""
autogrid = grids.genreg_grid(10, 10)
autogrid == manualgrid
"""
Explanation: The same grid can also be created by a method for creating regular grids
End of explanation
"""
row, col = autogrid.gpi2rowcol(ngpi)
print(row, col)
"""
Explanation: If your grid has a 2D shape like the ones we just created then you can also get the row and the column of a grid point.
This can be useful if you know that you have data stored on a specific grid and you want to read the data from a grid point.
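For example, if a data field is stored as a 2D array on the same 18x36 grid, the value at our nearest grid point can be read directly with the row and column indices (a small sketch with a stand-in array, not a real dataset):
data_field = np.arange(18 * 36).reshape((18, 36))   # stand-in for a real gridded variable
value_at_gpi = data_field[row, col]
print(value_at_gpi)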
End of explanation
"""
for i, (gpi, lon, lat) in enumerate(autogrid.grid_points()):
print(gpi, lon, lat)
if i==10: # this is just to keep the example output short
break
"""
Explanation: Iteration over gridpoints
End of explanation
"""
# lets generate a second grid with 10 random points on the Earth surface.
randlat = np.random.random(10) * 180 - 90
randlon = np.random.random(10) * 360 - 180
print(randlat)
print(randlon)
# This grid has no meaningful 2D shape so none is given
randgrid = grids.BasicGrid(randlon, randlat)
"""
Explanation: Calculation of lookup tables
If you have two grids and you know that you want to get the nearest neighbors in the second grid for all of the first grid's points, you can calculate a lookup table once and reuse it later.
End of explanation
"""
lut = randgrid.calc_lut(autogrid)
print(lut)
"""
Explanation: Now let's calculate a lookup table to the regular 10x10° grid we created earlier
End of explanation
"""
lut_lons, lut_lats = autogrid.gpi2lonlat(lut)
print(lut_lats)
print(lut_lons)
"""
Explanation: The lookup table contains the grid point indices of the other grid, autogrid in this case.
End of explanation
"""
import pygeogrids.netcdf as nc
nc.save_grid('example.nc', randgrid)
loadedgrid = nc.load_grid('example.nc')
loadedgrid
randgrid
"""
Explanation: Storing and loading grids
Grids can be stored to disk as CF compliant netCDF files
End of explanation
"""
grid_WGS84 = grids.BasicGrid(randlon, randlat, geodatum='WGS84')
grid_GRS80 = grids.BasicGrid(randlon, randlat, geodatum='GRS80')
grid_WGS84.geodatum.geod.a
grid_GRS80.geodatum.geod.a
grid_WGS84.kdTree.geodatum.geod.sphere
"""
Explanation: Define geodetic datum for grid
End of explanation
"""
|
amcdawes/QMlabs
|
Lab 3 - Operators.ipynb
|
mit
|
import matplotlib.pyplot as plt
from numpy import sqrt,cos,sin,arange,pi
from qutip import *
%matplotlib inline
H = Qobj([[1],[0]])
V = Qobj([[0],[1]])
P45 = Qobj([[1/sqrt(2)],[1/sqrt(2)]])
M45 = Qobj([[1/sqrt(2)],[-1/sqrt(2)]])
R = Qobj([[1/sqrt(2)],[-1j/sqrt(2)]])
L = Qobj([[1/sqrt(2)],[1j/sqrt(2)]])
"""
Explanation: Lab 3: Operators
An overview of operator properties
End of explanation
"""
H
Ph = H*H.dag()
Ph
"""
Explanation: Example 1: the outer product and the projection operator
We already have the $|H\rangle$ state represented as a vector in the HV basis, so the $\hat{P}_H$ operator is the outer product $|H\rangle\langle H|$ (a ket then a bra):
End of explanation
"""
Pv = V*V.dag()
Pv
"""
Explanation: Same with the $\hat{P}_V$ operator:
End of explanation
"""
identity(2)
Ph + Pv == identity(2)
P45*P45.dag()
M45*M45.dag()
P45*P45.dag() + M45*M45.dag()
L*L.dag()
R*R.dag()
L*L.dag() + R*R.dag()
"""
Explanation: Example 2: Verify Eq. 4.38 for the HV basis states. Repeat for the ±45, and LR basis
End of explanation
"""
def Rp(theta):
return Qobj([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]).tidyup()
Rp(pi/2)
V==Rp(pi/2)*H
# Solution Goes Here
"""
Explanation: Example 3: Represent the $\hat{R}_p(\theta)$ operator in the HV basis and verify your representation by operating on $|H\rangle$ and $|V\rangle$ states. Use the following template function definition.
End of explanation
"""
# Solution Goes Here
"""
Explanation: 1) Using the $\hat{R}_p(\theta)$ operator, verify the operator properties described in Sections 4.1 and 4.2. Specifically, verify Eqns. 4.6, 4.7, 4.16, 4.18, 4.22, and 4.27
End of explanation
"""
def sim_transform(o_basis1, o_basis2, n_basis1, n_basis2):
a = n_basis1.dag()*o_basis1
b = n_basis1.dag()*o_basis2
c = n_basis2.dag()*o_basis1
d = n_basis2.dag()*o_basis2
return Qobj([[a.data[0,0],b.data[0,0]],[c.data[0,0],d.data[0,0]]])
"""
Explanation: Example: the similarity transform
The following defines a function that creates a similarity transform matrix. It takes the two old basis vectors and the two new basis vectors as arguments. To apply the transform, simply multiply the matrix onto the state vector or operator matrix. Following the examples below, explore this transform.
End of explanation
"""
Shv45 = sim_transform(H,V,P45,M45) # as found in Example 4.A.1, Eq. 4.A.10.
Shv45
Shv45 * H # compare to Eq. 4.A.12
"""
Explanation: We can define a similarity transform that converts from $HV\rightarrow \pm 45$
End of explanation
"""
|
ray-project/ray
|
doc/source/tune/examples/tune-wandb.ipynb
|
apache-2.0
|
import numpy as np
import wandb
from ray import tune
from ray.tune import Trainable
from ray.tune.integration.wandb import (
WandbLoggerCallback,
WandbTrainableMixin,
wandb_mixin,
)
"""
Explanation: Using Weights & Biases with Tune
(tune-wandb-ref)=
Weights & Biases (Wandb) is a tool for experiment
tracking, model optimization, and dataset versioning. It is very popular
in the machine learning and data science community for its superb visualization
tools.
{image} /images/wandb_logo_full.png
:align: center
:alt: Weights & Biases
:height: 80px
:target: https://www.wandb.ai/
Ray Tune currently offers two lightweight integrations for Weights & Biases.
One is the {ref}WandbLoggerCallback <tune-wandb-logger>, which automatically logs
metrics reported to Tune to the Wandb API.
The other one is the {ref}@wandb_mixin <tune-wandb-mixin> decorator, which can be
used with the function API. It automatically
initializes the Wandb API with Tune's training information. You can just use the
Wandb API like you would normally do, e.g. using wandb.log() to log your training
process.
{contents}
:backlinks: none
:local: true
Running A Weights & Biases Example
In the following example we're going to use both of the above methods, namely the WandbLoggerCallback and
the wandb_mixin decorator to log metrics.
Let's start with a few crucial imports:
End of explanation
"""
def objective(config, checkpoint_dir=None):
for i in range(30):
loss = config["mean"] + config["sd"] * np.random.randn()
tune.report(loss=loss)
"""
Explanation: Next, let's define an easy objective function (a Tune Trainable) that reports a random loss to Tune.
The objective function itself is not important for this example, since we want to focus on the Weights & Biases
integration primarily.
End of explanation
"""
def tune_function(api_key_file):
"""Example for using a WandbLoggerCallback with the function API"""
analysis = tune.run(
objective,
metric="loss",
mode="min",
config={
"mean": tune.grid_search([1, 2, 3, 4, 5]),
"sd": tune.uniform(0.2, 0.8),
},
callbacks=[
WandbLoggerCallback(api_key_file=api_key_file, project="Wandb_example")
],
)
return analysis.best_config
"""
Explanation: Given that you provide an api_key_file pointing to your Weights & Biases API key, you can define a
simple grid-search Tune run using the WandbLoggerCallback as follows:
End of explanation
"""
@wandb_mixin
def decorated_objective(config, checkpoint_dir=None):
for i in range(30):
loss = config["mean"] + config["sd"] * np.random.randn()
tune.report(loss=loss)
wandb.log(dict(loss=loss))
"""
Explanation: To use the wandb_mixin decorator, you can simply decorate the objective function from earlier.
Note that we also use wandb.log(...) to log the loss to Weights & Biases as a dictionary.
Otherwise, the decorated version of our objective is identical to its original.
End of explanation
"""
def tune_decorated(api_key_file):
"""Example for using the @wandb_mixin decorator with the function API"""
analysis = tune.run(
decorated_objective,
metric="loss",
mode="min",
config={
"mean": tune.grid_search([1, 2, 3, 4, 5]),
"sd": tune.uniform(0.2, 0.8),
"wandb": {"api_key_file": api_key_file, "project": "Wandb_example"},
},
)
return analysis.best_config
"""
Explanation: With the decorated_objective defined, running a Tune experiment is as simple as providing this objective and
passing the api_key_file to the wandb key of your Tune config:
End of explanation
"""
class WandbTrainable(WandbTrainableMixin, Trainable):
def step(self):
for i in range(30):
loss = self.config["mean"] + self.config["sd"] * np.random.randn()
wandb.log({"loss": loss})
return {"loss": loss, "done": True}
"""
Explanation: Finally, you can also define a class-based Tune Trainable by using the WandbTrainableMixin to define your objective:
End of explanation
"""
def tune_trainable(api_key_file):
"""Example for using a WandTrainableMixin with the class API"""
analysis = tune.run(
WandbTrainable,
metric="loss",
mode="min",
config={
"mean": tune.grid_search([1, 2, 3, 4, 5]),
"sd": tune.uniform(0.2, 0.8),
"wandb": {"api_key_file": api_key_file, "project": "Wandb_example"},
},
)
return analysis.best_config
"""
Explanation: Running Tune with this WandbTrainable works exactly the same as with the function API.
The below tune_trainable function differs from tune_decorated above only in the first argument we pass to
tune.run():
End of explanation
"""
import tempfile
from unittest.mock import MagicMock
mock_api = True
api_key_file = "~/.wandb_api_key"
if mock_api:
WandbLoggerCallback._logger_process_cls = MagicMock
decorated_objective.__mixins__ = tuple()
WandbTrainable._wandb = MagicMock()
wandb = MagicMock() # noqa: F811
temp_file = tempfile.NamedTemporaryFile()
temp_file.write(b"1234")
temp_file.flush()
api_key_file = temp_file.name
tune_function(api_key_file)
tune_decorated(api_key_file)
tune_trainable(api_key_file)
if mock_api:
temp_file.close()
"""
Explanation: Since you may not have an API key for Wandb, we can mock the Wandb logger and test all three of our training
functions as follows.
If you do have an API key file, make sure to set mock_api to False and pass in the right api_key_file below.
End of explanation
"""
|
tensorflow/docs-l10n
|
site/zh-cn/hub/tutorials/text_classification_with_tf_hub.ipynb
|
apache-2.0
|
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
# Install TF-Hub.
!pip install seaborn
"""
Explanation: How to build a simple text classifier with TF-Hub
Note: This tutorial uses deprecated TensorFlow 1 functionality. See the TensorFlow 2 version for the new way to approach this task.
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/text_classification_with_tf_hub.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/text_classification_with_tf_hub.ipynb"> <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png"> 在 GitHub 上查看源代码</a></td>
<td><a href="https://tfhub.dev/google/nnlm-en-dim128/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a></td>
</table>
TF-Hub is a platform for sharing machine learning expertise packaged into reusable resources, in particular pre-trained modules. This tutorial is organised into two main parts.
Getting started: training a text classifier with TF-Hub
We will use a TF-Hub text embedding module to train a simple sentiment classifier with a reasonable baseline accuracy. We will then analyse the predictions to make sure the model is reasonable, and propose improvements to increase the accuracy.
Advanced: transfer learning analysis
In this part we will use various TF-Hub modules to compare their effect on the accuracy of the Estimator, and demonstrate the advantages and pitfalls of transfer learning.
Optional prerequisites
A basic understanding of TensorFlow's premade Estimator framework.
Familiarity with the Pandas library.
Setup
End of explanation
"""
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
"""
Explanation: For more detailed information about installing TensorFlow, visit https://tensorflow.google.cn/install/.
End of explanation
"""
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.io.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
train_df, test_df = download_and_load_datasets()
train_df.head()
"""
Explanation: Getting started
Data
We will try to solve the Large Movie Review Dataset v1.0 task (Maas et al., 2011). The dataset consists of IMDB movie reviews labelled with a positive integer from 1 to 10. The task is to label the reviews as negative or positive.
End of explanation
"""
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], num_epochs=None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], shuffle=False)
# Prediction on the test set.
predict_test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
test_df, test_df["polarity"], shuffle=False)
"""
Explanation: Model
Input functions
The Estimator framework provides input functions that wrap Pandas dataframes.
End of explanation
"""
embedded_text_feature_column = hub.text_embedding_column(
key="sentence",
module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")
"""
Explanation: Feature columns
TF-Hub provides a feature column that applies a module on the given text feature and passes on the output of the module. In this tutorial we will use the nnlm-en-dim128 module. For the purposes of this tutorial, the most important facts are:
The module takes a batch of sentences in a 1-D tensor of strings as input.
The module is responsible for preprocessing the sentences (e.g. removing punctuation and splitting on spaces).
The module works with any input (e.g. nnlm-en-dim128 hashes words not present in the vocabulary into roughly 20,000 buckets).
End of explanation
"""
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.keras.optimizers.Adagrad(lr=0.003))
"""
Explanation: Estimator
For classification we can use a DNN Classifier (note the further remarks about different modelling of the label function at the end of the tutorial).
End of explanation
"""
# Training for 5,000 steps means 640,000 training examples with the default
# batch size. This is roughly equivalent to 25 epochs since the training dataset
# contains 25,000 examples.
estimator.train(input_fn=train_input_fn, steps=5000);
"""
Explanation: Training
Train the Estimator for a reasonable number of steps.
End of explanation
"""
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
print("Training set accuracy: {accuracy}".format(**train_eval_result))
print("Test set accuracy: {accuracy}".format(**test_eval_result))
"""
Explanation: Prediction
Run predictions for both the training set and the test set.
End of explanation
"""
def get_predictions(estimator, input_fn):
return [x["class_ids"][0] for x in estimator.predict(input_fn=input_fn)]
LABELS = [
"negative", "positive"
]
# Create a confusion matrix on training data.
cm = tf.math.confusion_matrix(train_df["polarity"],
get_predictions(estimator, predict_train_input_fn))
# Normalize the confusion matrix so that each row sums to 1.
cm = tf.cast(cm, dtype=tf.float32)
cm = cm / tf.math.reduce_sum(cm, axis=1)[:, np.newaxis]
sns.heatmap(cm, annot=True, xticklabels=LABELS, yticklabels=LABELS);
plt.xlabel("Predicted");
plt.ylabel("True");
"""
Explanation: Confusion matrix
We can visually inspect the confusion matrix to understand the distribution of misclassifications.
End of explanation
"""
def train_and_evaluate_with_module(hub_module, train_module=False):
embedded_text_feature_column = hub.text_embedding_column(
key="sentence", module_spec=hub_module, trainable=train_module)
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.003))
estimator.train(input_fn=train_input_fn, steps=1000)
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
training_set_accuracy = train_eval_result["accuracy"]
test_set_accuracy = test_eval_result["accuracy"]
return {
"Training accuracy": training_set_accuracy,
"Test accuracy": test_set_accuracy
}
results = {}
results["nnlm-en-dim128"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/nnlm-en-dim128/1")
results["nnlm-en-dim128-with-module-training"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/nnlm-en-dim128/1", True)
results["random-nnlm-en-dim128"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/random-nnlm-en-dim128/1")
results["random-nnlm-en-dim128-with-module-training"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/random-nnlm-en-dim128/1", True)
"""
Explanation: Further improvements
Sentiment regression: we used a classifier to assign each example to a polarity class. But we actually have another categorical feature at our disposal - sentiment. Here the classes really represent a scale, and the underlying value (positive/negative) maps well onto a continuous range. We could exploit this property by computing a regression (DNN Regressor) instead of a classification (DNN Classifier).
Larger modules: for this tutorial we used a small module to limit memory use. There are modules with larger vocabularies and larger embedding spaces that could yield additional accuracy points.
Parameter tuning: we can improve accuracy by tuning meta-parameters such as the learning rate or the number of steps, especially if we use a different module. A validation set is very important if we want to get any reasonable results, because it is very easy to set up a model that learns to predict the training data without generalizing well to the test set.
More complex model: we used a module that computes a sentence embedding by embedding each individual word and then combining them with an average. One could also use a sequential module (e.g. the Universal Sentence Encoder module) to better capture the nature of sentences, or an ensemble of two or more TF-Hub modules.
Regularization: to prevent overfitting, we could try an optimizer that performs some kind of regularization, for example the Proximal Adagrad optimizer.
Advanced: transfer learning analysis
Transfer learning makes it possible to save training resources and to achieve good model generalization even when training on a small dataset. In this part, we will demonstrate this by training with two different TF-Hub modules:
nnlm-en-dim128 - a pretrained text embedding module;
random-nnlm-en-dim128 - a text embedding module with the same vocabulary and network as nnlm-en-dim128, but whose weights were just randomly initialized and never trained on real data.
And by training in two modes:
training only the classifier (i.e. freezing the module), and
training the classifier together with the module.
Let's run a couple of trainings and evaluations to see how using the various modules affects accuracy.
End of explanation
"""
pd.DataFrame.from_dict(results, orient="index")
"""
Explanation: Let's look at the results.
End of explanation
"""
estimator.evaluate(input_fn=predict_test_input_fn)["accuracy_baseline"]
"""
Explanation: We can already see some patterns, but first we should establish the baseline accuracy of the test set - the lower bound that can be achieved by always outputting the label of the most represented class:
End of explanation
"""
|
gprakhar/janCC
|
Janacare_User-Segmentation_dataset_Aug2014-Apr2016.ipynb
|
bsd-3-clause
|
# This to clear all variable values
%reset
# Import the required modules
import pandas as pd
import numpy as np
#import scipy as sp
# simple function to read in the user data file.
# the argument parse_dates takes in a list of colums, which are to be parsed as date format
user_data_raw = pd.read_csv("janacare_user-engagement_Aug2014-Apr2016.csv", parse_dates = [-3,-2,-1])
# data metrics
user_data_raw.shape # Rows , colums
# data metrics
user_data_raw.dtypes # data type of colums
"""
Explanation: Hello World!
This notebook describes the decision-tree-based machine learning model I have created
to segment the users of the Habits app.
Looking around the data set
End of explanation
"""
user_data_to_clean = user_data_raw.rename(columns = {'watching_videos (binary - 1 for yes, blank/0 for no)':'watching_videos'})
# Some basic statistical information on the data
user_data_to_clean.describe()
"""
Explanation: The column name watching_videos (binary - 1 for yes, blank/0 for no) is too long and contains special characters; let's rename it to watching_videos.
End of explanation
"""
# Lets check the health of the data set
user_data_to_clean.info()
"""
Explanation: Data Clean-up
In the last section of looking around, I saw that a lot of rows do not have any values or have garbage values (see the first row of the table above).
This can cause errors when computing anything using the values in these rows, hence a clean-up is required.
We will clean up only those columns that are being used as features.
num_modules_consumed
num_glucose_tracked
num_of_days_food_tracked
watching_videos
The next two columns will not be cleaned, as they contain time data which in my opinion should not be imputed
* first_login
* last_activity
End of explanation
"""
# Lets first delete the last column
user_data_to_clean_del_last_col = user_data_to_clean.drop("age_on_platform", 1)
# Check if colums has been deleted. Number of column changed from 19 to 18
user_data_to_clean_del_last_col.shape
# Copy data frame 'user_data_del_last_col' into a new one
user_data_to_clean = user_data_to_clean_del_last_col
"""
Explanation: As is visible from the data type of the last column (age_on_platform), Pandas is not recognising it as a date format.
This will make things difficult, so I delete this particular column and add a new one,
since the data in age_on_platform can be recreated by computing age_on_platform = last_activity - first_login.
End of explanation
"""
# Run a loop through the data frame and check each row for this anamoly, if found swap
for index, row in user_data_to_clean.iterrows():
if row.first_login > row.last_activity:
temp_date_var = row.first_login
user_data_to_clean.set_value(index, 'first_login', row.last_activity)
user_data_to_clean.set_value(index, 'last_activity', temp_date_var)
#print "\tSw\t" + "first\t" + row.first_login.isoformat() + "\tlast\t" + row.last_activity.isoformat()
# Create new column 'age_on_platform' which has the corresponding value in date type format
user_data_to_clean["age_on_platform"] = user_data_to_clean["last_activity"] - user_data_to_clean["first_login"]
# Check the result in first few rows
user_data_to_clean["age_on_platform"].head(5)
# Lets check the health of the data set
user_data_to_clean.info()
"""
Explanation: But on eyeballing the data I noticed that in some rows the first_login cell holds a later value than the corresponding last_activity cell. These cells need to be swapped, since it is not possible to have first_login > last_activity.
End of explanation
"""
# Lets remove all columns from the data set that do not have to be imputed -
user_data_to_impute = user_data_to_clean.drop(["user_id", "watching_videos", "num_of_days_steps_tracked", "num_of_days_weight_tracked", "insulin_a1c_count", "weight", "height", "bmi", "age", "gender", "has_diabetes", "first_login", "last_activity", "age_on_platform", "hemoglobin_count", "cholesterol_count"], 1 )
user_data_to_impute.info()
"""
Explanation: The second column of the above table gives the number of non-null values in the respective column.
As is visible for the columns of interest to us,
e.g. num_modules_consumed has only 69 values out of a possible 371 in total.
End of explanation
"""
# Import Imputation method KNN
##from fancyimpute import KNN
# First lets convert the Pandas Dataframe into a Numpy array. We do this since the data frame needs to be transposed,
# which is only possible if the format is an Numpy array.
##user_data_to_impute_np_array = user_data_to_impute.as_matrix()
# Lets Transpose it
##user_data_to_impute_np_array_transposed = user_data_to_impute_np_array.T
# Run the KNN method on the data. function usage X_filled_knn = KNN(k=3).complete(X_incomplete)
##user_data_imputed_knn_np_array = KNN(k=5).complete(user_data_to_impute_np_array_transposed)
"""
Explanation: The next 3 cells describe the steps to impute data using the KNN strategy; sadly this does not work well for our data set! One possible reason could be that the columns are too sparse to find a neighbour.
In future this method could be combined with the mean imputation method, so that the values not covered by KNN get replaced with mean values.
GitHub repo and documentation for fancyimpute
End of explanation
"""
# Lets use simpler method that is provided by Scikit Learn itself
# import the function
from sklearn.preprocessing import Imputer
# Create an object of class Imputer, with the relvant parameters
imputer_object = Imputer(missing_values='NaN', strategy='mean', axis=0, copy=False)
# Impute the data and save the generated Numpy array
user_data_imputed_np_array = imputer_object.fit_transform(user_data_to_impute)
"""
Explanation: The above 3 steps are for KNN-based imputation, which did not work well: as visible, 804 items could not be imputed and were replaced with zero.
Let's use the simpler method that is provided by scikit-learn itself.
End of explanation
"""
# create a list of tuples, with the column name and data type for all existing columns in the Numpy array.
# exact order of columns has to be maintained
column_names_of_imputed_np_array = ['num_modules_consumed', 'num_glucose_tracked', 'num_of_days_food_tracked']
# create the Pandas data frame from the Numpy array
user_data_imputed_data_frame = pd.DataFrame(user_data_imputed_np_array, columns=column_names_of_imputed_np_array)
# Check if the data frame created now is proper
user_data_imputed_data_frame.info()
"""
Explanation: user_data_imputed_np_array is a NumPy array; we need to convert it back to a Pandas data frame.
End of explanation
"""
# using the Series contructor from Pandas
user_data_imputed_data_frame['last_activity'] = pd.Series(user_data_to_clean['last_activity'])
user_data_imputed_data_frame['age_on_platform'] = pd.Series(user_data_to_clean['age_on_platform'])
# Check if every thing is Ok
user_data_imputed_data_frame.info()
"""
Explanation: Now let's add back the useful columns that we had removed from the data set; these are
last_activity
age_on_platform
watching_videos
End of explanation
"""
# fillna(0) function will fill all blank cells with '0'
user_data_imputed_data_frame['watching_videos'] = pd.Series(user_data_to_clean['watching_videos'].fillna(0))
user_data_imputed_data_frame.info()
"""
Explanation: As mentioned in the column description for watching_videos, a blank or missing value means '0', also known as 'not watching'.
Since scikit-learn models can only deal with numerical values, let's convert all blanks to '0'.
End of explanation
"""
# Since only these two columns are having null values, we can run the function *dropna()* on the whole data frame
# All rows with missing data get dropped
user_data_imputed_data_frame.dropna(axis=0, inplace=True)
user_data_imputed_data_frame.info()
"""
Explanation: Finally, the columns last_activity and age_on_platform have missing values, as is evident from the table above. Since this is time data that in my opinion should not be imputed, we will drop the rows that are missing these values instead.
End of explanation
"""
# This if else section will bin the rows based on the critiria for labels mentioned in the table above
user_data_imputed_data_frame_labeled = user_data_imputed_data_frame
for index, row in user_data_imputed_data_frame.iterrows():
if row["age_on_platform"] >= np.timedelta64(30, 'D') and row["age_on_platform"] < np.timedelta64(180, 'D'):
if row['last_activity'] <= np.datetime64(2, 'D') and\
row['num_modules_consumed'] >= 12 and\
row['num_of_days_food_tracked'] >= 20 and\
row['num_glucose_tracked'] >= 16 and\
row['watching_videos'] == 1:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 1)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 2)
elif row["age_on_platform"] >= np.timedelta64(180, 'D') and row["age_on_platform"] < np.timedelta64(360, 'D'):
if row['last_activity'] <= np.datetime64(7, 'D') and\
row['num_modules_consumed'] >= 48 and\
row['num_of_days_food_tracked'] >= 30 and\
row['num_glucose_tracked'] >= 96 and\
row['watching_videos'] == 1:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 3)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 4)
elif row["age_on_platform"] >= np.timedelta64(360, 'D'):
if row['last_activity'] <= np.datetime64(14, 'D') and\
row['num_modules_consumed'] >= 48 and\
row['num_of_days_food_tracked'] >= 30 and\
row['num_glucose_tracked'] >= 192 and\
row['watching_videos'] == 1:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 5)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 6)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 0)
user_data_imputed_data_frame_labeled['label'].unique()
"""
Explanation: Labelling the Raw data
Now comes the code that, based on the rules mentioned below, will label the provided data so it can be used as training data for the classifier.
This table defines the set of rules used to assign labels to the training data
| label | age_on_platform | last_activity | num_modules_comsumed | num_of_days_food_tracked | num_glucose_tracked | watching_videos |
|---------------------|----------------------|---------------------------|-----------------------------|--------------------------|-----------------------------|------------------|
| Generic (ignore) | Converted to days | to be Measured from 16Apr | Good >= 3/week Bad < 3/week | Good >= 30 Bad < 30 | Good >= 4/week Bad < 4/week | Good = 1 Bad = 0 |
| good_new_user = 1 | >= 30 days && < 180 | <= 2 days | >= 12 | >= 20 | >= 16 | Good = 1 |
| bad_new_user = 2 | >= 30 days && < 180 | > 2 days | < 12 | < 20 | < 16 | Bad = 0 |
| good_mid_term_user = 3 | >= 180 days && < 360 | <= 7 days | >= 48 | >= 30 | >= 96 | Good = 1 |
| bad_mid_term_user = 4 | >= 180 days && <360 | > 7 days | < 48 | < 30 | < 96 | Bad = 0 |
| good_long_term_user = 5 | >= 360 days | <= 14 days | >= 48 | >= 30 | >= 192 | Good = 1 |
| bad_long_term_user = 6 | >= 360 days | > 14 days | < 48 | < 30 | < 192 | Bad = 0 |
End of explanation
"""
# Look at basic info for this Labeled data frame
user_data_imputed_data_frame_labeled.info()
"""
Explanation: The output above shows that only 2, 4, 6 and 0 were assigned as labels, which means there are no good users in any of the three categories (new, mid and long term).
Consequently, either I change the label selection model or I get better data (which has good users) :P
End of explanation
"""
# Lets start with the column last_activity
# ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
# This function takes a datetime64 value and converts it into float value that represents time from epoch
def convert_datetime64_to_from_epoch(dt64):
ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
return ts
# Lets apply this function on last_activity column
user_data_imputed_data_frame_labeled_datetime64_converted = user_data_imputed_data_frame_labeled
user_data_imputed_data_frame_labeled_datetime64_converted['last_activity'] = user_data_imputed_data_frame_labeled['last_activity'].apply(convert_datetime64_to_from_epoch)
user_data_imputed_data_frame_labeled_datetime64_converted.info()
# Now its time to convert the timedelta64 column named age_on_platform
def convert_timedelta64_to_sec(td64):
ts = (td64 / np.timedelta64(1, 's'))
return ts
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted = user_data_imputed_data_frame_labeled_datetime64_converted
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted['age_on_platform'] = user_data_imputed_data_frame_labeled_datetime64_converted['age_on_platform'].apply(convert_timedelta64_to_sec)
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.info()
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.describe()
# Save the labeled data frame as excel file
from pandas import options
options.io.excel.xlsx.writer = 'xlsxwriter'
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.to_excel('user_data_imputed_data_frame_labeled.xlsx')
"""
Explanation: One major limitation of scikit-learn is the data types it can deal with for features:
the data type of last_activity is datetime64 and that of age_on_platform is timedelta64.
We need to convert these to a numerical type.
End of explanation
"""
# Total number of rows is 302; 30% of that is ~90
user_data_imputed_data_frame_labeled_training = user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.ix[90:]
user_data_imputed_data_frame_labeled_training.info()
# Lets first make our list of Labels column
#for index, row in user_data_imputed_data_frame.iterrows():
label_list = user_data_imputed_data_frame_labeled_training['label'].values.tolist()
# Check data type of elements of the list
type(label_list[0])
# Lets convert the data type of all elements of the list to int
label_list_training = map(int, label_list)
# Check data type of elements of the list
type(label_list_training[5])
"""
Explanation: Training and Testing the ML algorithm
Let's move on to the thing we have all been waiting for:
model training and testing
To train the model we need two lists: one list with only the Labels column, and a second list which is actually a list of lists, each sub-list containing the full row of feature columns.
Before we do anything we need to separate out 30% of the data for testing purposes.
End of explanation
"""
# Now to create the other list of lists with features as elements
# before that we will have to remove the Labels column
user_data_imputed_data_frame_UNlabeled_training = user_data_imputed_data_frame_labeled_training.drop(['label'] ,1)
user_data_imputed_data_frame_UNlabeled_training.info()
# As you may notice, the data type of watching_videos is float, while it should be int
user_data_imputed_data_frame_UNlabeled_training['watching_videos'] = user_data_imputed_data_frame_UNlabeled_training['watching_videos'].apply(lambda x: int(x))
user_data_imputed_data_frame_UNlabeled_training.info()
# Finally lets create the list of list from the row contents
features_list_training = map(list, user_data_imputed_data_frame_UNlabeled_training.values)
"""
Explanation: Here we remove the datetime64 & timedelta64 columns too; the issue is that scikit-learn methods can only deal with numerical and string features. I am trying to sort out this issue.
End of explanation
"""
from sklearn import tree
classifier = tree.DecisionTreeClassifier() # We create an instance of the Decision tree object
classifier = classifier.fit(features_list_training, label_list_training) # Train the classifier
# Testing data is the first 90 rows
user_data_imputed_data_frame_labeled_testing = user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.ix[:90]
# take the labels in seprate list
label_list_test = user_data_imputed_data_frame_labeled_testing['label'].values.tolist()
label_list_test = map(int, label_list_test)
# Drop the time and Label columns
user_data_imputed_data_frame_UNlabeled_testing = user_data_imputed_data_frame_labeled_testing.drop(['label'] ,1)
# Check if every thing looks ok
user_data_imputed_data_frame_UNlabeled_testing.info()
# Finally lets create the list of list from the row contents for testing
features_list_test = map(list, user_data_imputed_data_frame_UNlabeled_testing.values)
len(features_list_test)
# the prediction results for first ten values of test data set
print list(classifier.predict(features_list_test[:20]))
# The labels for test data set as labeled by code
print label_list_test[:20]
"""
Explanation: Its time to train the model
End of explanation
"""
|
wikistat/Ateliers-Big-Data
|
CatsVSDogs/Atelier-keras-CatsVSDogs.ipynb
|
mit
|
# Utils
import sys
import os
import shutil
import time
import pickle
import numpy as np
# Deep Learning Librairies
import tensorflow as tf
import keras.preprocessing.image as kpi
import keras.layers as kl
import keras.optimizers as ko
import keras.backend as k
import keras.models as km
import keras.applications as ka
# Visualisaiton des données
from matplotlib import pyplot as plt
"""
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="max-width: 250px; display: inline" alt="Wikistat"/></a>
<a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" width=400, style="float:right; display: inline" alt="IMT"/> </a>
</center>
Workshops: Big Data Technologies
Image recognition: cats vs. dogs
Transfer learning with <a href="https://www.tensorflow.org/"><img src="https://avatars0.githubusercontent.com/u/15658638?s=200&v=4" width=100, style="display: inline" alt="TensorFlow"/></a> tensorflow and <a href="https://keras.io/"><img src="https://s3.amazonaws.com/keras.io/img/keras-logo-2018-large-1200.png" width=250, style="display: inline" alt="Keras"/></a>
Summary
Training of an elementary convolutional network, then use of networks pre-trained (VGG16, InceptionV3) on the ImageNet base in order to solve another image recognition example. Use of Keras to drive the tensorFlow library. Comparison of the performance of the networks and of the CPU and GPU computing environments.
Introduction
Objectives
Image recognition passed a major milestone in 2012. Stacking layers of neurons, some of them convolutional, led to markedly better algorithms for image recognition and natural language processing, and triggered considerable media hype around deep learning. Nevertheless, training a deep network with millions of parameters requires an extremely large training base (e.g. ImageNet) with millions of labelled images.
Training is therefore very costly in computing time, even with suitable technologies (GPU). To work around this problem it is possible to use pre-trained networks. These networks have a particular structure, established heuristically in various research departments (Microsoft: ResNet, Google: Inception V3, Facebook: ResNet) before being fitted on public image banks such as ImageNet.
The strategy of this transfer learning is to exploit the knowledge acquired on a general classification problem in order to apply it to a specific problem.
The Keras library makes it possible to build such networks relatively simply, driving Google's tensorFlow environment from programs written in Python. Moreover, Keras can exploit the performance of a GPU card, making this transfer learning feasible even with complex networks.
The objective of this tutorial is to show the capabilities of transfer learning, which allows complex problems to be solved with modest computing resources. Nevertheless, a GPU card is strongly recommended.
This tutorial is largely inspired by the blog of François Chollet, the initiator of Keras.
Hardware and software environment
Keras and tensorFlow install simply from the Anaconda distribution of Python.
End of explanation
"""
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
MODE = "GPU" if "GPU" in [k.device_type for k in device_lib.list_local_devices()] else "CPU"
print(MODE)
"""
Explanation: The following command checks whether a GPU card is available on the machine being used. If it is, and if Keras has been installed in its GPU configuration (generally the case in Anaconda's GPU virtual environment), two devices will appear, one CPU and one GPU. The GPU configuration will then be used automatically.
End of explanation
"""
data_dir = '' # chemin d'accès aux données
N_train = 200 #2000
N_val = 80 #800
data_dir_sub = data_dir+'subsample_%d_Ntrain_%d_Nval' %(N_train, N_val)
"""
Explanation: Handling the data
Data structure
The original data can be downloaded from the kaggle site.
The training set contains 25,000 images. That is far too much for ordinary machines unless you are very patient. Therefore, two training sub-samples were created and placed in the repository.
100 cat images and 100 dog images, plus a validation sample made up of 40 cat images and 40 dog images.
1000 cat images and 1000 dog images, plus a validation sample made up of 400 cat images and 400 dog images.
To use certain Keras features, the data must be organized according to a precise directory tree. Files belonging to the same class must be in the same folder.
data_dir
└───subsample/
│ └───train/
│ │ └───cats/
│ │ │ │ cat.0.jpg
│ │ │ │ cat.1.jpg
│ │ │ │ ...
│ │ └───dogs/
│ │ │ │ dog.0.jpg
│ │ │ │ dog.1.jpg
│ │ │ │ ...
│ └───test/
│ │ └───cats/
│ │ │ │ cat.1000.jpg
│ │ │ │ cat.1000.jpg
│ │ │ │ ...
│ │ └───dogs/
│ │ │ │ dog.1000.jpg
│ │ │ │ dog.1000.jpg
│ │ │ │ ...
N.B. Larger sub-samples created from the original data must be saved while scrupulously respecting this structure.
Creating a training and a validation set
Specify the path of the folder containing the data, if it is not the current directory, as well as the sizes of the training and validation samples.
End of explanation
"""
img = kpi.load_img(data_dir_sub+'/train/cats/cat.1.jpg') # this is a PIL image
img
"""
Explanation: Illustrating the data
The load_img function loads an image as a PIL image.
End of explanation
"""
x = kpi.img_to_array(img)
plt.imshow(x/255, interpolation='nearest')
plt.show()
"""
Explanation: The img_to_array function generates a numpy array from a PIL image.
End of explanation
"""
x_0 = kpi.img_to_array(kpi.load_img(data_dir_sub+"/train/cats/cat.0.jpg"))
x_1 = kpi.img_to_array(kpi.load_img(data_dir_sub+"/train/cats/cat.1.jpg"))
x_0.shape, x_1.shape
"""
Explanation: Pre-processing
The images in the dataset have different dimensions:
End of explanation
"""
datagen = kpi.ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
"""
Explanation: However, the images must all have the same dimensions to be used in the same network.
Keras' ImageDataGenerator function fixes this problem.
More generally, this function applies a number of random treatments (transformation, normalization) to the images so that the model never learns the same image twice.
Some arguments of this function:
* rotation_range: an interval representing the possible degrees of rotation of the image,
* width_shift and height_shift: intervals within which the data can be translated horizontally or vertically,
* rescale: a value by which the data are multiplied,
* shear_range: shearing transformation,
* zoom_range: allows zooming within an image,
* horizontal_flip: randomly flips images along the horizontal axis,
* fill_mode: the strategy adopted to fill in missing pixels after a transformation.
End of explanation
"""
img_width = 150
img_height = 150
img = kpi.load_img(data_dir_sub+"/train/cats/cat.1.jpg") # this is a PIL image
x = kpi.img_to_array(img)
x_ = x.reshape((1,) + x.shape)
if not(os.path.isdir(data_dir_sub+"/preprocessing_example")):
os.mkdir(data_dir_sub+"/preprocessing_example")
i = 0
for batch in datagen.flow(x_, batch_size=1,save_to_dir=data_dir_sub+"/preprocessing_example", save_prefix='cat', save_format='jpeg'):
i += 1
if i > 7:
break
"""
Explanation: The .flow() command generates new examples from the original image and saves them in the folder specified in save_to_dir.
We force this generation to stop after eight images have been produced.
End of explanation
"""
X_list=[]
for f in os.listdir(data_dir_sub+"/preprocessing_example"):
X_list.append(kpi.img_to_array(kpi.load_img(data_dir_sub+"/preprocessing_example/"+f)))
fig=plt.figure(figsize=(16,8))
fig.patch.set_alpha(0)
ax = fig.add_subplot(3,3,1)
ax.imshow(x/255, interpolation="nearest")
ax.set_title("Image original")
for i,xt in enumerate(X_list):
ax = fig.add_subplot(3,3,i+2)
ax.imshow(xt/255, interpolation="nearest")
ax.set_title("Random transformation %d" %(i+1))
plt.tight_layout()
plt.savefig("cats_transformation.png", dpi=100, bbox_to_anchor="tight", facecolor=fig.get_facecolor())
plt.show()
"""
Explanation: Illustration of the transformed images.
End of explanation
"""
epochs = 10
batch_size=20
"""
Explanation: Image classification with Deep Learning
First of all, we fix the number of epochs as well as the batch size so that these two parameters are shared by the different methods we are going to test.
A few rules to follow when choosing these parameters:
epochs: start with a relatively small number of epochs (2, 3) to see the computing time your machine needs, then increase it accordingly.
batch_size: the batch size is the number of elements processed at each iteration within an epoch.
Important: with Keras, when the data are produced by a generator (see above), the batch size must be a divisor of the sample size. Otherwise the algorithm shows abnormal behaviour that does not necessarily raise an error message.
End of explanation
"""
# this is the augmentation configuration we will use for training
train_datagen = kpi.ImageDataGenerator(
rescale=1./255,
)
# this is the augmentation configuration we will use for testing:
# only rescaling
valid_datagen = kpi.ImageDataGenerator(rescale=1./255)
# this is a generator that will read pictures found in
# subfolers of 'data/train', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
data_dir_sub+"/train/", # this is the target directory
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary') # since we use binary_crossentropy loss, we need binary labels
# this is a similar generator, for validation data
validation_generator = valid_datagen.flow_from_directory(
data_dir_sub+"/validation/",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
"""
Explanation: Convolutional network
First, we build our own convolutional neural network.
Data generation
We define two ImageDataGenerator objects:
train_datagen: for training, where different transformations are applied, as before
valid_datagen: for validation, where only a rescale transformation is applied so as not to distort the data.
It is also important to define the size to which our images will be reformatted. Here we choose an image size of 150x150.
End of explanation
"""
model_conv = km.Sequential()
model_conv.add(kl.Conv2D(32, (3, 3), input_shape=(img_width, img_height, 3), data_format="channels_last"))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Conv2D(32, (3, 3)))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Conv2D(64, (3, 3)))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Flatten()) # this converts our 3D feature maps to 1D feature vectors
model_conv.add(kl.Dense(64))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.Dropout(0.5))
model_conv.add(kl.Dense(1))
model_conv.add(kl.Activation('sigmoid'))
model_conv.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model_conv.summary()
"""
Explanation: Définition du modèle
Le modèle est consitué de 3 blocs de convolution consitutés chacun de:
Une couche de Convolution2D
Une couche d'Activation ReLU
Une couche MaxPooling2D
Suivi de :
* Une couche Flatten, permettant de convertir les features de 2 à 1 dimensions.
* Une couche Dense (Fully connected layer)
* Une couche d' Activation ReLU
* Une couche Dropout
* Une couche Dense de taille 1 suivi d'une Activation sigmoid permettant la classification binaire
On utilise la fonction de perte binary_crossentropy pour apprendre notre modèle
End of explanation
"""
ts = time.time()
model_conv.fit_generator(train_generator, steps_per_epoch=N_train // batch_size, epochs=epochs,
validation_data=validation_generator,validation_steps=N_val // batch_size)
te = time.time()
t_learning_conv_simple_model = te-ts
print("Learning TIme for %d epochs : %d seconds"%(epochs,t_learning_conv_simple_model))
model_conv.save(data_dir_sub+'/'+MODE+'_models_convolutional_network_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
"""
Explanation: Apprentissage
End of explanation
"""
ts = time.time()
score_conv_val = model_conv.evaluate_generator(validation_generator, N_val /batch_size, verbose=1)
score_conv_train = model_conv.evaluate_generator(train_generator, N_train / batch_size, verbose=1)
te = time.time()
t_prediction_conv_simple_model = te-ts
print('Train accuracy:', score_conv_train[1])
print('Validation accuracy:', score_conv_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_conv_simple_model )
"""
Explanation: Prediction
End of explanation
"""
model_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet')
model_VGG16_without_top.summary()
"""
Explanation: Q Comment on the training and validation prediction values. Compare them with the results of the last training epoch. What do you observe? Is this normal?
Exercise Re-run this model while adding more random transformations in the image generator at training time (see the hedged sketch after this block). What do you observe?
Pre-trained network: VGG16
In this part we will see two ways of using a pre-trained model:
First, the model is used to extract features from the images, which are then fed to a "classical" convolutional network. These features are the result of the transformations applied to our images by the different convolution blocks.
Second, the generated "classical" model is plugged directly onto the pre-trained model. This model is then re-trained more finely (fine-tuning) on the last convolution block.
Illustration of the network
Extracting new features
Downloading the model weights
If this is the first time you call the VGG16 application, the download of the weights starts automatically and they are stored in your home directory: "~/.keras/models"
We use the model with the option include_top = False, i.e. we do not download the last fully connected classifier block.
The summary function shows the structure described above.
End of explanation
"""
features_train_path = data_dir_sub+'/features_train.npy'
features_validation_path = data_dir_sub+'/features_validation.npy'
if os.path.isfile(features_train_path) and os.path.isfile(features_validation_path):
print("Load Features")
features_train = np.load(open(features_train_path, "rb"))
features_validation = np.load(open(features_validation_path, "rb"))
else:
print("Generate Features")
datagen = kpi.ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory(
data_dir_sub+"/train",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None, # this means our generator will only yield batches of data, no labels
shuffle=False)
features_train = model_VGG16_without_top.predict_generator(generator, N_train / batch_size, verbose = 1)
# save the output as a Numpy array
np.save(open(features_train_path, 'wb'), features_train)
generator = datagen.flow_from_directory(
data_dir_sub+"/validation",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None,
shuffle=False)
features_validation = model_VGG16_without_top.predict_generator(generator, N_val / batch_size, verbose = 1)
# save the output as a Numpy array
np.save(open(features_validation_path, 'wb'), features_validation)
"""
Explanation: Creating the features
We apply the 5 blocks of the VGG16 model to the images of our training and validation samples.
This operation can be expensive, which is why we save these features to files so that it only has to be performed once.
If these files exist, the features are loaded from them, otherwise they are created.
End of explanation
"""
model_VGG_fcm = km.Sequential()
model_VGG_fcm.add(kl.Flatten(input_shape=features_train.shape[1:]))
model_VGG_fcm.add(kl.Dense(64, activation='relu'))
model_VGG_fcm.add(kl.Dropout(0.5))
model_VGG_fcm.add(kl.Dense(1, activation='sigmoid'))
model_VGG_fcm.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model_VGG_fcm.summary()
"""
Explanation: Building a classical neural network.
We build a "classical" neural network, identical to the second part of the previous network.
Warning: the first layer of this network (Flatten) must be configured to accept data with the dimensions of the features generated above.
End of explanation
"""
# On créer des vecteurs labels
train_labels = np.array([0] * int((N_train/2)) + [1] * int((N_train/2)))
validation_labels = np.array([0] * int((N_val/2)) + [1] * int((N_val/2)))
model_VGG_fcm.fit(features_train, train_labels,
epochs=epochs,
batch_size=batch_size,
validation_data=(features_validation, validation_labels))
t_learning_VGG_fcm = te-ts
"""
Explanation: Training
End of explanation
"""
model_VGG_fcm.save_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
"""
Explanation: Q Comment on the performance of this new model.
We also save the weights of this model so we can reuse them in the next part.
End of explanation
"""
ts = time.time()
score_VGG_fcm_val = model_VGG_fcm.evaluate(features_validation, validation_labels)
score_VGG_fcm_train = model_VGG_fcm.evaluate(features_train, train_labels)
te = time.time()
t_prediction_VGG_fcm = te-ts
print('Train accuracy:', score_VGG_fcm_train[1])
print('Validation accuracy:', score_VGG_fcm_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_VGG_fcm)
"""
Explanation: Prediction
End of explanation
"""
# build the VGG16 network
model_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet', input_shape=(150,150,3))
print('Model loaded.')
"""
Explanation: Fine-tuning the VGG16 network
In the previous part, we configured a neural-network block able to take as input the features produced by the transformations of the first 5 convolution blocks of the VGG16 model.
In this part, we will 'plug' this block directly onto the first five blocks of the VGG16 model so we can refine the model by iterating both on the convolution blocks and on our neural-network block.
Creating the model
First we download the VGG16 model, as before.
This time, however, the model will be 'trained' directly; it will not only be used to generate features. We therefore have to specify, as a parameter, the size of the images we will feed it.
End of explanation
"""
# build a classifier model to put on top of the convolutional model
top_model = km.Sequential()
top_model.add(kl.Flatten(input_shape=model_VGG16_without_top.output_shape[1:]))
top_model.add(kl.Dense(64, activation='relu'))
top_model.add(kl.Dropout(0.5))
top_model.add(kl.Dense(1, activation='sigmoid'))
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
"""
Explanation: We add to the VGG model the neural-network block built previously on the generated features.
To do this, we build the block as before, then load into it the weights obtained from the previous training.
End of explanation
"""
# add the model on top of the convolutional base
model_VGG_LastConv_fcm = km.Model(inputs=model_VGG16_without_top.input, outputs=top_model(model_VGG16_without_top.output))
model_VGG_LastConv_fcm.summary()
"""
Explanation: Finally, we assemble the two parts of the model.
End of explanation
"""
for layer in model_VGG_LastConv_fcm.layers[:15]:
layer.trainable = False
"""
Explanation: Freezing the first 4 convolution blocks
In practice, and to keep the computation time reasonable, we will fine-tune only the last convolution block of the model, block 5 (layers 16 to 19 in the summary of the previous model), as well as the neural-network block we added.
To do this we 'freeze' the first 15 layers of the model so that their parameters are not optimized during training.
End of explanation
"""
# prepare data augmentation configuration
train_datagen = kpi.ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = kpi.ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
data_dir_sub+"/train/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
data_dir_sub+"/validation/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
"""
Explanation: Generate Data
End of explanation
"""
model_VGG_LastConv_fcm.compile(loss='binary_crossentropy',
optimizer=ko.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
# fine-tune the model
ts = time.time()
model_VGG_LastConv_fcm.fit_generator(
train_generator,
steps_per_epoch=N_train // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=N_val // batch_size)
te = time.time()
t_learning_VGG_LastConv_fcm = te-ts
"""
Explanation: Training
End of explanation
"""
ts = time.time()
score_VGG_LastConv_fcm_val = model_VGG_LastConv_fcm.evaluate_generator(validation_generator, N_val // batch_size)
score_VGG_LastConv_fcm_train = model_VGG_LastConv_fcm.evaluate_generator(train_generator, N_train // batch_size)
te = time.time()
t_prediction_VGG_LastConv_fcm = te-ts
print('Train accuracy:', score_VGG_LastConv_fcm_train[1])
print('Validation accuracy:', score_VGG_LastConv_fcm_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_VGG_LastConv_fcm)
"""
Explanation: Prediction
End of explanation
"""
data_dir_test = data_dir+'test/'
N_test = len(os.listdir(data_dir_test+"/test"))
test_datagen = kpi.ImageDataGenerator(rescale=1. / 255)
test_generator = test_datagen.flow_from_directory(
data_dir_test,
#data_dir_sub+"/train/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode=None,
shuffle=False)
test_prediction = model_VGG_LastConv_fcm.predict_generator(test_generator, N_test // batch_size)
images_test = [data_dir_test+"/test/"+k for k in os.listdir(data_dir_test+"/test")][:9]
x_test = [kpi.img_to_array(kpi.load_img(image_test))/255 for image_test in images_test] # this is a PIL image
fig = plt.figure(figsize=(10,10))
for k in range(9):
ax = fig.add_subplot(3,3,k+1)
ax.imshow(x_test[k], interpolation='nearest')
pred = test_prediction[k]
if pred >0.5:
title = "Probabiliy for dog : %.1f" %(pred*100)
else:
title = "Probabiliy for cat : %.1f" %((1-pred)*100)
ax.set_title(title)
plt.show()
"""
Explanation: Other models
Keras provides a number of other pre-trained models:
Xception
VGG16
VGG19
ResNet50
InceptionV3
InceptionResNetV2
MobileNet
Some of them have a much more complex structure, notably InceptionV3. You can very easily replace the ka.VGG16 function by another one (e.g. ka.InceptionV3) to test the performance and the complexity of these different models.
Exercise You can redo the previous manipulations with other pre-trained models, taking the time to study their architecture.
Exercise You can also redo these trainings on a larger data set by creating a new one from the original data.
Applying these exercises to the challenge data is strongly recommended :)
Prediction on the Kaggle test set
Let's now see how our network performs on a sample of the Kaggle test dataset.
End of explanation
"""
|
eblur/AstroHackWeek2015
|
day3-machine-learning/09.1 - Linear models.ipynb
|
gpl-2.0
|
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
true_coefficient
X.shape, y.shape
plt.plot(X[:,1], y, 'bo', markersize=4);
"""
Explanation: Linear models for regression
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
End of explanation
"""
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
"""
Explanation: Linear Regression
End of explanation
"""
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
"""
Explanation: Ridge Regression (L2 penalty)
End of explanation
"""
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
"""
Explanation: Lasso (L1 penalty)
End of explanation
"""
from plots import plot_linear_svc_regularization
plot_linear_svc_regularization()
"""
Explanation: Linear models for classification
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_ > 0
The influence of C in LinearSVC
End of explanation
"""
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
"""
Explanation: Multi-Class linear classification
End of explanation
"""
# %load solutions/linear_models.py
"""
Explanation: Exercises
Use GridSearchCV to tune the parameter C of LinearSVC on the digits dataset.
Compare l1 penalty and l2 penalty by plotting the coefficients as above for the digits dataset. Classify odd vs even digits to make it a binary task.
End of explanation
"""
|
tedunderwood/horizon
|
chapter3/notebooks/chapter3table3.ipynb
|
mit
|
# some standard modules
import csv, os, sys
from collections import Counter
import numpy as np
from scipy.stats import pearsonr
# now a module that I wrote myself, located
# a few directories up, in the software
# library for this repository
sys.path.append('../../lib')
import FileCabinet as filecab
"""
Explanation: Chapter 3, Table 3
This notebook explains how I used the Harvard General Inquirer to streamline interpretation of a predictive model.
I'm italicizing the word "streamline" because I want to emphasize that I place very little weight on the Inquirer: as I say in the text, "The General Inquirer has no special authority, and I have tried not to make it a load-bearing element of this argument."
To interpret a model, I actually spend a lot of time looking at lists of features, as well as predictions about individual texts. But to explain my interpretation, I need some relatively simple summary. Given real-world limits on time and attention, going on about lists of individual words for five pages is rarely an option. So, although wordlists are crude and arbitrary devices, flattening out polysemy and historical change, I am willing to lean on them rhetorically, where I find that they do in practice echo observations I have made in other ways.
I should also acknowledge that I'm not using the General Inquirer as it was designed to be used. The full version of this tool is not just a set of wordlists, it's a software package that tries to get around polysemy by disambiguating different word senses. I haven't tried to use it in that way: I think it would complicate my explanation, in order to project an impression of accuracy and precision that I don't particularly want to project. Instead, I have stressed that word lists are crude tools, and I'm using them only as crude approximations.
That said, how do I do it?
To start with, we'll load an array of modules. Some standard, some utilities that I've written myself.
End of explanation
"""
# start by loading the dictionary
dictionary = set()
with open('../../lexicons/MainDictionary.txt', encoding = 'utf-8') as f:
reader = csv.reader(f, delimiter = '\t')
for row in reader:
word = row[0]
count = int(row[2])
if count < 10000:
continue
# that ignores very rare words
# we end up with about 42,700 common ones
else:
dictionary.add(word)
"""
Explanation: Loading the General Inquirer.
This takes some doing, because the General Inquirer doesn't start out as a set of wordlists. I have to translate it into that form.
I start by loading an English dictionary.
End of explanation
"""
inquirer = dict()
suffixes = dict()
suffixes['verb'] = ['s', 'es', 'ed', 'd', 'ing']
suffixes['noun'] = ['s', 'es']
allinquirerwords = set()
with open('../../lexicons/inquirerbasic.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
fields = reader.fieldnames[2:-2]
for field in fields:
inquirer[field] = set()
for row in reader:
term = row['Entry']
if '#' in term:
parts = term.split('#')
word = parts[0].lower()
sense = int(parts[1].strip('_ '))
partialsense = True
else:
word = term.lower()
sense = 0
partialsense = False
if sense > 1:
continue
# we're ignoring uncommon senses
pos = row['Othtags']
if 'Noun' in pos:
pos = 'noun'
elif 'SUPV' in pos:
pos = 'verb'
forms = {word}
if pos == 'noun' or pos == 'verb':
for suffix in suffixes[pos]:
if word + suffix in dictionary:
forms.add(word + suffix)
if pos == 'verb' and word.rstrip('e') + suffix in dictionary:
forms.add(word.rstrip('e') + suffix)
for form in forms:
for field in fields:
if len(row[field]) > 1:
inquirer[field].add(form)
allinquirerwords.add(form)
print('Inquirer loaded')
print('Total of ' + str(len(allinquirerwords)) + " words.")
"""
Explanation: The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the "basic spreadsheet" described at this site:
http://www.wjh.harvard.edu/~inquirer/spreadsheet_guide.htm
I translate this into a dictionary where the keys are Inquirer categories, and the values are sets of words associated with each category.
But to do that, I have to do some filtering and expanding. Different senses of a word are broken out in the spreadsheet thus:
ABOUT#1
ABOUT#2
ABOUT#3
etc.
I need to separate the hashtag part. Also, because I don't want to allow rare senses of a word too much power, I ignore everything but the first sense of a word.
However, I also want to allow singular verb forms and plural nouns to count. So there's some code below that expands words by adding -s -ed, etc to the end. See the suffixes defined below for more details. Note that I use the English dictionary to determine which possible forms are real words.
End of explanation
"""
# the folder where wordcounts will live
# we're only going to load predictions
# that correspond to files located there
sourcedir = '../sourcefiles/'
docs = []
logistic = []
with open('../modeloutput/fullfiction.results.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
genre = row['realclass']
docid = row['volid']
if not os.path.exists(sourcedir + docid + '.tsv'):
continue
docs.append(row['volid'])
logistic.append(float(row['logistic']))
logistic = np.array(logistic)
numdocs = len(docs)
assert numdocs == len(logistic)
print("We have information about " + str(numdocs) + " volumes.")
"""
Explanation: Load model predictions about volumes
The next step is to create some vectors that store predictions about volumes. In this case, these are predictions about the probability that a volume is fiction, rather than biography.
End of explanation
"""
wordcounts = filecab.get_wordfreqs(sourcedir, '.tsv', docs)
"""
Explanation: And get the wordcounts themselves
This cell of the notebook is very short (one line), but it takes a lot of time to execute. There's a lot of file i/o that happens inside the function get_wordfreqs, in the FileCabinet module, which is invoked here. We come away with a dictionary of wordcounts, keyed in the first instance by volume ID.
Note that these are normalized frequencies rather than the raw integer counts we had in the analogous notebook in chapter 1.
End of explanation
"""
# Initialize empty category vectors
categories = dict()
for field in fields:
categories[field] = np.zeros(numdocs)
# Now fill them
for i, doc in enumerate(docs):
ctcat = Counter()
allcats = 0
for word, count in wordcounts[doc].items():
if word in dictionary:
allcats += count
if word not in allinquirerwords:
continue
for field in fields:
if word in inquirer[field]:
ctcat[field] += count
for field in fields:
categories[field][i] = ctcat[field] / (allcats + 0.00000001)
# Laplacian smoothing there to avoid div by zero, among other things.
# notice that, since these are normalized freqs, we need to use a very small decimal
# If these are really normalized freqs, it may not matter very much
# that we divide at all. The denominator should always be 1, more or less.
# But I'm not 100% sure about that.
if i % 100 == 1:
print(i, allcats)
"""
Explanation: Now calculate the representation of each Inquirer category in each doc
We normalize by the total wordcount for a volume.
This cell also takes a long time to run. I've added a counter so you have some confidence that it's still running.
End of explanation
"""
logresults = []
for inq_category in fields:
l = pearsonr(logistic, categories[inq_category])[0]
logresults.append((l, inq_category))
logresults.sort()
"""
Explanation: Calculate correlations
Now that we have all the information, calculating correlations is easy. We iterate through Inquirer categories, in each case calculating the correlation between a vector of model predictions for docs, and a vector of category-frequencies for docs.
End of explanation
"""
short2long = dict()
with open('../../lexicons/long_inquirer_names.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
short2long[row['short_name']] = row['long_name']
"""
Explanation: Load expanded names of Inquirer categories
The terms used in the inquirer spreadsheet are not very transparent. DAV for instance is "descriptive action verbs." BodyPt is "body parts." To make these more transparent, I have provided expanded names for many categories that turned out to be relevant in the book, trying to base my description on the accounts provided here: http://www.wjh.harvard.edu/~inquirer/homecat.htm
We load these into a dictionary.
End of explanation
"""
print('Printing the correlations of General Inquirer categories')
print('with the predicted probabilities of being fiction in fullfiction.results.csv:')
print()
print('First, top positive correlations: ')
print()
for prob, n in reversed(logresults[-15 : ]):
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
print()
print('Now, negative correlations: ')
print()
for prob, n in logresults[0 : 15]:
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
"""
Explanation: Print results
I print the top 12 correlations and the bottom 12, skipping categories that are drawn from the "Laswell value dictionary." The Laswell categories are very finely discriminated (things like "enlightenment gain" or "power loss"), and I have little faith that they're meaningful. I especially doubt that they could remain meaningful when the Inquirer is used crudely as a source of wordlists.
End of explanation
"""
|
landmanbester/fundamentals_of_interferometry
|
7_Observing_Systems/7_8_rfi.ipynb
|
gpl-2.0
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
7. Observing Systems
Previous: 7.7 Propagation Effects
Next: 7.x Further Reading and References
Import standard modules:
End of explanation
"""
HTML('../style/code_toggle.html')
"""
Explanation: Import section specific modules:
End of explanation
"""
|
shngli/Data-Mining-Python
|
Mining massive datasets/association.ipynb
|
gpl-3.0
|
from __future__ import division
import itertools
import operator
from sys import argv
support = 99
mappings = []
itemCounts = []
transactions = 0
"""
Explanation: Association Rules
Use the online browsing behavior dataset "browsing.txt". Each line represents a browsing session of a customer. On each line, each string of 8 characters represents the id of an item browsed during that session. The items are separated by spaces.
List the top 15 rules with corresponding confidence scores in decreasing order of confidence score for itemsets of size 2. Then list the top 15 rules with corresponding confidence scores in decreasing order of confidence score for itemsets of size 3. A rule is of the form: (item1, item2) ⇒ item3.
End of explanation
"""
data = open("browsing.txt","r")
for basket in data:
#print basket
transactions += 1
for item in set(basket.split()):
#print item
if item not in mappings:
mappings.append(item)
itemCounts.append(1)
else:
indexItem = mappings.index(item)
counter = itemCounts[indexItem]
counter += 1
itemCounts[indexItem] = counter
data.close()
"""
Explanation: Count all items
End of explanation
"""
frequentItems = [mappings.index(item) for item in mappings \
if itemCounts[mappings.index(item)] > support]
"""
Explanation: Get frequent items
End of explanation
"""
candidatePairs = {}
for pair in itertools.combinations(sorted(frequentItems),2):
candidatePairs[pair] = 0
"""
Explanation: Get all candidate pairs (all combination pairs of frequent items).
End of explanation
"""
data = open("browsing.txt","r")
for basket in data:
fitems = sorted( [ mappings.index(item) for item in set(basket.split()) ] )
# Generate pairs for them and update counts
for pair in itertools.combinations(fitems,2):
if pair in candidatePairs:
count = candidatePairs[pair]
count += 1
candidatePairs[pair] = count
data.close()
"""
Explanation: Get counts for all candidate pairs.
End of explanation
"""
frequentPairs = sorted([k for k,v in candidatePairs.iteritems() if v > support])
"""
Explanation: Get all frequent pairs
End of explanation
"""
candidateTriples = {}
allCandidateTriples = []
for fcPair in frequentPairs:
for jp in [joinPair for joinPair in frequentPairs \
if joinPair[0] == fcPair[1]]:
allCandidateTriples.append( (fcPair[0],fcPair[1],jp[1]) )
"""
Explanation: Generate candidate triples by frequentPairs JOIN frequentPairs
End of explanation
"""
for candidate in allCandidateTriples:
whatAboutIt = True
for pair in itertools.combinations(candidate,2):
if pair not in frequentPairs:
whatAboutIt = False
break
if whatAboutIt:
candidateTriples[candidate] = 0
"""
Explanation: Prune non-frequent candidate triples
End of explanation
"""
data = open("browsing.txt","r")
for basket in data:
items = sorted([mappings.index(item) for item in set(basket.split())])
fPair = []
for triple in itertools.combinations(items,3):
if triple in candidateTriples:
tripleCount = candidateTriples[triple]
tripleCount = tripleCount +1
candidateTriples[triple] = tripleCount
data.close()
"""
Explanation: Get count for candidate triples
End of explanation
"""
frequentTriples = sorted ([k for k,v in candidateTriples.iteritems() if v > support])
"""
Explanation: Get frequent triples
End of explanation
"""
def confidence(I,J):
# Calculate P(IJ)
PIJ = 0
IJ = set(I).union(set(J))
if len(IJ) == 2:
PIJ = candidatePairs[tuple(sorted(IJ))]
elif len(IJ) == 3:
PIJ = candidateTriples[tuple(sorted(IJ))]
#Calculate P(I)
PI = 0
if len(I) == 1:
PI = itemCounts[I[0]]
elif len(I) == 2:
PI = candidatePairs[tuple(sorted(I))]
if PIJ > PI:
print I, J, IJ
print PIJ, PI, PIJ / PI
return PIJ / PI
"""
Explanation: Generating Rules for confidence
End of explanation
"""
pairRules = {}
for pair in frequentPairs:
pairRules[pair]=confidence( (pair[0],),(pair[1],) )
pairRules[(pair[1],pair[0])] = confidence( (pair[1],),(pair[0],) )
"""
Explanation: Frequent pairs by confidence
End of explanation
"""
tripleRules = {}
for triple in frequentTriples:
for pair in itertools.combinations(triple,2):
item2 = tuple(set(triple).difference(set(pair)))
tripleRules[(pair,item2)] = confidence(pair,item2)
"""
Explanation: Frequent triples by confidence
End of explanation
"""
cp = sorted(pairRules.iteritems(), key = operator.itemgetter(1))
cp.reverse()
cp5 = [ "%s-->%s %s" % (mappings[rule[0][0]],mappings[rule[0][1]],rule[1])\
for rule in cp[0:15] ]
print 'Top 15 pairs by confidence:'
print "\n".join(cp5)
ct = sorted(tripleRules.iteritems(), key = operator.itemgetter(1))
ct.reverse()
ct5 = [ "{%s,%s}-->%s %s" % (mappings[rule[0][0][0]], \
mappings[rule[0][0][1]], \
mappings[rule[0][1][0]], \
rule[1])\
for rule in ct[0:15] ]
print 'Top 15 triples by confidence:'
print "\n".join(ct5)
"""
Explanation: Finally, sort the rules and take the top 15 in descending order
End of explanation
"""
def lift(J,conf):
if isinstance(J, tuple):
suppJ = itemCounts[J[0]]
else:
suppJ = itemCounts[J]
SJ = suppJ / transactions
return conf / SJ
liftedPairRules = { k:lift(k[1],v) for k,v in pairRules.iteritems()}
lp = sorted(liftedPairRules.iteritems(), key = operator.itemgetter(1))
lp.reverse()
lp5 = [ "%s-->%s %s" % (mappings[rule[0][0]],mappings[rule[0][1]],rule[1])\
for rule in lp[0:15] ]
print 'Top 15 pairs by lift:'
print "\n".join(lp5)
liftedTripleRules = { k:lift(k[1],v) for k,v in tripleRules.iteritems()}
lt = sorted(liftedTripleRules.iteritems(), key = operator.itemgetter(1))
lt.reverse()
lt5 = [ "{%s,%s}-->%s %s" % (mappings[rule[0][0][0]], \
mappings[rule[0][0][1]], \
mappings[rule[0][1][0]], \
rule[1])\
for rule in lt[0:15] ]
print 'Top 15 triples by lift:'
print "\n".join(lt5)
"""
Explanation: Generating Rules for lift
End of explanation
"""
def conv(J,conf):
if isinstance(J, tuple):
suppJ = itemCounts[J[0]]
else:
suppJ = itemCounts[J]
SJ = suppJ / transactions
conv = float('inf')
if not conf == 1:
conv = (1 - SJ)/(1 - conf)
return conv
convictedPairRules = { k:conv(k[1],v) for k,v in pairRules.iteritems()}
convp = sorted(convictedPairRules.iteritems(), key = operator.itemgetter(1))
convp.reverse()
convp5 = [ "%s-->%s %s" % (mappings[rule[0][0]],mappings[rule[0][1]],rule[1])\
for rule in convp[0:15] ]
print 'Top 15 pairs by conviction:'
print "\n".join(convp5)
convictedTripleRules = { k:conv(k[1],v) for k,v in tripleRules.iteritems()}
convt = sorted(convictedTripleRules.iteritems(), key = operator.itemgetter(1))
convt.reverse()
convt5 = [ "{%s,%s}-->%s %s" % (mappings[rule[0][0][0]], \
mappings[rule[0][0][1]], \
mappings[rule[0][1][0]], \
rule[1])\
for rule in convt[0:15] ]
print 'Top 15 triples by conviction:'
print "\n".join(convt5)
"""
Explanation: Generating Rules for conviction
End of explanation
"""
|
datascience-course/datascience-course.github.io
|
2016/assets/slides/03-hypothesis-testing-1.ipynb
|
mit
|
import scipy as sc
from scipy.stats import bernoulli
from scipy.stats import binom
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
"""
Explanation: Introduction to Data Science, CS 5963 / Math 3900
Lecture 3: Hypothesis Testing I
In this lecture, we'll have a brief glimpse at hypothesis testing. To get started, we'll introduce a few concepts from probability.
Required reading:
Grus, Ch.7 link
Further reading:
Jay L. Devore, Probability and Statistics for Engineering and the Sciences, 9th ed. Cengage Learning (2016) Ch. 8 and 9.
For a more complete treatment, take Math 3070 (Applied Statistics I).
End of explanation
"""
n = 1000;
coin_flips = bernoulli.rvs(p=0.5, size=n)
print(coin_flips)
"""
Explanation: Bernoulli Distribution
The Bernoulli distribution, named after Jacob Bernoulli, is the probability distribution of a random variable which takes the value 1 (success) with probability $p$ and the value 0 (failure) with probability $q=1-p$.
The Bernoulli distribution with $p=0.5$ (implying $q=0.5$) describes a 'fair' coin toss where 1 and 0 represent "heads" and "tails", respectively. If the coin is unfair, then we would have that $p\neq 0.5$.
End of explanation
"""
print(sum(coin_flips))
print(sum(coin_flips)/n)
"""
Explanation: How many heads did we get? We just count the number of 1's.
End of explanation
"""
n = 1000000
coin_flips = bernoulli.rvs(p=0.5, size=n)
print(sum(coin_flips)/n)
"""
Explanation: What if we flip the coin more times?
End of explanation
"""
p = 0.5
n = 10
bin_vars = binom.rvs(n=n,p=p,size=1000000)
print(bin_vars[:100])
bins=sc.arange(12)-.5
plt.hist(bin_vars, bins=bins,normed=True)
plt.title("A histogram of binomial random variables")
plt.xlim([-.5,10.5])
plt.show()
"""
Explanation: Some facts about Bernoulli variables:
* mean is p
* variance is p(1-p)
Binomial distribution
The binomial distribution, with parameters $n$ and $p$, is a discrete probability distribution "summarizing" the outcome of $n$ Bernoulli random variables. For simplicity, take $p=0.5$ so that the Bernoulli distribution describes the outcome of a coin. For each flip, the probability of heads is $p$ (so the probability of tails is $q=1-p$). But we don't keep track of the individual flips. We only keep track of how many heads/tails there were in total. So, the binomial distribution can be thought of as summarizing a bunch of (independent) Bernoulli random variables.
The following code is equivalent to flipping a fair (p=0.5) coin n=10 times and counting the number of heads and then repeating this process 1,000,000 times.
End of explanation
"""
f = lambda k: binom.pmf(k, n=n,p=p)
x = sc.arange(n+1);
plt.plot(x, f(x),'*-')
plt.title("The probability mass function for a Binomial random variable")
plt.xlim([0,n])
plt.show()
"""
Explanation: Some facts about the binomial distribution:
* The mean is $np$
* The variance is $np(1-p)$
Mathematical aside: Binomial (and Bernoulli) random variables are examples of discrete random variables since they can take only discrete values. A Bernoulli random variable can take values $0$ or $1$. A binomial random variable can only take values
$$
0,1,\ldots, n.
$$
One can compute the probability that the variable takes each value. This is called the probability mass function.
For a Bernoulli random variable, the probability mass function is given by
$$
f(k) = \begin{cases} p & k=1 \\ 1-p & k = 0 \end{cases}
$$
For a binomial random variable, the probability mass function is given by
$$
f(k) = \binom{n}{k} p^k (1-p)^{n-k}.
$$
Here, $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ is the number of ways to arrange the
$k$ heads among the $n$ flips. For a fair coin, we have $p=0.5$ and $f(k) = \binom{n}{k} \frac{1}{2^n}$. This is the number of ways to arrange $k$ heads among $n$ outcomes divided by the total number of outcomes.
The probability mass function can be plotted using the scipy library as follows.
End of explanation
"""
mu = 0 # mean
sigma = 1 # standard deviation
x = sc.arange(mu-4*sigma,mu+4*sigma,0.001);
pdf = norm.pdf(x,loc=mu, scale=sigma)
# Here, I could have also written
# pdf = 1/(sigma * sc.sqrt(2 * sc.pi)) * sc.exp( - (x - mu)**2 / (2 * sigma**2))
plt.plot(x, pdf, linewidth=2, color='k')
plt.show()
"""
Explanation: Observe that the probability mass function looks very much like the histogram plot! (not a coincidence)
Concept check: what is a random variable?
A random variable is an abstraction of a coin. It can take on a set of possible different values, each with a preassigned probability. A Bernoulli r.v. takes value $1$ with probability $p$ and $0$ with probability $1-p$. A binomial r.v. takes values $0,1,\ldots,n$, with a given probability. The probabilities are given by the probability mass function. This function looks just like a histogram if you were to sample a large number of random variables.
Quiz: what is the random variable that describes a fair die? the sum of two fair dice?
Normal (Gaussian) distribution
Roughly speaking, normal random variables are described by a "bell curve". The curve is centered at the mean, $\mu$, and has width given by the standard deviation, $\sigma$.
End of explanation
"""
plt.plot(x, pdf, linewidth=2, color='k')
x2 = sc.arange(mu-sigma,mu+sigma,0.001)
plt.fill_between(x2, y1= norm.pdf(x2,loc=mu, scale=sigma), facecolor='red', alpha=0.5)
plt.show()
"""
Explanation: A normal random variable is an example of a continuous random variable. A normal random variable can take any real value, but some numbers are more likely than others. More formally, we say that the probability density function (PDF) for the normal (Gaussian) distribution is
$$
f(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
$$
where $\mu$ is the mean and $\sigma$ is the standard deviation. What this means is that the probability that a normal random variable will take values in the interval $[a,b]$ is given by
$$
\int_a^b f(x) dx.
$$
This is just the area under the curve for this interval. For $a=\mu-\sigma$ and $b = \mu+\sigma$, we plot this below.
End of explanation
"""
norm.cdf(mu+sigma, loc=mu, scale=sigma) - norm.cdf(mu-sigma, loc=mu, scale=sigma)
"""
Explanation: This integral can be computed using the cumulative distribution function (CDF)
$$
F(x) = \int_{-\infty}^x f(t)\, dt.
$$
We have that
$$
\int_a^b f(x) dx = F(b) - F(a)
$$
End of explanation
"""
norm_vars = norm.rvs(loc=mu,scale=sigma,size=1000000)
print(norm_vars[:100])
plt.hist(norm_vars, bins=100,normed=True)
plt.plot(x, pdf, linewidth=2, color='k')
plt.title("A histogram of normal random variables")
plt.show()
"""
Explanation: This means that 68% of the time, this normal random variable will have values between $\mu-\sigma$ and $\mu+\sigma$.
You used to have to look these values up in a table!
Let's see what it looks like if we sample 1,000,000 normal random variables and then plot a histogram.
End of explanation
"""
n = 1000
p = 0.5
bin_vars = binom.rvs(n=n,p=p,size=10000)
plt.hist(bin_vars, bins='auto',normed=True)
mu = n*p
sigma = sc.sqrt(n*p*(1-p))
x = sc.arange(mu-4*sigma,mu+4*sigma,0.1);
pdf = norm.pdf(x, loc=mu, scale=sigma)
# Here, I could also write
# pdf = 1/(sigma * sc.sqrt(2 * sc.pi)) * sc.exp( - (x - mu)**2 / (2 * sigma**2) )
plt.plot(x, pdf, linewidth=2, color='k')
plt.title("A comparison between the histogram of binomial random \n variables and the normal distribution predicted by the CLT")
plt.show()
"""
Explanation: The histogram of the sampled variables looks just like the probability distribution function!
Central Limit Theorem
One of the reasons that the normal distribution is so important is the following theorem.
Central Limit Theorem. Under "some assumptions", the sum of a "large number" $n$ of (independent) random variables, each with a finite mean $\mu$ and variance $\sigma^2$, will be approximately normally distributed with mean $n\mu$ and variance $n\sigma^2$.
How can we use the central limit theorem (CLT)?
The CLT tells us that if $n$ is large, binomial random variables will be distributed in a certain way. That is, if we flip a coin many times, the number of heads that we're likely to see is described by a normal distribution. This will allow us to ask questions like: How unusual is it to flip a fair coin 1000 times and see 545 heads?
Suppose we flip a fair ($p=0.5$) coin 1000 times.
Question: How many heads do we expect to see?
The CLT says that the number of heads (= sum of Bernoulli r.v. = binomial r.v.) is approximately normally distributed with mean
$$
n\mu = np = 1000 \cdot 0.5 = 500
$$
and variance
$$
n \sigma^2 = np(1-p) = 1000 \cdot 0.5 \cdot 0.5 = 250.
$$
Let's do some experiments.
We call flipping a fair coin n=1,000 times and counting the number of heads a "simulation". Recall that the outcome is precisely a binomial random variable with n=1,000 and p = 0.5. We'll do 10,000 simulations and then compare the histogram of the binomial random variables and the normal distribution predicted by the CLT.
End of explanation
"""
n = 1000
p = 0.5
mu = n*p
sigma = sc.sqrt(n*p*(1-p))
print(norm.cdf(545, loc=mu, scale=sigma))
# a plot illustrating the integral
x = sc.arange(mu-4*sigma,mu+4*sigma,0.001);
plt.plot(x, norm.pdf(x, loc=mu, scale=sigma), linewidth=2, color='k')
x2 = sc.arange(mu-4*sigma,545,0.001)
plt.fill_between(x2, y1= norm.pdf(x2,loc=mu, scale=sigma), facecolor='red', alpha=0.5)
plt.xlim([mu-4*sigma,mu+4*sigma])
plt.show()
"""
Explanation: Hypothesis testing
So what is the likelihood of flipping a coin 1000 times and seeing less than 545 heads?
The CLT tells us that this is approximately
$$
\int_{-\infty}^{545} p(x) dx = F(545).
$$
This is something that we can easily evaluate using the cumulative distribution function (CDF).
End of explanation
"""
val_integral = norm.cdf(545, loc=mu, scale=sigma) - norm.cdf(455, loc=mu, scale=sigma)
print(val_integral)
print(1-val_integral)
"""
Explanation: So $99.8\%$ of the time, we would see fewer than 545 heads. So seeing 545 heads is very unlikely! It happens only $0.2\%$ of the time. This is so unlikely that we might declare that the coin is not fair!
This is precisely what hypothesis testing is.
In hypothesis testing, we make a null hypothesis, denoted $H_0$. In this case, the null hypothesis is
$$
H_0: \text{the coin is fair, i.e., $p=0.5$}.
$$
The alternative hypothesis, $H_a$, is typically the hypothesis that the researcher wants to validate. In this case, that the coin is unfair, i.e., $p\neq 0.5$.
We also choose a significance level for the test, $\alpha$, traditionally $1\%$ or $5\%$.
In this case, let's choose a significance level of $\alpha = 1\%$. We then perform an experiment. In this case, we flip the coin 1000 times and count the number of heads (in this case 545).
Finally, assuming the null hypothesis is true, we compute how likely it is to see a number that is at least as far from the expected value as the number obtained. To do this, we compute the integral
$$
\int_{455}^{545} p(x) dx = F(545) - F(455)
$$
Question: why this lower bound?
End of explanation
"""
mu = 15
sigma = sc.sqrt(5.72**2/137)
print(2*norm.cdf(2.42, loc=mu, scale=sigma))
"""
Explanation: Thus, $99.6\%$ of the time we see a value less extreme than 545. In other words, we would see either more than 545 heads or fewer than 455 heads only 0.4% of the time. This is called the P-value. Since the P-value is smaller than the chosen significance level, we reject the null hypothesis and declare the coin to be unfair.
Some comments about the p-value:
1. A p-value is a probability calculated assuming that $H_0$ is true.
+ The smaller the p-value, the stronger the evidence against $H_0$.
+ A p-value is not the probability that the null hypothesis is true or false. It is the probability that an erroneous conclusion is reached. (More on this next lecture)
Example: "Freshman 15", Fact or Fiction
This example was taken from Devore, pp.314-315.
"A common belief among the lay public is that body weight increases after entry into college, and the phrase 'freshman 15' has been coined to describe the 15 puunds that students presumably gain over their freshman year."
Let $\mu$ denote the true average weight gain in the first year of college. We take the null hypothesis to be
$$
H_0: \mu = 15
$$
We suppose a random sample of $n$ students is selected, their weights (before and after the first year of college) are measured, and the sample mean $\bar{x}$ and sample standard deviation $s$ are computed. An article in the journal Obesity (2006) cites that for a sample of $n=137$ students, the sample mean weight gain was $\bar{x}=2.42$ lb with a sample standard deviation of $s=5.72$ lb. Assuming $H_0$ to be true, how unlikely is it that we would observe such a small value?
We take a normal distribution with mean given by the null value ($\mu = 15$) and variance given by $s^2/n = (5.72)^2/137=0.2388$.
End of explanation
"""
|
maigimenez/trolls
|
Notebooks/0. Gather data.ipynb
|
mit
|
config = ConfigParser()
config.read(join(pardir,'src','credentials.ini'))
APP_KEY = config['twitter']['app_key']
APP_SECRET = config['twitter']['app_secret']
OAUTH_TOKEN = config['twitter']['oauth_token']
OAUTH_TOKEN_SECRET = config['twitter']['oauth_token_secret']
from twitter import oauth, Twitter, TwitterHTTPError
"""
Explanation: Configure the credentials to access the Twitter API
End of explanation
"""
auth = oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
APP_KEY, APP_SECRET)
twitter_api = Twitter(auth=auth)
twitter_api.retry = True
"""
Explanation: This is the neat library we are going to use: https://github.com/sixohsix/twitter/tree/master
End of explanation
"""
tweet = twitter_api.statuses.show(_id='628949369883000832')
tweet['text']
"""
Explanation: 1. Collect tweets from a tweet id
End of explanation
"""
femfreq_tweet_search = twitter_api.statuses.user_timeline(screen_name="femfreq", count=100)
femfreq_tweet_search[0]['user']['description']
femfreq_tweet_search[-1]['text']
"""
Explanation: 2. Collect tweets from a user
End of explanation
"""
tweets = twitter_api.search.tweets(q="#feminazi", count=100)
tweets['search_metadata']
import pandas as pd
text_gathered = [tweet_data['text'] for tweet_data in tweets['statuses']]
num_tweets = len(text_gathered)
pd_tweets = pd.DataFrame( {'tweet_text': text_gathered,
'troll_tag': [False] * num_tweets})
pd_tweets.head()
pd_tweets.to_csv('maybe_troll.csv')
ls
"""
Explanation: 3. Collect tweets from a search query
End of explanation
"""
|
kaka0525/Process-Bike-Share-data-with-Pandas
|
bikeshare.ipynb
|
mit
|
import pandas as pd
import numpy as np
weather = pd.read_table("daily_weather.tsv")
usage = pd.read_table("usage_2012.tsv")
station = pd.read_table("stations.tsv")
"""
Explanation: <strong>Process Bike-Share data with Pandas</strong>
End of explanation
"""
weather
mean = weather.groupby('season_desc')['temp'].mean()
mean
weather.loc[weather['season_code'] == 1, 'season_desc'] = 'winter'
weather.loc[weather['season_code'] == 2, 'season_desc'] = 'spring'
weather.loc[weather['season_code'] == 3, 'season_desc'] = 'summer'
weather.loc[weather['season_code'] == 4, 'season_desc'] = 'fall'
weather
"""
Explanation: Question 1: Compute the average temperature by season ('season_desc'). Figure out what's wrong with the original data and fix it.
End of explanation
"""
weather['date'] = pd.to_datetime(weather['date'])
type(weather['date'][1])
month_rental = weather.groupby(weather['date'].dt.month)['total_riders'].sum()
month_rental
"""
Explanation: Question 2: Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month.
End of explanation
"""
weather
mean
month_rental
weather[['temp', 'total_riders']].corr()
weather['month'] = pd.DatetimeIndex(weather.date).month
weather[['month', 'temp', 'total_riders']].corr()
weather[['total_riders', 'temp', 'month']].groupby('month').corr()
"""
Explanation: Question 3: Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months?
End of explanation
"""
weather[['total_riders', 'temp', 'season_desc']].groupby('season_desc').corr()
"""
Explanation: The positive correlation is consistent throughout months.
End of explanation
"""
usage
usage.groupby("cust_type")["duration_mins"].mean()
"""
Explanation: And this is consistent throughout seasons as well.
4. There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?
End of explanation
"""
season_group = weather.groupby(weather["season_desc"])["no_casual_riders", "no_reg_riders"].sum()
season_group
"""
Explanation: The casual bikers tend to ride their bikes longer per bike session in comparison to registered and subscriber bikers.
End of explanation
"""
weather.groupby(weather['weather_desc'])['no_casual_riders', 'no_reg_riders'].sum()
"""
Explanation: There are always more registered riders than casual riders. Summer has the highest number of riders in both the casual and registered rider categories.
End of explanation
"""
|
NLeSC/noodles
|
notebooks/An interactive introduction.ipynb
|
apache-2.0
|
from noodles import schedule
@schedule
def add(x, y):
return x + y
@schedule
def mul(x,y):
return x * y
"""
Explanation: An interactive introduction to Noodles: translating Poetry
Noodles is there to make your life easier, in parallel! The reason why Noodles can be easy and do parallel Python at the same time is its functional approach. In one part you'll define a set of functions that you'd like to run with Noodles, in an other part you'll compose these functions into a workflow graph. To make this approach work a function should not have any side effects. Let's not linger and just start noodling! First we define some functions to use.
End of explanation
"""
a = add(1, 1)
b = mul(a, 2)
c = add(a, a)
d = mul(b, c)
"""
Explanation: Now we can create a workflow composing several calls to these functions.
End of explanation
"""
from noodles.tutorial import get_workflow_graph
import ipywidgets as widgets
widgets.HBox([
widgets.VBox([
widgets.HTML('<b>{}</b>'.format(k)),
widgets.HTML(value=get_workflow_graph(w).pipe(format='svg').decode())])
for k, w in {'a': a, 'b': b, 'c': c, 'd': d}.items()])
"""
Explanation: That looks easy enough; the funny thing, though, is that nothing has been computed yet! Noodles just created the workflow graphs corresponding to the values that still need to be computed. Until such time, we work with the promise of a future value. Using some functionality from pygraphviz we can look at the call graphs.
End of explanation
"""
from noodles import run_parallel
run_parallel(d, n_threads=2)
"""
Explanation: Now, to compute the result we have to tell Noodles to evaluate the program.
End of explanation
"""
import urllib.request
import json
import re
class Translate:
"""Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster."""
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
def word(self, phrase):
# translation = self.query_phrase(phrase)
translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return space.format(*map(self.word, words))
"""
Explanation: Making loops
That's all swell, but how do we make a parallel loop? Let's look at a map operation; in Python there are several ways to perform a function on all elements in an array. For this example, we will translate some words using the Glosbe service, which has a nice REST interface. We first build some functionality to use this interface.
End of explanation
"""
shakespeare = [
"If music be the food of love, play on,",
"Give me excess of it; that surfeiting,",
"The appetite may sicken, and so die."]
def print_poem(intro, poem):
print(intro)
for line in poem:
print(" ", line)
print()
print_poem("Original:", shakespeare)
"""
Explanation: We start with a list of strings that desperately need translation.
End of explanation
"""
shakespeare_auf_deutsch = []
for line in shakespeare:
shakespeare_auf_deutsch.append(
Translate('en', 'de').sentence(line))
print_poem("Auf Deutsch:", shakespeare_auf_deutsch)
"""
Explanation: Beginning Python programmers like to append things; this is not how you are
supposed to program in Python; if you do, please go and read Jeff Knupp's Writing Idiomatic Python.
End of explanation
"""
shakespeare_ynt_frysk = \
(Translate('en', 'fy').sentence(line) for line in shakespeare)
print_poem("Yn it Frysk:", shakespeare_ynt_frysk)
"""
Explanation: Rather, use a comprehension like so:
End of explanation
"""
shakespeare_pa_dansk = \
map(Translate('en', 'da').sentence, shakespeare)
print_poem("På Dansk:", shakespeare_pa_dansk)
"""
Explanation: Or use map:
End of explanation
"""
from noodles import schedule
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
import urllib.request
import json
import re
class Translate:
"""Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster."""
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
@schedule
def word(self, phrase):
translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
# translation = self.query_phrase(phrase)
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return format_string(space, *map(self.word, words))
def __str__(self):
return "[{} -> {}]".format(self.src, self.tgt)
def __serialize__(self, pack):
return pack({'src_lang': self.src,
'tgt_lang': self.tgt})
@classmethod
def __construct__(cls, msg):
return cls(**msg)
"""
Explanation: Noodlify!
If your connection is a bit slow, you may find that the translations take a while to process. Wouldn't it be nice to do it in parallel? How much code would we have to change to get there in Noodles? Let's take the slow part of the program and add a @schedule decorator, and run! Sadly, it is not that simple. We can add @schedule to the word method. This means that it will return a promise.
Rule: Functions that take promises need to be scheduled functions, or refer to a scheduled function at some level.
We could write
return schedule(space.format)(*(self.word(w) for w in words))
in the last line of the sentence method, but the string format method doesn't support wrapping. We rely on getting the signature of a function by calling inspect.signature. In some cases of built-in functions this raises an exception. We may find a workaround for these cases in future versions of Noodles. For the moment we'll have to define a little wrapper function.
End of explanation
"""
from noodles import gather, run_parallel
shakespeare_en_esperanto = \
map(Translate('en', 'eo').sentence, shakespeare)
wf = gather(*shakespeare_en_esperanto)
result = run_parallel(wf, n_threads=8)
print_poem("Shakespeare en Esperanto:", result)
"""
Explanation: Let's take stock of the mutations to the original. We've added a @schedule decorator to word, and changed a function call in sentence. Also we added the __str__ method; this is only needed to plot the workflow graph. Let's run the new script.
End of explanation
"""
# if you know a way to shrink this image down, please send me a pull request
widgets.HTML(get_workflow_graph(wf).pipe(format='svg').decode())
"""
Explanation: The last peculiar thing that you may notice is the gather function. It collects the promises that map generates and creates a single new promise. The definition of gather is very simple:
@schedule
def gather(*lst):
return lst
The workflow graph of the Esperanto translator script looks like this:
End of explanation
"""
from noodles import (schedule, gather_all)
import re
@schedule
def count_word_size(word):
return len(word)
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
def word_sizes(phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
word_lengths = map(count_word_size, words)
return format_string(space, *word_lengths)
from noodles.run.threading.vanilla import run_parallel
line = "Oote oote oote, Boe"
run_parallel(word_sizes(line), n_threads=4)
"""
Explanation: Dealing with repetition
In the following example we have a line with some repetition. It would be a shame to look up the repeated words twice, wouldn't it? Let's build a little counter routine to check if everything is working.
End of explanation
"""
# import logging
# logging.FileHandler(filename='mylog.log', mode='a')
from noodles.run.threading.sqlite3 import run_parallel
from noodles import serial
wf = Translate('de', 'fr').sentence(line)
run_parallel(wf, n_threads=4, registry=serial.base, db_file='jobs.db')
"""
Explanation: To see how this program is being run, we monitor the job submission, retrieval and result storage in a Sqlite3 database.
End of explanation
"""
from noodles import schedule
@schedule
class A:
def __init__(self, value):
self.value = value
@property
def square(self):
return self.value**2
@square.setter
def square(self, sqr):
self.value = sqr**(1/2)
def __str__(self):
return "[A {}]".format(self.value)
"""
Explanation: Try running the above cells again, and see what happens!
Objects in Noodles
We've already seen that we can @schedule class methods, just as easily as functions. What if a promised value represents an object? Noodles actually catches references and assignments to perceived members of promised objects and translates them into function calls. We will have another example (this time a bit smaller) to show how this works. We will compute the result of the Pythagorean theorem by using setters and getters. Python has a beautiful way of capturing reference and assignment to member variables by means of the @property decorator. This concept allows Noodles to catch these in a most generic way.
End of explanation
"""
from noodles import run_single
from noodles.tutorial import add
u = A(3.0)
v = A(4.0)
u.square = add(u.square, v.square)
w = u.value
run_single(w)
get_workflow_graph(w)
"""
Explanation: We can now treat this object as normal in the user script, and do the following
End of explanation
"""
from noodles import (gather)
from noodles.tutorial import (sub, mul, accumulate)
from noodles.display import (DumbDisplay)
from noodles.run.runners import (run_parallel_with_display)
import time
@schedule(display="| {a} + {b}", confirm=True)
def add(a, b):
time.sleep(0.5)
return a + b
@schedule(display="{msg}")
def message(msg, value=0):
return value()
def test_logging():
A = add(1, 1)
B = sub(3, A)
multiples = [mul(add(i, B), A) for i in range(6)]
C = accumulate(gather(*multiples))
wf = message("\n+---(Running the test)", lambda: C)
with DumbDisplay() as display:
result = run_parallel_with_display(wf, n_threads=4, display=display)
print("\nThe answer is ", result)
"""
Explanation: Note that, to make this work in general parallel situations, the _setattr function has to create a deepcopy of the object and then return the modified object; so this style of programming can become quite expensive. A better solution would be to create a layered system, where updates only affect the values that are being updated.
User messages
If jobs take a long time (>1s) to run, it is nice to give the user a message when they start, when they finish, and whether they succeeded. Noodles has an adaptor for runners to display messages.
End of explanation
"""
import threading
threading.Thread(target=test_logging, daemon=True).start()
"""
Explanation: We imported some predefined functions from noodles.tutorial. A new pattern that we haven't seen before is @schedule with hint arguments (here a display string and a confirm flag). It does the same as a plain @schedule, but it also attaches some information to the function. This can be anything. Here we add a display string. This string is formatted using the arguments to the function that is being called.
End of explanation
"""
|
dsacademybr/PythonFundamentos
|
Cap07/DesafioDSA/Missao2/missao2.ipynb
|
gpl-3.0
|
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 7</font>
Download: http://github.com/dsacademybr
End of explanation
"""
import math
class PrimeGenerator(object):
    def generate_primes(self, max_num):
        # Implement your solution here
        pass

    def _cross_off(self, array, prime):
        # Implement your solution here
        pass

    def _next_prime(self, array, prime):
        # Implement your solution here
        pass
"""
Explanation: Mission: Generate a list of prime numbers.
Difficulty Level: Medium
Assumptions
Is it correct that 1 is not considered a prime number?
* Yes
Can we assume the inputs are valid?
* No
Can we assume the result fits in memory?
* Yes
Test Cases
None -> Exception
Not an int -> Exception
20 -> [False, False, True, True, False, True, False, True, False, False, False, True, False, True, False, False, False, True, False, True]
Algorithm
For a number to be prime, it must be 2 or greater and must not be divisible by any number other than itself (and 1).
Every non-prime number is divisible by a prime number.
Use an array to keep track of each integer up to the maximum
Start at 2, end at sqrt(max)
* We can use sqrt(max) instead of max because:
* For each value a that divides the input number evenly, there is a complement b where a * b = n
* If a > sqrt(n) then b < sqrt(n), because sqrt(n^2) = n
* "Cross off" all numbers divisible by 2, 3, 5, 7, ... by setting array[index] to False
Animation from Wikipedia:
Solution
End of explanation
"""
%%writefile missao2.py
from nose.tools import assert_equal, assert_raises
class TestMath(object):
def test_generate_primes(self):
prime_generator = PrimeGenerator()
assert_raises(TypeError, prime_generator.generate_primes, None)
assert_raises(TypeError, prime_generator.generate_primes, 98.6)
assert_equal(prime_generator.generate_primes(20), [False, False, True,
True, False, True,
False, True, False,
False, False, True,
False, True, False,
False, False, True,
False, True])
        print('Your solution ran successfully! Congratulations!')
def main():
test = TestMath()
test.test_generate_primes()
if __name__ == '__main__':
main()
%run -i missao2.py
"""
Explanation: Testing the Solution
End of explanation
"""
|
YuriyGuts/kaggle-quora-question-pairs
|
notebooks/preproc-extract-unique-questions.ipynb
|
mit
|
from pygoose import *
import nltk
"""
Explanation: Preprocessing: Unique Question Corpus
Based on the training and test sets, extract a list of unique documents.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
"""
project = kg.Project.discover()
"""
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
"""
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('')
"""
Explanation: Read data
Original question datasets.
End of explanation
"""
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
"""
Explanation: Load tools
End of explanation
"""
df = pd.concat([df_train, df_test])
unique_question_texts = [
question.strip(' \'"')
for question in np.unique(df[['question1', 'question2']].values.ravel())
]
"""
Explanation: Remove duplicate questions
End of explanation
"""
def tokenize_question_text(q):
return tokenizer.tokenize(q.lower())
unique_question_tokens = kg.jobs.map_batch_parallel(
unique_question_texts,
item_mapper=tokenize_question_text,
batch_size=1000,
)
"""
Explanation: Tokenize unique questions
End of explanation
"""
kg.io.save_lines(unique_question_texts, project.preprocessed_data_dir + 'unique_questions_raw.txt')
kg.io.save(unique_question_tokens, project.preprocessed_data_dir + 'unique_questions_tokenized.pickle')
"""
Explanation: Save preprocessed data
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.3/tutorials/meshes.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: Advanced: Accessing and Plotting Meshes
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
b.add_dataset('lc', times=phoebe.linspace(0,1,6))
b.add_dataset('mesh')
"""
Explanation: The 'Mesh' Dataset
You must create a mesh dataset and specify the times and columns which you'd like exposed. For more information, see the tutorial on the MESH dataset.
The mesh will be exposed at the times specified by the compute_times Parameter, as well as any times referenced by the include_times SelectParameter.
So let's add an LC and MESH datasets.
End of explanation
"""
print(b.get_parameter(qualifier='compute_times', kind='mesh'))
print(b.get_parameter(qualifier='include_times', kind='mesh'))
"""
Explanation: Unlike other datasets, the mesh dataset cannot accept actual observations, so there is no times parameter, only the compute_times and compute_phases parameters. For more details on these, see the Advanced: Compute Times & Phases tutorial.
End of explanation
"""
b.set_value('compute_times', kind='mesh', value=[10])
b.set_value('include_times', kind='mesh', value=['lc01'])
b.run_compute()
print(b.filter(kind='mesh', context='model').times)
"""
Explanation: Note that we can manually set the times of the mesh AND/OR reference the times for existing non-mesh datasets (such as the light curve we just added) as well as any of the various t0s in the system.
End of explanation
"""
print(b.filter(kind='mesh', context='model').qualifiers)
"""
Explanation: By default, the mesh only exposes the geometric columns of the triangles, in both plane-of-sky and roche coordinates.
End of explanation
"""
print(b.get_parameter(qualifier='columns', kind='mesh', context='dataset'))
b.set_value('columns', value=['teffs'])
b.run_compute()
print(b.filter(kind='mesh', context='model').qualifiers)
print(b.get_value('teffs', time=0.0, component='primary'))
"""
Explanation: But we can also specify other columns to be included (by setting the columns SelectParameter before calling run_compute)
End of explanation
"""
afig, mplfig = b.plot(kind='mesh', time=0.2, fc='teffs', ec='none', show=True)
"""
Explanation: Any of the exposed columns are then available for plotting the mesh, via b.plot.
End of explanation
"""
print(b.get_parameter(qualifier='coordinates', kind='mesh', context='dataset'))
b.set_value('coordinates', value=['uvw'])
b.run_compute()
print(b.filter(kind='mesh', context='model').qualifiers)
"""
Explanation: Additionally, if we know that we only want to expose (and plot) the mesh in plane-of-sky, we can save some computation time by omitting roche coordinates when computing the model. This is done via the coordinates SelectParameter.
End of explanation
"""
|
tclaudioe/Scientific-Computing
|
SC1/10_GMRes.ipynb
|
bsd-3-clause
|
import numpy as np
import scipy as sp
from scipy import linalg as la
import matplotlib.pyplot as plt
import scipy.sparse.linalg
%matplotlib inline
#%load_ext memory_profiler
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
M=8
"""
Explanation: <center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Generalized Minimal Residual Method </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.21</h2>
</center>
Table of Contents
Introduction
Short reminder about Least Squares
GMRes
Theoretical Problems
Practical Problems
Acknowledgements
End of explanation
"""
# This is a very instructive implementation of GMRes.
def GMRes(A, b, x0=np.array([0.0]), m=10, flag_display=True, threshold=1e-12):
n = len(b)
if len(x0)==1:
x0=np.zeros(n)
r0 = b - np.dot(A, x0)
nr0=np.linalg.norm(r0)
out_res=np.array(nr0)
Q = np.zeros((n,n))
H = np.zeros((n,n))
Q[:,0] = r0 / nr0
flag_break=False
for k in np.arange(np.min((m,n))):
y = np.dot(A, Q[:,k])
if flag_display:
print('||y||=',np.linalg.norm(y))
for j in np.arange(k+1):
H[j][k] = np.dot(Q[:,j], y)
if flag_display:
print('H[',j,'][',k,']=',H[j][k])
y = y - np.dot(H[j][k],Q[:,j])
if flag_display:
print('||y||=',np.linalg.norm(y))
# All but the last equation are treated equally. Why?
if k+1<n:
H[k+1][k] = np.linalg.norm(y)
if flag_display:
print('H[',k+1,'][',k,']=',H[k+1][k])
if (np.abs(H[k+1][k]) > 1e-16):
Q[:,k+1] = y/H[k+1][k]
else:
print('flag_break has been activated')
flag_break=True
# Do you remember e_1? The canonical vector.
e1 = np.zeros((k+1)+1)
e1[0]=1
H_tilde=H[0:(k+1)+1,0:k+1]
else:
H_tilde=H[0:k+1,0:k+1]
# Solving the 'SMALL' least square problem.
# This could be improved with Givens rotations!
ck = np.linalg.lstsq(H_tilde, nr0*e1)[0]
if k+1<n:
x = x0 + np.dot(Q[:,0:(k+1)], ck)
else:
x = x0 + np.dot(Q, ck)
# Why is 'norm_small' equal to 'norm_full'?
norm_small=np.linalg.norm(np.dot(H_tilde,ck)-nr0*e1)
out_res = np.append(out_res,norm_small)
if flag_display:
norm_full=np.linalg.norm(b-np.dot(A,x))
print('..........||b-A\,x_k||=',norm_full)
print('..........||H_k\,c_k-nr0*e1||',norm_small);
if flag_break:
if flag_display:
print('EXIT: flag_break=True')
break
if norm_small<threshold:
if flag_display:
print('EXIT: norm_small<threshold')
break
return x,out_res
"""
Explanation: <div id='intro' />
Introduction
Welcome to another edition of our Jupyter Notebooks. A few notebooks back, we saw that the Conjugate Gradient Method, an iterative method, was very useful for solving $A\,\mathbf{x}=\mathbf{b}$, but it only worked when $A$ was symmetric and positive definite. So now we need an iterative method that works with nonsymmetric linear systems of equations, and for that we have the Generalized Minimum Residual Method (GMRes). It works really well for finding the solution of large, sparse (and dense as well) nonsymmetric linear systems of equations. Of course, it will still struggle with ill-conditioned linear systems of equations, but it is easy to add a left preconditioner, a right preconditioner, or both!
<div id='LS' />
A quick review on Least Squares
Least Squares is used to solve overdetermined linear systems of equations $A\,\mathbf{x} = \mathbf{b}$, that is, for example, linear systems of equations with more equations than unknowns. It finds the best $\overline{\mathbf{x}}$ in the sense that it minimizes the Euclidean length of $\mathbf{r} = \mathbf{b} - A\,\mathbf{x}$.
So, you might be wondering, what does Least Squares have to do with GMRes? WELL, since you're dying to know, I'll tell you: the backward error of the system in GMRes is minimized at each iteration step using a Least Squares formulation.
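As a tiny refresher (a sketch, not part of the original notebook), this is how an overdetermined system is solved in the least-squares sense with NumPy; GMRes solves exactly this kind of small problem for its Hessenberg system at every iteration:
Python
import numpy as np
# 3 equations, 2 unknowns: in general there is no exact solution.
A = np.array([[1., 1.], [1., 2.], [1., 3.]])
b = np.array([1., 2., 2.])
x_bar, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - np.dot(A, x_bar)   # residual with the smallest Euclidean norm
print(x_bar, np.linalg.norm(r))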
<div id='GMR' />
GMRes
GMRes is a member of the family of Krylov methods. It finds an approximation of $\mathbf{x}$ restricted to live on the Krylov sub-space $\mathcal{K_k}$, where $\mathcal{K_k}=\{\mathbf{r}_0, A\,\mathbf{r}_0, A^2\,\mathbf{r}_0, \cdots, A^{k-1}\,\mathbf{r}_0\}$ and $\mathbf{r}_0 = \mathbf{b} - A\,\mathbf{x}_0$ is the residual vector of the initial guess.
The idea behind this method is to look for improvements to the initial guess $\mathbf{x}_0$ in the Krylov space. At the $k$-th iteration, we enlarge the Krylov space by adding $A^k\,\mathbf{r}_0$, reorthogonalize the basis, and then use least squares to find the best improvement to add to $\mathbf{x}_0$.
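To make the idea concrete, here is a small sketch (not part of the original notebook) that builds the first few Krylov vectors directly; in the implementation below they are orthonormalized on the fly by the inner orthogonalization loop:
Python
import numpy as np
A = np.random.rand(5, 5) + 5 * np.eye(5)
b = np.random.rand(5)
x0 = np.zeros(5)
r0 = b - np.dot(A, x0)
K = [r0]
for _ in range(3):
    K.append(np.dot(A, K[-1]))   # r0, A r0, A^2 r0, A^3 r0
print(np.array(K).shape)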
The algorithm is as follows:
Generalized Minimum Residual Method
$\mathbf{x}_0$ = initial guess<br>
$\mathbf{r}$ = $\mathbf{b} - A\,\mathbf{x}_0$<br>
$\mathbf{q}_1$ = $\mathbf{r} / \|\mathbf{r}\|_2$<br>
for $k = 1, ..., m$<br>
$\qquad \ \ \mathbf{y} = A\,\mathbf{q}_k$<br>
$\qquad$ for $j = 1,2,...,k$ <br>
$\qquad \qquad$ $h_{jk} = \mathbf{q}_j^*\,\mathbf{y}$<br>
$\qquad \qquad$ $\mathbf{y} = \mathbf{y} - h_{jk}\, \mathbf{q}_j$<br>
$\qquad$ end<br>
$\qquad \ h_{k+1,k} = \|y\|_2 \qquad$ (If $h_{k+1,k} = 0$, skip next line and terminate at bottom.) <br>
$\qquad \ \mathbf{q}_{k+1} = \mathbf{y}/h_{k+1,k}$ <br>
$\qquad$ Minimize $\left\|\widehat{H}_k\, \mathbf{c}_k - [\|\mathbf{r}\|_2 \ 0 \ 0 \ ... \ 0]^T \right\|_2$ for $\mathbf{c}_k$ <br>
$\qquad$ $\mathbf{x}_k = Q_k \, \mathbf{c}_k + \mathbf{x}_0$ <br>
end
Now we have to implement it.
End of explanation
"""
A = np.array([[1,1,0],[0,1,0],[0,1,1]])
b = np.array([1,2,3])
x0 = np.zeros(3)
# scipy gmres
x_scipy = scipy.sparse.linalg.gmres(A,b,x0)[0]
# our gmres
x_our, _ = GMRes(A, b)
# numpy solve
x_np= np.linalg.solve(A,b)
# Showing the solutions
print('--------------------------------')
print('x_scipy',x_scipy)
print('x_our',x_our)
print('x_np',x_np)
"""
Explanation: A very simple example
End of explanation
"""
A = np.array([[0,0,0,1],[1,0,0,0],[0,1,0,0],[0,0,1,0]])
b = np.array([1,0,1,0])
x_our, _ = GMRes(A, b, m=10)
norm_full=np.linalg.norm(b-np.dot(A,x_our))
print(norm_full)
A = np.random.rand(10,10)+10*np.eye(10)
b = np.random.rand(10)
x_our, out_res = GMRes(A, b, m=10,flag_display=True)
norm_full=np.linalg.norm(b-np.dot(A,x_our))
print(norm_full)
"""
Explanation: Another example: how many iterations does it need to converge?
End of explanation
"""
plt.figure(figsize=(M,M))
plt.semilogy(out_res,'.k',markersize=20,label='residual')
plt.grid(True)
plt.xlabel(r'$k$')
plt.ylabel(r'$\|\mathbf{b}-A\,\mathbf{x}_k\|_2$')
plt.grid(True)
plt.show()
"""
Explanation: Plotting the residual over the iterations
End of explanation
"""
|
megbedell/wobble
|
notebooks/espresso.ipynb
|
mit
|
data = wobble.Data()
filenames = glob.glob('/Users/mbedell/python/wobble/data/toi/TOI-*_CCF_A.fits')
for filename in tqdm(filenames):
try:
sp = wobble.Spectrum()
sp.from_ESPRESSO(filename, process=True)
data.append(sp)
except Exception as e:
print("File {0} failed; error: {1}".format(filename, e))
data.write('../data/toi.hdf5')
"""
Explanation: These cells are used to pre-process the data.
They only need to be run once, and after that the saved data file can be loaded up from disk.
End of explanation
"""
data = wobble.Data(filename='../data/toi.hdf5')
R = np.copy(data.R) # we'll need this later
data
data.drop_bad_orders(min_snr=3)
data.drop_bad_epochs(min_snr=3)
data.orders
r = 0
good = data.ivars[r] > 0.
for e in [0,10,20]:
plt.errorbar(data.xs[r][e][good[e]], data.ys[r][e][good[e]],
1./np.sqrt(data.ivars[r][e][good[e]]), ls='', fmt='o', ms=2, alpha=0.5)
plt.title('Echelle order #{0}'.format(data.orders[r]), fontsize=14);
"""
Explanation: Load the data
End of explanation
"""
star_filename = '../wobble/regularization/toi_star.hdf5'
tellurics_filename = '../wobble/regularization/toi_tellurics.hdf5'
wobble.generate_regularization_file(star_filename, R, type='star')
wobble.generate_regularization_file(tellurics_filename, R, type='telluric')
plot_dir = '../regularization/toi/'
if not os.path.exists(plot_dir):
os.makedirs(plot_dir)
"""
Explanation: Generate regularization parameter files & tune them
Since we don't have any existing regularization parameter files for ESPRESSO, we have to make some new ones.
This is needed because the default wobble regularization is tuned to HARPS, which has a different number of spectral orders and different wavelength coverage - if we try to run with those files, the optimization will (a) be non-optimal and (b) eventually crash when we try to access an order than does not exist for HARPS.
End of explanation
"""
validation_epochs = np.random.choice(data.N, data.N//6, replace=False) # 3 epochs for validation set
r = 100
for e in [validation_epochs[0]]:
plt.errorbar(data.xs[r][e][good[e]], data.ys[r][e][good[e]],
1./np.sqrt(data.ivars[r][e][good[e]]), ls='', fmt='o', ms=2, alpha=0.5)
"""
Explanation: We'll tune the regularization using a train-and-validate approach, so let's set aside some epochs to be the validation set:
End of explanation
"""
r = 100
o = data.orders[r]
objs = wobble.setup_for_order(r, data, validation_epochs)
wobble.improve_order_regularization(o, star_filename, tellurics_filename,
*objs,
verbose=False, plot=False,
basename='{0}o{1}'.format(plot_dir, o),
K_t=0, L1=True, L2=True)
"""
Explanation: Here's an example of how this regularization tuning will go for one order:
End of explanation
"""
|
spectralDNS/shenfun
|
binder/stokes.ipynb
|
bsd-2-clause
|
import os
import sys
import numpy as np
from sympy import symbols, sin, cos
from shenfun import *
"""
Explanation: <!-- dom:TITLE: Demo - Stokes equations -->
Demo - Stokes equations
<!-- dom:AUTHOR: Mikael Mortensen Email:mikaem@math.uio.no at Department of Mathematics, University of Oslo. -->
<!-- Author: -->
Mikael Mortensen (email: mikaem@math.uio.no), Department of Mathematics, University of Oslo.
Date: January 23, 2019
Summary. The Stokes equations describe the flow of highly viscous fluids.
This is a demonstration of how the Python module shenfun can be used to solve Stokes
equations using a mixed (coupled) basis in a 3D tensor product domain.
We assume homogeneous Dirichlet boundary conditions in one direction
and periodicity in the remaining two. The solver described runs with MPI
without any further considerations required from the user.
The solver assembles a block matrix with sparsity pattern as shown below
for the Legendre basis.
<!-- dom:FIGURE: [https://rawgit.com/spectralDNS/spectralutilities/master/figures/BlockMat.png] Coupled block matrix for Stokes equations. <a id="fig:BlockMat"></a> -->
<!-- begin figure -->
<a id="fig:BlockMat"></a>
<p>Coupled block matrix for Stokes equations.</p>
<img src="https://rawgit.com/spectralDNS/spectralutilities/master/figures/BlockMat.png" >
<!-- end figure -->
Stokes' equations
<a id="demo:stokes"></a>
Stokes' equations are given in strong form as
$$
\begin{align}
\nabla^2 \boldsymbol{u} - \nabla p &= \boldsymbol{f} \quad \text{in } \Omega, \\
\nabla \cdot \boldsymbol{u} &= h \quad \text{in } \Omega \\
\int_{\Omega} p dx &= 0
\end{align}
$$
where $\boldsymbol{u}$ and $p$ are, respectively, the
fluid velocity vector and pressure, and the domain
$\Omega = [0, 2\pi]^2 \times [-1, 1]$. The flow is assumed periodic
in $x$ and $y$-directions, whereas there is a no-slip homogeneous Dirichlet
boundary condition on $\boldsymbol{u}$ on the boundaries of the $z$-direction, i.e.,
$\boldsymbol{u}(x, y, \pm 1) = (0, 0, 0)$. (Note that we can configure shenfun with
non-periodicity in any of the three directions. However, since we are to
solve linear algebraic systems in the non-periodic direction, there is a speed
benefit from having the nonperiodic direction last. This has to do with Numpy
using a C-style row-major storage of arrays by default.)
The right hand side vector $\boldsymbol{f}(\boldsymbol{x})$ is an externally applied body force.
The right hand side $h$ is usually zero in the regular Stokes equations. Here
we include it because it will be nonzero in the verification, which is using the
method of manufactured solutions. Note that the final $\int_{\Omega} p dx = 0$
is there because there is no Dirichlet boundary condition on the pressure
and the system of equations would otherwise be ill conditioned.
To solve Stokes' equations with the Galerkin method we need basis
functions for both velocity and pressure. A
Dirichlet basis will be used for velocity, whereas there is no boundary restriction
on the pressure basis. For both three-dimensional bases we will use one basis
function for the $x$-direction,
$\mathcal{X}(x)$, one for the $y$-direction, $\mathcal{Y}(y)$, and one for the
$z$-direction, $\mathcal{Z}(z)$. And
then we create three-dimensional basis functions like
<!-- Equation labels as ordinary links -->
<a id="_auto1"></a>
$$
\begin{equation}
v(x, y, z) = \mathcal{X}(x) \mathcal{Y}(y) \mathcal{Z} (z).
\label{_auto1} \tag{1}
\end{equation}
$$
The basis functions $\mathcal{X}(x)$ and $\mathcal{Y}(y)$ are chosen as Fourier
exponentials, since these functions are periodic:
<!-- Equation labels as ordinary links -->
<a id="_auto2"></a>
$$
\begin{equation}
\mathcal{X}_l(x) = e^{\imath l x}, \forall \, l \in \boldsymbol{l}^{N_0},
\label{_auto2} \tag{2}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<a id="_auto3"></a>
$$
\begin{equation}
\mathcal{Y}_m(y) = e^{\imath m y}, \forall \, m \in \boldsymbol{m}^{N_1},
\label{_auto3} \tag{3}
\end{equation}
$$
where $\boldsymbol{l}^{N_0} = (-N_0/2, -N_0/2+1, \ldots, N_0/2-1)$ and
$\boldsymbol{m}^{N_1} = (-N_1/2, -N_1/2+1, \ldots, N_1/2-1)$.
The size of the discretized problem in real physical space is
$\boldsymbol{N} = (N_0, N_1, N_2)$, i.e., there are $N_0 \cdot N_1 \cdot N_2$ quadrature points
in total.
The basis functions for $\mathcal{Z}(z)$ remain to be decided.
For the velocity we need homogeneous Dirichlet boundary conditions, and for this
we use composite Legendre or Chebyshev polynomials
<!-- Equation labels as ordinary links -->
<a id="_auto4"></a>
$$
\begin{equation}
\mathcal{Z}^0_n(z) = \phi_n(z) - \phi_{n+2}(z), \forall \, n \in \boldsymbol{n}^{N_2-2},
\label{_auto4} \tag{4}
\end{equation}
$$
where $\phi_n$ is the n'th Legendre or Chebyshev polynomial of the first kind.
$\boldsymbol{n}^{N_2-2} = (0, 1, \ldots, N_2-3)$, and the zero on $\mathcal{Z}^0$
is there to indicate the zero value on the boundary.
The pressure basis that comes with no restrictions for the boundary is a
little trickier. The reason for this has to do with
inf-sup stability. The obvious choice of basis is the regular Legendre or
Chebyshev basis, which is denoted as
<!-- Equation labels as ordinary links -->
<a id="eq:Zn"></a>
$$
\begin{equation}
\mathcal{Z}_n(z) = \phi_n(z), \forall \, n \in \boldsymbol{n}^{N_2}. \label{eq:Zn} \tag{5}
\end{equation}
$$
The problem is that for the natural choice of $n \in (0, 1, \ldots, N_2-1)$
there is a nullspace and one degree of freedom remains unresolved. It turns out
that the proper choice for the pressure basis is simply (5) for
$n \in \boldsymbol{n}^{N_2-2}$. (Also remember that we have to fix $\int_{\Omega} p dx = 0$.)
With given basis functions we obtain the spaces
<!-- Equation labels as ordinary links -->
<a id="_auto5"></a>
$$
\begin{equation}
V^{N_0} = \text{span}\{ \mathcal{X}_l \}_{l\in\boldsymbol{l}^{N_0}},
\label{_auto5} \tag{6}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<a id="_auto6"></a>
$$
\begin{equation}
V^{N_1} = \text{span}\{ \mathcal{Y}_m \}_{m\in\boldsymbol{m}^{N_1}},
\label{_auto6} \tag{7}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<a id="_auto7"></a>
$$
\begin{equation}
V^{N_2} = \text{span}\{ \mathcal{Z}_n \}_{n\in\boldsymbol{n}^{N_2-2}},
\label{_auto7} \tag{8}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<a id="_auto8"></a>
$$
\begin{equation}
V_0^{N_2} = \text{span}\{ \mathcal{Z}^0_n \}_{n\in\boldsymbol{n}^{N_2-2}},
\label{_auto8} \tag{9}
\end{equation}
$$
and from these we create two different tensor product spaces
<!-- Equation labels as ordinary links -->
<a id="_auto9"></a>
$$
\begin{equation}
W_0^{\boldsymbol{N}}(\boldsymbol{x}) = V^{N_0}(x) \otimes V^{N_1}(y) \otimes V_0^{N_2}(z),
\label{_auto9} \tag{10}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<a id="_auto10"></a>
$$
\begin{equation}
W^{\boldsymbol{N}}(\boldsymbol{x}) = V^{N_0}(x) \otimes V^{N_1}(y) \otimes V^{N_2}(z).
\label{_auto10} \tag{11}
\end{equation}
$$
The velocity vector is using a mixed basis, such that we will look for
solutions $\boldsymbol{u} \in [W_0^{\boldsymbol{N}}]^3 \, (=W_0^{\boldsymbol{N}} \times W_0^{\boldsymbol{N}} \times W_0^{\boldsymbol{N}})$,
whereas we look for the pressure
$p \in W^{\boldsymbol{N}}$. We now formulate a variational problem using the Galerkin method: Find
$\boldsymbol{u} \in [W_0^{\boldsymbol{N}}]^3$ and $p \in W^{\boldsymbol{N}}$ such that
<!-- Equation labels as ordinary links -->
<a id="eq:varform"></a>
$$
\begin{equation}
\int_{\Omega} (\nabla^2 \boldsymbol{u} - \nabla p ) \cdot \overline{\boldsymbol{v}} \, dx_w = \int_{\Omega} \boldsymbol{f} \cdot \overline{\boldsymbol{v}}\, dx_w \quad\forall \boldsymbol{v} \, \in \, [W_0^{\boldsymbol{N}}]^3, \label{eq:varform} \tag{12}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<a id="_auto11"></a>
$$
\begin{equation}
\int_{\Omega} \nabla \cdot \boldsymbol{u} \, \overline{q} \, dx_w = \int_{\Omega} h \overline{q} \, dx_w \quad\forall q \, \in \, W^{\boldsymbol{N}}.
\label{_auto11} \tag{13}
\end{equation}
$$
Here $dx_w=w_xdxw_ydyw_zdz$ represents a weighted measure, with weights $w_x(x), w_y(y), w_z(z)$.
Note that it is only Chebyshev polynomials that
make use of a non-constant weight $w_x=1/\sqrt{1-x^2}$. The Fourier weights are $w_y=w_z=1/(2\pi)$
and the Legendre weight is $w_x=1$.
The overline in $\boldsymbol{\overline{v}}$ and $\overline{q}$ represents a complex conjugate, which is needed here because
the Fourier exponentials are complex functions.
Mixed variational form
<a id="sec:mixedform"></a>
Since we are to solve for $\boldsymbol{u}$ and $p$ at the same time, we formulate a
mixed (coupled) problem: find $(\boldsymbol{u}, p) \in [W_0^{\boldsymbol{N}}]^3 \times W^{\boldsymbol{N}}$
such that
<!-- Equation labels as ordinary links -->
<a id="_auto12"></a>
$$
\begin{equation}
a((\boldsymbol{u}, p), (\boldsymbol{v}, q)) = L((\boldsymbol{v}, q)) \quad \forall (\boldsymbol{v}, q) \in [W_0^{\boldsymbol{N}}]^3 \times W^{\boldsymbol{N}},
\label{_auto12} \tag{14}
\end{equation}
$$
where bilinear ($a$) and linear ($L$) forms are given as
<!-- Equation labels as ordinary links -->
<a id="_auto13"></a>
$$
\begin{equation}
a((\boldsymbol{u}, p), (\boldsymbol{v}, q)) = \int_{\Omega} (\nabla^2 \boldsymbol{u} - \nabla p) \cdot \overline{\boldsymbol{v}} \, dx_w + \int_{\Omega} \nabla \cdot \boldsymbol{u} \, \overline{q} \, dx_w,
\label{_auto13} \tag{15}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<a id="_auto14"></a>
$$
\begin{equation}
L((\boldsymbol{v}, q)) = \int_{\Omega} \boldsymbol{f} \cdot \overline{\boldsymbol{v}}\, dx_w + \int_{\Omega} h \overline{q} \, dx_w.
\label{_auto14} \tag{16}
\end{equation}
$$
Note that the bilinear form will assemble to block matrices, whereas the right hand side
linear form will assemble to block vectors.
Implementation
Preamble
We will solve the Stokes equations using the shenfun Python module. The first thing needed
is then to import some of this module's functionality
plus some other helper modules, like Numpy and Sympy:
End of explanation
"""
x, y, z = symbols('x,y,z')
uex = sin(2*y)*(1-z**2)
uey = sin(2*x)*(1-z**2)
uez = sin(2*z)*(1-z**2)
pe = -0.1*sin(2*x)*cos(4*y)
fx = uex.diff(x, 2) + uex.diff(y, 2) + uex.diff(z, 2) - pe.diff(x, 1)
fy = uey.diff(x, 2) + uey.diff(y, 2) + uey.diff(z, 2) - pe.diff(y, 1)
fz = uez.diff(x, 2) + uez.diff(y, 2) + uez.diff(z, 2) - pe.diff(z, 1)
h = uex.diff(x, 1) + uey.diff(y, 1) + uez.diff(z, 1)
"""
Explanation: We use Sympy for the manufactured solution and Numpy for testing.
Manufactured solution
<a id="sec:mansol"></a>
The exact solutions $\boldsymbol{u}_e(\boldsymbol{x})$ and $p(\boldsymbol{x})$ are chosen to satisfy boundary
conditions, and the right hand sides $\boldsymbol{f}(\boldsymbol{x})$ and $h(\boldsymbol{x})$ are then
computed exactly using Sympy. These exact right hand sides will then be used to
compute a numerical solution that can be verified against the manufactured
solution. The chosen solution with computed right hand sides are:
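As a quick sanity check (a sketch that assumes the symbols x, y, z and the expressions uex, uey, uez from the manufactured-solution cell), one can verify symbolically that the chosen velocity satisfies the homogeneous Dirichlet conditions at $z=\pm 1$:
Python
import sympy as sp
for ue in (uex, uey, uez):
    assert sp.simplify(ue.subs(z, 1)) == 0
    assert sp.simplify(ue.subs(z, -1)) == 0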
End of explanation
"""
N = (20, 20, 20)
family = 'Legendre'
K0 = FunctionSpace(N[0], 'Fourier', dtype='D', domain=(0, 2*np.pi))
K1 = FunctionSpace(N[1], 'Fourier', dtype='d', domain=(0, 2*np.pi))
SD = FunctionSpace(N[2], family, bc=(0, 0))
ST = FunctionSpace(N[2], family)
"""
Explanation: Tensor product spaces
One-dimensional spaces are created using the FunctionSpace() function. A choice of
polynomials between Legendre or Chebyshev can be made, and the size
of the domain is given
End of explanation
"""
TD = TensorProductSpace(comm, (K0, K1, SD), axes=(2, 0, 1))
Q = TensorProductSpace(comm, (K0, K1, ST), axes=(2, 0, 1))
V = VectorSpace(TD)
VQ = CompositeSpace([V, Q])
"""
Explanation: Next the one-dimensional spaces are used to create two tensor product spaces Q = $W^{\boldsymbol{N}}$
and TD = $W_0^{\boldsymbol{N}}$, one vector V = $[W_0^{\boldsymbol{N}}]^3$ and one mixed
space VQ = V $\times$ Q.
End of explanation
"""
up = TrialFunction(VQ)
vq = TestFunction(VQ)
u, p = up
v, q = vq
"""
Explanation: Note that we choose to transform axes in the order $1, 0, 2$. This is to ensure
that the fully transformed arrays are aligned in the non-periodic direction 2.
And we need the arrays aligned in this direction, because this is the only
direction where there are tensor product matrices that are non-diagonal. All
Fourier matrices are, naturally, diagonal.
Test- and trialfunctions are created much like in a regular, non-mixed,
formulation. However, one has to create one test- and trialfunction for
the mixed space, and then split them up afterwards
End of explanation
"""
if family.lower() == 'chebyshev':
A = inner(v, div(grad(u)))
G = inner(v, -grad(p))
else:
A = inner(grad(v), -grad(u))
G = inner(div(v), p)
D = inner(q, div(u))
"""
Explanation: With the basisfunctions in place we may assemble the different blocks of the
final coefficient matrix. Since Legendre is using a constant weight function,
the equations may also be integrated by parts to obtain a symmetric system:
End of explanation
"""
M = BlockMatrix(A+G+D)
"""
Explanation: The assembled subsystems A, G and D are lists containing the different blocks of
the complete, coupled matrix. A actually contains 6
tensor product matrices of type TPMatrix. The first two
matrices are for vector component zero of the test function v[0] and
trial function u[0], the
matrices 2 and 3 are for components 1 and the last two are for components
2. The first two matrices are as such for
Python
A[0:2] = inner(v[0], div(grad(u[0])))
Breaking it down this inner product is mathematically
<!-- Equation labels as ordinary links -->
<a id="eq:partialeq1"></a>
$$
\begin{equation}
\label{eq:partialeq1} \tag{17}
\int_{\Omega} \boldsymbol{v}[0] \left(\frac{\partial^2 \boldsymbol{u}[0]}{\partial x^2} + \frac{\partial^2 \boldsymbol{u}[0]}{\partial y^2} + \frac{\partial^2 \boldsymbol{u}[0]}{\partial z^2}\right) w_x dx w_y dy w_z dz.
\end{equation}
$$
If we now use test function $\boldsymbol{v}[0]$
<!-- Equation labels as ordinary links -->
<a id="_auto15"></a>
$$
\begin{equation}
\boldsymbol{v}[0]_{lmn} = \mathcal{X}_l \mathcal{Y}_m \mathcal{Z}_n,
\label{_auto15} \tag{18}
\end{equation}
$$
and trialfunction
<!-- Equation labels as ordinary links -->
<a id="_auto16"></a>
$$
\begin{equation}
\boldsymbol{u}[0]_{pqr} = \sum_{p} \sum_{q} \sum_{r} \hat{\boldsymbol{u}}[0]_{pqr} \mathcal{X}_p \mathcal{Y}_q \mathcal{Z}_r,
\label{_auto16} \tag{19}
\end{equation}
$$
where $\hat{\boldsymbol{u}}$ are the unknown degrees of freedom, and then insert these functions
into (17), then we obtain after
performing some exact evaluations over the periodic directions
<!-- Equation labels as ordinary links -->
<a id="_auto17"></a>
$$
\begin{equation}
\Big( \underbrace{-\left(l^2 \delta_{lp} + m^2 \delta_{mq} \right) \int_{-1}^{1} \mathcal{Z}_r(z) \mathcal{Z}_n(z) w_z dz}_{A[0]} + \underbrace{\delta_{lp} \delta_{mq} \int_{-1}^{1} \frac{\partial^2 \mathcal{Z}_r(z)}{\partial z^2} \mathcal{Z}_n(z) w_z dz}_{A[1]} \Big) \hat{\boldsymbol{u}}[0]_{pqr},
\label{_auto17} \tag{20}
\end{equation}
$$
Similarly for components 1 and 2 of the test and trial vectors, leading to 6 tensor
product matrices in total for A. Similarly, we get three components of G
and three of D.
Eliminating the Fourier diagonal matrices, we are left with block matrices like
$$
H(l, m) =
\begin{bmatrix}
A[0]+A[1] & 0 & 0 & G[0] \\
0 & A[2]+A[3] & 0 & G[1] \\
0 & 0 & A[4]+A[5] & G[2] \\
D[0] & D[1] & D[2] & 0
\end{bmatrix}
$$
Note that there will be one large block matrix $H(l, m)$ for each Fourier
wavenumber combination $(l, m)$. To solve the problem in the end we will need to
loop over these wavenumbers and solve the assembled linear systems one by one.
An example of the block matrix, for $l=m=5$ and $\boldsymbol{N}=(20, 20, 20)$ is given
in Fig. fig:BlockMat.
In the end we create a block matrix through
End of explanation
"""
# Get mesh (quadrature points)
X = TD.local_mesh(True)
# Get f and h on quad points
fh = Array(VQ, buffer=(fx, fy, fz, h))
f_, h_ = fh
# Compute inner products
fh_hat = Function(VQ)
f_hat, h_hat = fh_hat
f_hat = inner(v, f_, output_array=f_hat)
h_hat = inner(q, h_, output_array=h_hat)
"""
Explanation: The right hand side can easily be assembled since we have already
defined the functions $\boldsymbol{f}$ and $h$, see Sec. Manufactured solution
End of explanation
"""
# Solve problem
up_hat = M.solve(fh_hat, constraints=((3, 0, 0), (3, N[2]-1, 0)))
up = up_hat.backward()
u_, p_ = up
# Exact solution
ux, uy, uz = Array(V, buffer=(uex, uey, uez))
pe = Array(Q, buffer=pe)
error = [comm.reduce(np.linalg.norm(ux-u_[0])),
comm.reduce(np.linalg.norm(uy-u_[1])),
comm.reduce(np.linalg.norm(uz-u_[2])),
comm.reduce(np.linalg.norm(pe-p_))]
print(error)
"""
Explanation: In the end all that is left is to solve and compare with
the exact solution.
End of explanation
"""
|
pcm-ca/pcm-ca.github.io
|
pages/informatication/extra-files/codes/notebooks/Ajustes.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
medidas = np.random.normal(0, 1, size=100)
plt.figure()
plt.plot(medidas, '.')
plt.axhline(y=0, ls='--', c='k')
plt.show()
medidas = np.random.normal(0, 0.1, size=100)
plt.figure()
plt.plot(medidas, '.')
plt.axhline(y=0, ls='--', c='k')
plt.show()
x1 = np.random.normal(0, 1, size=1000)
x2 = np.random.normal(0, 0.1, size=1000)
plt.figure()
plt.subplot(221)
plt.plot(x1, '.')
plt.axhline(y=0, c='k')
plt.subplot(222)
plt.hist(x1)
plt.subplot(223)
plt.plot(x2, '.')
plt.axhline(y=0, c='k')
plt.subplot(224)
plt.hist(x2)
plt.tight_layout()
plt.show()
x1 = np.random.normal(0, 1, size=1000)
x2 = np.random.normal(10, 1, size=1000)
plt.figure()
plt.subplot(221)
plt.plot(x1, '.')
plt.axhline(y=0, c='k')
plt.subplot(222)
plt.hist(x1)
plt.subplot(223)
plt.plot(x2, '.')
plt.axhline(y=0, c='k')
plt.axhline(y=10, c='y')
plt.subplot(224)
plt.hist(x2)
plt.tight_layout()
plt.show()
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = 10 * x
y = y + np.random.normal(0, 1, size=10)
y_original = 10 * x
plt.figure()
plt.scatter(x, y)
plt.plot(x, y_original, c='r')
plt.xlabel('Numero de baldozas medidas')
plt.ylabel('Medida [cm]')
plt.grid()
plt.show()
from scipy.stats import linregress
x = np.arange(1, 11, 1)
y = 10.0 * x
y += np.random.normal(0, 2, size=y.shape)
m, b, r, *otros = linregress(x, y)
print(m, b, r)
y_ajustado = m * x + b
plt.figure()
plt.scatter(x, y, label='datos originales')
plt.plot(x, y_ajustado, c='green',
label='$y = {m:.2f}x {b:+.2f}$'.format(m=m, b=b))
plt.legend()
plt.grid()
plt.show()
x = np.arange(1, 11, 1)
y = 10.0 * x
y += np.random.normal(0, 100, size=y.shape)
m, b, r, *otros = linregress(x, y)
print(m, b, r)
y_ajustado = m * x + b
plt.figure()
plt.scatter(x, y, label='datos originales')
plt.plot(x, y_ajustado, c='green',
label='$y = {m:.2f}x {b:+.2f}$'.format(m=m, b=b))
plt.legend()
plt.grid()
plt.show()
"""
Explanation: Fitting (Ajustes)
End of explanation
"""
x = np.linspace(0, 10, 20)
y = 3 * x ** 2
y += np.random.normal(0, 0.5, size=y.shape)
m, b, r, *otros = linregress(x, y)
y_ajustado = m * x + b
plt.figure()
plt.scatter(x, y)
plt.plot(x, y_ajustado, '--g')
plt.title('Datos de la forma $y=ax^2$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.grid()
plt.show()
"""
Explanation: Linearization
End of explanation
"""
X = x ** 2
plt.figure()
plt.scatter(X, y)
plt.xlabel('$X$')
plt.ylabel('$y$')
plt.title('Datos linealizados, de la forma $y = m x + b$')
plt.show()
X = x ** 2
m, b, r, *otros = linregress(X, y)
y_ajustado = m * X + b
print(m, b, r)
plt.figure()
plt.scatter(X, y, c='r', label='datos')
plt.plot(X, y_ajustado, c='g', label='ajuste')
plt.xlabel('$X$')
plt.ylabel('$y$')
plt.title('Datos linealizados, de la forma $y = m x + b$')
plt.legend()
plt.grid()
plt.show()
x = np.linspace(0, 10, 20)
y = 3 * x ** 2
y += np.random.normal(0, 0.5, size=y.shape)
m, b, r, *otros = linregress(x, y)
y_ajustado = m * x + b
fig = plt.figure()
plt.scatter(x, y)
plt.plot(x, y_ajustado, '--g')
plt.title('Datos de la forma $y=ax^2$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.grid()
fig.add_axes([0.3, 0.6, 0.2, 0.2])
plt.scatter(x[x < 4], y[x < 4])
plt.plot(x[x < 4], y_ajustado[x < 4], '--g')
plt.grid()
plt.show()
x = np.linspace(0.1, 10)
y = 3.0 * x ** 1.32
# y += np.random.normal(0, 0.2, size=y.shape)
plt.figure()
plt.scatter(x, y)
plt.title('Datos de la forma $y = Ax^n$')
plt.grid()
plt.show()
"""
Explanation: If I have data of the form $$ y = a x ^ 2 $$ and apply the transformation $$ X = x ^ 2 $$, then I obtain $$ y = a X $$
End of explanation
"""
X = np.log2(x)
Y = np.log2(y)
m, b, r, *otros = linregress(X, Y)
print(m, b, r)
print(2 ** b)
plt.figure()
plt.scatter(X, Y)
plt.plot(X, m * X + b, c='orange')
plt.show()
X = np.log10(x)
Y = np.log10(y)
m, b, r, *otros = linregress(X, Y)
print(m, b, r)
print(10 ** b)
plt.figure()
plt.scatter(X, Y)
plt.plot(X, m * X + b, c='orange')
plt.show()
"""
Explanation: If we have data of the form $$ y = A x ^ n $$ and take logarithms on both sides, $$ \log y = \log {\left(A x ^ n \right)} $$, then, using some properties of logarithms, $$ \log y = \log A + \log {\left(x ^ n \right)} $$ and $$ \log y = \log A + n \log x $$
After this we apply the transformations $$ X = \log x $$ and $$ Y = \log y $$, and we obtain $$ Y = \log A + n X $$
End of explanation
"""
x = np.linspace(0, 10, 200)
y = np.exp(-(x - 5)**2 / 0.3) + 2 * np.exp(-(x - 7)**2 / 0.3)
plt.figure()
plt.scatter(x, y)
plt.show()
"""
Explanation: Functions we will not linearize
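A possible way to fit such a curve anyway, sketched here with scipy.optimize.curve_fit (an illustration, not part of the original notebook; it assumes the x and y arrays generated for this example, and the model function and initial guess p0 are hypothetical choices):
Python
from scipy.optimize import curve_fit
import numpy as np

def two_gaussians(x, a1, c1, a2, c2, w):
    # sum of two Gaussian bumps with a shared width parameter w
    return a1 * np.exp(-(x - c1)**2 / w) + a2 * np.exp(-(x - c2)**2 / w)

p0 = [1.0, 4.5, 2.0, 7.5, 0.5]          # rough initial guess
popt, pcov = curve_fit(two_gaussians, x, y, p0=p0)
print(popt)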
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.16/_downloads/plot_label_from_stc.ipynb
|
bsd-3-clause
|
# Author: Luke Bloy <luke.bloy@gmail.com>
# Alex Gramfort <alexandre.gramfort@telecom-paristech.fr>
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
subjects_dir = data_path + '/subjects'
subject = 'sample'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Compute a label/ROI based on the peak power between 80 and 120 ms.
# The label bankssts-lh is used for the comparison.
aparc_label_name = 'bankssts-lh'
tmin, tmax = 0.080, 0.120
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src'] # get the source space
# Compute inverse solution
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori='normal')
# Make an STC in the time interval of interest and take the mean
stc_mean = stc.copy().crop(tmin, tmax).mean()
# use the stc_mean to generate a functional label
# region growing is halted at 60% of the peak value within the
# anatomical label / ROI specified by aparc_label_name
label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
stc_mean_label = stc_mean.in_label(label)
data = np.abs(stc_mean_label.data)
stc_mean_label.data[data < 0.6 * np.max(data)] = 0.
# 8.5% of original source space vertices were omitted during forward
# calculation, suppress the warning here with verbose='error'
func_labels, _ = mne.stc_to_label(stc_mean_label, src=src, smooth=True,
subjects_dir=subjects_dir, connected=True,
verbose='error')
# take first as func_labels are ordered based on maximum values in stc
func_label = func_labels[0]
# load the anatomical ROI for comparison
anat_label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
# extract the anatomical time course for each label
stc_anat_label = stc.in_label(anat_label)
pca_anat = stc.extract_label_time_course(anat_label, src, mode='pca_flip')[0]
stc_func_label = stc.in_label(func_label)
pca_func = stc.extract_label_time_course(func_label, src, mode='pca_flip')[0]
# flip the pca so that the max power between tmin and tmax is positive
pca_anat *= np.sign(pca_anat[np.argmax(np.abs(pca_anat))])
pca_func *= np.sign(pca_func[np.argmax(np.abs(pca_anat))])
"""
Explanation: Generate a functional label from source estimates
Threshold source estimates and produce a functional label. The label
is typically the region of interest that contains high values.
Here we compare the average time course in the anatomical label obtained
by FreeSurfer segmentation and the average time course from the
functional label. As expected the time course in the functional
label yields higher values.
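A quick numeric version of that comparison (just a sketch, reusing the pca_anat and pca_func arrays computed for this example):
Python
import numpy as np
print('anatomical peak: %0.2f, functional peak: %0.2f'
      % (np.abs(pca_anat).max(), np.abs(pca_func).max()))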
End of explanation
"""
plt.figure()
plt.plot(1e3 * stc_anat_label.times, pca_anat, 'k',
label='Anatomical %s' % aparc_label_name)
plt.plot(1e3 * stc_func_label.times, pca_func, 'b',
label='Functional %s' % aparc_label_name)
plt.legend()
plt.show()
"""
Explanation: plot the time courses....
End of explanation
"""
brain = stc_mean.plot(hemi='lh', subjects_dir=subjects_dir)
brain.show_view('lateral')
# show both labels
brain.add_label(anat_label, borders=True, color='k')
brain.add_label(func_label, borders=True, color='b')
"""
Explanation: plot brain in 3D with PySurfer if available
End of explanation
"""
|
atulsingh0/MachineLearning
|
HandsOnML/code/10_introduction_to_artificial_neural_networks.ipynb
|
gpl-3.0
|
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
"""
Explanation: Chapter 10 – Introduction to Artificial Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 10.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
"""
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
X = iris.data[:, (2, 3)] # petal length, petal width
y = (iris.target == 0).astype(np.int)
per_clf = Perceptron(random_state=42)
per_clf.fit(X, y)
y_pred = per_clf.predict([[2, 0.5]])
y_pred
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]
axes = [0, 5, 0, 2]
x0, x1 = np.meshgrid(
np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")
plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)
save_fig("perceptron_iris_plot")
plt.show()
"""
Explanation: Perceptrons
End of explanation
"""
def logit(z):
return 1 / (1 + np.exp(-z))
def relu(z):
return np.maximum(0, z)
def derivative(f, z, eps=0.000001):
return (f(z + eps) - f(z - eps))/(2 * eps)
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=2, label="Step")
plt.plot(z, logit(z), "g--", linewidth=2, label="Logit")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=2, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(logit, z), "g--", linewidth=2, label="Logit")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("activation_functions_plot")
plt.show()
def heaviside(z):
return (z >= 0).astype(z.dtype)
def sigmoid(z):
return 1/(1+np.exp(-z))
def mlp_xor(x1, x2, activation=heaviside):
return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)
z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)
plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)
"""
Explanation: Activation functions
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_train = mnist.train.images
X_test = mnist.test.images
y_train = mnist.train.labels.astype("int")
y_test = mnist.test.labels.astype("int")
import tensorflow as tf
config = tf.contrib.learn.RunConfig(tf_random_seed=42) # not shown in the config
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300,100], n_classes=10,
feature_columns=feature_cols, config=config)
dnn_clf = tf.contrib.learn.SKCompat(dnn_clf) # if TensorFlow >= 1.1
dnn_clf.fit(X_train, y_train, batch_size=50, steps=40000)
from sklearn.metrics import accuracy_score
y_pred = dnn_clf.predict(X_test)
accuracy_score(y_test, y_pred['classes'])
from sklearn.metrics import log_loss
y_pred_proba = y_pred['probabilities']
log_loss(y_test, y_pred_proba)
"""
Explanation: FNN for MNIST
using tf.learn
End of explanation
"""
import tensorflow as tf
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
def neuron_layer(X, n_neurons, name, activation=None):
with tf.name_scope(name):
n_inputs = int(X.get_shape()[1])
stddev = 2 / np.sqrt(n_inputs)
init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
W = tf.Variable(init, name="kernel")
b = tf.Variable(tf.zeros([n_neurons]), name="bias")
Z = tf.matmul(X, W) + b
if activation is not None:
return activation(Z)
else:
return Z
with tf.name_scope("dnn"):
hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images,
y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
X_new_scaled = mnist.test.images[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
print("Predicted classes:", y_pred)
print("Actual classes: ", mnist.test.labels[:20])
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
"""
Explanation: Using plain TensorFlow
End of explanation
"""
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
show_graph(tf.get_default_graph())
"""
Explanation: Using dense() instead of neuron_layer()
Note: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function, except for a few minor differences:
* several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc. (compare the two calls in the sketch after this list)
* the default activation is now None rather than tf.nn.relu.
* a few more differences are presented in chapter 11.
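For reference, here is a side-by-side sketch of the two calls (not executed in this notebook; it assumes the X placeholder and n_hidden1 defined above):
Python
# old contrib API (anything in contrib may change or be removed without notice):
hidden1 = tf.contrib.layers.fully_connected(X, n_hidden1, scope="hidden1",
                                            activation_fn=tf.nn.relu)
# preferred tf.layers API, as used in this notebook:
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
                          activation=tf.nn.relu)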
End of explanation
"""
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
loss_summary = tf.summary.scalar('log_loss', loss)
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
"""
Explanation: Exercise solutions
1. to 8.
See appendix A.
9.
Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on).
First let's create the deep net. It's exactly the same as earlier, with just one addition: we add a tf.summary.scalar() to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.
End of explanation
"""
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
logdir = log_dir("mnist_dnn")
"""
Explanation: Now we need to define the directory to write the TensorBoard logs to:
End of explanation
"""
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
"""
Explanation: Now we can create the FileWriter that we will use to write the TensorBoard logs:
End of explanation
"""
X_valid = mnist.validation.images
y_valid = mnist.validation.labels
m, n = X_train.shape
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_deep_mnist_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_deep_mnist_model"
best_loss = np.infty
epochs_without_progress = 0
max_epochs_without_progress = 50
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})
file_writer.add_summary(accuracy_summary_str, epoch)
file_writer.add_summary(loss_summary_str, epoch)
if epoch % 5 == 0:
print("Epoch:", epoch,
"\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),
"\tLoss: {:.5f}".format(loss_val))
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
if loss_val < best_loss:
saver.save(sess, final_model_path)
best_loss = loss_val
else:
epochs_without_progress += 5
if epochs_without_progress > max_epochs_without_progress:
print("Early stopping")
break
os.remove(checkpoint_epoch_path)
with tf.Session() as sess:
saver.restore(sess, final_model_path)
accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})
accuracy_val
"""
Explanation: Hey! Why don't we implement early stopping? For this, we are going to need a validation set. Luckily, the dataset returned by TensorFlow's input_data() function (see above) is already split into a training set (60,000 instances, already shuffled for us), a validation set (5,000 instances) and a test set (5,000 instances). So we can easily define X_valid and y_valid:
End of explanation
"""
|
jGaboardi/Facility_Location
|
Gurobi_v_Cplex__Set_Cover.ipynb
|
lgpl-3.0
|
import pysal as ps
import numpy as np
import networkx as nx
import shapefile as shp
import gurobipy as gbp
import cplex as cp
import datetime as dt
import time
from collections import OrderedDict
import IPython.display as IPd
%pylab inline
from mpl_toolkits.basemap import Basemap
"""
Explanation: <font size='5' face='Courier New'><h1 align="center"><i>Optimizer Performance Comparison in Python v2.7: Set Cover Location Problem</i></h1></font>
<font size='4' face='Arial'><h1 align="center"><i>Gurobi Optimizer v6.0.2</i></h1></font>
<font size='4' face='Arial'><h1 align="center"><i>vs.</i></h1></font>
<font size='4' face='Arial'><h1 align="center"><i>IBM ILOG CPLEX Optimizer v12.6.0</i></h1></font>
<font size='4' face='Arial'><h1 align="center"><i>PySAL v1.10.0</i></h1></font>
<font face='Times New Roman' size='6'><h3 align="center"><u>James D. Gaboardi</u></h3></font>
<font face='Times New Roman' size='5'><h3 align="center">Florida State University | Department of Geography</h3></font>
<font size='3' face='Times New Roman'><p><b>GNU LESSER GENERAL PUBLIC LICENSE</b></p>
<p><i>Version 3, 29 June 2007</i></p>
<p> Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/></p>
<p> Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.</p>
</font>
<font size='7' face='Times New Roman'><b><u>The Set Cover Location Problem</u></b></font>
<p><font size='5' face='Courier New'><u>Minimize</u></font></p>
<p><font size='6' face='Times New Roman'> $\sum_{j=1}^n$<i> d<sub>j</sub>x<sub>j</sub></i></font></p>
<p><font size='5' face='Courier New'><u>Subject to</u></font></p>
<p><font size='6' face='Times New Roman'> $∑_{j=1}^n$<i>a<sub>ij</sub>x<sub>j</sub></i> ≥ 1, <i> i</i> = 1 ∊<i> n</i></font></p>
<p><font size='6' face='Times New Roman'> <i>x<sub>j</sub></i> ∊(0,1) <i> j</i> = 1 ∊ <i> m</i></font></p>
<p><font size='5' face='Courier New'><u>where</u></font></p>
<p><font size='5' face='Times New Roman'> − <i> i</i> = a specific origin</font></p>
<p><font size='5' face='Times New Roman'> − <i> j</i> = a specific destination</font></p>
<p><font size='5' face='Times New Roman'> − <i> n</i> = the set of origins</font></p>
<p><font size='5' face='Times New Roman'> − <i> m</i> = the set of destinations</font></p>
<p><font size='5' face='Times New Roman'> − <i> x<sub>i</sub></i> = the decision variable at each node in the matrix</font></p>
<p><font size='5' face='Times New Roman'> − <i> a<sub>ij</sub></i> = binary matrix describing the coverage of each node</font></p>
<p><font size='5' face='Courier New'> <u>where</u></font></p>
<p><font size='5' face='Times New Roman'> − <i> a<sub>ij</sub></i> = 1 ∀ <i> i</i>, <i> j</i> ∋ <i> d<sub>ij</sub></i> ≤ S (S is user defined; see the small coverage-matrix sketch after this list) </font></p>
<p><font size='5' face='Times New Roman'> − <i> a<sub>ij</sub></i> = 0 otherwise</font></p>
<p><font size='5' face='Times New Roman'> − <i> d<sub>ij</sub></i> = distance from ith origin to jth destination</font></p>
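<p><font size='4' face='Times New Roman'>A minimal illustrative sketch (hypothetical numbers, not the data used below) of how the binary coverage matrix is derived from a distance matrix and the user-defined S:</font></p>
Python
import numpy as np
# hypothetical client-to-service distance matrix (3 clients x 3 candidate sites)
dij = np.array([[2., 7., 4.],
                [6., 3., 9.],
                [5., 8., 1.]])
S = 4.                        # user-defined maximal service distance
aij = (dij <= S).astype(int)  # a_ij = 1 where site j covers client i
print(aij)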
<p><font size='4' face='Times New Roman'>Adapted from:</font></p>
<p><font size='4' face='Times New Roman'><b>Daskin, M. S.</b> 1995. <i>Network and Discrete Location: Models, Algorithms, and Applications</i>. Hoboken, NJ, USA: John Wiley & Sons, Inc.</font></p>
<font size='7' face='Times New Roman'><b>0. <u>Imports</u></b></font>
End of explanation
"""
ntw = ps.Network('Waverly/Waverly.shp')
print dt.datetime.now()
print 'PySAL.Network\n'
print dir(ntw)
"""
Explanation: <font size='7' face='Times New Roman'><b>1. <u>Data preparation and creation</u></b></font>
<font size='5' face='Times New Roman'><b>1.1 <u>Instantiate a network</u></b></font>
End of explanation
"""
# Roads and Nodes
g = nx.Graph()
# Graph of Roads and Nodes
g1 = nx.MultiGraph()
# Clients
GRAPH_client = nx.Graph()
# Snapped Clients
g_client = nx.Graph()
# Service
GRAPH_service = nx.Graph()
# Snapped Service
g_service = nx.Graph()
# Gurobi p-Median
GUROBI_setcover_g = nx.Graph()
# Cplex p-Median
CPLEX_setcover_g = nx.Graph()
"""
Explanation: <font size='5' face='Times New Roman'><b>1.2 <u>Instantiate all graphs to be drawn</u></b></font>
End of explanation
"""
shp_W = ps.open('Waverly/Waverly.shp')
shp_W.bbox
"""
Explanation: <font size='5' face='Times New Roman'><b>1.3 <u>Create Bounding Box from 'Waverly.shp'</u></b></font>
End of explanation
"""
lat_client = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 500)
lon_client = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 500)
lat_service = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 500)
lon_service = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 500)
"""
Explanation: <font size='5' face='Times New Roman'><b>1.4 <u>Create numpy arrays of random floats within a bounding box</u></b></font>
End of explanation
"""
rand_coords_client = map(list, zip(lat_client, lon_client))
rand_coords_service = map(list, zip(lat_service, lon_service))
"""
Explanation: <font size='5' face='Times New Roman'><b>1.5 <u>Zip the latitude and longitude lists together</u></b></font>
End of explanation
"""
points_client = {}
points_service = {}
"""
Explanation: <font size='5' face='Times New Roman'><b>1.6 <u>Create Empty Random Points Dictionaries</u></b></font>
End of explanation
"""
# CLIENT
for idx, coords in enumerate(rand_coords_client):
GRAPH_client.add_node(idx)
points_client[idx] = coords
GRAPH_client.node[idx] = coords
# SERVICE
for idx, coords in enumerate(rand_coords_service):
GRAPH_service.add_node(idx)
points_service[idx] = coords
GRAPH_service.node[idx] = coords
"""
Explanation: <font size='5' face='Times New Roman'><b>1.7 <u>Fill dictionaries of random points</u></b></font>
End of explanation
"""
print dt.datetime.now()
#Instantiate Figure
figsize(10,10)
# Draw Graph of Actual Nodes and Roads
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Draw only unique edges in graph
for e in ntw.graphedges:
g1.add_edge(*e)
# highlights cases where start and end node are the same
if e[0]==e[1]:
g1.add_node(e[0])
for node_id in g1.node:
g1.node[node_id] = ntw.node_coords[node_id]
nx.draw(g1, ntw.node_coords, node_size=10, alpha=0.5)
# Draw Graph of Random Client Points
nx.draw(GRAPH_client, points_client,
node_size=75, alpha=1, node_color='b')
# Draw Graph of Random Service Points
nx.draw(GRAPH_service, points_service,
node_size=100, alpha=1, node_color='c')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Graph Vertices']=g1
LEGEND['Graph Edges']=g1
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND, loc='lower right', fancybox=True, framealpha=0.5)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.507, 0.0, 0.01, width=.0003, head_width=0.0012,
head_length=0.002, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.52), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
"""
Explanation: <font size='5' face='Times New Roman'><b>1.8 <u>Draw network, simplified network, and random client & service nodes</u></b></font>
End of explanation
"""
S = .5
"""
Explanation: <font size='5' face='Times New Roman'><b>1.9 <u>Create S, the maximal service (coverage) distance in miles</u></b></font>
End of explanation
"""
# Client
client = shp.Writer(shp.POINT)
# Add Random Points
for i,j in rand_coords_client:
client.point(i,j)
# Add Fields
client.field('client_ID')
#client.field('Weight')
client.field('LAT')
client.field('LON')
counter = 0
for i in range(len(rand_coords_client)):
counter = counter + 1
client.record('client_' + str(counter), lat_client[i], lon_client[i])
# Save Shapefile
client.save('shapefiles/RandomPoints_CLIENT')
#Service
service = shp.Writer(shp.POINT)
# Add Random Points
for i,j in rand_coords_service:
service.point(i,j)
# Add Fields
service.field('y_ID')
service.field('x_ID')
service.field('LAT')
service.field('LON')
counter = 0
for i in range(len(rand_coords_service)):
counter = counter + 1
service.record('y' + str(counter), 'x' + str(counter), lat_service[i], lon_service[i])
# Save Shapefile
service.save('shapefiles/RandomPoints_SERVICE')
"""
Explanation: <font size='5' face='Times New Roman'><b>1.10 <u>Instantiate client and service shapefiles</u></b></font>
End of explanation
"""
t1 = time.time()
ntw.snapobservations('shapefiles/RandomPoints_CLIENT.shp',
'Rand_Points_CLIENT', attribute=True)
ntw.snapobservations('shapefiles/RandomPoints_SERVICE.shp',
'Rand_Points_SERVICE', attribute=True)
print round(time.time()-t1, 4), 'seconds'
"""
Explanation: <font size='5' face='Times New Roman'><b>1.11 <u>Snap Observations to NTW</u></b></font>
End of explanation
"""
# Instantiate Figure
figsize(10,10)
# Draw Graph of Roads
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Draw Graph of Snapped Client Nodes
g_client = nx.Graph()
for p,coords in ntw.pointpatterns['Rand_Points_CLIENT'].snapped_coordinates.iteritems():
g_client.add_node(p)
g_client.node[p] = coords
nx.draw(g_client, ntw.pointpatterns['Rand_Points_CLIENT'].snapped_coordinates,
node_size=100, alpha=1, node_color='b')
# Draw Graph of Snapped Service Nodes
g_service = nx.Graph()
for p,coords in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():
g_service.add_node(p)
g_service.node[p] = coords
nx.draw(g_service, ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates,
node_size=100, alpha=1, node_color='c')
# Draw Graph of Random Client Points
nx.draw(GRAPH_client, points_client,
node_size=20, alpha=1, node_color='y')
# Draw Graph of Random Client Points
nx.draw(GRAPH_service, points_service,
node_size=20, alpha=1, node_color='w')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Snapped Client']=g_client
LEGEND['Snapped Service']=g_service
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND, loc='lower right', fancybox=True, framealpha=0.5)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.507, 0.0, 0.01, width=.0003, head_width=0.0012,
head_length=0.002, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.52), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
"""
Explanation: <font size='5' face='Times New Roman'><b>1.12 <u>Draw NTW, snapped coords, & random coords</u></b></font>
End of explanation
"""
t1 = time.time()
All_Neigh_Dist = ntw.allneighbordistances(sourcepattern=ntw.pointpatterns['Rand_Points_CLIENT'],
destpattern=ntw.pointpatterns['Rand_Points_SERVICE'])
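# Convert network distances from decimal degrees to miles: ~111 km per degree (10000/90), then 0.6214 miles per km (a rough approximation)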
All_Dist_MILES = All_Neigh_Dist * float(10000/90) * 0.6214
seconds = round(time.time()-t1, 4)
print seconds, 'seconds'
print 'Client [i] x Service [j] Matrix Shape --> ', All_Dist_MILES.shape
"""
Explanation: <font size='5' face='Times New Roman'><b>1.13 <u>Create distance matrix</u></b></font>
End of explanation
"""
t1 = time.time()
# 1. Read In Data
# Cost Vector
Cij = All_Dist_MILES
# Create Aij: Determine Aij (nodes within S)
# S --> 1 = served; 0 = unserved
Aij = []
for i in np.nditer(Cij):
if i <= S:
outtext = 1
else:
outtext = 0
Aij.append(outtext)
rows, cols = Cij.shape
Aij = np.array(Aij)
Aij = Aij.reshape(len(Cij),len(Cij[0]))
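# (Equivalently, this could be vectorized as: Aij = (Cij <= S).astype(int); the explicit loop above is kept as written.)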
client_nodes = range(len(Cij))
service_nodes = range(len(Cij[0]))
# 2. Create Model, Set MIP Focus, Add Variables, & Update Model
mSCLP_GUROBI = gbp.Model(" -- SCLP -- ")
# Set MIP Focus to 2 for optimality
gbp.setParam('MIPFocus', 2)
# 3. Add Service Decision Variables
serv_var = []
for dest in service_nodes:
serv_var.append(mSCLP_GUROBI.addVar(vtype=gbp.GRB.BINARY,
ub = 1,
name='x'+str(dest+1)))
# Update Model Variables
mSCLP_GUROBI.update()
# 4. Set Objective Function
mSCLP_GUROBI.setObjective(gbp.quicksum(serv_var[dest]
for dest in service_nodes),
gbp.GRB.MINIMIZE)
# 5. Add Constraints
#Add Coverage Constraints
for orig in client_nodes:
mSCLP_GUROBI.addConstr(gbp.quicksum(Aij[orig][dest]*serv_var[dest]
for dest in service_nodes) >= 1)
# 6. Optimize and Print Results
mSCLP_GUROBI.optimize()
mSCLP_GUROBI.write('GUROBI_LP.lp')
t2G = time.time()-t1
print '\n*****************************************************************************************'
selected = []
dbf1 = ps.open('shapefiles/RandomPoints_SERVICE.dbf')
NEW_Records_SCLP_GUROBI = []
for v in mSCLP_GUROBI.getVars():
if v.x > 0:
var = '%s' % v.VarName
selected.append(v.x)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_SCLP_GUROBI.append(x)
else:
pass
print ' | ', var
print ' | Selected Facility Locations --------------------------- ^^^^ '
print ' | Coverage (S) in miles --------------------------------- ', S
print ' | Client Nodes ------------------------------------------ ', len(client_nodes)
print ' | Facilities needed for 100% coverage of client nodes --- ', len(selected)
print ' | Real Time to Optimize (sec.) -------------------------- ', t2G
print ' | Date/Time --------------------------------------------- ', dt.datetime.now()
print '*****************************************************************************************'
print '\nJames Gaboardi, 2015'
"""
Explanation: <font size='7' face='Times New Roman'><b>2. <u>Mathematical Optimization</u></b></font>
<font size='5' face='Times New Roman'><b>2.1a <u>Gurobi Set Cover test</u></b></font>
End of explanation
"""
SHP_SetCover_GUROBI = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_SCLP_GUROBI:
SHP_SetCover_GUROBI.point(float(x), float(y))
# Add Fields
SHP_SetCover_GUROBI.field('y_ID')
SHP_SetCover_GUROBI.field('x_ID')
SHP_SetCover_GUROBI.field('LAT')
SHP_SetCover_GUROBI.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_SCLP_GUROBI:
SHP_SetCover_GUROBI.record(idy,idx,x,y)
# Save Shapefile
SHP_SetCover_GUROBI.save('shapefiles/Selected_Locations_SCLP_GUROBI')
"""
Explanation: <font size='5' face='Times New Roman'><b>2.1b <u>Instantiate Selected Gurobi Set Cover shapefile</u></b></font>
End of explanation
"""
t1 = time.time()
# 1. Read In Data
# Cost Vector
Cij = All_Dist_MILES
# Create Aij: Determine Aij (nodes within S)
# S --> 1 = served; 0 = unserved
Aij = []
for i in np.nditer(Cij):
if i <= S:
outtext = 1
else:
outtext = 0
Aij.append(outtext)
Aij = np.array(Aij)
Aij = Aij.reshape(len(Cij),len(Cij[0]))
# Indices & Variable Names
nodes = len(Cij)
Nodes = range(len(Cij))
all_nodes = len(Cij) * len(Cij)
ALL_nodes = range(all_nodes)
x = 'x'
cli_var = []
for i in Nodes:
for j in Nodes:
temp = x + str(j+1)
cli_var.append(temp)
client_var = np.array(cli_var)
results_var = []
for i in Nodes:
temp = x + str(i+1)
results_var.append(temp)
# 2. Create Model and Add Variables
# Create Model
mSCLP_CPLEX = cp.Cplex()
# Problem Name
mSCLP_CPLEX.set_problem_name('\n -- Set Cover Location Problem -- ')
print mSCLP_CPLEX.get_problem_name()
# Problem Type ==> Linear Programming
mSCLP_CPLEX.set_problem_type(mSCLP_CPLEX.problem_type.LP)
# Set MIP Emphasis to '2' --> Optimal
mSCLP_CPLEX.parameters.emphasis.mip.set(2)
print mSCLP_CPLEX.parameters.get_changed()
print '\nProblem Type\n ' + str(mSCLP_CPLEX.problem_type[mSCLP_CPLEX.get_problem_type()])
# Objective Function Sense ==> Minimize
mSCLP_CPLEX.objective.set_sense(mSCLP_CPLEX.objective.sense.minimize)
print 'Objective Sense\n ' + str(mSCLP_CPLEX.objective.sense[mSCLP_CPLEX.objective.get_sense()])
# Add Client Decision Variables
mSCLP_CPLEX.variables.add(names = [cli_var[i] for i in Nodes],
obj = [1] * nodes,
lb = [0] * nodes,
ub = [1] * nodes,
types = ['B'] * nodes)
# 3. Add Constraints
#Add Coverage Constraints
for orig in Nodes:
coverage_constraints = cp.SparsePair(ind = [client_var[dest]
for dest in Nodes],
val = [Aij[orig][dest]for dest in Nodes])
mSCLP_CPLEX.linear_constraints.add(lin_expr = [coverage_constraints],
senses = ['G'],
rhs = [1]);
# 4. Optimize and Print Results
mSCLP_CPLEX.solve()
t2C = time.time()-t1
solution = mSCLP_CPLEX.solution
mSCLP_CPLEX.write('CPLEX_LP.lp')
selected = []
dbf1 = ps.open('shapefiles/RandomPoints_SERVICE.dbf')
NEW_Records_SCLP_CPLEX = []
for v in mSCLP_CPLEX.variables.get_names():
if (solution.get_values(v) >
mSCLP_CPLEX.parameters.mip.tolerances.integrality.get()):
var = '%s' % v
selected.append(var)
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_SCLP_CPLEX.append(x)
else:
pass
# solution.get_status() returns an integer code
print 'Solution status = ' , solution.get_status(), ':',
# the following line prints the corresponding string
print solution.status[solution.get_status()]
# Display solution.
print 'Selected Facility Locations = ' , solution.get_objective_value()
print 'Deterministic Time to Build Model in Python and Optimize (ticks): ', mSCLP_CPLEX.get_dettime()
print 'Real Time to Build Model in Python and Optimize (sec.): ', t2C
print '****************************'
for f in results_var:
if (solution.get_values(f) >
mSCLP_CPLEX.parameters.mip.tolerances.integrality.get()):
print ' Facility %s is open' % f
else:
pass #print ' Facility %s is closed' % f
print '****************************'
print '\n----- Date/Time ------------------- ', dt.datetime.now()
print '\n----- Cplex Set Cover Location Problem -----'
print '\n-----\nJames Gaboardi, 2015'
"""
Explanation: <font size='5' face='Times New Roman'><b>2.2a <u>Cplex Set Cover test</u></b></font>
End of explanation
"""
SHP_SCLP_CPLEX = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_SCLP_CPLEX:
SHP_SCLP_CPLEX.point(float(x), float(y))
# Add Fields
SHP_SCLP_CPLEX.field('y_ID')
SHP_SCLP_CPLEX.field('x_ID')
SHP_SCLP_CPLEX.field('LAT')
SHP_SCLP_CPLEX.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_SCLP_CPLEX:
SHP_SCLP_CPLEX.record(idy,idx,x,y)
# Save Shapefile
SHP_SCLP_CPLEX.save('shapefiles/Selected_Locations_SCLP_CPLEX')
"""
Explanation: <font size='5' face='Times New Roman'><b>2.2b <u>Instantiate Selected Cplex Set Cover shapefile</u></b></font>
End of explanation
"""
figsize(10,10)
# Draw Network Actual Roads and Nodes
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
# Draw Graph
nx.draw(g1, ntw.node_coords, node_size=10, alpha=0.5)
# Gurobi Set Cover
SetCover_GUROBI = ps.open('shapefiles/Selected_Locations_SCLP_GUROBI.shp')
points_SetCover_GUROBI = {}
for idx, coords in enumerate(SetCover_GUROBI):
GUROBI_setcover_g.add_node(idx)
points_SetCover_GUROBI[idx] = coords
GUROBI_setcover_g.node[idx] = coords
nx.draw(GUROBI_setcover_g, points_SetCover_GUROBI,
node_size=600, alpha=1, node_color='g')
# Cplex Set Cover
SetCover_CPLEX = ps.open('shapefiles/Selected_Locations_SCLP_CPLEX.shp')
points_SetCover_CPLEX = {}
for idx, coords in enumerate(SetCover_CPLEX):
CPLEX_setcover_g.add_node(idx)
points_SetCover_CPLEX[idx] = coords
CPLEX_setcover_g.node[idx] = coords
nx.draw(CPLEX_setcover_g, points_SetCover_CPLEX,
node_size=300, alpha=1, node_color='r')
# Draw Graph of Random Service
nx.draw(GRAPH_client, points_client,
node_size=15, alpha=.5, node_color='k')
# Draw Graph of Random Service
nx.draw(GRAPH_service, points_service,
node_size=50, alpha=1, node_color='k')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
LEGEND['Graph Vertices']=g1
LEGEND['Graph Edges']=g1
LEGEND['Gurobi Optimal Set Cover']=GUROBI_setcover_g
LEGEND['Cplex Optimal Set Cover']=CPLEX_setcover_g
LEGEND['Client Nodes']=GRAPH_client
LEGEND['Service Nodes']=GRAPH_service
legend(LEGEND, loc='lower right', fancybox=True, framealpha=0.5)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(-84.281, 30.507, 0.0, 0.01, width=.0003, head_width=0.0012,
head_length=0.002, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(-84.2815, 30.52), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
"""
Explanation: <font size='7' face='Times New Roman'><b>3. <u>Selected locations</u></b></font>
<font size='5' face='Times New Roman'><b>3.1 <u>Gurobi & Cplex Set Cover Selected locations </u></b></font>
End of explanation
"""
print '********************************************************'
print ' | Total Facilities to be opened for [ S =', S, ']'
print ' | | Gurobi ------------------ ', mSCLP_GUROBI.objVal, ' '
print ' | | CPLEX ------------------- ', solution.get_objective_value(), ' '
print '-------------------------------------------------------'
NSG = NEW_Records_SCLP_GUROBI
NSC = NEW_Records_SCLP_CPLEX
G_val = mSCLP_GUROBI.objVal
C_val = solution.get_objective_value()
if G_val == C_val:
print ' | Gurobi and CPLEX chose the same number of facilities'
else:
print ' | Gurobi and CPLEX chose different numbers of facilities'
if NSG == NSC:
print ' | ***'
print ' | Gurobi and CPLEX chose the same facilities'
else:
print ' | ***'
print ' | Gurobi and CPLEX chose different facilities'
print '-------------------------------------------------------'
print ' | Total Time to Build Model and Optimize: (seconds) '
print ' | | Gurobi ------------------ ', t2G, ' '
print ' | | CPLEX ------------------- ', t2C, ' '
print '********************************************************'
"""
Explanation: <font size='5' face='Times New Roman'><b>3.2 <u>Optimized Values</u></b></font>
End of explanation
"""
print '********************************************************'
print ' | Platform Specs: |'
print ' | | OS X Yosemite v10.10.4 |'
print ' | | MacBook Pro (Retina, 13-inch, Early 2015) |'
print ' | | Processor: 3.1 GHz Intel Core i7 |'
print ' | | Memory: 16 GB 1867 MHz DDR3 |'
print '********************************************************'
print ' | | Date/Time --------- ', dt.datetime.now(), '|'
print '********************************************************'
"""
Explanation: <font size='5' face='Times New Roman'><b>3.3 <u>System Information</u></b></font>
End of explanation
"""
|
kdmurray91/kwip-experiments
|
writeups/coalescent/50reps_2016-05-18/sqrt-dist.ipynb
|
mit
|
expts = list(map(lambda fp: path.basename(fp.rstrip('/')), glob('data/*/')))
print("Number of replicate experiments:", len(expts))
def process_expt(expt):
expt_results = []
def extract_info(filename):
return re.search(r'kwip/(\d\.?\d*)x-(0\.\d+)-(wip|ip).dist', filename).groups()
def r_sqrt(truth, dist):
return sp.stats.pearsonr(truth, np.sqrt(dist))[0]
def rho_sqrt(truth, dist):
return sp.stats.spearmanr(truth, np.sqrt(dist)).correlation
# dict of scale: distance matrix, populated as we go
truths = {}
truth_points = []
sim_points = []
for distfile in glob("data/{}/kwip/*.dist".format(expt)):
cov, scale, metric = extract_info(distfile)
if scale not in truths:
genome_dist_path = 'data/{ex}/all_genomes-{sc}.dist'.format(ex=expt, sc=scale)
truths[scale] = load_sample_matrix_to_runs(genome_dist_path)
exptmat = DistanceMatrix.read(distfile)
rho = distmat_corr(truths[scale], exptmat, stats.spearmanr).correlation
rho2 = distmat_corr(truths[scale], exptmat, rho_sqrt)
r = distmat_corr(truths[scale], exptmat, stats.pearsonr)[0]
r2 = distmat_corr(truths[scale], exptmat, r_sqrt)
if cov == "100" and scale == "0.001" and metric == "wip":
truth_points.append(truths[scale].condensed_form())
sim_points.append(exptmat.condensed_form())
expt_results.append({
"coverage": cov,
"scale": scale,
"metric": metric,
"rho": rho,
"rhosqrt": rho2,
"r": r,
"rsqrt": r2,
"seed": expt,
})
return expt_results, (truth_points, sim_points)
#process_expt('3662')
results = []
truepoints = []
simpoints = []
for res in map(process_expt, expts):
results.extend(res[0])
truepoints.extend(res[1][0])
simpoints.extend(res[1][1])
results = pd.DataFrame(results)
truepoints = np.concatenate(truepoints)
simpoints = np.concatenate(simpoints)
%%R -i truepoints -i simpoints
plot(truepoints, sqrt(simpoints), pch=".")
"""
Explanation: Calculate performance of kWIP
The next bit of Python code calculates the performance of kWIP against the distance between samples calculated from the alignments of their genomes.
This code calculates Spearman's $\rho$ between the off-diagonal elements of the triangular distance matrices.
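As a minimal illustration of that comparison (the matrix names here are placeholders standing for two already-loaded skbio DistanceMatrix objects):
from scipy.stats import spearmanr
rho = spearmanr(truth_dm.condensed_form(), kwip_dm.condensed_form()).correlation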
End of explanation
"""
%%R -i results
results$coverage = as.numeric(as.character(results$coverage))
results$scale = as.numeric(as.character(results$scale))
print(summary(results))
str(results)
%%R
# AND AGAIN WITHOUT SUBSETTING
dat = results %>%
filter(scale==0.001, metric=="wip") %>%
select(coverage, rho, r, rsqrt, rhosqrt)
mdat = melt(dat, id.vars="coverage", variable.name="measure", value.name="corr")
mdat$coverage = as.factor(mdat$coverage)
ggplot(mdat, aes(x=coverage, y=corr)) +
geom_boxplot() +
facet_wrap(~measure) +
theme_bw()
"""
Explanation: Visualisation
Below we see a summary and structure of the data
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb
|
bsd-3-clause
|
import os
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import mne
from mne.preprocessing import find_bad_channels_maxwell
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60)
"""
Explanation: Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
:depth: 2
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save on memory:
End of explanation
"""
fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')
crosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif')
"""
Explanation: Background on SSS and Maxwell filtering
Signal-space separation (SSS) :footcite:TauluKajola2005,TauluSimola2006
is a technique based on the physics
of electromagnetic fields. SSS separates the measured signal into components
attributable to sources inside the measurement volume of the sensor array
(the internal components), and components attributable to sources outside
the measurement volume (the external components). The internal and external
components are linearly independent, so it is possible to simply discard the
external components to reduce environmental noise. Maxwell filtering is a
related procedure that omits the higher-order components of the internal
subspace, which are dominated by sensor noise. Typically, Maxwell filtering
and SSS are performed together (in MNE-Python they are implemented together
in a single function).
Like SSP <tut-artifact-ssp>, SSS is a form of projection. Whereas SSP
empirically determines a noise subspace based on data (empty-room recordings,
EOG or ECG activity, etc) and projects the measurements onto a subspace
orthogonal to the noise, SSS mathematically constructs the external and
internal subspaces from spherical harmonics_ and reconstructs the sensor
signals using only the internal subspace (i.e., does an oblique projection).
<div class="alert alert-danger"><h4>Warning</h4><p>Maxwell filtering was originally developed for Elekta Neuromag® systems,
and should be considered *experimental* for non-Neuromag data. See the
Notes section of the :func:`~mne.preprocessing.maxwell_filter` docstring
for details.</p></div>
The MNE-Python implementation of SSS / Maxwell filtering currently provides
the following features:
Basic bad channel detection
(:func:~mne.preprocessing.find_bad_channels_maxwell)
Bad channel reconstruction
Cross-talk cancellation
Fine calibration correction
tSSS
Coordinate frame translation
Regularization of internal components using information theory
Raw movement compensation (using head positions estimated by MaxFilter)
cHPI subtraction (see :func:mne.chpi.filter_chpi)
Handling of 3D (in addition to 1D) fine calibration files
Epoch-based movement compensation as described in
:footcite:TauluKajola2005 through :func:mne.epochs.average_movements
Experimental processing of data from (un-compensated) non-Elekta
systems
Using SSS and Maxwell filtering in MNE-Python
For optimal use of SSS with data from Elekta Neuromag® systems, you should
provide the path to the fine calibration file (which encodes site-specific
information about sensor orientation and calibration) as well as a crosstalk
compensation file (which reduces interference between Elekta's co-located
magnetometer and paired gradiometer sensor units).
End of explanation
"""
raw.info['bads'] = []
raw_check = raw.copy()
auto_noisy_chs, auto_flat_chs, auto_scores = find_bad_channels_maxwell(
raw_check, cross_talk=crosstalk_file, calibration=fine_cal_file,
return_scores=True, verbose=True)
print(auto_noisy_chs) # we should find them!
print(auto_flat_chs) # none for this dataset
"""
Explanation: Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
<div class="alert alert-danger"><h4>Warning</h4><p>It is critical to mark bad channels in ``raw.info['bads']`` *before*
calling :func:`~mne.preprocessing.maxwell_filter` in order to prevent
bad channel noise from spreading.</p></div>
Let's see if we can automatically detect it.
End of explanation
"""
bads = raw.info['bads'] + auto_noisy_chs + auto_flat_chs
raw.info['bads'] = bads
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.find_bad_channels_maxwell` needs to operate on
a signal without line noise or cHPI signals. By default, it simply
applies a low-pass filter with a cutoff frequency of 40 Hz to the
data, which should remove these artifacts. You may also specify a
different cutoff by passing the ``h_freq`` keyword argument. If you
set ``h_freq=None``, no filtering will be applied. This can be
useful if your data has already been preconditioned, for example
using :func:`mne.chpi.filter_chpi`,
:func:`mne.io.Raw.notch_filter`, or :meth:`mne.io.Raw.filter`.</p></div>
Now we can update the list of bad channels in the dataset.
End of explanation
"""
# Only select the data for gradiometer channels.
ch_type = 'grad'
ch_subset = auto_scores['ch_types'] == ch_type
ch_names = auto_scores['ch_names'][ch_subset]
scores = auto_scores['scores_noisy'][ch_subset]
limits = auto_scores['limits_noisy'][ch_subset]
bins = auto_scores['bins']  # The time windows that were evaluated.
# We will label each segment by its start and stop time, with up to 3
# digits before and 3 digits after the decimal place (1 ms precision).
bin_labels = [f'{start:3.3f} – {stop:3.3f}'
for start, stop in bins]
# We store the data in a Pandas DataFrame. The seaborn heatmap function
# we will call below will then be able to automatically assign the correct
# labels to all axes.
data_to_plot = pd.DataFrame(data=scores,
columns=pd.Index(bin_labels, name='Time (s)'),
index=pd.Index(ch_names, name='Channel'))
# First, plot the "raw" scores.
fig, ax = plt.subplots(1, 2, figsize=(12, 8))
fig.suptitle(f'Automated noisy channel detection: {ch_type}',
fontsize=16, fontweight='bold')
sns.heatmap(data=data_to_plot, cmap='Reds', cbar_kws=dict(label='Score'),
ax=ax[0])
[ax[0].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
for x in range(1, len(bins))]
ax[0].set_title('All Scores', fontweight='bold')
# Now, adjust the color range to highlight segments that exceeded the limit.
sns.heatmap(data=data_to_plot,
vmin=np.nanmin(limits), # bads in input data have NaN limits
cmap='Reds', cbar_kws=dict(label='Score'), ax=ax[1])
[ax[1].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
for x in range(1, len(bins))]
ax[1].set_title('Scores > Limit', fontweight='bold')
# The figure title should not overlap with the subplots.
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
"""
Explanation: We called ~mne.preprocessing.find_bad_channels_maxwell with the optional
keyword argument return_scores=True, causing the function to return a
dictionary of all data related to the scoring used to classify channels as
noisy or flat. This information can be used to produce diagnostic figures.
In the following, we will generate such visualizations for
the automated detection of noisy gradiometer channels.
End of explanation
"""
raw.info['bads'] += ['MEG 2313'] # from manual inspection
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>You can use the very same code as above to produce figures for
*flat* channel detection. Simply replace the word "noisy" with
"flat", and replace ``vmin=np.nanmin(limits)`` with
``vmax=np.nanmax(limits)``.</p></div>
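A minimal sketch of that substitution (reusing ``ch_subset`` from the code above; shown here for orientation, not run):
scores_flat = auto_scores['scores_flat'][ch_subset]
limits_flat = auto_scores['limits_flat'][ch_subset]
# build data_to_plot from scores_flat as before, then e.g.:
# sns.heatmap(data=data_to_plot, vmax=np.nanmax(limits_flat), cmap='Reds', cbar_kws=dict(label='Score'), ax=ax[1])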
You can see the un-altered scores for each channel and time segment in the
left subplots, and thresholded scores – those which exceeded a certain limit
of noisiness – in the right subplots. While the right subplot is entirely
white for the magnetometers, we can see a horizontal line extending all the
way from left to right for the gradiometers. This line corresponds to channel
MEG 2443, which was reported as auto-detected noisy channel in the step
above. But we can also see another channel exceeding the limits, apparently
in a more transient fashion. It was therefore not detected as bad, because
the number of segments in which it exceeded the limits was less than 5,
which MNE-Python uses by default.
<div class="alert alert-info"><h4>Note</h4><p>You can request a different number of segments that must be
found to be problematic before
`~mne.preprocessing.find_bad_channels_maxwell` reports them as bad.
To do this, pass the keyword argument ``min_count`` to the
function.</p></div>
Obviously, this algorithm is not perfect. Specifically, on closer inspection
of the raw data after looking at the diagnostic plots above, it becomes clear
that the channel exceeding the "noise" limits in some segments without
qualifying as "bad", in fact contains some flux jumps. There were just not
enough flux jumps in the recording for our automated procedure to report
the channel as bad. So it can still be useful to manually inspect and mark
bad channels. The channel in question is MEG 2313. Let's mark it as bad:
End of explanation
"""
raw_sss = mne.preprocessing.maxwell_filter(
raw, cross_talk=crosstalk_file, calibration=fine_cal_file, verbose=True)
"""
Explanation: After that, performing SSS and Maxwell filtering is done with a
single call to :func:~mne.preprocessing.maxwell_filter, with the crosstalk
and fine calibration filenames provided (if available):
End of explanation
"""
raw.pick(['meg']).plot(duration=2, butterfly=True)
raw_sss.pick(['meg']).plot(duration=2, butterfly=True)
"""
Explanation: To see the effect, we can plot the data before and after SSS / Maxwell
filtering.
End of explanation
"""
head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',
'test_move_anon_raw.pos')
head_pos = mne.chpi.read_head_pos(head_pos_file)
mne.viz.plot_head_positions(head_pos, mode='traces')
"""
Explanation: Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation <tut-bad-channels>.
The heartbeat artifact has also been substantially reduced.
The :func:~mne.preprocessing.maxwell_filter function has parameters
int_order and ext_order for setting the order of the spherical
harmonic expansion of the interior and exterior components; the default
values are appropriate for most use cases. Additional parameters include
coord_frame and origin for controlling the coordinate frame ("head"
or "meg") and the origin of the sphere; the defaults are appropriate for most
studies that include digitization of the scalp surface / electrodes. See the
documentation of :func:~mne.preprocessing.maxwell_filter for details.
Spatiotemporal SSS (tSSS)
An assumption of SSS is that the measurement volume (the spherical shell
where the sensors are physically located) is free of electromagnetic sources.
The thickness of this source-free measurement shell should be 4-8 cm for SSS
to perform optimally. In practice, there may be sources falling within that
measurement volume; these can often be mitigated by using Spatiotemporal
Signal Space Separation (tSSS) :footcite:TauluSimola2006.
tSSS works by looking for temporal
correlation between components of the internal and external subspaces, and
projecting out any components that are common to the internal and external
subspaces. The projection is done in an analogous way to
SSP <tut-artifact-ssp>, except that the noise vector is computed
across time points instead of across sensors.
To use tSSS in MNE-Python, pass a time (in seconds) to the parameter
st_duration of :func:~mne.preprocessing.maxwell_filter. This will
determine the "chunk duration" over which to compute the temporal projection.
The chunk duration effectively acts as a high-pass filter with a cutoff
frequency of $\frac{1}{\mathtt{st_duration}}~\mathrm{Hz}$; this
effective high-pass has an important consequence:
In general, larger values of st_duration are better (provided that your
computer has sufficient memory) because larger values of st_duration
will have a smaller effect on the signal.
If the chunk duration does not evenly divide your data length, the final
(shorter) chunk will be added to the prior chunk before filtering, leading
to slightly different effective filtering for the combined chunk (the
effective cutoff frequency differing at most by a factor of 2). If you need
to ensure identical processing of all analyzed chunks, either:
choose a chunk duration that evenly divides your data length (only
recommended if analyzing a single subject or run), or
include at least 2 * st_duration of post-experiment recording time at
the end of the :class:~mne.io.Raw object, so that the data you intend to
further analyze is guaranteed not to be in the final or penultimate chunks.
Additional parameters affecting tSSS include st_correlation (to set the
correlation value above which correlated internal and external components
will be projected out) and st_only (to apply only the temporal projection
without also performing SSS and Maxwell filtering). See the docstring of
:func:~mne.preprocessing.maxwell_filter for details.
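As an illustrative sketch only (``st_duration=10`` is an arbitrary choice for this 60 s crop, not a recommendation), a tSSS call on this data might look like:
raw_tsss = mne.preprocessing.maxwell_filter(
    raw, cross_talk=crosstalk_file, calibration=fine_cal_file,
    st_duration=10, verbose=True)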
Movement compensation
If you have information about subject head position relative to the sensors
(i.e., continuous head position indicator coils, or :term:cHPI <HPI>), SSS
can take that into account when projecting sensor data onto the internal
subspace. Head position data can be computed using
:func:mne.chpi.compute_chpi_locs and :func:mne.chpi.compute_head_pos,
or loaded with the:func:mne.chpi.read_head_pos function. The
example data <sample-dataset> doesn't include cHPI, so here we'll
load a :file:.pos file used for testing, just to demonstrate:
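Once head positions are available, compensation is requested through the ``head_pos`` parameter, e.g. (a sketch only; it is not run here because this .pos file does not belong to the sample recording):
raw_mc = mne.preprocessing.maxwell_filter(raw, head_pos=head_pos)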
End of explanation
"""
|
boffi/boffi.github.io
|
dati_2016/08/Subspace2.ipynb
|
mit
|
def redeigh(K, M, phi):
"""Solves the reduced eigenproblem in subspace iteration method.
Input: phi, a 2-d array containing the current subspace;
    output: 1. 1-d array of eigenvalue estimates;
2. 2-d array of eigenvector estimates in Ritz coordinates."""
# compute the reduced matrices
Mr = phi.T*M*phi
Kr = phi.T*K*phi
# solve the reduced eigenproblem, using a library function
return eigh(Kr,Mr)
def plot_eigenvectors(evecs):
floors = range(shape(evecs)[0]+1)
for i,v in enumerate(evecs.transpose()):
pl.plot(floors,
concatenate(([0],ravel(v))),
linewidth=2, label='Mode #%d'%(i+1,))
pl.hlines(0,0,floors[-1],colors='#777777')
ymn, ymx = pl.ylim() ; pl.ylim(ymn, 1.2*ymx)
pl.xticks(floors) ; pl.legend(loc=9, ncol=5)
pl.show()
def same_sign(arr):
"modifies \"in place\" a 2-D array, forcing the last row to be non-negative."
for col in asarray(arr).transpose():
col *= sign(col[-1])
return None
def error(arr):
err = []
for col in arr.transpose():
err.append(sqrt(sum(ravel(col)*ravel(col))))
return err
"""
Explanation: Following, some utility functions:
<dl>
<dt>redeigh</dt>
<dd>solves the reduced eigenvalue problem</dd>
<dt>plot_eigenvectors</dt>
<dd>given a shear type building, this function plots its eigenvectors starting with a zero displacement at ground level (zero displacement that is not contained in any of the eigenvectors...)</dd>
<dt>same_sign</dt>
<dd>library functions return eigenvectors with arbitrary signs, this function
change the sign of a whole column if the sign of the last element is negative</dd>
<dt>error</dt>
<dd>starting from a matrix of errors, computes the SRSS column by column</dd>
</dl>
End of explanation
"""
story_stiffness = list(range(23,11,-1))
story_stiffness.append(0)
story_stiffness = array(story_stiffness)
y = array(range(13))
print("Storey: ",','.join([" %2d"%(i+1,) for i in y]))
print("Stiffness: ",",".join([" %2d"%(s,) for s in story_stiffness]))
"""
Explanation: Shear-Type Building, Subspace Iteration
We analyse a 2D frame, with negligible flexural deformations in beams and negligible
shear and axial deformation, all the deformations are due to the lateral flexibility
of columns. Under these assumptions, we can study a dynamic model where we have 1 DOF
for each floor.
Our frame has twelve floors and hence 12 DOF. DOF are numbered from the bottom up to
the top of the frame.
We need only the lowest 4 eigenvalues-eigenvectors, so we can use the subspace iteration method with a base
phi that's a 12x4 array.
The floor masses are all the same, while the story stiffnesses are decreasing with height, starting from
23$k$ for storey 1, i.e., between the ground and the first floor, down to 12$k$ for the top storey.
We start by creating a list containing the lateral stiffnesses, taking into account also the (zero!) stiffness of the (non-existing) storey above the top, as this trick is handy when defining the coefficients of the stiffness matrix.
End of explanation
"""
M = matrix(eye(12))
ss = story_stiffness
K = (diag(+ss[:-1] + ss[1:]) +
diag(-ss[1:-1], k=+1) +
diag(-ss[1:-1], k=-1) ) * 1.0
K = matrix(K)
D = K.I*M
print ("normalized mass matrix M/m:")
print (M)
print ()
print ("normalized stiffness matrix K/k:")
print (K)
"""
Explanation: We construct the structural matrices: M is the identity matrix, while K is built as the superposition of three
matrices, for each of which we specify a single diagonal. The index set [:-1] means from the first element to the penultimate,
[1:] means from the second to the last, and [1:-1] means from the second to the penultimate.
While we are at it, we also compute the dynamic matrix D = $K^{-1}M$.
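As a minimal illustration of this indexing trick, consider a hypothetical 3-storey example (not part of the frame above):
ss3 = array([3, 2, 1, 0])
K3 = diag(ss3[:-1] + ss3[1:]) + diag(-ss3[1:-1], k=+1) + diag(-ss3[1:-1], k=-1)
# K3 --> [[ 5, -2, 0], [-2, 3, -1], [ 0, -1, 1]]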
End of explanation
"""
evals, evecs = eigh(K,M,eigvals=(0,3))
same_sign(evecs)
print("first four eigenvalues of the system")
print(evals,"\n")
print("first four eigenvectors of the system")
print(evecs)
"""
Explanation: To have something to compare our approximate results with, we compute, using a library function, a very
good approximation to the eigenvalues and eigenvectors of our system.
End of explanation
"""
random.seed(8+8+1988) # good luck
phi = matrix(random.random((12,4)))-0.5
print("initial subspace vectors")
print(phi)
"""
Explanation: The initial base is a 12x4 matrix, with linearly independent columns. To start with a very bad set,
we choose a matrix of random numbers. The call to random.seed insures repeatability of the results.
End of explanation
"""
description = Latex(r'$$\text{error}_i = \sqrt{\textstyle{\sum_j} \Delta\psi_{ji}^2}$$')
def iterate(phi, i):
display(HTML("<h3>Iteration #%2.2d</h3>"%(i,)))
ritz_evals, ritz_evecs = redeigh(K, M, phi)
# "_e" is for "estimate"
evals_e = ritz_evals
evecs_e = phi*ritz_evecs
same_sign(evecs_e) # force the same sign in the last component of evecs
# compute the new base
phi = D*evecs_e
# show what we have done
print("\"Real\" eigenvalues ", evals)
print("Estimated eigenvalues ", evals_e)
print("Relative error (e-r)/r ", (evals_e-evals)/evals_e)
print("2-norm of the difference between estimated eigenvectors and \"real\" ones")
display(description)
print(error(evecs_e-evecs))
display(HTML("<h5>The normalised shapes at iteration #%2.2d</h5>"%(i)))
plot_eigenvectors(evecs_e)
return phi
phi = iterate(phi, 1)
phi = iterate(phi, 2)
phi = iterate(phi, 3)
phi = iterate(phi, 4)
phi = iterate(phi, 5)
"""
Explanation: It's time to iterate, starting from a very bad choice for the initial base vectors.
compute evals and evecs in Ritz coordinates
compute estimates of evals and evecs in natural coordinates (no-op for evals, matrix mult for evecs)
compute new base for next iteration applying matrix iteration procedure to the
estimated eigenvectors
display the current results
End of explanation
"""
|
AjinkyaBhave/CarND_P1_FindLanes
|
P1.ipynb
|
agpl-3.0
|
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import os
import math
from scipy import misc
%matplotlib inline
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
"""
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
"""
# Parameters for all helper functions
canny_low = 50
canny_high = 150
kernel_size = 3
rho = 2 # distance resolution of the Hough grid [pixel]
theta = np.pi/180 # angular resolution of the Hough grid [rad]
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
im_y = 540 # height of image
im_x = 960 # width of image
mask_y = int(0.6*im_y) # upper y-coordinate of fillPoly trapezoidal mask region
mask_x1 = int(0.48*im_x)# upper left x-coordinate of fillPoly trapezoidal mask region
mask_x2 = int(0.54*im_x)# upper right x-coordinate of fillPoly trapezoidal mask region
left_x_prev = [] # Holds last valid value of left_x array used in draw_lines()
left_y_prev = [] # Holds last valid value of left_y array used in draw_lines()
right_x_prev = [] # Holds last valid value of right_x array used in draw_lines()
right_y_prev = [] # Holds last valid value of right_y array used in draw_lines()
img_dir = "test_images/"
img_out_dir="test_images_out/"
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
global left_x_prev
global left_y_prev
global right_x_prev
global right_y_prev
global im_y
global im_x
global mask_y
left_slopes = []
right_slopes = []
left_x = []
left_y = []
right_x = []
right_y = []
sl_pt_lt = {}
sl_pt_rt = {}
left_slope = 0
right_slope = 0
for line in lines:
for x1,y1,x2,y2 in line:
slope = np.arctan((y2-y1)/(x2-x1))
slope = slope*180/np.pi
if slope < -15 and slope > -60:
sl_pt_lt[slope] = (x1,x2,y1,y2)
left_slopes.append(slope)
elif slope > 15 and slope < 60:
sl_pt_rt[slope] = (x1,x2,y1,y2)
right_slopes.append(slope)
left_mean = np.mean(left_slopes)
left_std = np.std(left_slopes)
keys_remove = [s for s in sl_pt_lt.keys() if np.abs(left_mean-s) > left_std]
for key_remove in keys_remove:
del sl_pt_lt[key_remove]
if len(list(sl_pt_lt.values())) == 0 :
left_x = left_x_prev.copy()
left_y = left_y_prev.copy()
else:
for val in list(sl_pt_lt.values()):
left_x.extend(val[0:2])
left_y.extend(val[2:])
left_x_prev = left_x.copy()
left_y_prev = left_y.copy()
right_mean = np.mean(right_slopes)
right_std = np.std(right_slopes)
keys_remove = [s for s in sl_pt_rt.keys() if np.abs(right_mean-s) > right_std]
for key_remove in keys_remove:
del sl_pt_rt[key_remove]
if len(list(sl_pt_rt.values())) == 0 :
right_x = right_x_prev.copy()
right_y = right_y_prev.copy()
else:
for val in list(sl_pt_rt.values()):
right_x.extend(val[0:2])
right_y.extend(val[2:])
right_x_prev = right_x.copy()
right_y_prev = right_y.copy()
left_coeff = np.polyfit(left_x, left_y, 1)
pt_l1 = (int((im_y-left_coeff[1])/left_coeff[0]),im_y)
pt_l2 = (int((mask_y-left_coeff[1])/left_coeff[0]), mask_y)
cv2.line(img, pt_l1, pt_l2, color, thickness)
right_coeff = np.polyfit(right_x, right_y,1)
pt_r1 = (int((im_y-right_coeff[1])/right_coeff[0]),im_y)
pt_r2 = (int((mask_y-right_coeff[1])/right_coeff[0]), mask_y)
cv2.line(img, pt_r1, pt_r2, color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
# Function to implement single image processing pipeline
def process_image(img):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# you should return the final output (image where lines are drawn on lanes)
global im_y
global im_x
global mask_y
global mask_x1
global mask_x2
vertices = np.array([[(0,im_y),(mask_x1,mask_y), (mask_x2,mask_y), (im_x,im_y)]], dtype=np.int32)
img_gray = grayscale(img)
img_blur = gaussian_blur(img_gray, kernel_size)
img_edge = canny(img_blur, canny_low, canny_high)
img_crop = region_of_interest(img_edge, vertices)
img_line = hough_lines(img_crop,rho,theta,threshold,min_line_len,max_line_gap)
img_out = weighted_img(img_line, img, α=0.6, β=1., λ=0.)
cv2.line(img_out,(vertices[0][0][0],vertices[0][0][1]),(vertices[0][1][0],vertices[0][1][1]), (0,0,255),2)
cv2.line(img_out,(vertices[0][1][0],vertices[0][1][1]),(vertices[0][2][0],vertices[0][2][1]), (0,0,255),2)
cv2.line(img_out,(vertices[0][2][0],vertices[0][2][1]),(vertices[0][3][0],vertices[0][3][1]), (0,0,255),2)
cv2.line(img_out,(vertices[0][3][0],vertices[0][3][1]),(vertices[0][0][0],vertices[0][0][1]), (0,0,255),2)
return img_out
"""
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
# Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
img_names = os.listdir(img_dir)
for img_name in img_names:
img= mpimg.imread(img_dir+img_name)
img_out = process_image(img)
misc.imsave(img_out_dir+img_name, img_out)
plt.imshow(img_out)
"""
Explanation: Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Build a Lane Finding Pipeline
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
## process_image() has been defined in the Helper Functions cell of this notebook for code reuse with testing images and video.
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
im_y = 720 # height of image
im_x = 1280 # width of image
mask_y = int(0.6*im_y) # upper y-coordinate of fillPoly trapezoidal mask region
mask_x1 = int(0.48*im_x)# upper left x-coordinate of fillPoly trapezoidal mask region
mask_x2 = int(0.54*im_x)# upper right x-coordinate of fillPoly trapezoidal mask region
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
|
ernestyalumni/MLgrabbag
|
sklearn_ML.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import sklearn
from sklearn import datasets
import os, sys
os.getcwd()
os.listdir( os.getcwd() ) ;
import numpy as np
import scipy
import pandas as pd
"""
Explanation: Using scikit-learn (sklearn) for Machine Learning (ML), in combination with numpy, scipy, and pandas
End of explanation
"""
np.random.random(10).shape # testing, playing,
x = np.array([5,3,0,4])
y = np.array([4,4,1,3])
x.shape
slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x,y)
print( slope, intercept, r_value,p_value,std_err)
( (x-y)*(x-y) ).mean()/2.
-1.+0.5*4
"""
Explanation: cf. Linear Regression with One Variable, Coursera
End of explanation
"""
np.identity(5) # eye(5) in Matlab/Octave
"""
Explanation: cf. Week 1, Linear Algebra Review, Coursera, Machine Learning with Ng
I'll take this opportunity to provide a dictionary between the syntax of linear algebra math and numpy
Linear Regression
cf. Week 2 Programming Assignment:Linear Regression, 1st programming assignment, machine-learning-ex1.zip
cf. machine-learning-ex1/ex1/warmUpExercise.m of Week 2 Programming Assignment: Linear Regression of Coursera Machine Learning by Ng, i.e.
1 Simple Octave/MATLAB function
End of explanation
"""
os.listdir( './coursera_Ng/machine-learning-ex1/' )
os.listdir( './coursera_Ng/machine-learning-ex1/ex1' )
"""
Explanation: 2 Linear regression with one variable
cf. machine-learning-ex1/ex1/ex1data1.txt and machine-learning-ex1/ex1.pdf
End of explanation
"""
## ================================== Part 2: Plotting =====================================
print("Plotting Data ... \n")
linregdata = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data1.txt', header=None)
print( linregdata.describe() )
linregdata.head()
X_linreg = linregdata.as_matrix()[:,0] # pandas.DataFrame.as_matrix convert frame to its numpy-array representation
y_linreg = linregdata.as_matrix()[:,1]
m_linreg = len(y_linreg) # number of training examples
print( X_linreg.shape, type(X_linreg))
print( y_linreg.shape, type(y_linreg))
print( m_linreg )
X_linreg
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.plot( X_linreg, y_linreg, 'rx', markersize=10)
"""
Explanation: pandas.read_csv's header parameter defaults to 0 when no names are passed, so we set header=None to avoid treating the first data row as column names.
End of explanation
"""
# np.hstack( ( np.ones((m_linreg,1)), X_linreg.reshape(-1,1) ) ).shape
input_X_linreg = np.vstack( (np.ones(m_linreg), X_linreg ) ).T
input_X_linreg.shape
b = np.ones((m_linreg,2))
b.shape
b[:,:-1] = X_linreg.reshape(m_linreg,1)
theta = np.zeros((2,1)); print( theta )
# Some gradient descent settings
iterations = 1500
alpha = 0.01 # learning rate
print( np.dot( input_X_linreg, theta ).shape )
y_linreg.reshape((len(y_linreg),1)).shape
#(np.dot( input_X_linreg, theta) - y_linreg ).shape
res = np.dot(input_X_linreg,theta) - y_linreg.reshape((len(y_linreg),1))
(res*res).mean()
print( res.shape)
print( input_X_linreg.shape )
print( X_linreg.shape)
input_X_linreg[:,1].reshape( 97,1).shape
# ( res * input_X_linreg[:,1].reshape(m_linreg,1) ).shape
dres = ( res * input_X_linreg )
print(dres.shape)
dres.mean(axis=0)
temp = theta.flatten() - 0.5 * dres.mean(axis=0)
print( temp)
theta - np.vstack( temp)
"""
Explanation: Batch Gradient Descent
cf. machine-learning-ex1/ex1/ex1.m
cf. How to add column to numpy array
"
```
all_data = np.hstack((my_data, new_col))
or
all_data = np.concatenate((my_data, new_col), 1)
```
I believe that the only difference between these three functions (as well as np.vstack) are their default behaviors for when axis is unspecified:
* `concatenate` assumes axis = 0
* `hstack` assumes axis = 1 unless inputs are 1d, then `axis = 0`
* `vstack` assumes axis = 0 after adding an axis if inputs are 1d
* `append` flattens array
"
See also How to add an extra column to an numpy array
"I think a more straightforward solution and faster to boot is to do the following:"
import numpy as np
N = 10
a = np.random.rand(N,N)
b = np.zeros((N,N+1))
b[:,:-1] = a
End of explanation
"""
def computeCost(X,y,theta):
"""
COMPUTECOST Compute cost for linear regression
J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
parameter for linear regression to fit the data points in X and y
"""
# Initialize some useful values
m = len(y) # number of training examples
# You will need to return the following variable correctly
J = 0
# attach a column of 1's to make \theta_0 a feature
input_X_linreg = np.vstack( (np.ones(m), X)).T # take the transpose since the np array is a "row"
# preprocess inputs X,y to make sure numpy array dimensions are correct
target_y_linreg = y.reshape((m,1))
# Compute the cost of a particular choice of theta. Set J to the cost
## assume that theta is a numpy array of dim. 2x1 and not (2,)
predicted_vals = np.dot( input_X_linreg, theta) # X * \theta
res = predicted_vals - target_y_linreg # res for residual
ressq = res * res
J = ressq.mean() * 0.5
return J
computeCost(X_linreg, y_linreg, theta)
"""
Explanation: The objective of linear regression is to minimize the cost function:
$$
J(\theta) = \frac{1}{2m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)^2
$$
End of explanation
"""
def gradientDescent(X, y, theta, alpha, num_iters=1500):
"""
GRADIENTDESCENT Performs gradient descent to learn theta
theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
taking num_iters gradient steps with learning rate alpha
"""
# Initialize some useful values
m = len(y)
J_history = np.zeros(num_iters);
# attach a column of 1's to make \theta_0 a feature
input_X_linreg = np.vstack( (np.ones(m), X)).T
# preprocess inputs X,y to make sure numpy array dimensions are correct
target_y_linreg = y.reshape((m,1))
for iter in range(num_iters):
# perform a single gradient step on the parameter vector theta
## assume that theta is a numpy array of dim. 2x1, and not (2,)
### predicted_vals is h_{\theta}(x^{(i)}), h is the hypothesis
predicted_vals = np.dot( input_X_linreg, theta) # X * \theta
res = predicted_vals - target_y_linreg # res for residual
dres = res * input_X_linreg # it's the partial derivative with respect to each x_j
temp = theta.flatten() - alpha * dres.mean( axis=0)
theta = np.vstack( temp ) # this is effectively numpy's transpose from row to column vector
J_history[iter] = computeCost(X,y,theta)
return theta, J_history
theta
result_1d_graddesc = gradientDescent(X_linreg, y_linreg,theta, 0.01)
%time result_1d_graddesc = gradientDescent(X_linreg, y_linreg,theta, 0.01)
print( result_1d_graddesc[0] )
print( result_1d_graddesc[0][0,0] )
print( result_1d_graddesc[0][1,0])
plt.plot( result_1d_graddesc[1] )
"""
Explanation: Gradient descent
Batch Gradient descent
End of explanation
"""
# Plot the linear fit
# cf. http://matthiaseisen.com/pp/patterns/p0170/ Draw a regression line with matplotlib
fig, ax = plt.subplots()
ax.set_xlabel('Population of City in 10,000s')
ax.set_ylabel('Profit in $10,000s')
#plt.plot( X_linreg, y_linreg, 'rx', markersize=10)
ax.scatter(X_linreg, y_linreg, s=40,c='r',marker='x' ) # s is size in points^2, c is color
ax.plot( X_linreg,
result_1d_graddesc[0][0,0] + X_linreg * result_1d_graddesc[0][1,0],'-')
type(X_linreg); X_linreg.shape
"""
Explanation: cf. 2.2.4 Gradient descent, pp. 7 of ex1.pdf:
"Your final values for $\theta$ will also be used to make predictions on profits in areas of 35,000 and 70,000 people. Note the way that the following lines in ex1.m uses matrix multiplication, rather than explicit summation or looping, to calculate the predictions.
"
End of explanation
"""
# Grid over which we will calculate J
theta0_vals = np.arange(-10,10,20./100.)
theta1_vals = np.arange(-1,4,5./100.)
theta0_vals, theta1_vals = np.meshgrid( theta0_vals, theta1_vals )
print( type( theta0_vals ), type( theta1_vals) )
print( theta0_vals.shape, theta1_vals.shape )
# Note to self: computeCost(X_linreg, y_linreg, theta)
# Fill out J_vals
J_vals = np.array( [[computeCost(X_linreg,y_linreg, np.vstack( np.array( [
theta0_vals[i,j],
theta1_vals[i,j] ] ) ) ) for j in range(100) ]
for i in range(100) ] )
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
surf_fig = plt.figure()
surf_ax = surf_fig.gca(projection='3d')
# Plot the surface
surftheta = surf_ax.plot_surface( theta0_vals, theta1_vals, J_vals,
cmap=cm.coolwarm, linewidth=0, antialiased=False)
# Add a color bar which maps values to colors
surf_fig.colorbar(surftheta, shrink=0.5, aspect=5)
theta
np.vstack( np.array( [ theta0_vals[2,3], theta1_vals[2,3] ] ) )
# Contour plot
plt.figure()
thetacontourplt = plt.contour(theta0_vals, theta1_vals, J_vals,
levels=np.logspace(-2,3,20) )
plt.clabel(thetacontourplt, inline=True, fontsize=10)
plt.xlabel(r'$\theta_0$')
plt.ylabel(r'$\theta_1$')
plt.plot( result_1d_graddesc[0][0,0], result_1d_graddesc[0][1,0], 'rx', markersize=10 )
plt.show()
"""
Explanation: Part 4: Visualizing J (theta_0, theta_1) ===========================================
cf. surface3d_demo.py from mplot3d tutorial - Matplotlib 2.0.0 documentation
End of explanation
"""
## Load Data
print("Loading data ... \n")
linregdata2 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data2.txt', header=None)
print( linregdata2.describe() )
linregdata2.head()
linregdata2.as_matrix([0,1])
X_linreg = linregdata2.as_matrix([0,1]) # pandas.DataFrame.as_matrix convert frame to its numpy-array representation
y_linreg = linregdata2.as_matrix([2])
m_linreg = len(y_linreg) # number of training examples
print( X_linreg.shape, type(X_linreg))
print( y_linreg.shape, type(y_linreg))
print( m_linreg )
"""
Explanation: Linear regression with multiple variables
Linear regression with multiple variables, via Week 2 Programming Exercise, ex1.pdf and ex1_multi.m
End of explanation
"""
print( X_linreg.mean(axis=0) )
y_linreg.mean()
print( X_linreg.std(axis=0) )
y_linreg.std()
normedX_linreg = (X_linreg - X_linreg.mean(axis=0))/X_linreg.std(axis=0)
normedy_linreg = (y_linreg - y_linreg.mean(axis=0))/y_linreg.std(axis=0)
print( normedX_linreg.mean() )
print( normedX_linreg.std() )
print( normedy_linreg.mean() )
normedy_linreg.std()
def featureNormalize(X):
"""
FEATURENORMALIZE Normalizes the features in X
FEATURENORMALIZE(X) returns a normalized version of X where
the mean value of each feature is 0 and the standard deviation
is 1. This is often a good preprocessing step to do when
working with learning algorithms.
"""
# You need to set these values correctly
X_norm = (X-X.mean(axis=0))/X.std(axis=0)
mu = X.mean(axis=0)
sigma = X.std(axis=0)
return [X_norm, mu, sigma]
print( normedX_linreg.shape )
theta = np.zeros((2,1))
np.zeros((2,1)).shape
predicted_val = np.dot( normedX_linreg, theta )
res = predicted_val - y_linreg
%timeit np.dot( res.T, res )/ m_linreg
%timeit (res*res).mean()
def computeCostMulti(X,y,theta):
"""
COMPUTECOSTMULTI Compute cost for linear regression with multiple variables
J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the
parameter for linear regression to fit the data points in X and y
"""
# Initialize some useful values
m = len(y) # number of training examples
# You need to return the following variables correctly
J = 0
# Compute the cost of a particular choice of theta, setting J to the cost
predicted_val = np.dot(X,theta)
res = predicted_val - y
J = np.dot( res.T, res)/m * 0.5
return J
np.dot( np.dot( np.linalg.inv( np.dot( normedX_linreg.T, normedX_linreg) ) , normedX_linreg.T ) , y_linreg)
"""
Explanation: Feature Normalization
cf. 3.1 Feature Normalization of ex1.pdf
End of explanation
"""
def gradientDescentMulti(X, y, theta, alpha, num_iters=1500):
"""
GRADIENTDESCENTMULTI Performs gradient descent to learn theta
theta = GRADIENTDESCENTMULTI(X, y, theta, alpha, num_iters) updates theta by
taking num_iters gradient steps with learning rate alpha
"""
# Initialize some useful values
m = len(y) # number of training examples
J_history = np.zeros((num_iters , 1) )
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X_linreg = np.hstack( (np.ones((m,1)), X))
for iter in range(num_iters):
# perform a single gradient step on the parameter vector theta
predicted_vals = np.dot( input_X_linreg, theta)
res = predicted_vals - y
dres = res * input_X_linreg # it's the partial derivative with respect to each x_j
temp = theta.flatten() - alpha * dres.mean( axis=0)
theta = np.vstack( temp ) # this is effectively numpy's transpose from row to column vector
J_history[iter] = computeCostMulti(input_X_linreg,y,theta)
return theta, J_history
input_X_linreg = np.hstack( ( np.ones((m_linreg,1)), normedX_linreg ) )
predicted_vals_multi = np.dot( input_X_linreg, np.zeros((3,1)) )
predicted_vals_multi.shape
res_multi = predicted_vals_multi - y_linreg
dres_multi = res_multi * input_X_linreg
dres_multi.shape
( np.zeros((3,1)).flatten() - 0.5 * dres_multi.mean(axis=0) ).shape
"""
Explanation: (Batch) gradient descent for multiple variables, multivariate (batch) gradient descent
cf. gradientDescentMulti.m
End of explanation
"""
[normedX_linreg, mu_multi, sigma_multi] = featureNormalize(X_linreg)
theta = np.zeros((3,1))
alpha = 0.01 # Learning rate
num_iters = 400
[theta, J_history] = gradientDescentMulti(normedX_linreg, y_linreg, theta, alpha, num_iters)
%time [theta, J_history] = gradientDescentMulti(normedX_linreg, y_linreg, theta, alpha, num_iters)
plt.plot( J_history)
plt.show()
theta
"""
Explanation: Bottom line for serial, (batch) gradient descent for multiple variables (multiple features)
End of explanation
"""
def normalEqn(X,y):
"""
NORMALEQN Computes the closed-form solution to linear regression
NORMALEQN(X,y) computes the closed-form solution to linear
regression using the normal equations.
"""
normX = np.dot( X.T, X)
normXinverse = np.linalg.inv( normX )
theta = np.dot( np.dot( normXinverse, X.T),y)
return theta
# Add intercept term to X
X_linreg_w_intercept = np.hstack( (np.ones((m_linreg,1)), X_linreg))
normalEqn( X_linreg_w_intercept, y_linreg)
"""
Explanation: Normal Equations
cf. 3.3 Normal Equations, pp. 14 of ex1.pdf
The closed-form solution to linear regression is
$$
\theta = (X^T X)^{-1}X^T \mathbf{y}
$$
End of explanation
"""
midterm_exampow2 = np.array( [ 7921, 5184, 8836, 4761])
midterm_exampow2min = midterm_exampow2.min()
midterm_exampow2max = midterm_exampow2.max()
midterm_exampow2range = float( midterm_exampow2max - midterm_exampow2min )
midterm_exampow2mean = midterm_exampow2.mean()
midterm_exampow2 / midterm_exampow2range
(midterm_exampow2 - midterm_exampow2mean) / midterm_exampow2range
"""
Explanation: Linear Regression with Multiple Variables quiz
cf. https://www.coursera.org/learn/machine-learning/exam/7pytE/linear-regression-with-multiple-variables
End of explanation
"""
# Load the diabetes dataset
diabetes = sklearn.datasets.load_diabetes()
diabetes_X = diabetes.data
diabetes_Y = diabetes.target
#diabetes_X1 = diabetes_X[:,np.newaxis,2]
diabetes_X1 = diabetes_X[:,np.newaxis, 2]
theta = np.zeros((2,1))
%time linreg_diabetes_result = gradientDescent(diabetes_X1.flatten(), diabetes_Y.flatten(),theta,0.01,num_iters=10000)
print(linreg_diabetes_result)
theta = np.zeros((diabetes_X.shape[1]+1,1))
%time linreg_diabetes_result = gradientDescentMulti(diabetes_X, np.vstack(diabetes_Y),theta,0.01,num_iters=10000)
diabetes_X.shape
theta.shape
diabetes_Y.shape
"""
Explanation: Took the quiz today for Week 2. 20170209
EY : Diabetes Sample data from sci-kit learn, sklearn
End of explanation
"""
os.listdir( './coursera_Ng/machine-learning-ex2/' )
os.listdir( './coursera_Ng/machine-learning-ex2/ex2' )
"""
Explanation: Logistic regression
cf. Coursera Programming Assignment: Logistic Regression
End of explanation
"""
## ================================== Part 2: Plotting =====================================
print("Plotting Data ... \n")
logregdata = pd.read_csv('./coursera_Ng/machine-learning-ex2/ex2/ex2data1.txt', header=None)
logregdata.describe()
plt.scatter( logregdata.as_matrix([0]) , logregdata.as_matrix([1]) )
# df.loc[df['column_name'] == some_value]  # pandas boolean-indexing pattern (used below to split classes)
logregdata.head()
# Find Indices of Positive and Negative Examples
logregdata.loc[ logregdata[2] == 0]
# First,
# Find Indices of Positive and Negative Examples
# neg, negative examples, y == 0
plt.scatter( logregdata.loc[ logregdata[2] == 0].as_matrix([0]) ,
logregdata.loc[ logregdata[2] == 0].as_matrix([1]), s=35,c='y',marker='o' , label='Not Admitted')
# parameters for scatter: s : size in points^2, c : color, marker : MarkerStyle, default 'o'
# pos. positive examples, y == 1
plt.scatter( logregdata.loc[ logregdata[2] == 1].as_matrix([0]) ,
logregdata.loc[ logregdata[2] == 1].as_matrix([1]), s=75,c='b',marker='+' , label='Admitted')
plt.xlabel("Exam 1 score")
plt.ylabel("Exam 2 score")
plt.legend(loc=5)
plt.show()
"""
Explanation: pandas.read_csv's header parameter defaults to 0 when no names are passed, so we set header=None to avoid treating the first data row as column names.
1.1 Visualizing the data
cf. pp. 3, 'ex2.pdf' Week 3 Programming Assigment: Logistic Regression for Coursera's Machine Learning Introduction by Ng
End of explanation
"""
def sigmoid(z):
"""
SIGMOID Compute sigmoid function
g = SIGMOID(z) computes the sigmoid of z
"""
g = (1. + np.exp(-z))
g = 1./g
return g
sigmoid( np.array([1,2,3]))
"""
Explanation: cf. sigmoid.m
The sigmoid function is defined as
$$
g(z) = \frac{1}{1 + e^{-z} }
$$
End of explanation
"""
def costFunction(theta, X,y):
"""
COSTFUNCTION Compute cost and gradient for logistic regression
J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
parameter for logistic regression and the gradient of the cost
w.r.t. to the parameters
RETURNS
=======
[J, grad]
"""
# Initialize some useful values
m = len(y) # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros( len(theta))
# Compute the cost of a particular choice of theta.
# You should set J to the cost.
# Compute the partial derivatives and set grad to the partial
# derivatives of the cost w.r.t. each parameter in theta
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X_linreg = np.hstack( (np.ones((m,1)), X))
predicted_vals = np.dot( input_X_linreg, theta ) # h_{\theta}
predicted_vals = sigmoid( predicted_vals )
interpolation = -y * np.log( predicted_vals ) - (1. - y) * np.log( 1. - predicted_vals )
J = interpolation.mean()
res = predicted_vals - y # res for residual
dJ = res * input_X_linreg # res * x_j^{(i)}
grad = dJ.mean(axis=0)
return [J, grad]
len( np.vstack( np.zeros(3) ) )
np.log( logregdata.as_matrix([0]) ).mean()
#logregdata.loc[ logregdata[2] == 1].as_matrix([0])
## ======================= Part 2: Compute Cost and Gradient ==============================
d = logregdata.as_matrix().shape[1] - 1
# Initialize fitting parameters
initial_theta = np.zeros( (d + 1,1) )
X = logregdata.as_matrix( range(d) )
y = logregdata.as_matrix( [d])
# Compute and display initial cost and gradient
[cost, grad] = costFunction( initial_theta , X, y )
print("Cost at initial theta (zeros): %f\n" % cost )
print("Expected cost (approx): 0.693\n")
print("Gradient at initial theta (zeros): \n")
#print(" %f \n" % grad)
print(grad)
print("Expected gradients (approx):\n -0.1000\n -12.0092\n -11.2628\n")
# Compute and display cost and gradient with non-zero theta
test_theta = np.vstack( np.array( [-24., 0.2, 0.2]) )
[cost, grad] = costFunction( test_theta, X,y)
print("\nCost at test theta: %f\n" % cost)
print("Expected cost (approx): 0.218\n")
print("Gradient at test theta: \n")
print( grad)
print("Expected gradients (approx): \n 0.043\n 2.586\n 2.647\n")
## ===================== Part 3: optimizing using fminunc
#
"""
Explanation: Cost function and gradient (for logistic regression)
cf. 1.2.2 Cost function and gradient, pp. 4, ex2.pdf
$$
J(\theta) = \frac{1}{m} \sum_{i=1}^m \left[ -y^{(i)} \log{ (h_{\theta}(x^{(i)})) } - (1-y^{(i)}) \log{ (1- h_{\theta}(x^{(i)})) }\right]
$$
Remember that, for logistic regression hypothesis,
$$
h_{\theta}(x) = g(\theta^Tx )
$$
with $g$ being the sigmoid function, defined as
$$
g(z) = \frac{1}{ 1 + e^{-z} }
$$
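The gradient of this cost, as computed by costFunction above, is
$$
\frac{ \partial J(\theta) }{ \partial \theta_j } = \frac{1}{m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
$$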
End of explanation
"""
def costFunctionReg(theta, X,y,lambda_val):
"""
COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
parameter for regularized logistic regression and the gradient of the cost
w.r.t. to the parameters
RETURNS
=======
[J, grad]
"""
# Initialize some useful values
m = len(y) # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros( len(theta))
# Compute the cost of a particular choice of theta.
# You should set J to the cost.
# Compute the partial derivatives and set grad to the partial
# derivatives of the cost w.r.t. each parameter in theta
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X_linreg = np.hstack( (np.ones((m,1)), X))
predicted_vals = np.dot( input_X_linreg, theta ) # h_{\theta}
predicted_vals = sigmoid( predicted_vals )
interpolation = -y * np.log( predicted_vals ) - (1. - y) * np.log( 1. - predicted_vals )
J = interpolation.mean()
# regularized term
theta1 = theta[1:] # "Note that you should not regularize the parameter $\theta_0$"
reg_term = np.inner( theta1.flatten(), theta1.flatten() ) # \sum_{j=1}^n \theta_j^2
reg_term = lambda_val * (1./(2. * m)) * reg_term
J += reg_term
res = predicted_vals - y # res for residual
dJ = res * input_X_linreg # res * x_j^{(i)}
grad = dJ.mean(axis=0)
# regularization term for gradient : grad_reg_term
grad_reg_term = np.zeros( len(theta))
grad_reg_term[1:] = theta[1:]
grad_reg_term *= lambda_val / float(m)
grad += grad_reg_term
return [J, grad]
initial_theta[1:].shape
np.inner( initial_theta.flatten(), initial_theta.flatten())
"""
Explanation: Regularized logistic regression; 2 Regularized logistic regression
Cost function and gradient; Regularized logistic regression
cf. 2.3 Cost function and gradient
Recall regularized cost function in logistic regression
$$
J(\theta) = \frac{1}{m} \sum_{i=1}^m \left[-y^{(i)} \log{ (h_{\theta}(x^{(i)} )) } - (1-y^{(i)} )\log{ (1-h_{\theta}(x^{(i)} )) } \right] + \frac{ \lambda}{ 2m } \sum_{j=1}^n \theta_j^2
$$
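The gradient keeps the same form, except that $\theta_0$ is not regularized (matching grad_reg_term[0] staying 0 in costFunctionReg above):
$$
\frac{ \partial J(\theta) }{ \partial \theta_0 } = \frac{1}{m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_0^{(i)}
$$
$$
\frac{ \partial J(\theta) }{ \partial \theta_j } = \frac{1}{m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \qquad \text{for } j \geq 1
$$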
End of explanation
"""
os.getcwd()
os.listdir( './coursera_Ng/machine-learning-ex3/' )
os.listdir( './coursera_Ng/machine-learning-ex3/ex3/' )
# Load saved matrices from file
multiclscls_data = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3data1.mat')
multiclscls_data.keys()
print( type(multiclscls_data['y'] ))
print( multiclscls_data['y'].shape )
print(type(multiclscls_data['X']))
print( multiclscls_data['X'].shape )
def sigmoid(z):
"""
SIGMOID Compute sigmoid function
J = SIGMOID(z) computes the sigmoid of z.
"""
g = 1.0 / (1.0 + np.exp(-z))
return g
"""
Explanation: Multi-class Classification
cf. ex3.pdf, Programming Exercise 3: Multi-class Classification and Neural Networks, Machine Learning,
1 Multi-class Classification
End of explanation
"""
def lrCostFunction(theta, X, y, lambda_val):
"""
LRCOSTFUNCTION Compute cost and gradient for logistic regression with
regularization
J = LRCOSTFUNCTION(theta, X, y,lambda_val) computes the cost of using theta as the
parameter for regularized logistic regression and the gradient of the cost
w.r.t. to the parameters
theta
@type : numpy array of matrix size d+1,1 i.e. theta \in \mathbb{R}^{d+1}
@param : "weights"
RETURNS
=======
[J, grad]
"""
# Initialize some useful values
m = len(y) # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros( len(theta))
# Compute the cost of a particular choice of theta.
# You should set J to the cost.
# Compute the partial derivatives and set grad to the partial
# derivatives of the cost w.r.t. each parameter in theta
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X_linreg = np.hstack( (np.ones((m,1)), X))
predicted_vals = np.dot( input_X_linreg, theta ) # h_{\theta}
predicted_vals = sigmoid( predicted_vals )
interpolation = -y * np.log( predicted_vals ) - (1. - y) * np.log( 1. - predicted_vals )
J = interpolation.mean()
res = predicted_vals - y # res for residual
# dJ = res * input_X_linreg # res * x_j^{(i)}
# grad = dJ.mean(axis=0)
dJ = np.dot( input_X_linreg.T , res )
# regularized term
theta1 = theta[1:] # "Note that you should not regularize the parameter $\theta_0$"
reg_term = np.inner( theta1.flatten(), theta1.flatten() ) # \sum_{j=1}^n \theta_j^2
reg_term = lambda_val * (1./(2. * m)) * reg_term
J += reg_term
# res = predicted_vals - y # res for residual
# dJ = res * input_X_linreg # res * x_j^{(i)}
# grad = dJ.mean(axis=0)
# regularization term for gradient : grad_reg_term
grad_reg_term = np.vstack( np.zeros( len(theta)) )
grad_reg_term[1:] = theta[1:]
grad_reg_term *= lambda_val / float(m)
grad = dJ + grad_reg_term
return [J, grad]
y = multiclscls_data['y']
X = multiclscls_data['X']
theta_t = np.vstack( np.array( [-2, -1, 1, 2]) )
X_t = np.array( [i/10. for i in range(1,16)]).reshape((3,5)).T
#X_t = np.hstack( ( np.ones((5,1)), X_t) ) # no need to preprocess the input data X with column of 1's
y_t = np.vstack( np.array( [1,0,1,0,1]))
[J_t, grad_t] = lrCostFunction( theta_t, X_t, y_t, 3);
print( J_t) # Expected cost: 2.534819
"""
Explanation: cf. 1.3 Vectorizing Logistic Regression, 1.3.1 Vectorizing the cost function pp. 4, ex3.pdf
Recall (unregularized) logistic regression:
$$
J(\theta) = \frac{1}{m} \sum_{i=1}^m \left[ -y^{ (i)} \log{ (h_{\theta}(x^{(i)} ) ) } - (1-y^{(i)} )\log{ (1-h_{\theta}(x^{(i)} ) ) } \right]
$$
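A fully vectorized form of the same cost, with $h = g(X\theta)$ computed for all examples at once (this is what the np.dot version of lrCostFunction above implements), is
$$
J(\theta) = \frac{1}{m} \left( -y^T \log{(h)} - (1-y)^T \log{(1-h)} \right), \qquad h = g(X\theta)
$$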
End of explanation
"""
def lrCostFunction(theta, X, y, lambda_val):
"""
LRCOSTFUNCTION Compute cost and gradient for logistic regression with
regularization
J = LRCOSTFUNCTION(theta, X, y,lambda_val) computes the cost of using theta as the
parameter for regularized logistic regression and the gradient of the cost
w.r.t. to the parameters
theta
@type : numpy array of matrix size d+1,1 i.e. theta \in \mathbb{R}^{d+1}
@param : "weights"
RETURNS
=======
[J, grad]
"""
# Initialize some useful values
m = len(y) # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros( len(theta))
# Compute the cost of a particular choice of theta.
# You should set J to the cost.
# Compute the partial derivatives and set grad to the partial
# derivatives of the cost w.r.t. each parameter in theta
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X = np.hstack( (np.ones((m,1)), X))
z = np.dot( input_X, theta )
predicted_vals = sigmoid( z ) # h_{\theta}
interpolation = - np.dot( y.T, np.log( predicted_vals) ) - np.dot( (1. -y).T, np.log( 1. - predicted_vals ))
J = interpolation[0][0] /float(m)
res = predicted_vals - y # res for residual
dJ = np.dot( input_X.T , res )
# regularized term
theta1 = theta[1:] # "Note that you should not regularize the parameter $\theta_0$"
reg_term = np.inner( theta1.flatten(), theta1.flatten() ) # \sum_{j=1}^n \theta_j^2
reg_term = lambda_val * (1./(2. * m)) * reg_term
J += reg_term
# regularization term for gradient : grad_reg_term
grad_reg_term = np.vstack( np.zeros( len(theta)) )
grad_reg_term[1:] = theta[1:]
grad_reg_term *= lambda_val / float(m)
grad = dJ + grad_reg_term
return [J, grad]
[J_t, grad_t] = lrCostFunction( theta_t, X_t, y_t, 3);
print( J_t) # Expected cost: 2.534819
X_t.shape
input_X_t = np.hstack( (np.ones((5,1)), X_t))
z = np.dot( input_X_t, theta_t )
predicted_vals = sigmoid( z )
print( y_t.shape )
print( predicted_vals.shape )
.1 - y_t
y_t
interpolation = - np.dot( y_t.T, np.log( predicted_vals) ) - np.dot( (1. -y_t).T, np.log( 1. - predicted_vals ))
interpolation[0][0]
grad_t
"""
Explanation: Let's try to "vectorize" this more.
End of explanation
"""
y = multiclscls_data['y']
X = multiclscls_data['X']
num_labels= 10
lambda_val = 0.1
from scipy import optimize
print( X.shape )
#print( np.zeros( (5, 8) ) )
#initial_theta = np.vstack( np.zeros( ( X.shape[1]+1, 1)) )
theta_t = np.array( [-2, -1, 1, 2])
X_t = np.array( [i/10. for i in range(1,16)]).reshape((3,5)).T
#X_t = np.hstack( ( np.ones((5,1)), X_t) ) # no need to preprocess the input data X with column of 1's
y_t = np.vstack( np.array( [1,0,1,0,1]))
input_X_t = np.hstack( (np.ones(( len(y_t), 1)), X_t))
print( np.dot( input_X_t, theta_t).shape )
print( np.dot( y_t.T, np.log( sigmoid( np.dot( input_X_t, theta_t) ) ))[0] )
print( np.dot( input_X_t.T, y_t.flatten() ) )
np.inner( theta_t[1:].flatten(), theta_t[1:].flatten() )
theta_t[1:]
def lrCostFunction(theta, X, y, lambda_val):
"""
LRCOSTFUNCTION Compute cost and gradient for logistic regression with
regularization
J = LRCOSTFUNCTION(theta, X, y,lambda_val) computes the cost of using theta as the
parameter for regularized logistic regression and the gradient of the cost
w.r.t. to the parameters
theta
@type : numpy array of matrix size (d+1,) i.e. theta \in \mathbb{R}^{d+1}
@param : "weights"
X
@type : numpy array of matrix size (m,d), i.e. X \in \text{Mat}_{\mathbb{R}}( d,m)
@param : input data i.e. input training data i.e. training examples
y
@type : numpy array of matrix size (m,1), i.e. X \in \mathbb{R}^m
@param : test data i.e. test values i.e. test examples
RETURNS
=======
J
"""
# Initialize some useful values
m = len(y) # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros( len(theta))
# Compute the cost of a particular choice of theta.
# You should set J to the cost.
# Compute the partial derivatives and set grad to the partial
# derivatives of the cost w.r.t. each parameter in theta
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X = np.hstack( (np.ones((m,1)), X))
z = np.dot( input_X, theta ) # matrix size or dim. of (m,) dim(z) = m x 0
predicted_vals = sigmoid( z ) # h_{\theta}
interpolation = - np.dot( y.T, np.log( predicted_vals) ) - np.dot( (1. -y).T, np.log( 1. - predicted_vals )) # matrix size or dim. of (1,), i.e. dim(interpolation) = 1x0
J = interpolation[0] /float(m) # scalar value
res = predicted_vals - y.flatten() # res for residual, matrix size or dim. of (m,) dim(res) = mx0
dJ = np.dot( input_X.T , res ) # matrix size or dim. of (d+1,), i.e. dim(dJ) = (d+1)x0
# regularized term
theta1 = theta[1:] # "Note that you should not regularize the parameter $\theta_0$"
reg_term = np.inner( theta1.flatten(), theta1.flatten() ) # \sum_{j=1}^n \theta_j^2
reg_term = lambda_val * (1./(2. * m)) * reg_term
J += reg_term
# regularization term for gradient : grad_reg_term
grad_reg_term = np.zeros( theta.shape ) # matrix size or dim. of (d+1,) dim(grad_reg_term) = (d+1)x0
grad_reg_term[1:] = theta[1:]
grad_reg_term *= lambda_val / float(m)
grad = dJ + grad_reg_term # matrix size or dim. of (d+1,), i.e. dim(grad) = (d+1)x0
return J
def oneVsAll(X, y, num_labels, lambda_value):
"""
ONEVSALL trains multiple logistic regression classifiers and returns all
the classifiers in a matrix all_theta, where the i-th row of all_theta
corresponds to the classifier for label i
[all_theta] = ONEVSALL(X, y, num_labels, lambda_value) trains num_labels
logistic regression classifiers and returns each of these classifiers
in a matrix all_theta, where the i-th row of all_theta corresponds
to the classifier for label i
"""
# Some useful variables
m = X.shape[0]
n = X.shape[1]
# You need to return the following variables correctly
# all_theta = np.zeros( (num_labels, n+1) )
all_theta = []
# Set Initial theta
initial_theta = np.zeros( (n+1,1))
for c in range(1,num_labels+1):
# Use y == c to obtain a vector of 1's and 0's that tell you whether the ground truth is true/false for this class.
y_c = (y == c).astype('float32')
# scipy.optimize.fmin_cg - scipy.optimize.fmin_cg(f, x0, fprime=None,args=()), fprime a function that returns the gradient of f
theta_c = optimize.fmin_cg(lrCostFunction, initial_theta, args=(X,y_c,lambda_value) )
all_theta.append( theta_c)
return all_theta
all_theta_digits = oneVsAll( X,y,10, 0.1)
print(type(all_theta_digits))
print(len(all_theta_digits))
for i in range( len(all_theta_digits)):
print( all_theta_digits[i].shape )
print( all_theta_digits[i][:3], all_theta_digits[i][-3:] )
print( np.array( all_theta_digits).shape )
print( X.shape )
input_X = np.hstack( (np.ones((X.shape[0], 1)), X))
z = np.dot( input_X , np.array( all_theta_digits).T)
h_theta = sigmoid(z)
h_theta.shape
predicted_cls = np.argmax( h_theta, axis=1 )
predicted_cls.shape
print( y.shape )
def predictOneVsAll( all_theta, X):
"""
PREDICT Predict the label for a trained one-vs-all classifier. The labels
are in the range 1..K, where K = all_theta.shape[0], i.e. y \in \lbrace 1 \dots K \rbrace
p = PREDICTONEVSALL(all_theta, X) will return a vector of predictions
for each example in the matrix X. Note that X contains the examples in
rows. all_theta is a matrix where the i-th row is a trained logistic
regression theta vector for the i-th class.
RETURNS:
@type list of 2: numpy array of dims. (m,)
"""
m = X.shape[0] # total number of training examples
num_labels = all_theta.shape[0] # total number of "classes", i.e. y \in \lbrace 1 \dots K \rbrace and num_labels=K
# Add ones to the X data matrix, to include an "intercept"
input_X = np.hstack( (np.ones( (m,1) ), X))
z = np.dot( input_X, np.array( all_theta))
h_theta = sigmoid(z)
predicted_cls = np.argmax( h_theta, axis=1)
predicted_cls_p1 = predicted_cls + 1 # just because of how Python/numpy numbers from 0, as opposed to from 1
return predicted_cls_p1
sum( [predicted_cls[j] == y[j] for j in range(len(y))] )
print( pd.DataFrame( predicted_cls).describe() )
print( pd.DataFrame( y).describe() )
predicted_cls_p1 = predicted_cls + 1
print( predicted_cls[200:240] )
print( predicted_cls_p1[200:240] )
print( y[200:240])
sum( [predicted_cls_p1[j]==y[j] for j in range(len(y))] )[0]/ float( len(y) )
%%timeit all_theta_digits = oneVsAll( X,y,10, 0.1)
print(y )
print( (y == 3).astype('float32') )
optimize.fmin_cg(lrCostFunction, np.zeros( X.shape[1]+1) ,args=(X, (y==3).astype('float32'), .1))
np.asarray((0,0)).shape
"""
Explanation: One-vs-all Classification
cf. 1.4 One-vs-all Classification, pp.8 ex3.pdf
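For a new input $x$, the one-vs-all prediction simply picks the class whose classifier outputs the largest probability (this is the np.argmax step in predictOneVsAll above):
$$
\text{prediction}(x) = \underset{k \in \{1, \dots, K\}}{\arg\max} \ h_{\theta^{(k)}}(x)
$$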
End of explanation
"""
ex3weightsdata = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3weights.mat')
"""
Explanation: Neural Networks
End of explanation
"""
Theta1 = ex3weightsdata['Theta1']
Theta2 = ex3weightsdata['Theta2']
print( Theta1.shape )
print( Theta2.shape )
a = []
a.append( )
def predict(Theta1, Theta2, X):
"""
PREDICT predict the label of an input given a trained neural network
p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
trained weights of a neural network (Theta1, Theta2)
PARAMETERS
==========
Theta1
@type numpy array of size dims. s_2 x (d+1), where s_2 is the number of "units" of "hidden layer", layer 2, with d = number of features
@param Theta1
Theta2
@type numpy array of size dims. M x (s_2 + 1), M=number of output classes
@param Theta2
"""
# Useful values
m = X.shape[0] # total number of training examples, m
num_labels = Theta2.shape[0] # number of classes, i.e. output y \in \lbrace 1, \dots, num_labels \rbrace
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X = np.hstack( (np.ones((m,1)), X)) # size dims. (m,d+1) i.e. m x (d+1)
a_lst = [] # "activation"
a = input_X # a_0^{(1)} already added in input_X step as a_0^{(1)}=1, # a of size dims. (m,d+1)i.e. m x (d+1)
a_lst.append(a)
z = np.dot( Theta1, a.T ) # size dims. s_2 x m
a = sigmoid(z)
a = np.vstack( ( np.ones((1,m)), a)) # size dims. (s_2 +1) x m i.e. ((s_2+1),m)
a_lst.append(a)
z = np.dot( Theta2, a)
a = sigmoid(z)
predicted_vals = np.argmax( a, axis =0)
predicted_vals_p1 = predicted_vals + 1 # add 1 to "scale" outputs to predict y, since Python/numpy counts from 0
return predicted_vals_p1
predict_nn = predict(Theta1,Theta2, X)
print( predict_nn.shape)
pd.DataFrame(predict_nn).describe()
sum( [predict_nn[j]==y[j] for j in range(len(y))] )[0]/ float( len(y) )
m = X.shape[0]; print(m)
num_labels = Theta2.shape[0]; print( num_labels )
input_X = np.hstack( (np.ones( (m,1)), X))
a_lst = []
a = input_X
a_lst.append(a)
z=np.dot( Theta1, a.T)
a=sigmoid(z)
a=np.vstack(( np.ones((1,m)),a))
a_lst.append(a)
z = np.dot( Theta2, a)
a=sigmoid(z) ; print( a.shape )
print( np.argmax( a,axis=0).shape )
pd.DataFrame( np.argmax(a,axis=0) ).describe()
"""
Explanation: Feedforward Propagation and Prediction
cf. 2.2 Feedforward Propagation and Prediction, ex3.pdf
End of explanation
"""
## ==================== Part 1: Loading and Visualizing Data ==============================
# We start the exercise by first loading and visualizing the dataset.
# You will be working with a dataset that contains handwritten digits.
#
# Load Training Data
print("Loading and Visualizing Data ... \n")
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')
ex4data1['X'] == X
ex4data1['y'] == y
ex4data1['y'].shape
"""
Explanation: cf. nnCostFunction.m
End of explanation
"""
# Load the weights into variables Theta1 and Theta2
ex4weightsdata = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4weights.mat')
Theta1 = ex4weightsdata['Theta1']
Theta2 = ex4weightsdata['Theta2']
print( Theta1.shape ) # size dims. s_2 x ( d+1)
print( Theta2.shape ) # size dims. M x (s_2 + 1)
## ==================== Part 3: Compute Cost (Feedforward) ===================================
# To the neural network, you should first start by implementing the
# feedforward part of the neural network that returns the cost only. You
# should complete the code in nnCostFunction.m to return cost. After
# implementing the feedforward to compute the cost, you can verify that
# your implementation is correct by verifying that you get the same cost
# as us for the fixed debugging parameters.
print("\n Feedforward Using Neural Network ... \n")
# Weight regularization parameter (we set this to 0 here).
lambda_val = 0
input_layer_size = 400 # 20x20 Input Images of digits
hidden_layer_size = 25; # 25 hidden units
num_labels = 10
"""
Explanation: ex4.m
End of explanation
"""
def nnCostFunction( Theta1, Theta2, input_layer_size, hidden_layer_size, num_labels, X,y, lambda_val):
"""
NNCOSTFUNCTION Implements the neural network cost function for a two layer
neural network which performs classification
[J grad] = NNCOSTFUNCTION(Theta1, Theta2, input_layer_size, hidden_layer_size, num_labels,X,y, lambda_val)
computes the cost and gradient of the neural network. The parameters for the neural network are "unrolled"
into the vector nn_params and need to be converted back into the weight matrices.
"""
# Useful values
m = X.shape[0] # total number of training examples, m
#num_labels = Theta2.shape[0] # number of classes, i.e. output y \in \lbrace 1, \dots, num_labels \rbrace
d = input_layer_size
K = num_labels
## assume preprocessing is needed (in the case when using pandas for DataFrame, only column of 1 is needed)
input_X = np.hstack( (np.ones((m,1)), X)) # size dims. (m,d+1) i.e. m x (d+1)
a_lst = [] # "activation"
a = input_X # a_0^{(1)} already added in input_X step as a_0^{(1)}=1, # a of size dims. (m,d+1)i.e. m x (d+1)
a_lst.append(a)
z = np.dot( Theta1, a.T ) # size dims. s_2 x m
a = sigmoid(z)
a = np.vstack( ( np.ones((1,m)), a)) # size dims. (s_2 +1) x m i.e. ((s_2+1),m)
a_lst.append(a)
z = np.dot( Theta2, a)
a = sigmoid(z) # size dims. Kxm
# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a
# neural network, we need to recode the labels as vectors containing only values 0 or 1
y_prob = [np.zeros(K) for row in y] # list of 5000 numpy arrays of size dims. (10,)
for i in range( m):
y_prob[i][ y[i]-1] = 1
y_prob = np.array(y_prob) # size dims. (m,K)
# cost function for the neural network (without regularization)
J_theta = (-np.dot( np.log( a) ,y_prob).trace() - np.dot( np.log(1-a),(1-y_prob)).trace() )/m
# cost function with regularization
reg_term = lambda_val / (2. * m) * ( np.sum(Theta1[:,1:] * Theta1[:,1:]) + np.sum(Theta2[:,1:] * Theta2[:,1:]) )
J_theta += reg_term
return a, J_theta
"""
Explanation: cf. 1.3. Feedforward and cost function ex4.pdf
End of explanation
"""
a_test, J_theta_test = nnCostFunction(Theta1,Theta2,input_layer_size,hidden_layer_size,num_labels,X,y,1.)
print( a_test.shape)
J_theta_test
y.shape
#pd.DataFrame( a_test ).describe()
pd.DataFrame( y).describe()
# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a
# neural network, we need to recode the labels as vectors containing only values 0 or 1
y_prob = [ np.zeros( Theta2.shape[0] ) for row in y];
print( len( [0 for row in y] ) )
[0 for row in y][:10]
y_prob[2][ 3]
for i in range( y.shape[0]):
y_prob[i][ y[i]-1 ] = 1
y_prob[800]
print( type(y_prob) )
print( len(y_prob))
y_prob = np.array( y_prob)
print(y_prob.shape)
np.dot( a_test, y_prob).shape
np.dot( a_test,y_prob).trace()
# interpolation = - np.dot( y.T, np.log( predicted_vals) ) - np.dot( (1. -y).T, np.log( 1. - predicted_vals )) # matrix size or dim. of (1,), i.e. dim(interpolation) = 1x0
J_theta_test
y_prob_test = [np.zeros(10) for row in y]
y_prob_test = np.array(y_prob_test)
y_prob_test.shape
print( Theta1.shape )
Theta1[:,-1]
np.sum(Theta1[:,1:] * Theta1[:,1:])
def sigmoidGradient(z):
"""
SIGMOIDGRADIENT returns the gradient of the sigmoid function
evaluated at z
g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
evaluated at z. This should work regardless if z is a matrix or a vector.
In particular, if z is a vector or matrix, you should return
the gradient for each element.
"""
g = sigmoid(z) * (1 - sigmoid(z))
return g
"""
Explanation: cf. 1.4 Regularized cost function
The cost function for neural networks with regularization is given by
$$
J(\theta) = \frac{1}{m} \sum_{i=1}^m \sum_{k=1}^K \left[ -y_k^{(i)} \log{ \left( (h_{\theta}(x^{(i)}))_k \right) } - (1-y_k^{(i)}) \log{ \left( 1 - (h_{\theta}(x^{(i)}))_k \right) } \right] + \frac{\lambda}{2m} \left[ \sum_{j=1}^{25} \sum_{k=1}^{400} \left( \Theta_{j,k}^{(1)} \right)^2 + \sum_{j=1}^{10} \sum_{k=1}^{25} \left( \Theta_{j,k}^{(2)} \right)^2 \right]
$$
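The sigmoidGradient function defined above implements the derivative of the sigmoid,
$$
g'(z) = \frac{d}{dz} g(z) = g(z) \left( 1 - g(z) \right)
$$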
End of explanation
"""
# Randomly initialize the weights to small values
epsilon_init = 0.12
L_out = 400
L_in = 25
W = np.random.uniform( low = -epsilon_init, high= epsilon_init, size=( L_in, L_out) )
W
print( X[3].shape )
print( np.vstack(X[3]).shape )
a = np.vstack( (np.ones( ( 1, np.vstack(X[3]).shape[1]) ) , np.vstack(X[3]) ) )
print( a.shape)
z = np.dot( Theta1,a) # size dims. s_2 x 1
a = sigmoid(z)
a = np.vstack( ( np.ones((1,np.vstack(X[3]).shape[1])), a)) # size dims. (s_2 +1) x m i.e. ((s_2+1),m) with m =1
print(a.shape)
z = np.dot( Theta2, a) # size dim Kx1
a = sigmoid(z) # size dim Kx1
print(a.shape)
"""
a_lst = [] # "activation"
a = input_X # a_0^{(1)} already added in input_X step as a_0^{(1)}=1, # a of size dims. (m,d+1)i.e. m x (d+1)
a_lst.append(a)
z = np.dot( Theta1, a.T ) # size dims. s_2 x m
a = sigmoid(z)
a = np.vstack( ( np.ones((1,m)), a)) # size dims. (s_2 +1) x m i.e. ((s_2+1),m)
a_lst.append(a)
z = np.dot( Theta2, a)
a = sigmoid(z) # size dims. Kxm
"""
X[3].reshape( len(X[3]), 1).shape
np.ones((5,2))
"""
Explanation: 2.2 Random initialization
End of explanation
"""
a = np.arange(60.).reshape(3,4,5)
b = np.arange(24.).reshape(4,3,2)
c = np.tensordot(a,b,axes=([1,0],[0,1]))
d = np.tensordot(a,b,axes=([0,1],[1,0]))
print(c.shape)
print(c)
print(d.shape)
print(d)
c==d
# Feed forward
m = X.shape[0]
d = X.shape[1]
s_2 = Theta1.shape[0]
K = Theta2.shape[0]
print( X.shape) # m x d
print(Theta1.shape) # s_2 x (d+1)
print(Theta2.shape) # K x (s_2+1)
a = np.vstack( ( np.ones( (1,m)), X.T)) # (d+1)xm
z_lst = []
z = np.dot( Theta1, a) # s_2 x m
z_lst.append(z)
a_lst = []
a_lst.append(a)
a_l = np.vstack( (np.ones( (1,m)), sigmoid(z)) ) # (s_2+1)xm
#a_f = np.stack( (a,a_l),axis=0) # ValueError: all input arrays must have the same shape
a_lst.append(a_l)
z = np.dot( Theta2, a_l) # K x m
z_lst.append(z)
a_L = sigmoid(z) # K x m
a_lst.append( a_L )
def feedforward(X, Thetas, L ):
"""
PARAMETERS (INPUTS)
===================
@type X : numpy array of size dims. m x d
@param X : input data
@type Thetas : list of numpy arrays
@param Thetas : list of numpy arrays that are Theta's or weights for each layer l;
note l=1,2,..L-1,but Python lists start counting from 0,1,...L-2
@type L : (positive) integer
@param L : number of layers, e.g. input layer, 1 hidden layer, and output layer is L = 3
"""
m, d = X.shape
# s_l = [Theta.shape[0] for Theta in Thetas ]
# s_l.insert(0,d)
a_lst = []
z_lst = []
a = np.vstack( (np.ones( (1,m)), X.T)) # (d+1)xm
a_lst.append(a)
for l in range(L-2):
z = np.dot(Thetas[l] , a) # s_2 x m
z_lst.append(z)
g = sigmoid(z)
a_l = np.vstack( (np.ones((1,m)), g)) # (s_{l+1}+1)x m
a_lst.append(a_l)
z = np.dot( Thetas[L-2], a_l) # K x m
z_lst.append(z)
a_L = sigmoid(z) # K x m
a_lst.append( a_L)
return z_lst, a_lst
z_lst_test, a_lst_test = feedforward(X, [Theta1,Theta2], 3)
print( len(z_lst_test) , len(a_lst_test) )
print ( [z.shape for z in z_lst], [a.shape for a in a_lst ])
print( a_lst[-1].shape )
print( range(3-2,-1,-1) )
a_lst[-3]
# Backpropagation
delta_lst = []
y_prob = [np.zeros(K) for row in y] # list of 5000 numpy arrays of size dims. (10,)
for i in range( m):
y_prob[i][ y[i]-1] = 1
y_prob = np.array(y_prob).T # size dims. (K,m)
print( y_prob.shape )
print( a_L.shape )
delta_L = a_L - y_prob # size dims. (K,m)
delta_lst = []
delta_lst.insert(0, delta_L)
dg = a_lst[-2] * ( 1. - a_lst[-2]) # size dims. ((s_2+1)xm)
print(dg.shape)
delta_l = np.dot( Theta2.T, delta_L) * dg
print( delta_l.shape )
delta_l = delta_l[1:,:]
delta_lst.insert(0, delta_l )
dg = a_lst[-3] * (1. - a_lst[-3]) # (s_1+1) x m
print( dg.shape) # (s_1+1) x m
delta_l = np.dot( Theta1.T, delta_l) * dg
delta_l = delta_l[1:,:]
delta_lst.insert(0, delta_l)
def backprop(y,z_lst, a_lst, Thetas, L):
"""
Backpropagation
@type L : (positive) integer
@param L : number of layers, e.g. input layer, 1 hidden layer, and output layer is L = 3
"""
m = y.shape[1] # number of training examples
K = a_lst[-1].shape[0] # number of classes
delta_lst = []
y_prob = [np.zeros(K) for row in y] # list of m numpy arrays of size dims. (K,)
for i in range(m):
y_prob[i][y[i]-1] = 1
y_prob = np.array(y_prob).T # size dims. (K,m)
delta_l = a_L - y_prob # size dims (K,m)
delta_lst = []
delta_lst.insert(0, delta_l)
for l in range(L-2,-1,-1): # L-2, L-3,...1,0, corresponding to l=L-1,L-2,...2,1 (just how Python indexes from 0)
dg = a_lst[l] * ( 1- a_lst[l])
delta_l = np.dot( Thetas[l].T, delta_l) * dg
delta_l = delta_l[1:,:]
delta_lst.insert(0,delta_l)
D_lst = []
for l in range(len(Thetas)): # 0,1,..L-2, corresponding to l = 1,2,...L-1
D_lst.append( np.tensordot( a_lst[l], delta_lst[l+1].T, axes=([1],[0])).T/m )
# regularization terms for grad(J)
for l in range(len(Thetas)): # 0,1,...L-2, corresponding to l = 1,2,...L-1
Theta_reg = np.zeros( Thetas[l].shape )
Theta_reg[:,1:] = Thetas[l][:,1:]
Theta_reg = lambda_val * Theta_reg / m
D_lst[l] += Theta_reg
return delta_lst, D_lst
delta_lst_test, D_lst_test = backprop(y,z_lst_test,a_lst_test, [Theta1,Theta2], 3)
print( len(delta_lst_test) )
for delta in delta_lst_test: print( delta.shape )
for Dl in D_lst_test: print(Dl.shape)
print( delta_lst[0].shape)
len(a_lst)
print( delta_l.shape)
print( delta_l[1:,:].shape )
for l in delta_lst: print( l.shape )
for a in a_lst: print(a.shape)
print( a_lst[0].shape )
print( delta_lst[1].shape )
print( np.tensordot( a_lst[0], delta_lst[1].T, axes=([1],[0])).shape )
print( np.tensordot( a_lst[1], delta_lst[2].T, axes=([1],[0])).shape )
D1 = np.tensordot( a_lst[0], delta_lst[1].T, axes=([1],[0])).T/m
D2 = np.tensordot( a_lst[1], delta_lst[2].T, axes=([1],[0])).T/m
np.empty() == None
range(2)
Theta1.shape
np.zeros( Theta1.shape)[:,1:].shape
"""
Explanation: cf. numpy.tensordot - NumPy v1.10 Manual
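For reference (using the notation of ex4.pdf), the tensordot calls in the backprop code above accumulate
$$
\Delta^{(l)} = \delta^{(l+1)} \left( a^{(l)} \right)^T, \qquad
D^{(l)}_{ij} = \frac{1}{m} \Delta^{(l)}_{ij} + \frac{\lambda}{m} \Theta^{(l)}_{ij} \ (j \geq 1), \qquad
D^{(l)}_{i0} = \frac{1}{m} \Delta^{(l)}_{i0}
$$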
End of explanation
"""
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')
Theta_testvals = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4weights.mat')
Theta1_testval = Theta_testvals['Theta1'][:,1:]
b1_testval = Theta_testvals['Theta1'][:,0:1]
Theta2_testval = Theta_testvals['Theta2'][:,1:]
b2_testval = Theta_testvals['Theta2'][:,0:1]
print( np.dot( Theta1_testval, ex4data1['X'].T).shape )
np.tile( b1_testval, (1,5000)).shape
z2 = np.dot( Theta1_testval, ex4data1['X'].T) + np.tile( b1_testval, (1,5000))
z2.shape
z3 = np.dot( Theta2_testval, sigmoid(z2)) + np.tile( b2_testval, (1,5000))
print(z3.shape)
a3 = sigmoid(z3)
ht = a3
yt = ex4data1['y']
yt.shape
m = ex4data1['y'].shape[0]
y_prob = [np.zeros(10) for row in ex4data1['y']] # list of 5000 numpy arrays of size dims. (10,)
for i in range( m):
y_prob[i][ ex4data1['y'][i]-1] = 1
y_prob = np.array(y_prob).T # size dims. (K,m)
print(y_prob.shape)
J = - y_prob * np.log( a3) - (1-y_prob) * np.log( 1- a3)
print( J.shape )
np.sum(J,axis=0).shape
np.mean( np.sum(J,axis=0))
z2
sigmoid(z2)
"""
Explanation: "Breaking out" $\Theta$ weights into a "weight" or parameter matrix and* an intercept vector *
End of explanation
"""
|
CentreForResearchInAppliedLinguistics/clic
|
docs/notebooks/Cheshire/.ipynb_checkpoints/Cheshire objects and methods-checkpoint.ipynb
|
mit
|
# coding: utf-8
import os
from cheshire3.baseObjects import Session
from cheshire3.document import StringDocument
from cheshire3.internal import cheshire3Root
from cheshire3.server import SimpleServer
session = Session()
session.database = 'db_dickens'
serv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml'))
db = serv.get_object(session, session.database)
qf = db.get_object(session, 'defaultQueryFactory')
resultSetStore = db.get_object(session, 'resultSetStore')
idxStore = db.get_object(session, 'indexStore')
"""
Explanation: Basic Cheshire objects and methods
This notebook documents how one can use Cheshire to query the CLiC database.
It fills in some gaps in the official Cheshire documentation and
provides a number of very specific, hands-on examples.
Author: Johan de Joode
Dates: 10/2/2015
Database: Dickens
Setup
End of explanation
"""
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" and/cql.proxinfo c3.chapter-idx = "fog"')
"""
Explanation: Querying
Build a query. This does not hit the database itself.
End of explanation
"""
print query.toCQL()
print query.toXCQL()
"""
Explanation: A query can be printed as CQL or as XCQL
End of explanation
"""
result_set = db.search(session, query)
result_set
"""
Explanation: To search the database using this particular query, one needs to use the search method on a database object. This spits out a result set.
End of explanation
"""
print len(result_set)
"""
Explanation: Handling the results
When using the chapter index, the result set is an iterable of results in which each chapter is one result. For the query above, there are thus 35 chapters that match the query.
End of explanation
"""
for result in result_set:
print result
"""
Explanation: Each result in the result set refers to a particular recordStore in which, surprise surprise, the actual chapter is stored.
End of explanation
"""
for result in result_set:
print 'result.id: ', result.id
print 'result.database: ', result.database
print 'result.occurrences: ', result.occurences
print 'result.proxInfo: ', result.proxInfo
print "#########"
for result in result_set:
print result.attributesToSerialize
"""
Explanation: Understanding the results
For each of these results a number of attributes can be accessed using the dot notation. The choices are:
result.attributesToSerialize
result.id
result.recordStore
result.database
result.diagnostic
result.fetch_record
result.proxInfo
result.weight
result.numericId
result.resultSet
result.occurences
result.serialize
result.scaledWeight
In our current setup it seems that results are not weighed.
proxInfo is one of the most important attributes for our purposes.
It describes the proximity information for a hit in a particular record,
or in other words, where in a record the search string can be found.
We currently assume the following values:
* the first item is the id of the root element from which to start counting
  to find the word node: for instance, 0 for a chapter view (because the
  chapter is the root element), but 151 for a search in quotes text.
* the second item in the deepest list (169, 171) is the id of the <w> (word) node
* the third element is the character offset, i.e. the exact character position
  (counting spaces and punctuation, which are stored in <n> (non-word) nodes)
  at which the search term starts
* the fourth element is the total amount of characters in the document
End of explanation
"""
for result in result_set:
rec = result.fetch_record(session)
print type(rec), rec
"""
Explanation: From what I gather, a result in a resultSet is only a pointer to the document and not the document itself.
The latter needs to be fetched and is generally called a record.
Records have the following attributes (most of which seem irrelevant for our purposes and several of which
only return empty strings):
rec.baseUri rec.elementHash rec.get_sax rec.parent rec.rights
rec.byteCount rec.fetch_proxVector rec.get_xml rec.processHistory rec.sax
rec.context rec.fetch_vector rec.history rec.process_xpath rec.size
rec.digest rec.filename rec.id rec.recordStore rec.status
rec.dom rec.get_dom rec.metadata rec.resultSetItem rec.tagName
rec.wordCount rec.xml
End of explanation
"""
for result in result_set:
rec = result.fetch_record(session)
print "rec.id: ", rec.id
print 'rec.wordCount: ', rec.wordCount
print 'rec.get_dom(session): ', rec.get_dom(session)
print "#######"
result_set.attributesToSerialize
result.attributesToSerialize
for result in result_set:
print result.serialize(session)
"""
Explanation: The get_dom(session) method spits out the record in parsed xml.
This is essential for our purposes.
End of explanation
"""
from lxml import etree
rec_tostring = etree.tostring(rec.get_dom(session))
print rec_tostring
"""
Explanation: A record can be transformed into raw xml (in order to understand it), using
a method from lxml:
End of explanation
"""
# find the first occurrence of the term love
# because that is what we are all looking for
love = rec_tostring.find('love')
conc_line = rec_tostring[love-50 : love + len('love') + 50]
conc_line.replace('love', 'LOVE')
"""
Explanation: This could also be used in simple python string manipulations.
For instance, to highlight something in a chapter, or to build
a concordance based on the raw string rather than an xml tree.
In that case one should note that each occurrence of a term is
duplicated, because it is present both in <txt> and in its own word node.
End of explanation
"""
kwicTransformer = db.get_object(session, 'kwic-Txr')
print kwicTransformer
doc = kwicTransformer.process_record(session, rec).get_raw(session)
print doc
from cheshire3.transformer import XmlTransformer
dctxr = db.get_object(session, 'kwic-Txr')
dctxr
doc = dctxr.process_record(session, rec)
print doc.get_raw(session)[:1000]
"""
Explanation: Transforming a result
Rather than manually handling the xml like this, Cheshire has a class called a
Transformer that can perform xsl transformations on the xml of a chapter.
Transformers are defined in a configuration file. In our project they live in an
xsl file.
The following examples use a transformer that was not designed to work with our input,
but they do illustrate how transformers can be invoked.
End of explanation
"""
query = qf.get_query(session, 'c3.book-idx = "LD"')
result_set = db.search(session, query)
chapter_1 = result_set[0]
chapter_44 = result_set[43]
chapter_1
rec = chapter_1.fetch_record(session).get_dom(session)
print rec
rec.attrib
rec.attrib['id']
type(rec)
print rec
doc = kwicTransformer.process_record(session, chapter_1.fetch_record(session)).get_raw(session)
print doc
articleTransformer = db.get_object(session, 'article-Txr')
doc = articleTransformer.process_record(session, chapter_1.fetch_record(session)).get_raw(session)
print doc
#FIXME How can you immediately query for a chapter,
# rather than getting all chapters of a book first?
# --> you need to build a better index for this
query = qf.get_query(session, 'c3.book-idx "LD" and div.id = "LD.1"')
result_set = db.search(session, query)
len(result_set)
#TODO if recordStore's are unique AND they represent chapters, it could also be possible to simply
# get a particular recordStore from Cheshire (without querying the database again).
"""
Explanation: Retrieving a chapter
End of explanation
"""
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" and c3.chapter-idx = "fog" and c3.book-idx = "BH"')
result_set = db.search(session, query)
len(result_set)
"""
Explanation: Searching in a specific book
End of explanation
"""
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "dense fog" \
') #and c3.chapter-idx = "dense"')
rs = db.search(session, query)
len(rs)
for result in rs:
print result.proxInfo
#FIXME it seems that occurences cannot be trusted?
print result.occurences
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "the" \
')
query.addPrefix(query, 'test')
query.toCQL()
"""
Explanation: Messing around
End of explanation
"""
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/proxinfo c3.chapter-idx = "dense fog" \
')
rs = db.search(session, query)
total = 0
for result in rs:
total += len(result.proxInfo)
total
"""
Explanation: Phrase search
End of explanation
"""
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "fog" \
and c3.chapter-idx = "dense"')
rs = db.search(session, query)
len(rs)
"""
Explanation: And search
End of explanation
"""
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "fog" \
or c3.chapter-idx = "dense"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.book-idx = "LD"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx = "he" prox/distance=1/unordered c3.chapter-idx = "said" \
or c3.chapter-idx = "did" or c3.chapter-idx = "wanted"')
rs = db.search(session, query)
len(rs)
#TODO not
#TODO wildcards
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<5/unordered "low voice"')
rs = db.search(session, query)
len(rs)
for result in rs:
print result.proxInfo
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<5/unordered "voice low"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<5/unordered "low high"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<3 "Mr Arthur said"')
rs = db.search(session, query)
len(rs)
"""
Explanation: Or search
End of explanation
"""
query = qf.get_query(session, '(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "dense fog")')
result_set = db.search(session, query)
count = 0
for result in result_set:
record = result.fetch_record(session)
print result.occurences, record #wordCount #.process_xpath('//w[@o=%s]' % result.proxInfo[0][1])
for y in result.proxInfo:
print y
count += 1
#TODO why does proxinfo only have three values here?
# --> because the last any does not have a proxinfo value
"""
Explanation: Proximity Information
End of explanation
"""
from cheshire3.transformer import LxmlQueryTermHighlightingTransformer
"""
Explanation: Term highlighting
End of explanation
"""
|
meta-mind/workspace
|
kaggle/Titanic: Machine Learning from Disaster/scripts/Titanic Machine Learning from Disaster.ipynb
|
mit
|
import pandas as pd
"""
Explanation: Titanic: Machine Learning from Disaster
Get the Data with Pandas
Import the Pandas library
End of explanation
"""
train_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/train.csv"
train = pd.read_csv(train_url)
test_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/test.csv"
test = pd.read_csv(test_url)
"""
Explanation: Load the train and test datasets to create two DataFrames
End of explanation
"""
print(train.head())
print(test.head())
"""
Explanation: Print the 'head' of the train and test dataframes
End of explanation
"""
print(train.shape)
print(test.shape)
print(train.describe())
print(test.describe())
"""
Explanation: Understanding your data
End of explanation
"""
print(train["Survived"].value_counts())
"""
Explanation: Rose vs Jack, or Female vs Male
Passengers that survived vs passengers that passed away
End of explanation
"""
print(train["Survived"].value_counts(normalize=True))
"""
Explanation: As proportions
End of explanation
"""
print(train["Survived"][train["Sex"] == 'male'].value_counts())
"""
Explanation: Males that survived vs males that passed away
End of explanation
"""
print(train["Survived"][train["Sex"] == 'female'].value_counts())
"""
Explanation: Females that survived vs Females that passed away
End of explanation
"""
print(train["Survived"][train["Sex"] == 'male'].value_counts(normalize=True))
"""
Explanation: Normalized male survival
End of explanation
"""
print(train["Survived"][train["Sex"] == 'female'].value_counts(normalize=True))
"""
Explanation: Normalized female survival
End of explanation
"""
train["Child"] = float('NaN')
"""
Explanation: Does age play a role?
Create the column Child and assign to 'NaN'
End of explanation
"""
train["Child"][train["Age"] < 18] = 1
train["Child"][train["Age"] >= 18] = 0
print(train['Child'])
"""
Explanation: Assign 1 to passengers under 18, 0 to those 18 or older. Print the new column.
End of explanation
"""
print(train["Survived"][train["Child"] == 1].value_counts(normalize = True))
"""
Explanation: Print normalized Survival Rates for passengers under 18
End of explanation
"""
print(train["Survived"][train["Child"] == 0].value_counts(normalize = True))
"""
Explanation: Print normalized Survival Rates for passengers 18 or older
End of explanation
"""
test_one = test.copy()
"""
Explanation: First prediction
Create a copy of test: test_one
End of explanation
"""
test_one['Survived'] = 0
"""
Explanation: Initialize a Survived column to 0
End of explanation
"""
test_one['Survived'][test_one['Sex'] == "female"] = 1
print(test_one['Survived'])
"""
Explanation: Set Survived to 1 if Sex equals "female" and print the Survived column from test_one
End of explanation
"""
train["Sex"][train["Sex"] == "male"] = 0
train["Sex"][train["Sex"] == "female"] = 1
test["Sex"][test["Sex"] == "male"] = 0
test["Sex"][test["Sex"] == "female"] = 1
"""
Explanation: Cleaning and Formatting your Data
Convert the male and female groups to integer form
End of explanation
"""
train["Embarked"] = train["Embarked"].fillna('S')
test["Embarked"] = test["Embarked"].fillna('S')
"""
Explanation: Impute the Embarked variable
End of explanation
"""
train["Embarked"][train["Embarked"] == "S"] = 0
train["Embarked"][train["Embarked"] == "C"] = 1
train["Embarked"][train["Embarked"] == "Q"] = 2
test["Embarked"][test["Embarked"] == "S"] = 0
test["Embarked"][test["Embarked"] == "C"] = 1
test["Embarked"][test["Embarked"] == "Q"] = 2
"""
Explanation: Convert the Embarked classes to integer form
End of explanation
"""
print(train["Embarked"])
print(train["Sex"])
print(test["Embarked"])
print(test["Sex"])
"""
Explanation: Print the Sex and Embarked columns
End of explanation
"""
import numpy as np
"""
Explanation: Creating your first decision tree
Import the Numpy library
End of explanation
"""
from sklearn import tree
"""
Explanation: Import 'tree' from scikit-learn library
End of explanation
"""
print(train)
"""
Explanation: Print the train data to see the available features
End of explanation
"""
train[["Pclass", "Sex", "Age", "Fare"]] = train[["Pclass", "Sex", "Age", "Fare"]].fillna(train[["Pclass", "Sex", "Age", "Fare"]].median())
print(train)
"""
Explanation: Fill the NaN values
End of explanation
"""
target = train["Survived"].values
features_one = train[["Pclass", "Sex", "Age", "Fare"]].values
"""
Explanation: Create the target and features numpy arrays: target, features_one
End of explanation
"""
my_tree_one = tree.DecisionTreeClassifier()
my_tree_one = my_tree_one.fit(features_one, target)
"""
Explanation: Fit your first decision tree: my_tree_one
End of explanation
"""
print(my_tree_one.feature_importances_)
print(my_tree_one.score(features_one, target))
"""
Explanation: Look at the importance and score of the included features
End of explanation
"""
#test.Fare[152] = test.Fare.median()
test[["Pclass", "Sex", "Age", "Fare"]] = test[["Pclass", "Sex", "Age", "Fare"]].fillna(test[["Pclass", "Sex", "Age", "Fare"]].median())
"""
Explanation: Predict and submit to Kaggle
Impute the missing value with the median
End of explanation
"""
test_features = test[["Pclass", "Sex", "Age", "Fare"]].values
"""
Explanation: Extract the features from the test set: Pclass, Sex, Age, and Fare.
End of explanation
"""
first_prediction = my_tree_one.predict(test_features)
print(first_prediction)
"""
Explanation: Make your prediction using the test set
End of explanation
"""
PassengerId =np.array(test["PassengerId"]).astype(int)
print(PassengerId.shape)
first_solution = pd.DataFrame(first_prediction, PassengerId, columns = ["Survived"])
print(first_solution)
"""
Explanation: Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
End of explanation
"""
print(first_solution.shape)
"""
Explanation: Check that your data frame has 418 entries
End of explanation
"""
first_solution.to_csv("../submissions/first_solution.csv", index_label = ["PassengerId"])
"""
Explanation: Write your solution to a csv file (here named first_solution.csv)
End of explanation
"""
features_two = train[["Pclass","Age","Sex","Fare", "SibSp", "Parch", "Embarked"]].values
"""
Explanation: Overfitting and how to control it
Create a new array with the added features: features_two
End of explanation
"""
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = 10, min_samples_split = 5, random_state = 1)
my_tree_two = my_tree_two.fit(features_two, target)
"""
Explanation: Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
End of explanation
"""
print(my_tree_two.score(features_two, target))
test_features_two = test[["Pclass","Age","Sex","Fare", "SibSp", "Parch", "Embarked"]].values
second_prediction = my_tree_two.predict(test_features_two)
print(second_prediction)
print(second_prediction.shape)
#PassengerId =np.array(test["PassengerId"]).astype(int)
second_solution = pd.DataFrame(second_prediction, PassengerId, columns = ["Survived"])
print(second_solution)
print(second_solution.shape)
second_solution.to_csv("../submissions/second_solution.csv", index_label = ["PassengerId"])
"""
Explanation: Print the score of the new decision tree
End of explanation
"""
# Create train_two with the newly defined feature
train_two = train.copy()
train_two["family_size"] = train_two["SibSp"] + train_two["Parch"] + 1
# Create a new feature set and add the new feature
features_three = train_two[["Pclass", "Sex", "Age", "Fare", "SibSp", "Parch", "family_size"]].values
# Define the tree classifier, then fit the model
my_tree_three = tree.DecisionTreeClassifier()
my_tree_three = my_tree_three.fit(features_three, target)
# Print the score of this decision tree
print(my_tree_three.score(features_three, target))
"""
Explanation: Feature-engineering for our Titanic data set
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
"""
Explanation: A Random Forest analysis in Python
Import the RandomForestClassifier
End of explanation
"""
features_forest = train[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
target = train["Survived"]
"""
Explanation: We want the Pclass, Age, Sex, Fare, SibSp, Parch, and Embarked variables
End of explanation
"""
forest = RandomForestClassifier(max_depth = 10, min_samples_split=2, n_estimators = 100, random_state = 1)
my_forest = forest.fit(features_forest, target)
"""
Explanation: Building and fitting my_forest
End of explanation
"""
print(my_forest.score(features_forest, target))
"""
Explanation: Print the score of the fitted random forest
End of explanation
"""
test_features = test[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
pred_forest = my_forest.predict(test_features)
print(len(pred_forest))
PassengerId =np.array(test["PassengerId"]).astype(int)
third_solution = pd.DataFrame(pred_forest, PassengerId, columns = ["Survived"])
print(third_solution)
print(third_solution.shape)
third_solution.to_csv("../submissions/third_solution.csv", index_label = ["PassengerId"])
"""
Explanation: Compute predictions on our test set features then print the length of the prediction vector
End of explanation
"""
print(my_tree_two.feature_importances_)
print(my_forest.feature_importances_)
"""
Explanation: Interpreting and Comparing
Request and print the .feature_importances_ attribute
End of explanation
"""
print(my_tree_two.score(features_two, target))
print(my_forest.score(features_forest, target))
"""
Explanation: Compute and print the mean accuracy score for both models
End of explanation
"""
|
jinntrance/MOOC
|
coursera/ml-foundations/week5/Song recommender.ipynb
|
cc0-1.0
|
import graphlab
"""
Explanation: Building a song recommender
Fire up GraphLab Create
End of explanation
"""
song_data = graphlab.SFrame('song_data.gl/')
"""
Explanation: Load music data
End of explanation
"""
song_data.head()
"""
Explanation: Explore data
Music data shows how many times a user listened to a song, as well as the details of the song.
End of explanation
"""
graphlab.canvas.set_target('ipynb')
song_data['song'].show()
len(song_data)
"""
Explanation: Showing the most popular songs in the dataset
End of explanation
"""
users = song_data['user_id'].unique()
for artist in ['Kanye West','Foo Fighters','Taylor Swift','Lady GaGa']:
print artist, len(song_data[song_data['artist'] == artist]['user_id'].unique())
for artist in ['Kings Of Leon','Coldplay','Taylor Swift','Lady GaGa']:
print artist, song_data[song_data['artist'] == artist]['listen_count'].sum()
pop = song_data.groupby(key_columns='artist', operations={'total_count': graphlab.aggregate.SUM('listen_count')})
pop.sort('total_count', ascending=False)
pop.sort('total_count', ascending=True)
len(users)
"""
Explanation: Count number of unique users in the dataset
End of explanation
"""
train_data,test_data = song_data.random_split(.8,seed=0)
"""
Explanation: Create a song recommender
End of explanation
"""
popularity_model = graphlab.popularity_recommender.create(train_data,
user_id='user_id',
item_id='song')
"""
Explanation: Simple popularity-based recommender
End of explanation
"""
popularity_model.recommend(users=[users[0]])
popularity_model.recommend(users=[users[1]])
"""
Explanation: Use the popularity model to make some predictions
A popularity model makes the same prediction for all users, so provides no personalization.
End of explanation
"""
personalized_model = graphlab.item_similarity_recommender.create(train_data,
user_id='user_id',
item_id='song')
subset_test_users = test_data['user_id'].unique()[0:10000]
rec_songs = personalized_model.recommend(users=subset_test_users)
print len(rec_songs)
rec_1song = rec_songs[rec_songs['rank']==1]
res = rec_1song.groupby(key_columns='song', operations={'count': graphlab.aggregate.COUNT()})
print res.sort('count', ascending=False)
print len(rec_songs)
"""
Explanation: Build a song recommender with personalization
We now create a model that allows us to make personalized recommendations to each user.
End of explanation
"""
personalized_model.recommend(users=[users[0]])
personalized_model.recommend(users=[users[1]])
"""
Explanation: Applying the personalized model to make song recommendations
As you can see, different users get different recommendations now.
End of explanation
"""
personalized_model.get_similar_items(['With Or Without You - U2'])
personalized_model.get_similar_items(['Chan Chan (Live) - Buena Vista Social Club'])
"""
Explanation: We can also apply the model to find similar songs to any song in the dataset
End of explanation
"""
%matplotlib inline
model_performance = graphlab.recommender.util.compare_models(test_data,
[popularity_model,personalized_model],
user_sample=0.05)
"""
Explanation: Quantitative comparison between the models
We now formally compare the popularity and the personalized models using precision-recall curves.
End of explanation
"""
|
tangsttw/python_tips_and_notes
|
pandas/pandas.ipynb
|
mit
|
import pandas as pd
import numpy as np
"""
Explanation: pandas
This notebook records some tips for the pandas module
End of explanation
"""
df = pd.DataFrame(np.random.randint(0,100,size=(10, 4)), columns=list('ABCD'))
df
"""
Explanation: Create dataframe
Create a dataframe of random integers
End of explanation
"""
df.loc[3:9:2, 'B':]
"""
Explanation: .loc
use .loc to select both rows and columns by label based indexing. The labels being the values of the index or the columns. Slicing with .loc includes the last element.
End of explanation
"""
df2 = pd.DataFrame([['2017-01-01', 253, 234], ['2017-02-04', 283, 333], ['2017-02-11', 3, 55]], columns=['time', 'data1', 'data2'])
df2
df2.index = pd.to_datetime(df2.pop('time'))
df2
"""
Explanation: Change index and to_datetime
Use to_datetime to convert the 'time' column to pandas's datetime format and set it as the index of the dataframe.
End of explanation
"""
|
prasants/pyds
|
11.Introduction_to_Numpy.ipynb
|
mit
|
import numpy as np
# Create an array with the statement np.array
a = np.array([1,2,3,4])
print('a is of type:', type(a))
print('dimension of a:', a.ndim) # To find the dimension of 'a'
arr1 = np.array([1,2,3,4])
arr1.ndim
arr2 = np.array([[1,2],[2,3],[3,4],[4,5]])
arr2.ndim
# Doesn't make a difference to a computer how you represent it,
# but if humans are going to read your code, this might be useful
arr3 = np.array([[[1,2],[2,3]],
[[2,3],[3,4]],
[[4,5],[5,6]],
[[6,7],[7,8]]
])
arr3.ndim
arr4 = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
arr4.ndim
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Introduction-to-NumPy" data-toc-modified-id="Introduction-to-NumPy-1"><span class="toc-item-num">1 </span>Introduction to NumPy</a></div><div class="lev1 toc-item"><a href="#Creating-Arrays" data-toc-modified-id="Creating-Arrays-2"><span class="toc-item-num">2 </span>Creating Arrays</a></div><div class="lev2 toc-item"><a href="#Built-in-methods-to-generate-an-array" data-toc-modified-id="Built-in-methods-to-generate-an-array-21"><span class="toc-item-num">2.1 </span>Built-in methods to generate an array</a></div><div class="lev3 toc-item"><a href="#Randint" data-toc-modified-id="Randint-211"><span class="toc-item-num">2.1.1 </span>Randint</a></div><div class="lev3 toc-item"><a href="#Random-Array-with-a-Uniform-Distribution" data-toc-modified-id="Random-Array-with-a-Uniform-Distribution-212"><span class="toc-item-num">2.1.2 </span>Random Array with a Uniform Distribution</a></div><div class="lev3 toc-item"><a href="#Random-Array-with-Standard-Normal-Distribution" data-toc-modified-id="Random-Array-with-Standard-Normal-Distribution-213"><span class="toc-item-num">2.1.3 </span>Random Array with Standard Normal Distribution</a></div><div class="lev3 toc-item"><a href="#Array-of-Zeroes" data-toc-modified-id="Array-of-Zeroes-214"><span class="toc-item-num">2.1.4 </span>Array of Zeroes</a></div><div class="lev3 toc-item"><a href="#Array-of-Ones" data-toc-modified-id="Array-of-Ones-215"><span class="toc-item-num">2.1.5 </span>Array of Ones</a></div><div class="lev3 toc-item"><a href="#Identity-Matrix" data-toc-modified-id="Identity-Matrix-216"><span class="toc-item-num">2.1.6 </span>Identity Matrix</a></div><div class="lev3 toc-item"><a href="#Linspace" data-toc-modified-id="Linspace-217"><span class="toc-item-num">2.1.7 </span>Linspace</a></div><div class="lev1 toc-item"><a href="#Quick-Operations-on-Numpy-Arrays" data-toc-modified-id="Quick-Operations-on-Numpy-Arrays-3"><span class="toc-item-num">3 </span>Quick Operations on Numpy Arrays</a></div><div class="lev2 toc-item"><a href="#Selecting-Values" data-toc-modified-id="Selecting-Values-31"><span class="toc-item-num">3.1 </span>Selecting Values</a></div><div class="lev1 toc-item"><a href="#Indexing-2D-Matrices" data-toc-modified-id="Indexing-2D-Matrices-4"><span class="toc-item-num">4 </span>Indexing 2D Matrices</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-41"><span class="toc-item-num">4.1 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-42"><span class="toc-item-num">4.2 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-43"><span class="toc-item-num">4.3 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-44"><span class="toc-item-num">4.4 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-45"><span class="toc-item-num">4.5 </span>Exercise</a></div><div class="lev1 toc-item"><a href="#Fancy-Indexing" data-toc-modified-id="Fancy-Indexing-5"><span class="toc-item-num">5 </span>Fancy Indexing</a></div><div class="lev2 toc-item"><a href="#Method-1:-Boolean-Masks" data-toc-modified-id="Method-1:-Boolean-Masks-51"><span class="toc-item-num">5.1 </span>Method 1: Boolean Masks</a></div><div class="lev2 toc-item"><a href="#Method-2:-Indexing-with-Array-of-Integers" data-toc-modified-id="Method-2:-Indexing-with-Array-of-Integers-52"><span class="toc-item-num">5.2 
</span>Method 2: Indexing with Array of Integers</a></div><div class="lev2 toc-item"><a href="#Take" data-toc-modified-id="Take-53"><span class="toc-item-num">5.3 </span>Take</a></div><div class="lev2 toc-item"><a href="#Broadcasting" data-toc-modified-id="Broadcasting-54"><span class="toc-item-num">5.4 </span>Broadcasting</a></div><div class="lev3 toc-item"><a href="#Broadcasting-Rule" data-toc-modified-id="Broadcasting-Rule-541"><span class="toc-item-num">5.4.1 </span>Broadcasting Rule</a></div><div class="lev1 toc-item"><a href="#Other-Array-Operations" data-toc-modified-id="Other-Array-Operations-6"><span class="toc-item-num">6 </span>Other Array Operations</a></div><div class="lev3 toc-item"><a href="#Copying-Arrays" data-toc-modified-id="Copying-Arrays-601"><span class="toc-item-num">6.0.1 </span>Copying Arrays</a></div><div class="lev3 toc-item"><a href="#Squaring-Arrays" data-toc-modified-id="Squaring-Arrays-602"><span class="toc-item-num">6.0.2 </span>Squaring Arrays</a></div><div class="lev3 toc-item"><a href="#Square-Roots" data-toc-modified-id="Square-Roots-603"><span class="toc-item-num">6.0.3 </span>Square Roots</a></div>
# Introduction to NumPy
From NumPy's [website](http://numpy.org) we have the following description:<br>
>NumPy is the fundamental package for scientific computing with Python. It contains among other things:
>* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* tools for integrating C/C++ and Fortran code
* useful linear algebra, Fourier transform, and random number capabilities <br>
>Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.
You can think of Numpy as standard Python lists on steroids!
<img src="images/roids.jpg"> <br><br>
There are a few reasons why **Numpy is so much faster than lists**:
* Numpy's underlying code is written in C
* The contents of Numpy's arrays are homogenous, i.e., all of of the same `type`
* Numpy's arrays have a smaller memory footprint
And since a Data Scientist is always learning, here's an excellent resource on Arrays - [scipy array tip sheet](http://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/python/arrays.html)
# Creating Arrays
Arrays contain uniform data type, with an arbitrary number of dimensions. What's a dimension? It's just a big word to denote how many levels deep the array goes. <br>
Dimensions are nothing more than lists inside lists inside lists...<br>
<img src="images/inception.jpg">
As we saw earlier with Matplotlib, there are some conventions for importing Numpy too.
End of explanation
"""
a_list = [1,2,3,4,5]
b_list = [5,10,15,20,25]
# Multiplying these will give an error
print(a_list * b_list)
a_list = np.array([1,2,3,4,5])
b_list = np.array([5,10,15,20,25])
print(a_list * b_list)
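# For comparison, a minimal sketch of the pure-Python route: multiplying the two
# sequences element by element needs an explicit loop or comprehension.
looped = [x * y for x, y in zip(a_list, b_list)]
print(looped)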
"""
Explanation: One easy way to tell the number of dimensions - look at the number of square brackets at the beginning. [[ = 2 dimensions. [[[ = 3 dimensions. <br>
Remember, dimensions are nothing more than lists inside lists inside lists...
Why use Numpy Arrays, and not just list? One reason right here.
End of explanation
"""
arr1 = np.arange(16)
print(arr1)
"""
Explanation: Numpy allows for vectorisation, i.e. operations are applied to whole arrays instead of individual elements. To get the results of a_list * b_list using traditional python, you would have had to write a for loop. When dealing with millions or billions of lines of data, that can be inefficient. We will spend some more time on operations of this nature when we get to Broadcasting.
Built-in methods to generate an array
Numpy provides us many methods to generate numbers for our array.
End of explanation
"""
arr1.reshape(4,4)
arr1.reshape(2,8)
arr1.reshape(8,2)
arr1.reshape(16,1)
"""
Explanation: We can even reshape these arrays into our desired shape. But remember, when we say desired shape, we are not speaking of circles or pentagons. Think squares, rectangles, cubes and the like.
End of explanation
"""
np.random.seed(42)
rand_arr = np.random.randint(0,1000,20)
print(rand_arr)
"""
Explanation: The arange feature generates sequential series though. What if we want random numbers?<br><br>
Randint
End of explanation
"""
rand_arr.reshape(5,4)
rand_arr.reshape(4,5)
rand_arr.reshape(2,10)
"""
Explanation: Translating from Python to English: "call the randint function from numpy's random module, select 20 numbers between 0 and 999 at random, and assign the result to an array named rand_arr" - i.e. 0 is included, 1000 is excluded.
<img src="images/simples.gif"> <br><br>
End of explanation
"""
np.random.seed(42)
np.random.rand(5)
np.random.seed(42)
np.random.rand(3,2)
"""
Explanation: Remember, the first number always represents the number of rows.
Random Array with a Uniform Distribution
From the official documentation:
Create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1).
A uniform distribution, is a distribution that has constant probability.
End of explanation
"""
np.random.seed(42)
np.random.randn(5)
"""
Explanation: Random Array with Standard Normal Distribution
From the official documentation:
Return a sample (or samples) from the “standard normal” distribution.
For random samples from $$N(\mu, \sigma^2)$$ use:
sigma * np.random.randn(...) + mu
Don't get scared by the formula - it's actually very simple, and we will cover this in brief later on in the mathematics section.
End of explanation
"""
np.zeros(16)
np.zeros((4,4))
"""
Explanation: Array of Zeroes
End of explanation
"""
np.ones(5)
np.ones((4,4))
"""
Explanation: Array of Ones
End of explanation
"""
np.eye(10)
"""
Explanation: Identity Matrix
An identity matrix is a square matrix, with all the values on the diagonal equal to 1, and the remaining values equal to 0.
End of explanation
"""
# 5 evenly spaced numbers between -5 and 5
np.linspace(-5,5,5)
"""
Explanation: Linspace
From Numpy's official documentation:
Return evenly spaced numbers over a specified interval.
Returns num evenly spaced samples, calculated over the interval [start, stop].
The endpoint of the interval can optionally be excluded.
Here's an interesting discussion on SO about when to use Linspace v range - https://stackoverflow.com/questions/5779270/linspace-vs-range
End of explanation
"""
import numpy as np
np.random.seed(42)
arr1 = np.random.randint(1,1000,100)
arr1 = arr1.reshape(10,10)
arr1.shape
arr1
"""
Explanation: Quick Operations on Numpy Arrays
End of explanation
"""
# Find the highest value in arr1
arr1.max()
# Find the lowest value in arr1
arr1.min()
# Find the location of the highest value in arr1
arr1.argmax()
"""
Explanation: Now imagine this is just a small snippet of a large array with millions, or even billions of numbers. Does that sound crazy? Well, Data Scientists regularly work with large arrays of numbers. Netflix's Data Scientists, for example, deal with a high dimensional sparse matrix. <br><br>
For smaller datasets, let's say the number of people who boarded a particular flight every day for the past hundred days, we have a few useful tools to find the highest or lowest values, and their corresponding locations.
End of explanation
"""
arr1.argmin()
# From earlier
rand_arr = np.random.randint(0,1000,20)
rand_arr
rand_arr = rand_arr.reshape(4,5)
rand_arr.shape
rand_arr
"""
Explanation: Keep in mind that if the maximum (or minimum) value appears more than once, only the position of its first occurrence is returned.
End of explanation
"""
import numpy as np
np.random.seed(42)
arr1 = np.arange(1,6)
arr1
arr1[0]
arr1[0:3]
arr1[-1]
"""
Explanation: Selecting Values
Secret! You already know how to select values from a numpy array.
End of explanation
"""
import numpy as np
np.random.seed(42)
rand_arr = np.random.randint(0,1000,20)
print(rand_arr)
rand_arr = rand_arr.reshape(5,4)
rand_arr
rand_arr[0]
rand_arr[1]
rand_arr[0][-1]
# Another way to write the same thing
rand_arr[0,-1]
"""
Explanation: Remember our old friend, lists?
<img src="images/hello_friend.jpg">
And there you have it - you're already an expert in Numpy Indexing! And very soon, you will learn to be an expert at indexing 2D Matrices too.
Indexing 2D Matrices
End of explanation
"""
import numpy as np
np.random.seed(42)
arr1 = np.arange(1,101)
arr1
arr1 = arr1.reshape(10,10)
arr1
"""
Explanation: Remember, rows before columns. Always!<br>
How do we get entire rows, or snippets of values from rows?<br>
Exactly the same as before. Nothing to worry about here!
End of explanation
"""
# Step 1 - Narrow down the row
arr1[2] # 3rd row
# 26 is at index 5, we need all the numbers from thr 6th column onwards
arr1[2,5:]
"""
Explanation: Exercise
Select 26 to 30
End of explanation
"""
# Step 1: Identify the Row
arr1[7:]
# Now we need the first three columns
arr1[7:,:3]
"""
Explanation: Exercise
Select:<br>
[71, 72, 73]<br>
[81, 82, 83]<br>
[91, 92, 93]
End of explanation
"""
# Your code here
"""
Explanation: Exercise
Select: <br>
[56, 57, 58, 59, 60]<br>
[66, 67, 68, 69, 70]<br>
[76, 77, 78, 79, 80]
End of explanation
"""
# Your code here
"""
Explanation: Exercise
Select:<br>
[ 44, 45]<br>
[ 54, 55]
End of explanation
"""
# Your code here
"""
Explanation: Exercise
Create atleast 4 challenges for yourself, so you can practice indexing.
End of explanation
"""
import numpy as np
np.random.seed(42)
arr1 = np.random.randint(0,1000,100)
arr1
# We check what values are greater than 150
arr1>150
# Assign this operation to a variable x
mask = arr1>150
# Create a new array which subsets arr1 based on a boolean operation
arr2 = arr1[mask]
arr2
# Check the shape
arr2.shape
"""
Explanation: Fancy Indexing
Method 1: Boolean Masks
While there are many ways to index, one of the more common methods that Data Scientists use is Boolean Indexing. You can read more about indexing methods here.
End of explanation
"""
list1 = [1,3,5,7]
list2 = [2,4,6,8]
arr1 = np.arange(1,101)
arr1
arr_even = arr1[list1]
arr_even
# Alternatively
arr_even = arr1[[1,3,5,7]]
arr_even
arr_odd = arr1[list2]
arr_odd
"""
Explanation: Method 2: Indexing with Array of Integers
Don't get intimidated by the big words - it just means indexing by using a Python list.
End of explanation
"""
arr1 = np.arange(1,101)
arr1
indices = [0,2,4,10,20,80,91,97,99]
np.take(arr1, indices)
"""
Explanation: Take
This is similar to Fancy Indexing, but is arguably easier to use, at least for me. I am sure you might develop a preference for this technique too. Additionally, Wes McKinney - the creator of Pandas - reports that "take" is faster than "fancy indexing".
End of explanation
"""
np.take(arr1, [[0, 1], [11, 18]])
"""
Explanation: Works with Multi-Dimensional
End of explanation
"""
arr_1 = np.arange(1,11)
print(arr_1)
print(arr_1 * 10)
"""
Explanation: Broadcasting
Broadcasting is a way for Numpy to work with arrays of different shapes.
The easiest example to explain broadcasting would be to use a scalar value. What's a scalar? A quantity having only magnitude, but not direction. Speed is a scalar, velocity is a vector. For our practical Numpy purposes, scalars are real numbers - 1,2,3,4.....
Broadcasting is fast and efficient because all the underlying looping occurs in C, and happens on the fly without making copies of the data.
End of explanation
"""
arr_1 = np.array([[1,2],[3,4]])
a = 2
arr_1 + a
"""
Explanation: Here we have broadcast 10 to all other elements in the array. Remember Vectorisation? Same principles!
End of explanation
"""
arr1 = np.arange(1,13)
arr1
arr1.shape
arr1 = arr1.reshape(4,3).astype('float')
arr1
"""
Explanation: Broadcasting Rule
What about arrays of different dimensions and/or sizes? Well, for that, we have the broadcasting rule.
In order to broadcast, the trailing axes of both arrays in an operation must either be the same size or one of them must be one.
Umm....
<img src="images/english.jpg">
Ok, let's not offend our friend Samuel Jackson here, so here's what it means in plain English.
End of explanation
"""
arr1.dtype
arr_example = np.array([1,2,3,4])
print(arr_example)
print('arr_example is an',arr_example.dtype)
arr_example = arr_example.astype('float')
print('arr_example is now a',arr_example.dtype)
"""
Explanation: A quick digression, in case you are wondering: the .astype('float') call was just a quick operation to convert integers to floats, which you are already familiar with. If you want to find out the data type of the elements in a numpy array, simply use the .dtype attribute
End of explanation
"""
arr1
arr1.shape
arr2 = np.array([0.0,1.0,2.0])
print(arr2)
print(arr2.shape)
arr1 + arr2
"""
Explanation: Back to our array, arr1
End of explanation
"""
arr3 = np.arange(0,4)
arr3 = arr3.astype('float')
print(arr3)
print(arr3.shape)
# Let's generate our error
arr1 + arr3
"""
Explanation: Do you see what happened here? Our row with 3 elements was added to each 3-element row in arr1.
The 1d array is represented as (3,), but think of it as simply a (3). The trailing axes have to match. So (4,3) and (3) match. What happens when it's (4,3) and (4)? It won't work! Let's prove it here.
End of explanation
"""
arr4 = np.arange(1,6)
arr4
arr4 = arr4.reshape(5,1).astype('float')
arr4.shape
arr2
arr4 * arr2
"""
Explanation: A final example now, with a (5,1) and (3) array. Read the rule once again - and it will be clear that the new array will be a 5X3 array.
End of explanation
"""
a1 = np.arange(1,21)
a1 = a1.reshape(4,5)
a1
# Let's get the first column
a1[:,0]
# Assign to new array
new_a1 = a1[:,0]
new_a1
# Recall that this is how you select all values
new_a1[:] = 42
new_a1
"""
Explanation: Other Array Operations
So let's begin with some good news here. You have already performed some advanced algebraic operations! That's the power of numpy.
Let's look at a few more operations now that come in quite handy.
Copying Arrays
End of explanation
"""
a1
"""
Explanation: So what happened to our original array? Let's find out.
End of explanation
"""
a1_copy = a1.copy()
a1_copy
a1_copy = np.arange(1,21)
a1_copy = a1_copy.reshape(4,5)
a1_copy
a1
"""
Explanation: Why did that happen?! We never touched a1, and even went on to create a whole new array!
This is because Numpy is very efficient in the way it uses memory. If you want a copy, be explicit, else Numpy will make changes to the original array too. Here's how you make a copy.
End of explanation
"""
np.square(a1)
"""
Explanation: Squaring Arrays
End of explanation
"""
np.sqrt(a1)
"""
Explanation: Square Roots
End of explanation
"""
|
dryadb11781/machine-learning-python
|
Classification/ipython_notebook/EX2.ipynb
|
bsd-3-clause
|
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
n_train = 20 # samples for training
n_test = 200 # samples for testing
n_averages = 50 # how often to repeat classification
n_features_max = 75 # maximum number of features
step = 4 # step size for the calculation
def generate_data(n_samples, n_features):
X, y = make_blobs(n_samples=n_samples, n_features=1, centers=[[-2], [2]])
# add non-discriminative features
if n_features > 1:
X = np.hstack([X, np.random.randn(n_samples, n_features - 1)])
return X, y
"""
Explanation: Example 2: Normal and Shrinkage Linear Discriminant Analysis for classification
http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html
Python source code: plot_lda.py
```python
from future import division
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
n_train = 20 # samples for training
n_test = 200 # samples for testing
n_averages = 50 # how often to repeat classification
n_features_max = 75 # maximum number of features
step = 4 # step size for the calculation
def generate_data(n_samples, n_features):
"""Generate random blob-ish data with noisy features.
This returns an array of input data with shape `(n_samples, n_features)`
and an array of `n_samples` target labels.
Only one feature contains discriminative information, the other features
contain only noise.
"""
X, y = make_blobs(n_samples=n_samples, n_features=1, centers=[[-2], [2]])
# add non-discriminative features
if n_features > 1:
X = np.hstack([X, np.random.randn(n_samples, n_features - 1)])
return X, y
acc_clf1, acc_clf2 = [], []
n_features_range = range(1, n_features_max + 1, step)
for n_features in n_features_range:
score_clf1, score_clf2 = 0, 0
for _ in range(n_averages):
X, y = generate_data(n_train, n_features)
clf1 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)
clf2 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage=None).fit(X, y)
X, y = generate_data(n_test, n_features)
score_clf1 += clf1.score(X, y)
score_clf2 += clf2.score(X, y)
acc_clf1.append(score_clf1 / n_averages)
acc_clf2.append(score_clf2 / n_averages)
features_samples_ratio = np.array(n_features_range) / n_train
plt.plot(features_samples_ratio, acc_clf1, linewidth=2,
label="Linear Discriminant Analysis with shrinkage", color='r')
plt.plot(features_samples_ratio, acc_clf2, linewidth=2,
label="Linear Discriminant Analysis", color='g')
plt.xlabel('n_features / n_samples')
plt.ylabel('Classification accuracy')
plt.legend(loc=1, prop={'size': 12})
plt.suptitle('Linear Discriminant Analysis vs. \
shrinkage Linear Discriminant Analysis (1 discriminative feature)')
plt.show()
```
This example demonstrates how scikit-learn uses Linear Discriminant Analysis (LDA) to classify data:
Use sklearn.datasets.make_blobs to generate test data
Use the custom function generate_data to build a dataset with several features, of which only one is informative for classification
Use LinearDiscriminantAnalysis to classify the data
Compare the LDA results with shrinkage turned on and off
(1) Generating the test data
Looking at the code, it starts with the custom function generate_data(n_samples, n_features). Its main purpose is to produce a test dataset with n_samples rows, each with n_features features. Only the first feature can be used to determine the class of the data; the other features are pure noise. make_blobs produces the single informative feature, `np.random.randn` generates the other `n_features - 1` random features, and np.hstack then concatenates X with the randomly generated features "horizontally".
End of explanation
"""
X, y = generate_data(10, 5)
import pandas as pd
pd.set_option('precision',2)
df=pd.DataFrame(np.hstack([y.reshape(10,1),X]))
df.columns = ['y', 'X0', 'X1', 'X2', 'X3', 'X4']
print(df)
"""
Explanation: We can test the custom function with the following code. It returns X (a 10x5 matrix) and y (a vector of 10 elements), and we can inspect the data with the pandas.DataFrame package. Note that only the first feature column of X (X0) has a clear relationship with the target value y: when y is 1, the values are larger.
End of explanation
"""
acc_clf1, acc_clf2 = [], []
n_features_range = range(1, n_features_max + 1, step)
for n_features in n_features_range:
score_clf1, score_clf2 = 0, 0
for _ in range(n_averages):
X, y = generate_data(n_train, n_features)
clf1 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)
clf2 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage=None).fit(X, y)
X, y = generate_data(n_test, n_features)
score_clf1 += clf1.score(X, y)
score_clf2 += clf2.score(X, y)
acc_clf1.append(score_clf1 / n_averages)
acc_clf2.append(score_clf2 / n_averages)
"""
Explanation: (2) Varying the number of features and testing the effect of shrinkage
The code contains two loops: the outer loop varies the number of features, while the inner loop repeats the LDA fit several times to estimate the accuracy. LinearDiscriminantAnalysis is used to train the classifiers, with shrinkage='auto' and shrinkage=None toggling shrinkage on and off; the classifiers are stored as clf1 and clf2 respectively. New test data is then generated and the accuracies are accumulated in score_clf1 and score_clf2, which are divided by the number of repetitions after the inner loop to obtain the averages.
End of explanation
"""
features_samples_ratio = np.array(n_features_range) / n_train
fig = plt.figure(figsize=(10,6), dpi=300)
plt.plot(features_samples_ratio, acc_clf1, linewidth=2,
label="Linear Discriminant Analysis with shrinkage", color='r')
plt.plot(features_samples_ratio, acc_clf2, linewidth=2,
label="Linear Discriminant Analysis", color='g')
plt.xlabel('n_features / n_samples')
plt.ylabel('Classification accuracy')
plt.legend(loc=1, prop={'size': 10})
#plt.suptitle('Linear Discriminant Analysis vs. \
#shrinkage Linear Discriminant Analysis (1 discriminative feature)')
plt.show()
%matplotlib inline
from __future__ import division
'''
from __future__ import division
now:
1 / 2 --> 0.5
4 / 2 --> 2.0
1 // 2 --> 0
4 // 2 --> 2
'''
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
n_train = 20 # samples for training
n_test = 200 # samples for testing
n_averages = 50 # how often to repeat classification
n_features_max = 75 # maximum number of features
step = 4 # step size for the calculation
def generate_data(n_samples, n_features):
"""Generate random blob-ish data with noisy features.
This returns an array of input data with shape `(n_samples, n_features)`
and an array of `n_samples` target labels.
Only one feature contains discriminative information, the other features
contain only noise.
"""
X, y = make_blobs(n_samples=n_samples, n_features=1, centers=[[-2], [2]])
# add non-discriminative features
if n_features > 1:
X = np.hstack([X, np.random.randn(n_samples, n_features - 1)])
return X, y
acc_clf1, acc_clf2 = [], []
n_features_range = range(1, n_features_max + 1, step)
for n_features in n_features_range:
score_clf1, score_clf2 = 0, 0
for _ in range(n_averages):
X, y = generate_data(n_train, n_features)
clf1 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)
clf2 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage=None).fit(X, y)
X, y = generate_data(n_test, n_features)
score_clf1 += clf1.score(X, y)
score_clf2 += clf2.score(X, y)
acc_clf1.append(score_clf1 / n_averages)
acc_clf2.append(score_clf2 / n_averages)
features_samples_ratio = np.array(n_features_range) / n_train
plt.plot(features_samples_ratio, acc_clf1, linewidth=2,
label="Linear Discriminant Analysis with shrinkage", color='r')
plt.plot(features_samples_ratio, acc_clf2, linewidth=2,
label="Linear Discriminant Analysis", color='g')
plt.xlabel('n_features / n_samples')
plt.ylabel('Classification accuracy')
plt.legend(loc=1, prop={'size': 12})
plt.suptitle('Linear Discriminant Analysis vs. \
shrinkage Linear Discriminant Analysis (1 discriminative feature)')
plt.show()
"""
Explanation: (3) Displaying the LDA classification results
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp
|
model_serving/caip-load-testing/02-perf-testing.ipynb
|
apache-2.0
|
%pip install -q -U locust google-cloud-monitoring google-cloud-logging google-cloud-monitoring-dashboards
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: AI Platform Prediction Load Testing using Locust
This Notebook demonstrates how to perform load testing of AI Platform Prediction using Locust. The Notebook covers the following steps:
1. Creating custom log-based metrics in Cloud Monitoring for the Locust logs
2. Creating a Cloud Monitoring Dashboard to display the AI Platform and custom metrics
3. Deploying Locust to a GKE cluster
4. Configuring the Locust test
5. Running the Locust load test
Setup
This notebook was tested on AI Platform Notebooks using the standard TF 2.2 image.
Install required packages
End of explanation
"""
import base64
import os
import time
import datetime
import json
import requests
import numpy as np
import pandas as pd
import google.auth
from typing import List
from google.api_core.exceptions import GoogleAPICallError
from google.cloud import logging_v2
from google.cloud.logging_v2 import MetricsServiceV2Client
from google.cloud.logging_v2 import LoggingServiceV2Client
from google.cloud.monitoring_dashboard.v1.types import Dashboard
from google.cloud.monitoring_dashboard.v1 import DashboardsServiceClient
from google.cloud.monitoring_v3 import MetricServiceClient
from google.cloud.monitoring_v3.query import Query
from google.cloud.monitoring_v3.types import TimeInterval
from google.protobuf.json_format import ParseDict
"""
Explanation: Import libraries
End of explanation
"""
PROJECT_ID = '[your-google-project-id]' # Set your project Id
BUCKET = '[your-bucket-name]' # Set your bucket name Id
REGION = '[your-region]' # Set your region for deploying the model
GKE_CLUSTER_NAME = '[your-gke-cluster-name]' # Set the GKE cluster name
GKE_CLUSTER_ZONE = '[your-gke-cluster-zone]' # Set the GKE cluster zone
MODEL_NAME = 'resnet_classifier'
MODEL_VERSION = 'v1'
IMAGES_FOLDER = 'test_images'
GCS_LOCUST_TEST_CONFIG_DIR = 'gs://{}/locust-test'.format(BUCKET)
"""
Explanation: Configure GCP environment settings
End of explanation
"""
def create_locust_metric(
metric_name:str,
log_path:str,
value_field:str,
bucket_bounds:List[int]):
metric_path = logging_client.metric_path(PROJECT_ID, metric_name)
log_entry_filter = 'resource.type=global AND logName={}'.format(log_path)
metric_descriptor = {
'metric_kind': 'DELTA',
'value_type': 'DISTRIBUTION',
'labels': [
{
'key': 'test_id',
'value_type': 'STRING'
},
{
'key': 'signature',
'value_type': 'STRING'
}
]
}
bucket_options = {
'explicit_buckets': {
'bounds': bucket_bounds
}
}
value_extractor = 'EXTRACT(jsonPayload.{})'.format(value_field)
label_extractors = {
'test_id': 'EXTRACT(jsonPayload.test_id)',
'signature': 'EXTRACT(jsonPayload.signature)'
}
metric = logging_v2.types.LogMetric(
name=metric_name,
filter=log_entry_filter,
value_extractor=value_extractor,
bucket_options=bucket_options,
label_extractors=label_extractors,
metric_descriptor=metric_descriptor,
)
try:
logging_client.get_log_metric(metric_path)
print('Metric: {} already exists'.format(metric_path))
except:
logging_client.create_log_metric(parent, metric)
print('Created metric {}'.format(metric_path))
"""
Explanation: 1. Creating Custom Log-based Metrics in Cloud Monitoring
In this section of the notebook you will use the Python Cloud Logging client library to create a set of custom log-based metrics. The metrics are based on the log entries generated by the example locustfile script. The script writes the log entries into the Cloud Logging log named locust.
Each log entry includes a set of key value pairs encoded as the JSON payload type. The metrics are based on the subset of keys from the log entry.
Key | Value
----|------
test_id | An ID of a test
model | An AI Platform Prediction Model name
model_version | An AI Platform Prediction Model version
latency | The 95th percentile response time, calculated over a 10-second sliding window
num_requests | The total number of requests since the test started
num_failures | The total number of failed requests since the test started
user_count | The number of simulated users
rps | The current number of requests per second
Refer to the Cloud Logging API reference for more information about the API.
1.1. Define a helper function that creates a custom log metric
End of explanation
"""
log_name = 'locust'
creds , _ = google.auth.default()
logging_client = MetricsServiceV2Client(credentials=creds)
parent = logging_client.project_path(PROJECT_ID)
log_path = LoggingServiceV2Client.log_path(PROJECT_ID, log_name)
"""
Explanation: 1.2. Create a logging client.
End of explanation
"""
# user count metric
metric_name = 'locust_users'
value_field = 'user_count'
bucket_bounds = [1, 16, 32, 64, 128]
create_locust_metric(metric_name, log_path, value_field, bucket_bounds)
# latency metric
metric_name = 'locust_latency'
value_field = 'latency'
bucket_bounds = [1, 50, 100, 200, 500]
create_locust_metric(metric_name, log_path, value_field, bucket_bounds)
# failure count metric
metric_name = 'num_failures'
value_field = 'num_failures'
bucket_bounds = [1, 1000]
create_locust_metric(metric_name, log_path, value_field, bucket_bounds)
# request count metric
metric_name = 'num_requests'
value_field = 'num_requests'
bucket_bounds = [1, 1000]
create_locust_metric(metric_name, log_path, value_field, bucket_bounds)
"""
Explanation: 1.3. Create metrics to track Locust logs
End of explanation
"""
metrics = logging_client.list_log_metrics(parent)
if not list(metrics):
print("There are not any log based metrics defined in the the project")
else:
for element in logging_client.list_log_metrics(parent):
print(element.metric_descriptor.name)
"""
Explanation: 1.4. List metrics
End of explanation
"""
dashboard_service_client = DashboardsServiceClient(credentials=creds)
parent = 'projects/{}'.format(PROJECT_ID)
dashboard_template_file = 'monitoring-template.json'
with open(dashboard_template_file) as f:
dashboard_template = json.load(f)
"""
Explanation: 2. Creating the Cloud Monitoring dashboard
The dashboard_template folder contains an example monitoring dashboard template that combines standard AI Platform Prediction metrics with the log-based metrics defined in the previous steps. You can use the Python Client for the Cloud Monitoring Dashboards API to create a dashboard based on the template.
2.1. Load the dashboard template
End of explanation
"""
dashboard_proto = Dashboard()
dashboard_proto = ParseDict(dashboard_template, dashboard_proto)
dashboard = dashboard_service_client.create_dashboard(parent, dashboard_proto)
for dashboard in dashboard_service_client.list_dashboards(parent):
print('Dashboard name: {}, Dashboard ID: {}'.format(dashboard.display_name, dashboard.name))
"""
Explanation: 2.2. Create the dashboard in Cloud Monitoring
End of explanation
"""
!curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
!sudo mv kustomize /usr/local/bin
"""
Explanation: 3. Deploying Locust to a GKE cluster
Before proceeding, you need access to a GKE cluster. The described deployment process can deploy Locust to any GKE cluster as long as there are enough compute resources to support your Locust configuration. The default configuration follows Locust's best practices and requests one processor core and 4Gi of memory for the Locust master and one processor core and 2Gi of memory for each Locust worker. As you run your tests, it is important to monitor the master and the workers for resource utilization and fine-tune the allocated resources as required.
The deployment process has been streamlined using Kustomize. As described in the following steps, you can fine tune the baseline configuration by modifying the default kustomization.yaml and patch.yaml files in the locust/manifests folder.
3.1. Install Kustomize
End of explanation
"""
!gcloud container clusters get-credentials {GKE_CLUSTER_NAME} --zone {GKE_CLUSTER_ZONE}
"""
Explanation: 3.2. Set credentials to access your GKE cluster
Use the gcloud command to set credentials for your GKE cluster. Make sure that the GKE_CLUSTER_NAME and GKE_CLUSTER_ZONE variables set earlier reflect your environment.
End of explanation
"""
image_uri = 'gcr.io/{}/locust'.format(PROJECT_ID)
!gcloud builds submit --tag {image_uri} locust/locust-image
"""
Explanation: 3.3. Build the Locust image
The first step is to build a docker image that will be used to deploy Locust master and worker pods. The image is derived from the baseline locust.io image and embeds the locustfile and the files's dependencies.
End of explanation
"""
!cat locust/manifests/kustomization.yaml
!cat locust/manifests/patch.yaml
"""
Explanation: 3.4. Update the manifests
Before proceeding with deployment, you need to update the default manifests. The manifests are located in the locust/manifests folder. You will modify two files: kustomization.yaml and patch.yaml.
Set the name of the custom Locust image - Update the kustomization.yaml file with a reference to the custom image you created in the previous step. Update the newName field in the images section of the kustomization.yaml file.
Set the number of worker pods - The default configuration deploys 32 worker pods. If you want to change it, modify the count field in the replicas section of the kustomization.yaml file. Make sure that your GKE cluster has as sufficient CPUs for the Locust workers.
Set the GCS bucket for the test configuration and data files - When you start a test, the locustfile retrieves a test configuration and test data files from a GCS location. Modify the configMapGenerator section of the kustomization.yaml file to set the LOCUST_TEST_BUCKET, LOCUST_TEST_CONFIG, and LOCUST_TEST_PAYLOAD literals to the GCS bucket name, the test config file name, and the test data config file name respectively.
Modify the node pool that hosts the Locust master and workers - Update the name of the node pool in the patch.yaml file to which the Locust workload will be deployed. By default, the node pool name is default-pool. The name of the node pool is a value of the values field in the matchExpressions section.
End of explanation
"""
!kustomize build locust/manifests |kubectl apply -f -
"""
Explanation: 3.5. Deploy Locust
You are now ready to deploy Locust.
End of explanation
"""
images = []
for image_name in os.listdir(IMAGES_FOLDER):
with open(os.path.join(IMAGES_FOLDER, image_name), 'rb') as f:
images.append(f.read())
single_instance = [{'b64': base64.b64encode(images[0]).decode('utf-8')}]
two_instances = [{'b64': base64.b64encode(image).decode('utf-8')} for image in images]
test_data = [
{
'signature': 'serving_preprocess',
'instances': single_instance
},
{
'signature': 'serving_preprocess',
'instances': two_instances
}
]
test_data_local_file = 'test-payload.json'
with open (test_data_local_file, 'w') as f:
json.dump(test_data, f)
!gsutil cp {test_data_local_file} {GCS_LOCUST_TEST_CONFIG_DIR}/
"""
Explanation: 4. Configure a Locust test
At the start of each test, the locustfile script attempts to retrieve test data and a test configuration from a GCS location. Both the test data and the test configuration are formated as JSON.
The test data is an array of JSON objects, where each object includes a list of instances and a model signature. If the array contains more than one object, Locust users will randomly pick a list of instances and an associated signature with each call to the predict method of the AI Platform Prediction endpoint.
The test configuration is a JSON object with a project id, model name, model version, and a test id.
4.1. Prepare test data
In this example we are using the ResNet101 model prepared and deployed in the 01-prepare-and-deploy.ipynb notebook. We will prepare the instances for the serving_preprocess signature of the model using a couple of JPEG images from the test_images folder.
End of explanation
"""
test_id = 'test-{}'.format(datetime.datetime.now().strftime('%Y%m%d-%H%M%S'))
test_config = {
'test_id': test_id,
'project_id': PROJECT_ID,
'model': MODEL_NAME,
'version': MODEL_VERSION
}
test_config_local_file = 'test-config.json'
with open (test_config_local_file, 'w') as f:
json.dump(test_config, f)
!gsutil cp {test_config_local_file} {GCS_LOCUST_TEST_CONFIG_DIR}/
!gsutil ls {GCS_LOCUST_TEST_CONFIG_DIR}
"""
Explanation: 4.2 Prepare test config
Make sure to update the below mapping with the values representing your environment. The test_id is an arbitrary value that is used to match the custom log-based metrics records with a given test run. Use a different value anytime you start a test.
End of explanation
"""
%%bash
IP_ADDRESS=$(kubectl get service locust-master | awk -v col=4 'FNR==2{print $col}')
echo http://$IP_ADDRESS:8089
"""
Explanation: 5. Run the Locust load Tests
Load tests can be configured, started, monitored and stopped using Locust's web interface.
In our deployment, the web interface is exposed by an external load balancer. You can access the interface using the following URL:
End of explanation
"""
!kustomize build locust/manifests | kubectl delete -f -
"""
Explanation: You can try using the following parameter configurations:
1. Number of total users to simulate: 152
2. Hatch rate: 1
3. Host: http://ml.googleapis.com
4. Number of users to increase by step: 8
5. Step duration: 1m
Cleaning up: remove the Locust deployment
End of explanation
"""
|
LaubachLab/Spikes-and-Fields
|
Working with NEx files using oct2py.ipynb
|
gpl-3.0
|
import numpy as np
from scipy.io import loadmat
%load_ext oct2py.ipython
%cd ~/Desktop/Spikes-and-Fields/NEx-demo
"""
Explanation: This post demonstrates how oct2py can be used to run legacy Matlab/Octave code to load data saved in NeuroExplorer files into Python. As will be illustrated in a forthcoming post, this same approach can be used to run existing Matlab/Octave code in an integrated Jupyter notebook with Python kernel.
Matlab/Octave code for working with NEx files and a test file can be obtained from the NeuroExplorer website.
My lab has used an older set of m files written by Alex Kirillov (author of NeuroExplorer) and updated by us to deal with changes in Matlab over the years. We no longer use Matlab and our older m files for reading NEx files work perfectly well in GNU Octave. The three main files that we use are available from our GitHub repository: nex_info, nex_ts, and nex_cont.
End of explanation
"""
%octave [nvar, names, types] = nex_info('TestDataFileForNeuroshare.nex');
"""
Explanation: Read files in an Octave session
The command %octave is the "octave magic" function that lets you run raw Matlab/Octave code in a Python notebook.
For a batch of commands, use %%octave in the first line of a code cell. The rest of the cell is pure Octave/Matlab code. %%octave -o var returns a variable from the cell directly to the Python memory space.
End of explanation
"""
%octave_pull nvar names types
"""
Explanation: %octave_pull and %octave_push are used to send variables between Octave and Python.
End of explanation
"""
%whos
"""
Explanation: nvar is the number of variables in the NEx file
names are strings for each variable
types are the types of data (0-neuron, 1-event, 2-interval, 3-waveform, 4-population vector, 5-continuous variable, 6 - marker)
End of explanation
"""
nvar = nvar.astype(int) # nvar should really be an integer
types = types.flatten() # types should really be a flat array
%whos
"""
Explanation: Some issues come up with variable types using this approach, and a few adjustments are needed once the variables are pulled into Python.
End of explanation
"""
names[types==0]
"""
Explanation: Let's load the neuron's time stamps into Python
End of explanation
"""
%%octave
[~, Event04] = nex_ts('TestDataFileForNeuroshare.nex', 'Event04');
[adfreq, n, ts, fn, AD01] = nex_cont('TestDataFileForNeuroshare.nex', 'ContChannel01');
%octave_pull Event04 adfreq ts AD01
%whos
"""
Explanation: It is very easy to load the variables that are needed into the Octave session, save them into a mat file, and load them into Python using SciPy's loadmat function.
Let's say we want to bring in one of the continuous recordings (ContChannel01) and a behavioral event (Event04) for an LFP using the MNE toolbox.
End of explanation
"""
AD01 = AD01.flatten()
Event04 = Event04.flatten()
%whos
"""
Explanation: Again, the time stamps and continuous variables need to be flattened.
The floating point numbers are fine. (adfreq is the sampling frequency and ts is the time difference between saving time stamps in the Plexon recording system and the continuous signals on the AD card.)
End of explanation
"""
|
xiongzhenggang/xiongzhenggang.github.io
|
AI/ML/week4反向传播实现.ipynb
|
gpl-3.0
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from scipy.io import loadmat
from sklearn.preprocessing import OneHotEncoder
data = loadmat('../data/andrew_ml_ex33507/ex3data1.mat')
data
X = data['X']
y = data['y']
X.shape, y.shape#看下维度
# The inputs are pixel values of 20*20 images, giving 400 input-layer units (not counting the extra bias unit). The material already provides trained network parameters: 25 hidden units and 10 output units (10 classes).
weight = loadmat("../data/andrew_ml_ex33507/ex3weights.mat")
theta1, theta2 = weight['Theta1'], weight['Theta2']
theta1.shape, theta2.shape
sample_idx = np.random.choice(np.arange(data['X'].shape[0]), 100)
sample_images = data['X'][sample_idx, :]
# Display the digit images (binary colormap)
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(8, 8))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(sample_images[5 * r + c].reshape((20, 20))).T,cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
"""
Explanation: Implementing forward propagation and backpropagation
The backpropagation algorithm
Previously, when computing the network's predictions we used forward propagation: starting from the first layer, we computed each layer in turn until we obtained $h_{\theta}\left(x\right)$ at the last layer.
Now, to compute the partial derivatives of the cost function $\frac{\partial}{\partial\Theta^{(l)}_{ij}}J\left(\Theta\right)$, we need a backpropagation algorithm: first compute the error of the last layer, then work backwards layer by layer until the second layer.
Visualize the data
Using last week's data, we first run the forward-propagation pass to compute the network outputs, which provides the predictions needed for backpropagation.
End of explanation
"""
def sigmoid(z):
return 1 / (1 + np.exp(-z))
#2nd: following the propagation rule above, define the first layer, compute the values of the second (hidden) layer, and add the bias unit
def forward_propagate(X,theta1,theta2):
m= X.shape[0]
a1 = np.insert(X,0, values=np.ones(m), axis=1)
Z2 = a1*theta1.T
a2= np.insert(sigmoid(Z2),0, values=np.ones(m), axis=1)
Z3= a2*theta2.T
h= sigmoid(Z3)
return a1,Z2,a2,Z3,h
# Cost function (without the regularization / weight-decay term). Y is R(5000*10); a 2-D matrix is used directly instead of accumulating in a loop
def cost(X,Y,theta1,theta2):
m = X.shape[0]
X = np.matrix(X)
Y = np.matrix(Y)
    _,_,_,_,h=forward_propagate(X,theta1,theta2)
    # np.multiply: element-wise product of same-sized matrices
first = np.multiply(Y,np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
# One-hot encode the y labels. Initially y is a 5000*1 vector, but we encode it into a matrix: e.g. if the original label is 2, the corresponding row of Y becomes [0,1,0...0]; if it is 10, the row becomes [0,0...0,1]
# Scikit-learn has a built-in encoder for this, which we use here.
encoder = OneHotEncoder(sparse=False)
y_onehot = encoder.fit_transform(y)
y_onehot.shape
y[0], y_onehot[0,:] # y0是数字0
# Initial settings
input_size = 400
num_labels = 10
cost(X, y_onehot,theta1, theta2)
# Add the regularization term
def cost_reg(X,Y,theta1,theta2,learning_rate):
m = X.shape[0]
X = np.matrix(X)
Y = np.matrix(Y)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
first = np.multiply(Y,np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
    # When computing the regularization term, the first (bias) column is excluded
J = (-1/m)*J + (float(learning_rate) / (2 * m))*(np.sum(np.power(theta1[:,1:],2))+np.sum(np.power(theta2[:,1:],2)))
return J
# theta1.shape,theta2.shape
cost_reg(X, y_onehot,theta1, theta2,1)
"""
Explanation: Model overview
By default we use one input layer, one hidden layer, and one output layer.
Forward propagation and the cost function
In logistic regression we have only one output variable (a scalar) and a single dependent variable $y$, but in a neural network we can have many output variables: $h_\theta(x)$ is a $K$-dimensional vector and the targets in the training set are vectors of the same dimension, so the cost function is somewhat more complicated than in logistic regression: $\newcommand{\subk}[1]{ #1_k }$ $$h_\theta\left(x\right)\in \mathbb{R}^{K}$$ $${\left({h_\theta}\left(x\right)\right)}_{i}={i}^{th} \text{output}$$
$J(\Theta) = -\frac{1}{m} \left[ \sum\limits_{i=1}^{m} \sum\limits_{k=1}^{k} {y_k}^{(i)} \log \subk{(h_\Theta(x^{(i)}))} + \left( 1 - y_k^{(i)} \right) \log \left( 1- \subk{\left( h_\Theta \left( x^{(i)} \right) \right)} \right) \right] + \frac{\lambda}{2m} \sum\limits_{l=1}^{L-1} \sum\limits_{i=1}^{s_l} \sum\limits_{j=1}^{s_{l+1}} \left( \Theta_{ji}^{(l)} \right)^2$
End of explanation
"""
# Derivative of the sigmoid function
def sigmoid_gradient(z):
return np.multiply(sigmoid(z) ,(1-sigmoid(z)))
# Quick check
sigmoid_gradient(0)
"""
Explanation: Backpropagation
In this part you implement the backpropagation algorithm to compute the gradient of the neural-network cost function. Once we have the gradient, we can use a library optimizer to minimize the cost function.
End of explanation
"""
# Initial settings
input_size = 400 # number of input units
hidden_size = 25 # number of hidden units
num_labels = 10 # number of output units
epsilon = 0.001
theta01=np.random.rand(hidden_size,input_size+1) * 2*epsilon - epsilon # the +1 adds the bias unit
theta02 =np.random.rand(num_labels,hidden_size+1)* 2*epsilon - epsilon
theta01.shape,theta02.shape
"""
Explanation: Initializing the parameters
So far we have always initialized all parameters to 0. That works for logistic regression, but not for a neural network: if all initial parameters are 0, every activation unit in the second layer takes the same value. The same problem occurs if we initialize every parameter to the same non-zero number.
We therefore initialize the parameters to random values between -ε and +ε. For example, to randomly initialize a 10×11 parameter matrix (in Matlab/Octave notation):
Theta1 = rand(10, 11) * (2 * eps) - eps
End of explanation
"""
# Compute z and a for each layer separately
def forward_propagateNEW(X,thetalist):
m= X.shape[0]
a = np.insert(X,0, values=np.ones(m), axis=1)
alist=[a]
zlist=[]
for i in range(len(thetalist)):
theta= thetalist[i]
z = a * theta
# a= np.insert(sigmoid(z),0, values=np.ones(m), axis=1)
a=sigmoid(z)
if(i<len(thetalist)-1):
a= np.insert(a,0, values=np.ones(m), axis=1)
zlist.append(z)
alist.append(a)
return zlist,alist
# The accumulators Δ are represented by delta1 and delta2
def backpropRegSelf(input_size, hidden_size, num_labels, X, y, learning_rate,L=3): # randomly initialized; L=3 layers here
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
#初始化参数
theta1 = (np.random.random((input_size+1,hidden_size))- 0.5)* 0.24
theta2 = (np.random.random((hidden_size+1,num_labels))- 0.5)* 0.24
encoder = OneHotEncoder(sparse=False)
y_onehot = encoder.fit_transform(y) # 格式化y
# 前向计算 每层值
theta = [theta1, theta2]
zlist,alist = forward_propagateNEW(X, theta)# 返回 a1 z2 a2 。。。
# 初始化Deta
Delta=[]
for th in theta:
Delta.append(np.zeros(th.shape))
for i in range(m):
# 以计算a z
for l in range(L,1,-1): # 3,2 表示层数,最后一层已经算出来,单独列放
#最后一层
if l==L:
delta=alist[-1][i,:]-y_onehot[i,:] # 最后一层得δ
Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta
else:
zl = zlist[l-2][i,:]
zl = np.insert(zl, 0, values=np.ones(1)) # (1, 26) 怎加偏执项
# d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
# delta1 = delta1 + (d2t[:,1:]).T * a1t
delta = np.multiply(delta*theta[l-1].T, sigmoid_gradient(zl)) #
# 因为数组从零开始,且 Delta 为 1 2 层开始 delta 从2 层开始 # (25, 401)# (10, 26)
Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta[:,1:]
# add the gradient regularization term
gradAll = None
for j in range(len(Delta)):
Delta[j][:,1:] = Delta[j][:,1:]/m + (theta[j][:,1:] * learning_rate) / m
if gradAll is None:
gradAll = np.ravel(Delta[j])
else:
tmp=np.ravel(Delta[j])
gradAll = np.concatenate([gradAll,tmp])
# Delta[:,:,1:] = Delta[:,:,1:] + (theta[:,:,1:] * learning_rate) / m
return gradAll
grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
print(grad2.shape)
def backpropReg(params, input_size, hidden_size, num_labels, X, y, learning_rate):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
# reshape the parameter array into parameter matrices for each layer
theta1 = np.matrix(np.reshape(params[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
theta2 = np.matrix(np.reshape(params[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
# run the feed-forward pass
a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)
# initializations
J = 0
delta1 = np.zeros(theta1.shape) # (25, 401)
delta2 = np.zeros(theta2.shape) # (10, 26)
# compute the cost
for i in range(m):
first_term = np.multiply(-y[i,:], np.log(h[i,:]))
second_term = np.multiply((1 - y[i,:]), np.log(1 - h[i,:]))
J += np.sum(first_term - second_term)
J = J / m
# add the cost regularization term
J += (float(learning_rate) / (2 * m)) * (np.sum(np.power(theta1[:,1:], 2)) + np.sum(np.power(theta2[:,1:], 2)))
# perform backpropagation
for t in range(m):
a1t = a1[t,:] # (1, 401)
z2t = z2[t,:] # (1, 25)
a2t = a2[t,:] # (1, 26)
ht = h[t,:] # (1, 10)
yt = y[t,:] # (1, 10)
d3t = ht - yt # (1, 10)
z2t = np.insert(z2t, 0, values=np.ones(1)) # (1, 26)
d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
delta1 = delta1 + (d2t[:,1:]).T * a1t
delta2 = delta2 + d3t.T * a2t
delta1 = delta1 / m
delta2 = delta2 / m
# add the gradient regularization term
delta1[:,1:] = delta1[:,1:] + (theta1[:,1:] * learning_rate) / m
delta2[:,1:] = delta2[:,1:] + (theta2[:,1:] * learning_rate) / m
# unravel the gradient matrices into a single array
grad = np.concatenate((np.ravel(delta1), np.ravel(delta2)))
return J, grad
# np.random.random(size) returns `size` random floats in [0, 1)
params = (np.random.random(size=hidden_size * (input_size + 1) + num_labels * (hidden_size + 1)) - 0.5) * 0.24
j,grad = backpropReg(params, input_size, hidden_size, num_labels, X, y, 1)
print(j,grad.shape)
# j2,grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
# print(j2,grad2[0:10])
"""
Explanation: Backpropagation
The backpropagation procedure is: given the training set, first run forward propagation; then, for every node in each layer, compute an error term that measures how much that node "contributed" to the error of the final output. For each output node we can compute the difference between the output value and the target value directly, defined as δ. For each hidden node, the error is computed from the current weights and the errors of layer (l+1).
Steps:
Randomly initialize the weights theta
Implement the forward pass so that h(xi) can be computed for any xi
Implement J(θ)
End of explanation
"""
# # J(θ)
# input_size = 400 # number of input units
# hidden_size = 25 # number of hidden units
# num_labels = 10 # number of output units
def jcost(X, y,input_size, hidden_size, output_size,theta):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
theta1 = np.reshape(theta[0:hidden_size*(input_size+1)],(hidden_size,input_size+1))#(25,401)
theta2 = np.reshape(theta[hidden_size*(input_size+1):],(output_size,hidden_size+1))#(10.26)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
    # np.multiply: element-wise product of same-sized matrices
first = np.multiply(y,np.log(h))
second = np.multiply((1-y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
def check(X,y,theta1,theta2,eps):
theta = np.concatenate((np.ravel(theta1), np.ravel(theta2)))
gradapprox=np.zeros(len(theta))
for i in range(len(theta)):
thetaplus = theta
thetaplus[i] = thetaplus[i] + eps
thetaminus = theta
thetaminus[i] = thetaminus[i] - eps
gradapprox[i] = (jcost(X,y,input_size,hidden_size,num_labels,thetaplus) - jcost(X,y,input_size,hidden_size,num_labels,thetaminus)) / (2 * epsilon)
return gradapprox
# theta01.shape , theta02.shape
# This computation is slow
gradapprox = check(X,y_onehot,theta1,theta2,0.001)
numerator = np.linalg.norm(grad2-gradapprox, ord=2) # Step 1'
denominator = np.linalg.norm(grad2, ord=2) + np.linalg.norm(gradapprox, ord=2) # Step 2'
difference = numerator / denominator
print(difference)
# Use an optimization library to find the optimal parameters
from scipy.optimize import minimize
# opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
fmin = minimize(fun=backpropReg, x0=(params), args=(input_size, hidden_size, num_labels, X, y_onehot, learning_rate),
method='TNC', jac=True, options={'maxiter': 250})
fmin
X = np.matrix(X)
thetafinal1 = np.matrix(np.reshape(fmin.x[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
thetafinal2 = np.matrix(np.reshape(fmin.x[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
print(thetafinal1[0,1],grad2[1])
# Compute predictions using the optimized θ
a1, z2, a2, z3, h = forward_propagate(X, thetafinal1, thetafinal2 )
y_pred = np.array(np.argmax(h, axis=1) + 1)
y_pred
# Finally, we can compute the accuracy to see how well the trained network performs.
# Compare the predicted values with the actual values
from sklearn.metrics import classification_report # evaluation report
print(classification_report(y, y_pred))
hidden_layer = thetafinal1[:, 1:]
hidden_layer.shape
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(12, 12))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(hidden_layer[5 * r + c].reshape((20, 20))),cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
"""
Explanation: Gradient checking
The gradient is estimated by picking two points very close together along the tangent direction of the cost function. That is, for a particular $\theta$ we evaluate the cost at $\theta-\varepsilon$ and at $\theta+\varepsilon$ ($\varepsilon$ is a very small value, typically 0.001) and use the two-sided difference, divided by $2\varepsilon$, to estimate the gradient at $\theta$.
End of explanation
"""
|
quantumlib/OpenFermion-FQE
|
docs/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The OpenFermion Developers
End of explanation
"""
try:
import fqe
except ImportError:
!pip install fqe --quiet
Print = True
from openfermion import FermionOperator, MolecularData
from openfermion.utils import hermitian_conjugated
import numpy
import fqe
from fqe.unittest_data import build_lih_data
numpy.set_printoptions(floatmode='fixed', precision=6, linewidth=80, suppress=True)
numpy.random.seed(seed=409)
h1e, h2e, wfn = build_lih_data.build_lih_data('energy')
lih_hamiltonian = fqe.get_restricted_hamiltonian(([h1e, h2e]))
lihwfn = fqe.Wavefunction([[4, 0, 6]])
lihwfn.set_wfn(strategy='from_data', raw_data={(4, 0): wfn})
if Print:
lihwfn.print_wfn()
"""
Explanation: Hamiltonian Time Evolution and Expectation Value Computation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/openfermion/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion-FQE/blob/master/docs/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/OpenFermion-FQE/blob/master/docs/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion-FQE/docs/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This tutorial describes the FQE's capabilities for Hamiltonian time-evolution and expectation value estimation
Where possible, LiH will be used as an example molecule for the API.
End of explanation
"""
# dummy geometry
from openfermion.chem.molecular_data import spinorb_from_spatial
from openfermion import jordan_wigner, get_sparse_operator, InteractionOperator, get_fermion_operator
h1s, h2s = spinorb_from_spatial(h1e, numpy.einsum("ijlk", -2 * h2e) * 0.5)
mol = InteractionOperator(0, h1s, h2s)
ham_fop = get_fermion_operator(mol)
ham_mat = get_sparse_operator(jordan_wigner(ham_fop)).toarray()
from scipy.linalg import expm
time = 0.01
evolved1 = lihwfn.time_evolve(time, lih_hamiltonian)
if Print:
evolved1.print_wfn()
evolved2 = fqe.time_evolve(lihwfn, time, lih_hamiltonian)
if Print:
evolved2.print_wfn()
assert numpy.isclose(fqe.vdot(evolved1, evolved2), 1)
cirq_wf = fqe.to_cirq_ncr(lihwfn)
evolve_cirq = expm(-1j * time * ham_mat) @ cirq_wf
test_evolve = fqe.from_cirq(evolve_cirq, thresh=1.0E-12)
assert numpy.isclose(fqe.vdot(test_evolve, evolved1), 1)
"""
Explanation: Application of one- and two-body fermionic gates
The API for time propagation can be invoked through the fqe namespace or the wavefunction object.
End of explanation
"""
wfn = fqe.Wavefunction([[4, 2, 4]])
wfn.set_wfn(strategy='random')
if Print:
wfn.print_wfn()
diagonal = FermionOperator('0^ 0', -2.0) + \
FermionOperator('1^ 1', -1.7) + \
FermionOperator('2^ 2', -0.7) + \
FermionOperator('3^ 3', -0.55) + \
FermionOperator('4^ 4', -0.1) + \
FermionOperator('5^ 5', -0.06) + \
FermionOperator('6^ 6', 0.5) + \
FermionOperator('7^ 7', 0.3)
if Print:
print(diagonal)
evolved = wfn.time_evolve(time, diagonal)
if Print:
evolved.print_wfn()
"""
Explanation: Exact evolution implementation of quadratic Hamiltonians
Listed here are examples of evolving the special Hamiltonians.
Diagonal Hamiltonian evolution is supported.
End of explanation
"""
norb = 4
h1e = numpy.zeros((norb, norb), dtype=numpy.complex128)
for i in range(norb):
for j in range(norb):
h1e[i, j] += (i+j) * 0.02
h1e[i, i] += i * 2.0
hamil = fqe.get_restricted_hamiltonian((h1e,))
wfn = fqe.Wavefunction([[4, 0, norb]])
wfn.set_wfn(strategy='random')
initial_energy = wfn.expectationValue(hamil)
print('Initial Energy: {}'.format(initial_energy))
evolved = wfn.time_evolve(time, hamil)
final_energy = evolved.expectationValue(hamil)
print('Final Energy: {}'.format(final_energy))
"""
Explanation: Exact evolution of dense quadratic Hamiltonians is supported. Here is an evolution example using a spin-restricted Hamiltonian on a number- and spin-conserving wavefunction.
End of explanation
"""
norb = 4
h1e = numpy.zeros((2*norb, 2*norb), dtype=numpy.complex128)
for i in range(2*norb):
for j in range(2*norb):
h1e[i, j] += (i+j) * 0.02
h1e[i, i] += i * 2.0
hamil = fqe.get_gso_hamiltonian((h1e,))
wfn = fqe.get_number_conserving_wavefunction(4, norb)
wfn.set_wfn(strategy='random')
initial_energy = wfn.expectationValue(hamil)
print('Initial Energy: {}'.format(initial_energy))
evolved = wfn.time_evolve(time, hamil)
final_energy = evolved.expectationValue(hamil)
print('Final Energy: {}'.format(final_energy))
"""
Explanation: The GSO Hamiltonian is used to evolve quadratic Hamiltonians that are spin-broken and number-conserving.
End of explanation
"""
norb = 4
time = 0.001
wfn_spin = fqe.get_spin_conserving_wavefunction(2, norb)
hamil = FermionOperator('', 6.0)
for i in range(0, 2*norb, 2):
for j in range(0, 2*norb, 2):
opstring = str(i) + ' ' + str(j + 1)
hamil += FermionOperator(opstring, (i+1 + j*2)*0.1 - (i+1 + 2*(j + 1))*0.1j)
opstring = str(i) + '^ ' + str(j + 1) + '^ '
hamil += FermionOperator(opstring, (i+1 + j)*0.1 + (i+1 + j)*0.1j)
h_noncon = (hamil + hermitian_conjugated(hamil))/2.0
if Print:
print(h_noncon)
wfn_spin.set_wfn(strategy='random')
if Print:
wfn_spin.print_wfn()
spin_evolved = wfn_spin.time_evolve(time, h_noncon)
if Print:
spin_evolved.print_wfn()
"""
Explanation: The BCS Hamiltonian evolves wavefunctions that are spin-conserving and number-broken.
End of explanation
"""
norb = 4
wfn = fqe.Wavefunction([[5, 1, norb]])
vij = numpy.zeros((norb, norb, norb, norb), dtype=numpy.complex128)
for i in range(norb):
for j in range(norb):
vij[i, j] += 4*(i % norb + 1)*(j % norb + 1)*0.21
wfn.set_wfn(strategy='random')
if Print:
wfn.print_wfn()
hamil = fqe.get_diagonalcoulomb_hamiltonian(vij)
evolved = wfn.time_evolve(time, hamil)
if Print:
evolved.print_wfn()
"""
Explanation: Exact Evolution Implementation of Diagonal Coulomb terms
End of explanation
"""
norb = 3
nele = 4
ops = FermionOperator('5^ 1^ 2 0', 3.0 - 1.j)
ops += FermionOperator('0^ 2^ 1 5', 3.0 + 1.j)
wfn = fqe.get_number_conserving_wavefunction(nele, norb)
wfn.set_wfn(strategy='random')
wfn.normalize()
if Print:
wfn.print_wfn()
evolved = wfn.time_evolve(time, ops)
if Print:
evolved.print_wfn()
"""
Explanation: Exact evolution of individual n-body anti-Hermitian generators
End of explanation
"""
lih_evolved = lihwfn.apply_generated_unitary(time, 'taylor', lih_hamiltonian, accuracy=1.e-8)
if Print:
lih_evolved.print_wfn()
norb = 2
nalpha = 1
nbeta = 1
nele = nalpha + nbeta
time = 0.05
h1e = numpy.zeros((norb*2, norb*2), dtype=numpy.complex128)
for i in range(2*norb):
for j in range(2*norb):
h1e[i, j] += (i+j) * 0.02
h1e[i, i] += i * 2.0
hamil = fqe.get_general_hamiltonian((h1e,))
spec_lim = [-1.13199078e-03, 6.12720338e+00]
wfn = fqe.Wavefunction([[nele, nalpha - nbeta, norb]])
wfn.set_wfn(strategy='random')
if Print:
wfn.print_wfn()
evol_wfn = wfn.apply_generated_unitary(time, 'chebyshev', hamil, spec_lim=spec_lim)
if Print:
evol_wfn.print_wfn()
"""
Explanation: Approximate evolution of sums of n-body generators
Approximate evolution can be done for dense operators.
End of explanation
"""
rdm1 = lihwfn.expectationValue('i^ j')
if Print:
print(rdm1)
val = lihwfn.expectationValue('5^ 3')
if Print:
print(2.*val)
trdm1 = fqe.expectationValue(lih_evolved, 'i j^', lihwfn)
if Print:
print(trdm1)
val = fqe.expectationValue(lih_evolved, '5 3^', lihwfn)
if Print:
print(2*val)
"""
Explanation: API for determining desired expectation values
End of explanation
"""
rdm2 = lihwfn.expectationValue('i^ j k l^')
if Print:
print(rdm2)
rdm2 = fqe.expectationValue(lihwfn, 'i^ j^ k l', lihwfn)
if Print:
print(rdm2)
"""
Explanation: 2.B.1 RDMs
In addition to the API above, higher-order density matrices as well as hole densities can be calculated.
End of explanation
"""
li_h_energy = lihwfn.expectationValue(lih_hamiltonian)
if Print:
print(li_h_energy)
li_h_energy = fqe.expectationValue(lihwfn, lih_hamiltonian, lihwfn)
if Print:
print(li_h_energy)
"""
Explanation: 2.B.2 Hamiltonian expectations (or any expectation values)
End of explanation
"""
op = fqe.get_s2_operator()
print(lihwfn.expectationValue(op))
op = fqe.get_sz_operator()
print(lihwfn.expectationValue(op))
op = fqe.get_time_reversal_operator()
print(lihwfn.expectationValue(op))
op = fqe.get_number_operator()
print(lihwfn.expectationValue(op))
"""
Explanation: 2.B.3 Symmetry operations
End of explanation
"""
|
kubeflow/pipelines
|
components/gcp/dataproc/create_cluster/sample.ipynb
|
apache-2.0
|
%%capture --no-stderr
!pip3 install kfp --upgrade
"""
Explanation: Name
Data processing by creating a cluster in Cloud Dataproc
Label
Cloud Dataproc, cluster, GCP, Cloud Storage, KubeFlow, Pipeline
Summary
A Kubeflow Pipeline component to create a cluster in Cloud Dataproc.
Details
Intended use
Use this component at the start of a Kubeflow Pipeline to create a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to create the cluster in. | No | GCPRegion | | |
| name | The name of the cluster. Cluster names within a project must be unique. You can reuse the names of deleted clusters. | Yes | String | | None |
| name_prefix | The prefix of the cluster name. | Yes | String | | None |
| initialization_actions | A list of Cloud Storage URIs identifying executables to execute on each node after the configuration is completed. By default, executables are run on the master and all the worker nodes. | Yes | List | | None |
| config_bucket | The Cloud Storage bucket to use to stage the job dependencies, the configuration files, and the job driver console’s output. | Yes | GCSPath | | None |
| image_version | The version of the software inside the cluster. | Yes | String | | None |
| cluster | The full cluster configuration. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause before polling the operation. | Yes | Integer | | 30 |
Output
Name | Description | Type
:--- | :---------- | :---
cluster_name | The name of the cluster. | String
Note: You can recycle the cluster by using the Dataproc delete cluster component.
Cautions & requirements
To use the component, you must:
* Set up the GCP project by following these steps.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the following types of access to the Kubeflow user service account:
* Read access to the Cloud Storage buckets which contains initialization action files.
* The role, roles/dataproc.editor on the project.
Detailed description
This component creates a new Dataproc cluster by using the Dataproc create cluster REST API.
Follow these steps to use the component in a pipeline:
Install the Kubeflow Pipeline SDK:
End of explanation
"""
import kfp.components as comp
dataproc_create_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/create_cluster/component.yaml')
help(dataproc_create_cluster_op)
"""
Explanation: Load the component using KFP SDK
End of explanation
"""
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
# Optional Parameters
EXPERIMENT_NAME = 'Dataproc - Create Cluster'
"""
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Set sample parameters
End of explanation
"""
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc create cluster pipeline',
description='Dataproc create cluster pipeline'
)
def dataproc_create_cluster_pipeline(
project_id = PROJECT_ID,
region = 'us-central1',
name='',
name_prefix='',
initialization_actions='',
config_bucket='',
image_version='',
cluster='',
wait_interval='30'
):
dataproc_create_cluster_op(
project_id=project_id,
region=region,
name=name,
name_prefix=name_prefix,
initialization_actions=initialization_actions,
config_bucket=config_bucket,
image_version=image_version,
cluster=cluster,
wait_interval=wait_interval)
"""
Explanation: Example pipeline that uses the component
End of explanation
"""
pipeline_func = dataproc_create_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
"""
Explanation: Compile the pipeline
End of explanation
"""
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
"""
Explanation: Submit the pipeline for execution
End of explanation
"""
|
rajul/tvb-library
|
tvb/simulator/demos/display_region_connectivity.ipynb
|
gpl-2.0
|
from tvb.simulator.lab import *
"""
Explanation: Plot regions and connection edges.
The Xmas-ball sizes are scaled to the range [0, 1], representing
the cumulative input to each region.
End of explanation
"""
white_matter = connectivity.Connectivity(load_default=True)
#Compute cumulative input for each region
node_data = white_matter.weights.sum(axis=1)
scaling_factor = node_data.max()
"""
Explanation: Load the object
End of explanation
"""
if IMPORTED_MAYAVI:
xmas_balls(white_matter, node_data=node_data / scaling_factor)
"""
Explanation: Plot pretty pictures of what we just did
End of explanation
"""
|
JAmarel/LiquidCrystals
|
ElectroOptics/CurveFitAttempt.ipynb
|
mit
|
import numpy as np
from scipy.integrate import quad, dblquad
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize as opt
"""
Explanation: TO DO:
Scatter-plot the measured values of Psi on top of the current Psi plot (a sketch addressing this is included after the curve fit at the end of this notebook).
The LaTeX for alpha and rho is not rendering in the plots.
The legend needs to be moved in the Psi plot.
Consider also applying the scatter of the data and curve fit style of plotting for the parameters
Plots in general need to look better
LaTeX in all of the equations
End of explanation
"""
thetamin = 25.6*np.pi/180
thetamax = 33.7*np.pi/180
t = 1*10**-6 #Cell Thickness
"""
Explanation: This notebook calculates and plots the theoretical tilt angles. It will also plot the alpha and p0 factors vs temperature that are given in the cell below this.
Material Characteristics
End of explanation
"""
tempsC = np.array([26, 27, 29, 31, 33, 35, 37])
voltages = np.array([2,3,6,7,9,11,12.5,14,16,18,20,22,23.5,26,27.5,29,31,32.5,34,36])
alpha_micro = np.array([.2575,.2475,.2275,.209,.189,.176,.15])
p0Debye = np.array([650,475,300,225,160,125,100]) #Temperature Increases to the right
#This Block just converts units
fields = np.array([entry/t for entry in voltages])
debye = 3.33564e-30
p0_array = np.array([entry*debye for entry in p0Debye]) #debye units to SI units
k = 1.3806488e-23
p0k_array = np.array([entry/k for entry in p0_array]) #p0k is used because it helps with the integration
KC = 273.15
tempsK = np.array([entry+KC for entry in tempsC]) #Celsius to Kelvin
alpha_array = np.array([entry*1e-6 for entry in alpha_micro])
PSIdata = np.array([11.4056,20.4615,25.4056,27.9021,29.028,29.6154,30.2517,30.8392,31.1329,31.5245,31.8671,32.014,32.3077,32.5034,32.7972,32.9929,33.1399,33.3357,33.4336,33.6783])
Edata = fields
T = tempsK[0]
"""
Explanation: Data
End of explanation
"""
def Boltz(theta,phi,T,p0k,alpha,E):
"""Compute the integrand for the Boltzmann factor.
Returns
-------
A function of theta,phi,T,p0k,alpha,E to be used within dblquad
"""
return np.exp((1/T)*p0k*E*np.sin(theta)*np.cos(phi)*(1+alpha*E*np.cos(phi)))*np.sin(theta)
"""
Explanation: Langevin-Debye Model
$$ U(\theta,\phi) = -\rho_0E\:sin\:\theta\:cos\:\phi\:(1+\alpha E\:cos\:\phi) $$
First, Calculate the Boltzmann Factor and the Partition Function
$$ {Boltz() returns:}\:\: e^{\frac{-U}{k_bT}}\:sin\:{\theta}\ $$
End of explanation
"""
def numerator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return np.sin(2*theta)*np.cos(phi)*boltz
"""
Explanation: Second, Calculate the Tilt Angle $\psi$
$$ numerator() \:returns: {sin\:{2\theta}\:cos\:{\phi}}\:e^{\frac{-U}{k_bT}}\:sin\:{\theta} $$
End of explanation
"""
def denominator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return ((np.cos(theta)**2) - ((np.sin(theta)**2) * (np.cos(phi)**2)))*boltz
"""
Explanation: $$ denominator()\: returns: {({cos}^2{\theta} - {sin}^2{\theta}\:{cos}^2{\phi}})\:e^{\frac{-U}{k_bT}}\:sin\:{\theta} $$
End of explanation
"""
def compute_psi(E,p0k,alpha):
def Boltz(theta,phi,T,p0k,alpha,E):
"""Compute the integrand for the Boltzmann factor.
Returns
-------
A function of theta,phi,T,p0k,alpha,E to be used within dblquad
"""
return np.exp((1/T)*p0k*E*np.sin(theta)*np.cos(phi)*(1+alpha*E*np.cos(phi)))*np.sin(theta)
def numerator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return np.sin(2*theta)*np.cos(phi)*boltz
def denominator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return ((np.cos(theta)**2) - ((np.sin(theta)**2) * (np.cos(phi)**2)))*boltz
"""Computes the tilt angle(psi) by use of our tan(2psi) equation
Returns
-------
Float:
The statistical tilt angle with conditions T,p0k,alpha,E
"""
avg_numerator, avg_numerator_error = dblquad(numerator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax,args=(T,p0k,alpha,E))
avg_denominator, avg_denominator_error = dblquad(denominator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax,args=(T,p0k,alpha,E))
psi = np.arctan(avg_numerator / (avg_denominator)) * (180 /(2*np.pi)) #Converting to degrees from radians and divide by two
return psi
compute_psi(Edata[0],p0k_array[0]*1e7,alpha_array[0]*1e10)
PSIdata[0]
guess = [p0k_array[0]*1e7,alpha_array[0]*1e10]
guess
theta_best, theta_cov = opt.curve_fit(compute_psi, Edata, PSIdata,guess,absolute_sigma=True)
"""
Explanation: $$ tan(2\psi) = \frac{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} {sin\:{2\theta}\:cos\:{\phi}}\:e^{\frac{-U}{k_bT}}\:sin\:{\theta}\: d\theta d\phi}{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} ({{cos}^2{\theta} - {sin}^2{\theta}\:{cos}^2{\phi}})\:e^{\frac{-U}{k_bT}}\:sin\:{\theta}\: d\theta d\phi} $$
End of explanation
"""
|
qinwf-nuan/keras-js
|
notebooks/layers/convolutional/ZeroPadding1D.ipynb
|
mit
|
data_in_shape = (3, 5)
L = ZeroPadding1D(padding=1)
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(240)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding1D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: ZeroPadding1D
[convolutional.ZeroPadding1D.0] padding 1 on 3x5 input
End of explanation
"""
data_in_shape = (4, 4)
L = ZeroPadding1D(padding=3)
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(241)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding1D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [convolutional.ZeroPadding1D.1] padding 3 on 4x4 input
End of explanation
"""
data_in_shape = (4, 4)
L = ZeroPadding1D(padding=(3,2))
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(242)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding1D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [convolutional.ZeroPadding1D.2] padding (3,2) on 4x4 input
End of explanation
"""
print(json.dumps(DATA))
"""
Explanation: export for Keras.js tests
End of explanation
"""
|
tensorflow/docs-l10n
|
site/en-snapshot/probability/examples/TFP_Release_Notebook_0_12_1.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
#@title Installs & imports { vertical-output: true }
!pip3 install -qU tensorflow==2.4.0 tensorflow_probability==0.12.1 tensorflow-datasets inference_gym
import tensorflow as tf
import tensorflow_probability as tfp
assert '0.12' in tfp.__version__, tfp.__version__
assert '2.4' in tf.__version__, tf.__version__
physical_devices = tf.config.list_physical_devices('CPU')
tf.config.set_logical_device_configuration(
physical_devices[0],
[tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration()])
tfd = tfp.distributions
tfb = tfp.bijectors
tfpk = tfp.math.psd_kernels
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate
import IPython
import seaborn as sns
from inference_gym import using_tensorflow as gym
import logging
"""
Explanation: TFP Release Notes notebook (0.12.1)
The intent of this notebook is to help TFP 0.12.1 "come to life" via some small snippets - little demos of things you can achieve with TFP.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/TFP_Release_Notebook_0_12_1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TFP_Release_Notebook_0_12_1.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TFP_Release_Notebook_0_12_1.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/TFP_Release_Notebook_0_12_1.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
End of explanation
"""
image_shape = (32, 32, 4) # 32 x 32 RGBA image
glow = tfb.Glow(output_shape=image_shape,
coupling_bijector_fn=tfb.GlowDefaultNetwork,
exit_bijector_fn=tfb.GlowDefaultExitNetwork)
pz = tfd.Sample(tfd.Normal(0., 1.), tf.reduce_prod(image_shape))
# Calling glow on distribution p(z) creates our glow distribution over images.
px = glow(pz)
# Take samples from the distribution to get images from your dataset.
image = px.sample(1)[0].numpy()
# Rescale to [0, 1].
image = (image - image.min()) / (image.max() - image.min())
plt.imshow(image);
"""
Explanation: Bijectors
Glow
A bijector from the paper Glow: Generative Flow with Invertible 1x1 Convolutions, by Kingma and Dhariwal.
Here is how to draw an image from a distribution (note that the distribution has not "learned" anything here).
End of explanation
"""
bij = tfb.RayleighCDF()
uniforms = tfd.Uniform().sample(10_000)
plt.hist(bij.inverse(uniforms), bins='auto');
"""
Explanation: RayleighCDF
Bijector for the Rayleigh distribution's CDF. One use is sampling from the Rayleigh distribution, by taking uniform samples, then passing them through the inverse of the CDF.
End of explanation
"""
x = tfd.Normal(0., 1.).sample(5)
print(tfb.Ascending()(x))
print(tfb.Invert(tfb.Ordered())(x))
"""
Explanation: Ascending() replaces Invert(Ordered())
End of explanation
"""
x = tf.linspace(-4., 4., 100)
for low in (-1., 0., 1.):
bij = tfb.Softplus(low=low)
plt.plot(x, bij(x));
"""
Explanation: Add low arg: Softplus(low=2.)
End of explanation
"""
op_1 = tf.linalg.LinearOperatorDiag(diag=[1., -1., 3.])
op_2 = tf.linalg.LinearOperatorFullMatrix([[12., 5.], [-1., 3.]])
scale = tf.linalg.LinearOperatorBlockDiag([op_1, op_2], is_non_singular=True)
bij = tfb.ScaleMatvecLinearOperatorBlock(scale)
bij([[1., 2., 3.], [0., 1.]])
"""
Explanation: tfb.ScaleMatvecLinearOperatorBlock supports blockwise LinearOperator, multi-part args
End of explanation
"""
x = tf.linspace(-5., 10., 10 - -5 + 1)
rates = (4, 2)
for i, rate in enumerate(rates):
plt.bar(x - .3 * (1 - i), tfd.Poisson(rate).prob(x), label=f'Poisson({rate})', alpha=0.5, width=.3)
plt.bar(x.numpy() + .3, tfd.Skellam(*rates).prob(x).numpy(), color='k', alpha=0.25, width=.3,
label=f'Skellam{rates}')
plt.legend();
"""
Explanation: Distributions
Skellam
Distribution over differences of two Poisson RVs. Note that samples from this distribution can be negative.
End of explanation
"""
@tfd.JointDistributionCoroutineAutoBatched
def model():
x = yield tfd.Normal(0., 1., name='x')
y = x + 4.
yield tfd.Normal(y, 1., name='y')
draw = model.sample(10_000)
plt.hist(draw.x, bins='auto', alpha=0.5)
plt.hist(draw.y, bins='auto', alpha=0.5);
"""
Explanation: JointDistributionCoroutine[AutoBatched] produce namedtuple-like samples
Explicitly specify sample_dtype=[...] for the old tuple behavior.
End of explanation
"""
dist = tfd.VonMisesFisher([0., 1, 0, 1, 0, 1], concentration=1.)
draws = dist.sample(3)
print(dist.entropy())
tf.reduce_sum(draws ** 2, axis=1) # each draw has length 1
"""
Explanation: VonMisesFisher supports dim > 5, entropy()
The von Mises-Fisher distribution is a distribution on the $n-1$ dimensional sphere in $\mathbb{R}^n$.
End of explanation
"""
plt.figure(figsize=(10, 3))
plt.subplot(121)
plt.hist(tfd.Beta(.02, .02).sample(10_000), bins='auto')
plt.title('Beta(.02, .02)')
plt.subplot(122)
plt.title('GamX/(GamX+GamY) [the old way]')
g = tfd.Gamma(.02, 1); s0, s1 = g.sample(10_000), g.sample(10_000)
plt.hist(s0 / (s0 + s1), bins='auto')
plt.show()
plt.figure(figsize=(10, 3))
plt.subplot(121)
plt.hist(tfd.ExpGamma(.02, 1.).sample(10_000), bins='auto')
plt.title('ExpGamma(.02, 1)')
plt.subplot(122)
plt.hist(tfb.Log()(tfd.Gamma(.02, 1.)).sample(10_000), bins='auto')
plt.title('tfb.Log()(Gamma(.02, 1)) [the old way]');
"""
Explanation: ExpGamma, ExpInverseGamma
log_rate parameter added to Gamma.
Numerical improvements when sampling low-concentration Beta, Dirichlet & friends. Implicit reparameterization gradients in all cases.
End of explanation
"""
@tfd.JointDistributionCoroutineAutoBatched
def model():
x = yield tfd.Normal(0, 1, name='x')
y = yield tfd.Normal(x + 4, 1, name='y')
print(model.sample(seed=(1, 2)))
print(model.sample(seed=(1, 2)))
"""
Explanation: JointDistribution*AutoBatched support reproducible sampling (with length-2 tuple/Tensor seeds)
End of explanation
"""
# Build vMFs with the same mean direction, batch of increasing concentrations.
vmf = tfd.VonMisesFisher(tf.math.l2_normalize(tf.random.normal([10])),
concentration=[0., .1, 1., 10.])
# KL increases with concentration, since vMF(conc=0) == SphericalUniform.
print(tfd.kl_divergence(vmf, tfd.SphericalUniform(10)))
"""
Explanation: KL(VonMisesFisher || SphericalUniform)
End of explanation
"""
print('Gamma:', tfd.Gamma.parameter_properties())
print('Categorical:', tfd.Categorical.parameter_properties(dtype=tf.float64, num_classes=7))
"""
Explanation: parameter_properties
Distribution classes now expose a parameter_properties(dtype=tf.float32, num_classes=None) class method, which can enable automated construction of many classes of distributions.
End of explanation
"""
@tfd.JointDistributionCoroutineAutoBatched
def model():
scale = yield tfd.Gamma(1, 1, name='scale')
obs = yield tfd.Normal(0, scale, name='obs')
model.experimental_default_event_space_bijector(obs=.2).forward(
[tf.random.uniform([3], -2, 2.)])
"""
Explanation: experimental_default_event_space_bijector
Now accepts additional args pinning some distribution parts.
End of explanation
"""
dist = tfd.JointDistributionSequential([
tfd.HalfNormal(1.),
lambda scale: tfd.Normal(0., scale, name='observed')])
@tf.function
def sample():
bij = dist.experimental_default_event_space_bijector(observed=1.)
target_log_prob = dist.experimental_pin(observed=1.).unnormalized_log_prob
kernel = tfp.mcmc.TransformedTransitionKernel(
tfp.mcmc.HamiltonianMonteCarlo(target_log_prob,
step_size=0.6,
num_leapfrog_steps=16),
bijector=bij)
return tfp.mcmc.sample_chain(500,
current_state=tf.ones([8]), # multiple chains
kernel=kernel,
trace_fn=None)
draws = sample()
fig, (hist, trace) = plt.subplots(ncols=2, figsize=(16, 3))
trace.plot(draws, alpha=0.5)
for col in tf.transpose(draws):
sns.kdeplot(col, ax=hist);
"""
Explanation: JointDistribution.experimental_pin
Pins some joint distribution parts, returning JointDistributionPinned object representing the joint unnormalized density.
Working with the experimental_default_event_space_bijector, this makes doing variational inference or MCMC with sensible defaults much more straightforward. In the below example, the first two lines of sample make running MCMC a breeze.
End of explanation
"""
nb = tfd.NegativeBinomial.experimental_from_mean_dispersion(30., .01)
plt.hist(nb.sample(10_000), bins='auto');
"""
Explanation: tfd.NegativeBinomial.experimental_from_mean_dispersion
Alternative parameterization. Email tfprobability@tensorflow.org or send us a PR to add similar class methods for other distributions.
End of explanation
"""
# Note: 2-logical devices are configured in the install/import cell at top.
strategy = tf.distribute.MirroredStrategy()
assert strategy.num_replicas_in_sync == 2
@tfp.experimental.distribute.JointDistributionCoroutine
def model():
root = tfp.experimental.distribute.JointDistributionCoroutine.Root
group_scale = yield root(tfd.Sample(tfd.Exponential(1), 3, name='group_scale'))
_ = yield tfp.experimental.distribute.ShardedSample(tfd.Independent(tfd.Normal(0, group_scale), 1),
sample_shape=[4], name='x')
seed1, seed2 = tfp.random.split_seed((1, 2))
@tf.function
def sample(seed):
return model.sample(seed=seed)
xs = strategy.run(sample, (seed1,))
print("""
Note that the global latent `group_scale` is shared across devices, whereas
the local `x` is sampled independently on each device.
""")
print('sample:', xs)
print('another sample:', strategy.run(sample, (seed2,)))
@tf.function
def log_prob(x):
return model.log_prob(x)
print("""
Note that each device observes the same log_prob (local latent log_probs are
summed across devices).
""")
print('log_prob:', strategy.run(log_prob, (xs,)))
@tf.function
def grad_log_prob(x):
return tfp.math.value_and_gradient(model.log_prob, x)[1]
print("""
Note that each device observes the same log_prob gradient (local latents have
independent gradients, global latents have gradients aggregated across devices).
""")
print('grad_log_prob:', strategy.run(grad_log_prob, (xs,)))
"""
Explanation: tfp.experimental.distribute
DistributionStrategy-aware joint distributions, allowing for cross-device likelihood computations. Sharded Independent and Sample distributions.
End of explanation
"""
gm = tfpk.GeneralizedMatern(df=[0.5, 1.5, 2.5], length_scale=1., amplitude=0.5)
m1 = tfpk.MaternOneHalf(length_scale=1., amplitude=0.5)
m2 = tfpk.MaternThreeHalves(length_scale=1., amplitude=0.5)
m3 = tfpk.MaternFiveHalves(length_scale=1., amplitude=0.5)
xs = tf.linspace(-1.5, 1.5, 100)
gm_matrix = gm.matrix([[0.]], xs[..., tf.newaxis])
plt.plot(xs, gm_matrix[0][0])
plt.plot(xs, m1.matrix([[0.]], xs[..., tf.newaxis])[0])
plt.show()
plt.plot(xs, gm_matrix[1][0])
plt.plot(xs, m2.matrix([[0.]], xs[..., tf.newaxis])[0])
plt.show()
plt.plot(xs, gm_matrix[2][0])
plt.plot(xs, m3.matrix([[0.]], xs[..., tf.newaxis])[0])
plt.show()
"""
Explanation: PSD Kernels
GeneralizedMatern
The GeneralizedMatern positive-semidefinite kernel generalizes MaternOneHalf, MaternThreeHalves, and MaternFiveHalves.
End of explanation
"""
epa = tfpk.Parabolic()
xs = tf.linspace(-1.05, 1.05, 100)
plt.plot(xs, epa.matrix([[0.]], xs[..., tf.newaxis])[0]);
"""
Explanation: Parabolic (Epanechnikov)
End of explanation
"""
# Import a Brownian Motion model from TFP's inference gym.
model = gym.targets.BrownianMotionMissingMiddleObservations()
prior = model.prior_distribution()
ground_truth = ground_truth = model.sample_transformations['identity'].ground_truth_mean
target_log_prob = lambda *values: model.log_likelihood(values) + prior.log_prob(values)
"""
Explanation: VI
build_asvi_surrogate_posterior
Automatically construct a structured surrogate posterior for VI in a way that incorporates the graphical structure of the prior distribution. This utilizes the method described in the paper Automatic Structured Variational Inference (https://arxiv.org/abs/2002.00643).
End of explanation
"""
# The observed loc values in the Brownian Motion inference gym model
OBSERVED_LOC = np.array([
0.21592641, 0.118771404, -0.07945447, 0.037677474, -0.27885845, -0.1484156,
-0.3250906, -0.22957903, -0.44110894, -0.09830782, np.nan, np.nan, np.nan,
np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, -0.8786016,
-0.83736074, -0.7384849, -0.8939254, -0.7774566, -0.70238715, -0.87771565,
-0.51853573, -0.6948214, -0.6202789
]).astype(dtype=np.float32)
# Plot the prior and the likelihood observations
plt.figure()
plt.title('Brownian Motion Prior Samples and Observations')
num_samples = 15
prior_samples = prior.sample(num_samples)
plt.plot(prior_samples, c='blue', alpha=0.1)
plt.plot(prior_samples[0][0], label="Prior Samples", c='blue', alpha=0.1)
plt.scatter(x=range(30),y=OBSERVED_LOC, c='black', alpha=0.5, label="Observations")
plt.legend(bbox_to_anchor=(1.05, 1), borderaxespad=0.);
logging.getLogger('tensorflow').setLevel(logging.ERROR) # suppress pfor warnings
# Construct and train an ASVI Surrogate Posterior.
asvi_surrogate_posterior = tfp.experimental.vi.build_asvi_surrogate_posterior(prior)
asvi_losses = tfp.vi.fit_surrogate_posterior(target_log_prob,
asvi_surrogate_posterior,
optimizer=tf.optimizers.Adam(learning_rate=0.1),
num_steps=500)
logging.getLogger('tensorflow').setLevel(logging.NOTSET)
# Construct and train a Mean-Field Surrogate Posterior.
factored_surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(event_shape=prior.event_shape)
factored_losses = tfp.vi.fit_surrogate_posterior(target_log_prob,
factored_surrogate_posterior,
optimizer=tf.optimizers.Adam(learning_rate=0.1),
num_steps=500)
logging.getLogger('tensorflow').setLevel(logging.ERROR) # suppress pfor warnings
# Sample from the posteriors.
asvi_posterior_samples = asvi_surrogate_posterior.sample(num_samples)
factored_posterior_samples = factored_surrogate_posterior.sample(num_samples)
logging.getLogger('tensorflow').setLevel(logging.NOTSET)
"""
Explanation: This models a Brownian Motion process with a Gaussian observation model. It consists of 30 timesteps, but the middle 10 timesteps are unobservable.
```
locs[0] ~ Normal(loc=0, scale=innovation_noise_scale)
for t in range(1, num_timesteps):
locs[t] ~ Normal(loc=locs[t - 1], scale=innovation_noise_scale)
for t in range(num_timesteps):
observed_locs[t] ~ Normal(loc=locs[t], scale=observation_noise_scale)
```
The goal is to infer the values of locs from noisy observations (observed_locs). Since the middle 10 timesteps are unobservable, observed_locs are NaN values at timesteps [10,19].
End of explanation
"""
# Plot the loss curves.
plt.figure()
plt.title('Loss Curves for ASVI vs Mean-Field Surrogate Posteriors')
plt.plot(asvi_losses, c='orange', label='ASVI', alpha = 0.4)
plt.plot(factored_losses, c='green', label='Mean-Field', alpha = 0.4)
plt.ylim(-50, 300)
plt.legend(bbox_to_anchor=(1.3, 1), borderaxespad=0.);
"""
Explanation: Both the ASVI and the mean-field surrogate posterior distributions have converged, and the ASVI surrogate posterior had a lower final loss (negative ELBO value).
End of explanation
"""
# Plot samples from the ASVI and Mean-Field Surrogate Posteriors.
plt.figure()
plt.title('Posterior Samples from ASVI vs Mean-Field Surrogate Posterior')
plt.plot(asvi_posterior_samples, c='orange', alpha = 0.25)
plt.plot(asvi_posterior_samples[0][0], label='ASVI Surrogate Posterior', c='orange', alpha = 0.25)
plt.plot(factored_posterior_samples, c='green', alpha = 0.25)
plt.plot(factored_posterior_samples[0][0], label='Mean-Field Surrogate Posterior', c='green', alpha = 0.25)
plt.scatter(x=range(30),y=OBSERVED_LOC, c='black', alpha=0.5, label='Observations')
plt.plot(ground_truth, c='black', label='Ground Truth')
plt.legend(bbox_to_anchor=(1.585, 1), borderaxespad=0.);
"""
Explanation: The samples from the posteriors highlight how nicely the ASVI surrogate posterior captures the uncertainty for the timesteps without observations. On the other hand, the mean-field surrogate posterior struggles to capture the true uncertainty.
End of explanation
"""
kernel = tfp.mcmc.HamiltonianMonteCarlo(lambda x: -x**2 / 2, .05, 20)
pbar = tfp.experimental.mcmc.ProgressBarReducer(100)
kernel = tfp.experimental.mcmc.WithReductions(kernel, pbar)
plt.hist(tf.reshape(tfp.mcmc.sample_chain(100, current_state=tf.ones([128]), kernel=kernel, trace_fn=None), [-1]), bins='auto')
pbar.bar.close()
"""
Explanation: MCMC
ProgressBarReducer
Visualize progress of the sampler. (May have a nominal performance penalty; not currently supported under JIT compilation.)
End of explanation
"""
initial_state = tf.random.uniform([4096], -2., 2.)
def smc(seed):
return tfp.experimental.mcmc.sample_sequential_monte_carlo(
prior_log_prob_fn=lambda x: -x**2 / 2,
likelihood_log_prob_fn=lambda x: -(x-1.)**2 / 2,
current_state=initial_state,
seed=seed)[1]
plt.hist(smc(seed=(12, 34)), bins='auto');plt.show()
print(smc(seed=(12, 34))[:10])
print('different:', smc(seed=(10, 20))[:10])
print('same:', smc(seed=(12, 34))[:10])
"""
Explanation: sample_sequential_monte_carlo supports reproducible sampling
End of explanation
"""
def cov_to_ellipse(t, cov, mean):
"""Draw a one standard deviation ellipse from the mean, according to cov."""
diag = tf.linalg.diag_part(cov)
a = 0.5 * tf.reduce_sum(diag)
b = tf.sqrt(0.25 * (diag[0] - diag[1])**2 + cov[0, 1]**2)
major = a + b
minor = a - b
theta = tf.math.atan2(major - cov[0, 0], cov[0, 1])
x = (tf.sqrt(major) * tf.cos(theta) * tf.cos(t) -
tf.sqrt(minor) * tf.sin(theta) * tf.sin(t))
y = (tf.sqrt(major) * tf.sin(theta) * tf.cos(t) +
tf.sqrt(minor) * tf.cos(theta) * tf.sin(t))
return x + mean[0], y + mean[1]
fig, axes = plt.subplots(nrows=4, ncols=5, figsize=(14, 8),
sharex=True, sharey=True, constrained_layout=True)
t = tf.linspace(0., 2 * np.pi, 200)
tot = 10
cov = 0.1 * tf.eye(2) + 0.9 * tf.ones([2, 2])
mvn = tfd.MultivariateNormalTriL(loc=[1., 2.],
scale_tril=tf.linalg.cholesky(cov))
for ax in axes.ravel():
rv = tfp.experimental.stats.RunningCovariance(
num_samples=0., mean=tf.zeros(2), sum_squared_residuals=tf.zeros((2, 2)),
event_ndims=1)
for idx, x in enumerate(mvn.sample(tot)):
rv = rv.update(x)
ax.plot(*cov_to_ellipse(t, rv.covariance(), rv.mean),
color='k', alpha=(idx + 1) / tot)
ax.plot(*cov_to_ellipse(t, mvn.covariance(), mvn.mean()), 'r')
fig.suptitle("Twenty tries to approximate the red covariance with 10 draws");
"""
Explanation: Added streaming computations of variance, covariance, Rhat
Note, the interfaces to these have changed somewhat in tfp-nightly.
End of explanation
"""
xs = tf.linspace(0.5, 20., 100)
ys = tfp.math.bessel_ive([[0.5], [1.], [np.pi], [4.]], xs)
zs = tfp.math.bessel_kve([[0.5], [1.], [2.], [np.pi]], xs)
for i in range(4):
plt.plot(xs, ys[i])
plt.show()
for i in range(4):
plt.plot(xs, zs[i])
plt.show()
"""
Explanation: Math, stats
Bessel functions: ive, kve, log-ive
End of explanation
"""
edges = tf.linspace(-4., 4, 31)
samps = tfd.TruncatedNormal(0, 1, -4, 4).sample(100_000, seed=(123, 456))
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3))
ax1.bar(edges[:-1], tfp.stats.histogram(samps, edges))
ax1.set_title('samples histogram')
ax2.bar(edges[:-1], tfp.stats.histogram(samps, edges, weights=1 / tfd.Normal(0, 1).prob(samps)))
ax2.set_title('samples, weighted by inverse p(sample)');
"""
Explanation: Optional weights arg to tfp.stats.histogram
End of explanation
"""
x = tf.linspace(-3., 3., 10)
y = tf.math.erfc(x)
z = tfp.math.erfcinv(y)
print(x)
print(z)
"""
Explanation: tfp.math.erfcinv
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_clickable_image.ipynb
|
bsd-3-clause
|
# Authors: Christopher Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)
from scipy.ndimage import imread
import numpy as np
from matplotlib import pyplot as plt
from os import path as op
import mne
from mne.viz import ClickableImage, add_background_image # noqa
from mne.channels import generate_2d_layout # noqa
print(__doc__)
# Set parameters and paths
plt.rcParams['image.cmap'] = 'gray'
im_path = op.join(op.dirname(mne.__file__), 'data', 'image', 'mni_brain.gif')
# We've already clicked and exported
layout_path = op.join(op.dirname(mne.__file__), 'data', 'image')
layout_name = 'custom_layout.lout'
"""
Explanation: ================================================================
Demonstration of how to use ClickableImage / generate_2d_layout.
================================================================
In this example, we open an image file, then use ClickableImage to
return 2D locations of mouse clicks (or load a file already created).
Then, we use generate_2d_layout to turn those xy positions into a layout
for use with plotting topo maps. In this way, you can take arbitrary xy
positions and turn them into a plottable layout.
End of explanation
"""
im = imread(im_path)
plt.imshow(im)
"""
This code opens the image so you can click on it. Commented out
because we've stored the clicks as a layout file already.
# The click coordinates are stored as a list of tuples
click = ClickableImage(im)
click.plot_clicks()
coords = click.coords
# Generate a layout from our clicks and normalize by the image
lt = generate_2d_layout(np.vstack(coords), bg_image=im)
lt.save(layout_path + layout_name) # To save if we want
"""
# We've already got the layout, load it
lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)
# Create some fake data
nchans = len(lt.pos)
nepochs = 50
sr = 1000
nsec = 5
events = np.arange(nepochs).reshape([-1, 1])
events = np.hstack([events, np.zeros([nepochs, 2], dtype=int)])
data = np.random.randn(nepochs, nchans, sr * nsec)
info = mne.create_info(nchans, sr, ch_types='eeg')
epochs = mne.EpochsArray(data, info, events)
evoked = epochs.average()
# Using the native plot_topo function with the image plotted in the background
f = evoked.plot_topo(layout=lt, fig_background=im)
"""
Explanation: Load data and click
End of explanation
"""
|
ktmud/deep-learning
|
first-neural-network/Your_first_neural_network.solution.ipynb
|
mit
|
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
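Concretely, each variable $x$ is standardized as $x' = (x - \mu)/\sigma$, and a prediction can later be mapped back to the original scale with $x = x'\sigma + \mu$ using the stored mean and standard deviation.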
End of explanation
"""
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
import numpy as np
from collections import OrderedDict
from numba import jitclass, jit
from numba import int64, float64, float32
spec = OrderedDict({
'input_nodes': int64,
'hidden_nodes': int64,
'output_nodes': int64,
'weights_input_to_hidden': float64[:, :],
'weights_hidden_to_output': float64[:, :],
'lr': float64
})
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
self.weights_input_to_hidden = np.ones((self.input_nodes, self.hidden_nodes)) / 10
self.weights_hidden_to_output = np.ones((self.hidden_nodes, self.output_nodes)) / 10
# Initialize weights
# self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
# (self.input_nodes, self.hidden_nodes))
# self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
# (self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
def __repr__(self):
return '<NeuralNetwork: {:,} -> {:,} -> {:,}; lr: {:}>'.format(
self.input_nodes, self.hidden_nodes, self.output_nodes, self.lr
)
def activation_function(self, x):
return 1 / (1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record,
each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
# Eg: 4 (input) -> 2 (hidden) -> 1 (output)
# (n_records, 4), (n_records, 1)
X, y = features, targets
### Forward pass ###
# (n_records, 1), (n_records, 2)
final_outputs, hidden_outputs = self._run(X)
### Backward pass ###
# (n_records, 1)
error = y - final_outputs # Output error
# (n_records, 1)
output_error_term = error # because f'(x) = 1
# Calculate for each node in the hidden layer's contribution to the error
# (n_recors, 1) @ (1, 2) = (n_records, 2)
hidden_error = output_error_term @ self.weights_hidden_to_output.T
# Backpropagated error terms
# (n_records, 2) * (n_records, 2) = (n_records, 2)
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
# (4, n_records) * (n_records, 2) = (4, 2)
delta_weights_i_h = X.T @ hidden_error_term
# Weight step (hidden to output)
# (2, n_records) * (n_records, 1) = (2, 1)
delta_weights_h_o = hidden_outputs.T @ output_error_term
# Update the weights
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
def _run(self, features):
# Hidden layer
# (n, 4) @ (4, 2) = (n, 2)
hidden_inputs = features @ self.weights_input_to_hidden
hidden_outputs = self.activation_function(hidden_inputs)
# Output layer
# (n, 2) @ (2, 1) = (n, 1)
final_inputs = hidden_outputs @ self.weights_hidden_to_output
# (n, 1)
final_outputs = final_inputs # f(x) = x
return final_outputs, hidden_outputs
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
final_outputs, _ = self._run(features)
return final_outputs
inputs = np.array([[0.5, -0.2, 0.1, 0.2],
[0.5, -0.2, 0.1, 0.2]])
targets = np.array([[0.4], [0.4]])
network = NeuralNetwork(4, 2, 1, 0.5)
network.train(inputs, targets)
inputs = np.array([[1.0, 0.0], [0.0, 1]])
targets = np.array([[1.0], [0.0]])
network = NeuralNetwork(2, 1, 1, 0.3)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)
inputs = np.array([[1.0, 0.0]])
targets = np.array([[1.0]])
network = NeuralNetwork(2, 1, 1, 0.3)
network.train(inputs, targets)
print(np.round(network.weights_input_to_hidden, 6))
print(np.round(network.weights_hidden_to_output, 6))
print('')
network.train(np.array([[0.0, 1.0]]), np.array([[0.0]]))
print(np.round(network.weights_input_to_hidden, 8))
print(np.round(network.weights_hidden_to_output, 6))
class NeuralNetwork2(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
self.weights_input_to_hidden = np.ones((self.input_nodes, self.hidden_nodes)) / 10
self.weights_hidden_to_output = np.ones((self.hidden_nodes, self.output_nodes)) / 10
# Initialize weights
# self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
# (self.input_nodes, self.hidden_nodes))
# self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
# (self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
self.activation_function = lambda x : 1/(1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
### Forward pass ###
hidden_inputs = np.dot(X, self.weights_input_to_hidden)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
# since the last layer just passes on its value, we don't have to apply the sigmoid here.
final_outputs = final_inputs
### Backward pass ###
error = y - final_outputs
# The derivative of the activation function y=x is 1
output_error_term = error * 1.0
hidden_error = np.dot(self.weights_hidden_to_output, error)
# Backpropagated error terms
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# Weights update
self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records
self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
# Forward pass
hidden_inputs = np.dot(features, self.weights_input_to_hidden)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
final_outputs = final_inputs
return final_outputs
inputs = np.array([[1.0, 0.0], [0.0, 1]])
targets = np.array([[1.0], [0.0]])
network = NeuralNetwork2(2, 1, 1, 0.3)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)
inputs = np.array([[1.0, 0.0]])
targets = np.array([[1.0]])
network = NeuralNetwork2(2, 1, 1, 0.3)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)
print('')
network.train(np.array([[0.0, 1.0]]), np.array([[0.0]]))
print(network.weights_input_to_hidden)
print(network.weights_hidden_to_output)
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
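For reference, the sigmoid is $\sigma(x) = \frac{1}{1 + e^{-x}}$ and its derivative is $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$, which is why the hidden error term in the code above multiplies by hidden_outputs * (1 - hidden_outputs); the output activation $f(x) = x$ has derivative 1.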
End of explanation
"""
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
_ = unittest.TextTestRunner().run(suite)
"""
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
"""
%%timeit -n 1 -r 1
import sys
# declare global variables because %%timeit will
# put the whole cell in a closure
global losses
global network
### Set the hyperparameters here ###
iterations = 4000
learning_rate = 1.3
hidden_nodes = 7
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
train_features_arr = np.array(train_features)
val_features_arr = np.array(val_features)
train_targets_cnt = np.array(train_targets.cnt, ndmin=2).T
val_targets_cnt = np.array(val_targets.cnt, ndmin=2).T
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features_arr[batch], train_targets_cnt[batch]
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features_arr), train_targets_cnt)
val_loss = MSE(network.run(val_features_arr), val_targets_cnt)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
# give room for timeit result
print('\n')
fig, ax = plt.subplots(figsize=(7,4))
ax.plot(losses['train'], label='Training loss')
ax.plot(losses['validation'], label='Validation loss')
ax.legend()
ax.set_xlabel('epoch')
ax.set_ylabel('loss')
_ = plt.ylim([0, 1])
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(np.array(test_features))*std + mean
ax.plot(predictions[:,0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index, 'dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
|
cokelaer/colormap
|
notebooks/colormap package demonstration.ipynb
|
bsd-3-clause
|
%pylab inline
from colormap import Colormap
c = Colormap()
cmap = c.cmap('cool')
# let us see what it looks like
c.test_colormap(cmap)
# It would be nice to plot a bunch of colormaps to pick an interesting one
c.plot_colormap('diverging')
c.plot_colormap(c.misc)
c.plot_colormap(c.qualitative)
c.plot_colormap(c.sequentials)
c.plot_colormap(c.sequentials2)
# This list is implemented in colormap package itself
c.plot_colormap(c.diverging_black)
"""
Explanation: Using the colormap package
You can use the class Colormap as shown below
You can use the cmap_builder function and test_cmap function as well (see end of notebook)
End of explanation
"""
mycmap = c.cmap_linear('red', 'white', 'green(w3c)')
c.test_colormap(mycmap)
mycmap = c.cmap_bicolor('red', 'green(w3c)')
c.test_colormap(mycmap)
# there are also 2 extra maps from R
mycmap = c.get_cmap_heat()
c.test_colormap(mycmap)
# a color can be given as a name available in
import colormap.xfree86 as colors
list(colors.XFree86_colors.keys())[0:5]
#or
list(colors.XFree86_colors.values())[0:5]
# or as RGB, HLS, HSV, YUV, or hex format
from colormap import Color
co = Color('white')
co.hex
mycmap = c.cmap_linear('red', '#FFFFFF', 'green(w3c)')
c.test_colormap(mycmap)
# Conversion between colors
c = Color('red')
c.rgb
c.hls
c.hex
print(c)
"""
Explanation: Well, I have not found the one I wanted... I wanted one going from red to white to green
End of explanation
"""
from colormap import cmap_builder, test_cmap
mycm = cmap_builder('red', 'white', 'green')
test_cmap(mycm)
"""
Explanation: Using cmap_builder and test_cmap
Instead of using the Colormap class, you can also use the cmap_builder alias function and the test_cmap function
End of explanation
"""
|
kimkipyo/dss_git_kkp
|
통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/16.Pandas 데이터 입출력.ipynb
|
mit
|
%cd /home/dockeruser/data/pydata-book-master/
"""
Explanation: Pandas Data Input/Output
To run the examples in this notebook, you need to move to the following directory of the datascienceschool/rpython Docker image.
End of explanation
"""
!cat ../../pydata-book-master/ch06/ex1.csv
!cat ch06/ex1.csv
import pandas as pd

df = pd.read_csv('../../pydata-book-master/ch06/ex1.csv')
df
"""
Explanation: Types of pandas data input/output
CSV
Clipboard
Excel
JSON
HTML
Python Pickling
HDF5
SAS
STATA
SQL
Google BigQuery
Reading CSV files
Comma Separated Values
Can be exported from Microsoft Excel
pandas.read_csv(): csv file -> DataFrame
End of explanation
"""
!cat ch06/ex2.csv
pd.read_csv('../../pydata-book-master/ch06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
"""
Explanation: If the file has no column names, they can be set with the names argument.
End of explanation
"""
!cat ch06/csv_mindex.csv
pd.read_csv('../../pydata-book-master/ch06/csv_mindex.csv', index_col=['key1', 'key2'])
"""
Explanation: To use particular columns as the index, use the index_col argument.
End of explanation
"""
!cat 'ch06/ex3.txt'
pd.read_table('../../pydata-book-master/ch06/ex3.txt', sep='\s+')
"""
Explanation: If the delimiter is not a comma, use the sep argument.
End of explanation
"""
!cat ch06/ex4.csv
pd.read_csv('../../pydata-book-master/ch06/ex4.csv', skiprows=[0, 2, 3])
"""
Explanation: If there are rows that should be skipped, use skiprows.
End of explanation
"""
!cat ch06/ex5.csv
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('../../pydata-book-master/ch06/ex5.csv', na_values=sentinels)
"""
Explanation: To treat particular values as NA, use the na_values argument.
End of explanation
"""
!head ch06/ex6.csv
pd.read_csv('../../pydata-book-master/ch06/ex6.csv', nrows=3)
"""
Explanation: To read only some of the rows, use the nrows argument.
End of explanation
"""
df.to_csv('../../pydata-book-master/ch06/out.csv')
!cat ch06/out.csv
"""
Explanation: Writing CSV files
DataFrame.to_csv(): DataFrame -> csv file
End of explanation
"""
import sys
df.to_csv(sys.stdout, sep='|')
"""
Explanation: The delimiter can be changed with the sep argument.
End of explanation
"""
df.to_csv(sys.stdout, na_rep='NULL')
"""
Explanation: The NA representation can be changed with the na_rep argument.
End of explanation
"""
df.to_csv(sys.stdout, index=False, header=False)
"""
Explanation: Whether the index and header are written can be controlled with the index and header arguments.
End of explanation
"""
titanic = pd.read_csv('http://dato.com/files/titanic.csv', index_col=0)
titanic.tail()
"""
Explanation: Reading CSV files from the internet
If a URL is given instead of a file path, the file is downloaded and imported.
End of explanation
"""
import pandas_datareader.data as web
import datetime
start = datetime.datetime(2015, 1, 1)
end = datetime.datetime(2016, 8, 25)
"""
Explanation: Reading data from online databases
The following online data sources can be loaded directly into pandas using DataReader from the pandas_datareader package:
Yahoo! Finance
Google Finance
St.Louis FED (FRED)
Kenneth French’s data library
World Bank
Google Analytics
End of explanation
"""
df = web.DataReader("005930.KS", 'yahoo', start, end)
df.tail()
"""
Explanation: http://finance.yahoo.com/q?s=005930.ks
End of explanation
"""
df = web.DataReader("KRX:005930", "google", start, end)
df.tail()
"""
Explanation: https://www.google.com/finance?cid=151610035517112
End of explanation
"""
gdp = web.DataReader("GDP", "fred", start, end)
gdp
inflation = web.DataReader(["CPIAUCSL", "CPILFESL"], "fred", start, end)
inflation
"""
Explanation: https://fred.stlouisfed.org/series/GDP
https://fred.stlouisfed.org/series/CPIAUCSL
https://fred.stlouisfed.org/series/CPILFESL
End of explanation
"""
|
piyushbhattacharya/machine-learning
|
python/Carvan script.ipynb
|
gpl-3.0
|
ld_train, ld_test = train_test_split(cd_train, test_size=0.2, random_state=2)
x80_train = ld_train.drop(['V86'],1)
y80_train = ld_train['V86']
x20_test = ld_test.drop(['V86'],1)
y20_test = ld_test['V86']
"""
Explanation: Optimizing model...
Run train_test splits on the train data
End of explanation
"""
model_logr1 = LogisticRegression(penalty="l1",class_weight=None,random_state=2)
model_logr1.fit(x80_train, y80_train)
y20_test_pred1 = np.where(model_logr1.predict(x20_test)==1,1,0)
temp_df1 = pd.DataFrame(list(zip(cd_test['V1'],list(y20_test_pred1))), columns=['V1','V86'])
y_test_pred1 = temp_df1['V86']
roc_auc_score(y20_test, y_test_pred1)
"""
Explanation: 1. Check ROC_AUC_SCORE {penalty='l1', class_weight=None}
End of explanation
"""
model_logrl2 = LogisticRegression(penalty="l2",class_weight=None,random_state=2)
model_logrl2.fit(x80_train, y80_train)
y20_test_pred2 = np.where(model_logrl2.predict(x20_test)==1,1,0)
temp_df2 = pd.DataFrame(list(zip(cd_test['V1'],list(y20_test_pred2))), columns=['V1','V86'])
y_test_pred2 = temp_df2['V86']
roc_auc_score(y20_test, y_test_pred2)
"""
Explanation: 2. Check ROC_AUC_SCORE {penalty='l2', class_weight=None}
End of explanation
"""
model_logr3 = LogisticRegression(penalty="l1",class_weight="balanced",random_state=2)
model_logr3.fit(x80_train, y80_train)
y20_test_pred3 = np.where(model_logr3.predict(x20_test)==1,1,0)
temp_df3 = pd.DataFrame(list(zip(cd_test['V1'],list(y20_test_pred3))), columns=['V1','V86'])
y_test_pred3 = temp_df3['V86']
roc_auc_score(y20_test, y_test_pred3)
"""
Explanation: 3. Check ROC_AUC_SCORE {penalty='l1', class_weight='balanced'}
End of explanation
"""
model_logr4 = LogisticRegression(penalty="l2",class_weight="balanced",random_state=2, solver="newton-cg")
model_logr4.fit(x80_train, y80_train)
y20_test_pred4 = np.where(model_logr4.predict(x20_test)==1,1,0)
temp_df4 = pd.DataFrame(list(zip(cd_test['V1'],list(y20_test_pred4))), columns=['V1','V86'])
y_test_pred4 = temp_df4['V86']
roc_auc_score(y20_test, y_test_pred4)
prob_score=pd.Series(list(zip(*model_logr4.predict_proba(x80_train)))[1])
"""
Explanation: 4. Check ROC_AUC_SCORE {penalty='l2', class_weight='balanced'}
End of explanation
"""
cutoffs=np.linspace(0,1,100)
"""
Explanation: 2. Calculate optimum FBeta score
a. Calculate cutoffs and best KS
End of explanation
"""
KS_cut=[]
for cutoff in cutoffs:
predicted = pd.Series([0]*len(y80_train))
predicted[prob_score > cutoff] = 1
df = pd.DataFrame(list(zip(y80_train,predicted)),columns=["real","predicted"])
TP=len(df[(df["real"]==1) &(df["predicted"]==1) ])
FP=len(df[(df["real"]==0) &(df["predicted"]==1) ])
TN=len(df[(df["real"]==0) &(df["predicted"]==0) ])
FN=len(df[(df["real"]==1) &(df["predicted"]==0) ])
P=TP+FN
N=TN+FP
KS=(TP/P)-(FP/N)
KS_cut.append(KS)
cutoff_data=pd.DataFrame(list(zip(cutoffs,KS_cut)),columns=["cutoff","KS"])
KS_cutoff=cutoff_data[cutoff_data["KS"]==cutoff_data["KS"].max()]["cutoff"]
"""
Explanation: For each of these cutoffs, we are going to look at the TP, FP, TN, FN values and calculate KS. Then we'll choose the best cutoff as the one with the highest KS.
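At a given cutoff, KS is computed in the loop below as the difference between the true positive rate and the false positive rate, $KS = \frac{TP}{P} - \frac{FP}{N}$.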
End of explanation
"""
# Performance on test data
prob_score_test=pd.Series(list(zip(*model_logr4.predict_proba(x20_test)))[1])
predicted_test=pd.Series([0]*len(y20_test))
predicted_test[prob_score_test>float(KS_cutoff)]=1
df_test=pd.DataFrame(list(zip(y20_test,predicted_test)),columns=["real","predicted"])
k=pd.crosstab(df_test['real'],df_test["predicted"])
print('confusion matrix :\n \n ',k)
TN=k.iloc[0,0]
TP=k.iloc[1,1]
FP=k.iloc[0,1]
FN=k.iloc[1,0]
P=TP+FN
N=TN+FP
# Accuracy of test
accuracy = (TP+TN)/(P+N)
# Sensitivity on test
sensitivity = TP/P
#Specificity on test
specificity = TN/N
print("accuracy : ", accuracy)
print("sensitivity : ", sensitivity)
print("specificity : ", specificity)
"""
Explanation: Now we'll see how this model, with the cutoff determined here, performs on the test data.
End of explanation
"""
cutoffs=np.linspace(0.010,0.99,100)
def Fbeta_perf(beta,cutoffs,y80_train,prob_score):
FB_cut=[]
for cutoff in cutoffs:
predicted=pd.Series([0]*len(y80_train))
predicted[prob_score>cutoff]=1
df=pd.DataFrame(list(zip(y80_train,predicted)),columns=["real","predicted"])
TP=len(df[(df["real"]==1) &(df["predicted"]==1) ])
FP=len(df[(df["real"]==0) &(df["predicted"]==1) ])
FN=len(df[(df["real"]==1) &(df["predicted"]==0) ])
P=TP+FN
Precision=TP/(TP+FP)
Recall=TP/P
FB=(1+beta**2)*Precision*Recall/((beta**2)*Precision+Recall)
FB_cut.append(FB)
cutoff_data=pd.DataFrame(list(zip(cutoffs,FB_cut)),columns=["cutoff","FB"])
FB_cutoff=cutoff_data[cutoff_data["FB"]==cutoff_data["FB"].max()]["cutoff"]
prob_score_test=pd.Series(list(zip(*model_logr4.predict_proba(x20_test)))[1])
predicted_test=pd.Series([0]*len(y20_test))
predicted_test[prob_score_test>float(FB_cutoff)]=1
df_test=pd.DataFrame(list(zip(y20_test,predicted_test)),columns=["real","predicted"])
k=pd.crosstab(df_test['real'],df_test["predicted"])
# print('confusion matrix :\n \n ',k)
TN=k.iloc[0,0]
TP=k.iloc[1,1]
FP=k.iloc[0,1]
FN=k.iloc[1,0]
P=TP+FN
N=TN+FP
print('For beta :',beta)
print('Accuracy is :',(TP+TN)/(P+N))
print('Sensitivity is :',(TP/P))
print('Specificity is :',(TN/N))
print('\n \n \n')
Fbeta_perf(0.5,cutoffs,y80_train,prob_score)
Fbeta_perf(1,cutoffs,y80_train,prob_score)
Fbeta_perf(1.5,cutoffs,y80_train,prob_score)
Fbeta_perf(2,cutoffs,y80_train,prob_score)
Fbeta_perf(2.5,cutoffs,y80_train,prob_score)
Fbeta_perf(3.0,cutoffs,y80_train,prob_score)
"""
Explanation: Next we see how the cutoff determined by the F-beta score performs on the test data, for beta values from 0.5 to 3.0.
End of explanation
"""
from sklearn.metrics import fbeta_score
betas = np.linspace(1,3,num=5)
for ta in betas:
print('\n')
print('Beta : ', ta)
fscorema = fbeta_score(y20_test, y_test_pred4, average='macro', beta=ta)
print('fscore_ma : ' ,fscorema)
fscoremi = fbeta_score(y20_test, y_test_pred4, average='micro', beta=ta)
print('fscore_mi : ' ,fscoremi)
fscorew = fbeta_score(y20_test, y_test_pred4, average='weighted', beta=ta)
print('fscore_w : ' ,fscorew)
fscoren = fbeta_score(y20_test, y_test_pred4, average=None, beta=ta)
print('fscore_n : ' ,fscoren)
print('fscorema : ' ,fscorema)
print('fscoremi : ' ,fscoremi)
print('fscorew : ' ,fscorew)
print('fscoren : ' ,fscoren)
"""
Explanation: b. Calculate FBeta score on original optimal model {model_logr4}
End of explanation
"""
model_logr4.fit(x,y)
prediction = np.where(model_logr4.predict(cd_test)==1,"Yes","No")
submission = pd.DataFrame(list(zip(cd_test['V1'],list(prediction))),
columns=['V1','V86'])
pred_y = submission['V86']
actual_y = cd_train['V86']
submission.head(4)
submission.to_csv('submission_carvan.csv',index=False)
"""
Explanation: Fit the optimized model on actual x,y and predict y from test dataset
End of explanation
"""
|
tonyfast/tidy-harness
|
README.ipynb
|
bsd-3-clause
|
import harness
from harness import Harness
from pandas import Categorical
from sklearn import datasets, discriminant_analysis
iris = datasets.load_iris()
# Harness is just a dataframe
df = Harness(
data=iris['data'], index=Categorical(iris['target']),
estimator=discriminant_analysis.LinearDiscriminantAnalysis(),
feature_level=-1, # the feature level indicates an index
# in the dataframe. -1 is the last index.
)
# Fit the model with 50 random rows.
df.sample(50).fit()
# Transform the dataframe
transformed = df.transform()
transformed.set_index(
df.index
.rename_categories(iris['target_names'])
.rename('species'), append=True, inplace=True,
)
# Plot the dataframe using Bokeh charts.
with transformed.reset_index().DataSource(x=0, y=1) as source:
source.Scatter(color='species')
source.show()
"""
Explanation: tidy-harness
A tidy pandas.DataFrame with scikit-learn models, interactive bokeh visualizations, and jinja2 templates.
Usage
Example: Modeling Fisher's 🌸 Data
End of explanation
"""
%%script bash --bg
python setup.py develop
watchmedo tricks tricks.yaml
# Execute this cell to stop watching the files
%killbgscripts
"""
Explanation: More Examples
More examples can be found in the tests directory. Tap the Ⓣ key while in the Github interface to search quickly.
Install
For the meantime:
bash
pip install git+https://github.com/tonyfast/tidy-harness
Background
harness initially responded to the need for scikit-learn models that sit closer to a pandas.DataFrame. Since a DataFrame is Tidy Data, the rows and columns can assist in tracking samples and features over many estimations. With this knowledge it would be easier to design a testing harness for data science.
The DataFrame has a powerful declarative syntax, consider the groupby and rolling apis. There is a modern tendency toward declarative and functional syntaxes in scientific computing and visualization. This is observed in altair, dask, and scikit-learn.
tidy-harness aims to provide a chain interface between pandas.DataFrame objects and other popular scientific computing libraries in the python ecosystem. The initial harness extensions :
attach a scikit-learn estimator to the dataframe.
attach a shared jinja2 environment to render narratives about the dataframes.
bokeh plotting methods with a contextmanager for interactive visualization development
Development
The development scripts can be run through this notebook.
Jupyter notebooks are used for all Python development in this project. The key features are:
watchdog file system watcher that converts notebooks to python scripts with nbconvert. Tests are not converted.
nbconvert with the --execute flag to run notebooks and fill out their output. _The current goal is for the notebook to be viewable in a Github repo.
pytest-ipynb to run tests directly on the notebooks.
Making the python module
The script below:
Installs a develop copy of harness
Listens for file systems events to convert notebooks to python scripts.
End of explanation
"""
%%script bash
jupyter nbconvert harness/tests/*.ipynb --execute --to notebook --inplace
py.test
"""
Explanation: Build & Run Tests
The tests require pytest and pytest-ipynb.
End of explanation
"""
|
massie/notebooks
|
Physio.ipynb
|
apache-2.0
|
from math import log
# RT/F = 26.73 at room temperature
rt_div_f = 26.73
nernst = lambda xO, xI, z: rt_div_f/z * log(1.0 * xO / xI)
Na_Eq = nernst(145, 15, 1)
K_Eq = nernst(4.5, 120, 1)
Cl_Eq = nernst(116, 20, -1)
print "Na+ equilibrium potential is %.2f mV" % (Na_Eq)
print "K+ equilibrium potential is %.2f mV" % (K_Eq)
print "Cl- equilibrium potential is %.2f mV" % (Cl_Eq)
"""
Explanation: Physiology
1) Using the ion concentrations of interstitial and intracellular compartments and the Nernst equation, calculate the equilibrium potentials for Na+, K+, and Cl-
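The Nernst equation, as implemented above, is $E_X = \frac{RT}{zF}\ln\frac{[X]_{out}}{[X]_{in}}$, with $RT/F \approx 26.73\ \mathrm{mV}$ at room temperature (the value used in the code).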
End of explanation
"""
# Values from Table 3.1 p57 in syllabus
G_Na = 1
G_K = 100
G_Cl = 25
goldman = lambda Na_Out, Na_In, K_Out, K_In, Cl_Out, Cl_In: \
rt_div_f * log((G_Na * Na_Out + G_K * K_Out + G_Cl * Cl_In)/\
(1.0 * G_Na * Na_In + G_K * K_In + G_Cl * Cl_Out))
print "Potential at equalibrium is %.2f mV" % goldman(150, 15, 5, 150, 100, 10)
"""
Explanation: 2) Assuming the resting potential for the plasma membrane is -70mV, explain whether each of the ions in question 1 would be expected to move into or out of the cell. Use an I-V plot to support your answer.
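The resting potential computed above uses the Goldman equation in the form $V_m = \frac{RT}{F}\ln\frac{G_{Na}[Na]_o + G_K[K]_o + G_{Cl}[Cl]_i}{G_{Na}[Na]_i + G_K[K]_i + G_{Cl}[Cl]_o}$, with the relative conductances $G_{Na}=1$, $G_K=100$, $G_{Cl}=25$ taken from the code.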
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(20,20))
x = np.arange(-100, 60, 0.1);
iv_line = lambda G_val, E_x: G_val * x + ((0.0 - E_x) * G_val)
K_line = iv_line(G_K, K_Eq)
Na_line = iv_line(G_Na, Na_Eq)
Cl_line = iv_line(G_Cl, Cl_Eq)
Sum_line = K_line + Na_line + Cl_line
plt.grid(True)
K, = plt.plot(x, K_line, label="K")
Na, = plt.plot(x, Na_line, label="Na")
Cl, = plt.plot(x, Cl_line, label="Cl")
Em, = plt.plot(x, Sum_line, label="Em")
plt.legend(handles=[K, Na, Cl, Em])
plt.show()
"""
Explanation: IV graph
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/art_and_science_of_ml/labs/neural_network.ipynb
|
apache-2.0
|
import os, json, math
import numpy as np
import shutil
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY
"""
Explanation: Build a DNN using the Keras Functional API
Learning objectives
Review how to read in CSV file data using tf.data.
Specify input, hidden, and output layers in the DNN architecture.
Review and visualize the final DNN shape.
Train the model locally and visualize the loss curves.
Deploy and predict with the model using Cloud AI Platform.
Introduction
In this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
!ls -l ../data/toy_data/*.csv
"""
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the other notebook. Just so you don't have to run the notebook, we saved a copy in ../data/toy_data
End of explanation
"""
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
"""
Explanation: Lab Task 1: Use tf.data to read the CSV files
First let's define our columns of data, which column we're predicting for, and the default values.
End of explanation
"""
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
"""
Explanation: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
End of explanation
"""
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
INPUT_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
# TODO 2
# input layer
inputs = # TODO -- Your code here.
feature_columns = # TODO -- Your code here.
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = # TODO -- Your code here.
    # two hidden layers of [32, 8], just like in the BQML DNN
h1 = # TODO -- Your code here.
h2 = # TODO -- Your code here.
# final output is a linear activation because this is regression
output = # TODO -- Your code here.
model = # TODO -- Your code here.
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
"""
Explanation: Lab Task 2: Build a DNN with Keras
Now let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. We will print out the DNN architecture and then visualize it later on.
End of explanation
"""
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
"""
Explanation: Lab Task 3: Visualize the DNN
We can visualize the DNN using the Keras plot_model utility.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 32 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../data/toy_data/taxi-traffic-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/toy_data/taxi-traffic-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Lab Task 4: Train the model
To train the model, simply call model.fit().
Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
End of explanation
"""
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
"""
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
End of explanation
"""
# TODO 5
# TODO -- Your code here.
"""
Explanation: Lab Task 5: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.
End of explanation
"""
|
tsarouch/python_minutes
|
core/Hypothesis_Testing.ipynb
|
gpl-2.0
|
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Common Use Cases when dealing with Hypothesis Testing
End of explanation
"""
# se have:
n_h = 140
n_t = 110
observations = (n_h, n_t)
n_observations = n_h + n_t
print observations, n_observations,
# We define the null hypothesis and the test statistic
def run_null_hypothesis(n_observations):
"""the model of Null hypothesis"""
sample = [random.choice('HT') for _ in range(n_observations)]
df = pd.DataFrame(sample)
value_counts = df[0].value_counts()
n_heads = value_counts['H']
n_tails = value_counts['T']
return (n_heads, n_tails)
def test_statistic((n_heads, n_tails)):
"""Computes the test statistic"""
return abs(n_heads - n_tails)
test_stat_H0 = test_statistic(run_null_hypothesis(n_observations))
test_stat_H1 = test_statistic(observations)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_observations)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
"""
Explanation: USE-CASE: Testing Proportions
Is the coin biased?
We toss a coin 250 times and get 140 heads and 110 tails.
End of explanation
"""
from scipy import stats

# P(X >= 140) = 1 - P(X <= 139) for X ~ Binomial(n=250, p=0.5)
p = 0
for i in range(0, 140):
    p += stats.distributions.binom.pmf(i, 250, 0.5)
pval = 1 - p
print "The p-value using the frequentist approach is: ", pval
"""
Explanation: In the example above, like most of what will follow, we used the MC way to evaluate the p-value.
Nevertheless, in many cases we can analytically, with the frequentist approach have an evaluation of the p-value.
Below is shown the way of getting a p-value using the Probability Mass Function (pmf) of the binomial distribution.
The success (heads is up) process follows a Binomial distribution
X ~ B(n,p)
where n is the number of flips and p is the probability of success (heads up) in each flip.
From the classical Hypothesis Test, the p-value corresponds to the probability of getting the effect we see (or an even rarer effect) under the Null Hypothesis.
Here the H0 is that the coin is not biased => p=0.5,
and we have to sum up the probabilities (using the pmf) of seeing k = 140 or more heads out of 250 flips.
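For reference, the binomial pmf used below is $P(X=k) = \binom{n}{k} p^k (1-p)^{n-k}$, here with $n=250$ and $p=0.5$ under the null hypothesis.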
End of explanation
"""
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops
def run_null_hypothesis(n_dice_drops):
"""the model of Null hypothesis"""
dice_values = [1, 2, 3, 4, 5, 6]
rolls = np.random.choice(dice_values, n_dice_drops, replace=True)
return np.array(dict(pd.DataFrame(rolls)[0].value_counts()).values())
def test_statistic(dice_frequencies, n_dice_drops):
"""Computes the test statistic"""
expected_frequencies = np.ones(6) * n_dice_drops / 6.
return sum(abs(dice_frequencies - expected_frequencies))
test_stat_H0 = test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
test_stat_H1 = test_statistic(observations_frequencies, n_dice_drops)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
"""
Explanation: Is the dice crooked?
We have the observed frequencies {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}.
End of explanation
"""
d1 = np.random.normal(38.601, 1.42, 1000)
d2 = np.random.normal(38.523, 1.42, 1000)
plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d1, 30, normed=True)
plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d2, 30, normed=True)
# plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
# np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
# linewidth=2, color='r')
plt.show()
# one way to model the null hypothesis is by permutations, shuffle values of the two distributions and treat them as one
d_all = [i for i in d1] + [ i for i in d2]
np.random.shuffle(d_all)
count, bins, ignored = plt.hist(d_all, 30, normed=True)
plt.show()
def run_null_hypothesis(d1, d2):
"""the model of Null hypothesis - treat the two distributions as one"""
d_all = [i for i in d1] + [ i for i in d2]
np.random.shuffle(d_all)
return (d_all[:len(d1)], d_all[len(d1):])
def test_statistic(d1, d2):
"""Computes the test statistic"""
test_stat = abs(np.mean(d1) - np.mean(d2))
return test_stat
test_stat_H0 = test_statistic(*run_null_hypothesis(d1, d2))
test_stat_H1 = test_statistic(d1, d2)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(*run_null_hypothesis(d1, d2)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
# The p-value here is not small:
# it means that by chance we expect to see an effect as big as the observed one about 80% of the time.
"""
Explanation: USE-CASE: Testing Difference in Means
End of explanation
"""
data = np.random.multivariate_normal([0, 0], [[1, .75],[.75, 1]], 1000)
x = data[:, 0]
y = data[:, 1]
plt.scatter(x, y)
# we can make the null hypothesis model just by shuffling the data of one variable
x2 = x.copy()
np.random.shuffle(x2)
plt.scatter(x2, y)
def run_null_hypothesis(x, y):
"""the model of Null hypothesis - treat the two distributions as one"""
x2 = x.copy()
np.random.shuffle(x2)
return (x2, y)
def test_statistic(x, y):
"""Computes the test statistic"""
test_stat = abs(np.corrcoef(x, y)[0][1])
return test_stat
test_stat_H0 = test_statistic(*run_null_hypothesis(x, y))
test_stat_H1 = test_statistic(x, y)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(*run_null_hypothesis(x, y)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
"""
Explanation: USE-CASE: Testing a Correlation
End of explanation
"""
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops
def run_null_hypothesis(n_dice_drops):
"""the model of Null hypothesis"""
dice_values = [1, 2, 3, 4, 5, 6]
rolls = np.random.choice(dice_values, n_dice_drops, replace=True)
return np.array(dict(pd.DataFrame(rolls)[0].value_counts()).values())
def test_statistic(dice_frequencies, n_dice_drops):
"""Computes the test statistic"""
expected_frequencies = np.ones(6) * n_dice_drops / 6.
return sum( (dice_frequencies - expected_frequencies)**2 / expected_frequencies)
test_stat_H0 = test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
test_stat_H1 = test_statistic(observations_frequencies, n_dice_drops)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
"""
Explanation: USE-CASE: Testing Proportions with chi2 test
Above we used the total deviation as the test statistic, Sum(abs(observed - expected)).
It is more common to use the chi-squared statistic, Sum((observed - expected)^2 / expected).
Let's see what results we get with the chi-squared statistic as the test statistic.
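In the usual notation, $\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}$, where $O_i$ and $E_i$ are the observed and expected frequencies per die face (as computed in the code below).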
End of explanation
"""
# Lets say we have already a histogram with the bins values below:
x_obs = {1:1, 2:2, 3:2, 4:0, 5:3, 6:1, 7:1, 8:2, 9:5, 10:6,
11:1, 12:0, 13:1, 14:2, 15:1, 16:3, 17:1, 18:0, 19:1, 20:0}
x_bgr = {1:1.2, 2:1.8, 3:1.8, 4:1.9, 5:1.9, 6:2, 7:2, 8:2, 9:1.8, 10:1.8,
11:1.7, 12:1.7, 13:1.7, 14:1.6, 15:1.6, 16:1.6, 17:1.5, 18:1.5, 19:1.1, 20:0.3}
_ = plt.bar(x_obs.keys(), x_obs.values(), color='b')
_ = plt.bar(x_bgr.keys(), x_bgr.values(), alpha=0.6, color='r')
# let's say that the red is what we know as background (e.g. from Monte Carlo) and blue is the observed signal.
# Is this signal statistically significant?
# The H0 would say that both of those distributions come from the same process.
# So we can construct the H0 model by adding those values and then splitting them in two parts.
"""
Explanation: We see that the p-value is smaller when using the chi-squared statistic as the test statistic.
=> This is a very important point, since it shows that the choice of test statistic affects the p-value quite a lot.
USE-CASE: Testing Structures in Histograms
e.g. understand whether we have signal over background
End of explanation
"""
from scipy import stats
pmf_values = []
N_obs = 5
N_bgr = 1.8
for i in range(0, N_obs):  # sum the pmf over n = 0 .. n_obs - 1
pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1-np.sum(pmf_values)
print 'The p-value is ', pval
"""
Explanation: Let's focus only on bin 9, which has an observed signal value of 5.
How likely is it to find n_obs = 5 when the expected background is 1.8?
The number of entries $n$ in a bin can be treated as a Poisson variable with mean $\nu_s + \nu_b$; under the null hypothesis $\nu_s = 0$.
In this scenario we can calculate the p-value as
$P(n \geq n_{obs}) = \sum_{n=n_{obs}}^{\infty} \mathrm{pmf}_{\mathrm{Poisson}}(n;\, \nu_s=0, \nu_b) = 1 - \sum_{n=0}^{n_{obs}-1} \mathrm{pmf}_{\mathrm{Poisson}}(n;\, \nu_s=0, \nu_b)$
End of explanation
"""
from scipy import stats
pmf_values = []
N_obs = 11
N_bgr = 3.2
for i in range(0, N_obs-1):
pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1-np.sum(pmf_values)
print 'The p-value is ', pval
"""
Explanation: A point to keep in mind is that the background comes with its own uncertainty, so we eventually have a range of p-values.
In principle we can apply the procedure above to the number of entries in a subset of bins. E.g. in the two bins with the large peak we have $n_{obs}=11$ with expected $\nu_b=3.2$.
End of explanation
"""
|
materialsvirtuallab/matgenb
|
notebooks/2021-5-12-Explanation of Corrections.ipynb
|
bsd-3-clause
|
from pymatgen.entries.computed_entries import ComputedEntry
from pymatgen.entries.compatibility import MaterialsProjectCompatibility, \
MaterialsProject2020Compatibility
from pymatgen.ext.matproj import MPRester
"""
Explanation: Demonstration of Materials Project Energy Corrections
This notebook illustrates how to apply and obtain an explanation of energy corrections used in the Materials Project database.
Author: Ryan Kingsbury
Date: May 2021
pymatgen==2022.0.8
Overview
The Materials Project API (MPRester) returns ComputedEntry objects that contain information about DFT calculations. By default, these objects have adjustments applied to the energies of certain elements to reduce systematic errors. See our documentation for complete details.
As of Spring 2021, ComputedEntry objects are processed using the MaterialsProject2020Compatibility class in pymatgen by default. The legacy correction scheme, used from 2010 to 2020, is still available in MaterialsProjectCompatibility.
End of explanation
"""
# retrieve
with MPRester() as m:
entries = m.get_entries_in_chemsys("Cl-Mo-O")
entry = entries[0]
"""
Explanation: Default behavior - MaterialsProject2020Compatibility
Let's retrieve entries in the Cl-Mo-O system to demonstrate how this works.
End of explanation
"""
entries[25].energy_adjustments
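# A closer look at one adjustment from the list (the attribute names below follow the
# pymatgen EnergyAdjustment interface circa 2021 and may differ in other versions).
one_adjustment = entries[25].energy_adjustments[0]
print(one_adjustment.name, one_adjustment.value, one_adjustment.description)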
"""
Explanation: You can examine the energy corrections via the energy_adjustments attribute
of the ComputedEntry. This attribute contains a list of each energy correction that has been applied.
End of explanation
"""
compat = MaterialsProjectCompatibility()
entries = compat.process_entries(entries)
entries[25].energy_adjustments
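# To switch back to the current scheme afterwards (a sketch), the entries could be
# re-processed with the 2020 compatibility class:
# entries = MaterialsProject2020Compatibility().process_entries(entries)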
"""
Explanation: If you want even more detail, you can examine an individual EnergyAdjustment (one element of the list)
Applying the legacy corrections with MaterialsProjectCompatibility
If you want to use the old corrections, or apply your own, you can re-process the ComputedEntry obtained from MPRester using a Compatibility class. The .process_entries method will remove any previously-applied energy corrections and re-process the entry in-place.
End of explanation
"""
entries[25].energy_per_atom
entries[25].correction_per_atom
entries[25].energy_adjustments = []
entries[25].energy_per_atom
entries[25].correction_per_atom
"""
Explanation: Notice how the energy adjustments have changed. The class name, description and values are all different. You will also notice that the descriptions of the legacy corrections are less verbose than those of the modern MaterialsProject2020Compatibility corrections.
Removing corrections altogether
If you want to remove all corrections from a ComputedEntry, simply set energy_adjustments to an empty list. You can verify that you have removed corrections by checking the energy_per_atom and the correction_per_atom of the ComputedEntry before and after.
End of explanation
"""
# retrieve
with MPRester() as m:
entries = m.get_entries_in_chemsys("Cl-Mo-O", compatible_only=False)
entry = entries[0]
entries[25].energy_adjustments
"""
Explanation: Alternatively, you can simply pass compatible_only=False to the MPRester call when you download data.
End of explanation
"""
|
simulkade/peteng
|
python/.ipynb_checkpoints/two_phase_1D_fipy-checkpoint.ipynb
|
mit
|
from fipy import *
# relperm parameters
swc = 0.1
sor = 0.1
krw0 = 0.3
kro0 = 1.0
nw = 2.0
no = 2.0
# domain and boundaries
k = 1e-12 # m^2
phi = 0.4
u = 1.e-5
p0 = 100e5 # Pa
Lx = 100.
Ly = 10.
nx = 100
ny = 10
dx = Lx/nx
dy = Ly/ny
# fluid properties
muo = 0.002
muw = 0.001
# define the fractional flow functions
def krw(sw):
res = krw0*((sw-swc)/(1-swc-sor))**nw
return res
def dkrw(sw):
res = krw0*nw/(1-swc-sor)*((sw-swc)/(1-swc-sor))**(nw-1)
return res
def kro(sw):
res = kro0*((1-sw-sor)/(1-swc-sor))**no
return res
def dkro(sw):
res = -kro0*no/(1-swc-sor)*((1-sw-sor)/(1-swc-sor))**(no-1)
return res
def fw(sw):
res = krw(sw)/muw/(krw(sw)/muw+kro(sw)/muo)
return res
def dfw(sw):
res = (dkrw(sw)/muw*kro(sw)/muo-krw(sw)/muw*dkro(sw)/muo)/(krw(sw)/muw+kro(sw)/muo)**2
return res
import matplotlib.pyplot as plt
import numpy as np
sw_plot = np.linspace(swc, 1-sor, 50)
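# Quick sanity check of the end points (a sketch): the fractional flow should run from
# 0 at the connate water saturation to 1 at the residual oil saturation.
print("fw(swc) =", fw(swc), " fw(1 - sor) =", fw(1 - sor))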
"""
Explanation: FiPy 1D two-phase flow in porous media, 11 October 2019
Different approaches:
* Coupled
* Sequential
* ...
End of explanation
"""
krw_plot = [krw(sw) for sw in sw_plot]
kro_plot = [kro(sw) for sw in sw_plot]
fw_plot = [fw(sw) for sw in sw_plot]
plt.figure(1)
plt.plot(sw_plot, krw_plot, sw_plot, kro_plot)
plt.show()
plt.figure(2)
plt.plot(sw_plot, fw_plot)
plt.show()
# create the grid
mesh = Grid1D(dx = Lx/nx, nx = nx)
x = mesh.cellCenters
# create the cell variables and boundary conditions
sw = CellVariable(mesh=mesh, name="saturation", hasOld=True, value = swc)
p = CellVariable(mesh=mesh, name="pressure", hasOld=True, value = p0)
# sw.setValue(1,where = x<=dx)
sw.constrain(1.0,mesh.facesLeft)
#sw.constrain(0., mesh.facesRight)
sw.faceGrad.constrain([0], mesh.facesRight)
p.constrain(p0, mesh.facesRight)
p.constrain(1.05*p0, mesh.facesLeft)
"""
Explanation: Visualize the relative permeability and fractional flow curves
End of explanation
"""
eq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw.faceValue)/muw+kro(sw.faceValue)/muo))+ \
UpwindConvectionTerm(var=sw, coeff=k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*p.faceGrad)- \
(k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*sw.faceValue*p.faceGrad).divergence == 0
eq_sw = TransientTerm(coeff=phi, var=sw) + \
DiffusionTerm(var=p, coeff=-k*krw(sw.faceValue)/muw)+ \
UpwindConvectionTerm(var=sw, coeff=-k*dkrw(sw.faceValue)/muw*p.faceGrad)- \
(-k*dkrw(sw.faceValue)/muw*p.faceGrad*sw.faceValue).divergence == 0
eq = eq_p & eq_sw
steps = 200
dt0 = 500.
dt = dt0
t_end = steps*dt0
t = 0.0
viewer = Viewer(vars = sw, datamax=1.1, datamin=-0.1)
while t<t_end:
swres = 1.0e6
loop_count = 0
while True:
swres_new = eq.sweep(dt = dt)
loop_count+=1
if loop_count==1:
sw_res = swres_new
if swres_new>sw_res or loop_count>5:
dt = dt/3
continue
swres=swres_new
print(swres)
if swres_new<1e-5:
sw.updateOld()
p.updateOld()
t+=dt
dt = dt0
break
# Note: try to use the appleyard method; the overflow is a result of wrong rel-perm values
viewer.plot()
"""
Explanation: Equations
$$\varphi \frac{\partial S_w}{\partial t}+u \frac{\partial f_w}{\partial x}=0$$ or
$$\varphi \frac{\partial S_w}{\partial t}+\nabla\cdot\left( u \frac{\partial f_w}{\partial S_w} S_w\right)+ \nabla\cdot \left( u f_w-u\frac{\partial f_w}{\partial S_w} S_{w0} \right)=0$$
End of explanation
"""
import fractional_flow as ff
xt_shock, sw_shock, xt_prf, sw_prf, t, p_inj, R_oil = ff.frac_flow_wf(muw=muw, muo=muo, ut=u, phi=1.0, \
k=1e-12, swc=swc, sor=sor, kro0=kro0, no=no, krw0=krw0, \
nw=nw, sw0=swc, sw_inj=1.0, L=Lx, pv_inj=5.0)
plt.figure()
plt.plot(xt_prf, sw_prf)
plt.plot(x.value.squeeze()/(steps*dt), sw.value)
plt.show()
?eq.sweep
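# Alternative numeric check (a sketch, independent of the fractional_flow module): the
# Buckley-Leverett shock-front saturation follows from the Welge tangent condition
# dfw(sw_f) = fw(sw_f) / (sw_f - sw0), here with sw0 = swc.
from scipy.optimize import brentq
def welge_residual(s):
    # zero when the tangent drawn from (swc, 0) touches the fractional flow curve
    return dfw(s) * (s - swc) - fw(s)
sw_front = brentq(welge_residual, swc + 1e-3, 1.0 - sor)
print("shock front saturation (Welge tangent):", sw_front)
print("front speed u*dfw(sw_front)/phi =", u * dfw(sw_front) / phi)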
"""
Explanation: Analytical solution
End of explanation
"""
|
nadvamir/deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
x_min = np.min(x)
x_max = np.max(x)
return (x - x_min) / (x_max - x_min)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
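# An equally valid alternative (a sketch, not required by the unit test above): CIFAR-10
# pixels are 8-bit values, so scaling by a fixed 255 also maps the data into [0, 1] and
# keeps the scale identical across batches.
def normalize_fixed_scale(x):
    """Scale 8-bit image data into the range [0, 1] using the full 0-255 range."""
    return np.array(x) / 255.0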
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
import tensorflow as tf
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
encoded_x = np.zeros((len(x), 10))
for i, label in enumerate(x):
encoded_x[i][label] = 1.0
return encoded_x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
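# Equivalent vectorized form (a sketch): select rows of a 10x10 identity matrix.
def one_hot_encode_eye(x):
    """One-hot encode labels 0-9 by indexing rows of an identity matrix."""
    return np.eye(10)[np.array(x)]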
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
    return tf.placeholder(tf.float32, [None] + list(image_shape), name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
    return tf.placeholder(tf.float32, [None, n_classes], name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
    return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# convolution layer
conv_weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.shape[3].value, conv_num_outputs]))
conv_bias = tf.Variable(tf.random_normal([conv_num_outputs]))
x_tensor = tf.nn.conv2d(x_tensor, conv_weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
x_tensor = tf.nn.bias_add(x_tensor, conv_bias)
x_tensor = tf.nn.relu(x_tensor)
# maxpool layer
return tf.nn.max_pool(
x_tensor,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
    # compute the flattened size from the tensor's static shape (all dims except batch)
shape = lambda i: x_tensor.shape[i].value
flattened_size = shape(1) * shape(2) * shape(3)
return tf.reshape(x_tensor, [-1, flattened_size])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
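# Shortcut option (a sketch, assuming the TF 1.x contrib API referenced in the text):
# tf.contrib.layers.flatten(x_tensor) performs the same (batch, -1) reshape in one call.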
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs]))
bias = tf.Variable(tf.truncated_normal([num_outputs]))
return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs]))
bias = tf.Variable(tf.truncated_normal([num_outputs]))
return tf.add(tf.matmul(x_tensor, weights), bias)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(
x,
conv_num_outputs=54,
conv_ksize=(4, 4),
conv_strides=(4, 4),
pool_ksize=(2, 2),
pool_strides=(2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 1500)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x, 1500)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
x = output(x, 10)
# TODO: return output
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
    # loss on the current training batch (keep probability 1.0, i.e. no dropout)
    nn_cost = session.run(cost, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: 1.0
    })
    # validation accuracy on the global validation set, as the instructions specify
    nn_accuracy = session.run(accuracy, feed_dict={
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.0
    })
    print('Loss: {:.4f}, Validation Accuracy: {:.4f}'.format(nn_cost, nn_accuracy))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 6
batch_size = 256
keep_probability = 0.9
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common memory-friendly size:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|