markdown | code | path | repo_name | license
---|---|---|---|---|
The exponentially-weighted moving average gives more weight to more recent points.
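For context, pandas derives the weights from the `span` parameter used below: an observation $k$ days in the past gets weight proportional to $(1-\alpha)^k$, with

$$\alpha = \frac{2}{\mathrm{span} + 1},$$

so with `span=30` the weights decay by a factor of roughly $29/31 \approx 0.94$ per day.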
|
def PlotEWMA(daily, name):
"""Plots rolling mean.
daily: DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = reindexed.ppg.ewm(span=30).mean()
thinkplot.Plot(roll_mean, label="EWMA", color="#ff7f00")
plt.xticks(rotation=30)
thinkplot.Config(ylabel="price per gram ($)")
PlotEWMA(daily, name)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
We can use resampling to generate missing values with the right amount of noise.
|
def FillMissing(daily, span=30):
"""Fills missing values with an exponentially weighted moving average.
Resulting DataFrame has new columns 'ewma' and 'resid'.
daily: DataFrame of daily prices
span: window size (sort of) passed to ewma
returns: new DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
ewma = pd.Series(reindexed.ppg).ewm(span=span).mean()
resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))
reindexed.ppg.fillna(fake_data, inplace=True)
reindexed["ewma"] = ewma
reindexed["resid"] = reindexed.ppg - ewma
return reindexed
def PlotFilled(daily, name):
"""Plots the EWMA and filled data.
daily: DataFrame of daily prices
"""
filled = FillMissing(daily, span=30)
thinkplot.Scatter(filled.ppg, s=15, alpha=0.2, label=name)
thinkplot.Plot(filled.ewma, label="EWMA", color="#ff7f00")
plt.xticks(rotation=30)
thinkplot.Config(ylabel="Price per gram ($)")
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Here's what the EWMA model looks like with missing values filled.
|
PlotFilled(daily, name)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Serial correlation
The following function computes serial correlation with the given lag.
|
def SerialCorr(series, lag=1):
xs = series[lag:]
ys = series.shift(lag)[lag:]
corr = thinkstats2.Corr(xs, ys)
return corr
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Before computing correlations, we'll fill missing values.
|
filled_dailies = {}
for name, daily in dailies.items():
filled_dailies[name] = FillMissing(daily, span=30)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Here are the serial correlations for raw price data.
|
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.ppg, lag=1)
print(name, corr)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interesting to see whether there are still correlations after we subtract away the trends.
|
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.resid, lag=1)
print(name, corr)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
|
rows = []
for lag in [1, 7, 30, 365]:
print(lag, end="\t")
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.resid, lag)
print("%.2g" % corr, end="\t")
print()
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
|
# NOTE: acf throws a FutureWarning because we need to replace `unbiased` with `adjusted`,
# just as soon as Colab gets updated :)
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
import statsmodels.tsa.stattools as smtsa
filled = filled_dailies["high"]
acf = smtsa.acf(filled.resid, nlags=365, unbiased=True, fft=False)
print("%0.2g, %.2g, %0.2g, %0.2g, %0.2g" % (acf[0], acf[1], acf[7], acf[30], acf[365]))
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
|
def SimulateAutocorrelation(daily, iters=1001, nlags=40):
"""Resample residuals, compute autocorrelation, and plot percentiles.
daily: DataFrame
iters: number of simulations to run
nlags: maximum lags to compute autocorrelation
"""
# run simulations
t = []
for _ in range(iters):
filled = FillMissing(daily, span=30)
resid = thinkstats2.Resample(filled.resid)
acf = smtsa.acf(resid, nlags=nlags, unbiased=True, fft=False)[1:]
t.append(np.abs(acf))
high = thinkstats2.PercentileRows(t, [97.5])[0]
low = -high
lags = range(1, nlags + 1)
thinkplot.FillBetween(lags, low, high, alpha=0.2, color="gray")
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
|
def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):
"""Plots autocorrelation functions.
dailies: map from category name to DataFrame of daily prices
nlags: number of lags to compute
add_weekly: boolean, whether to add a simulated weekly pattern
"""
thinkplot.PrePlot(3)
daily = dailies["high"]
SimulateAutocorrelation(daily)
for name, daily in dailies.items():
if add_weekly:
daily.ppg = AddWeeklySeasonality(daily)
filled = FillMissing(daily, span=30)
acf = smtsa.acf(filled.resid, nlags=nlags, unbiased=True, fft=False)
lags = np.arange(len(acf))
thinkplot.Plot(lags[1:], acf[1:], label=name)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
To show what a strong weekly cycle would look like, we have the option of adding a random price increase of up to 2 dollars on Fridays and Saturdays.
|
def AddWeeklySeasonality(daily):
"""Adds a weekly pattern.
daily: DataFrame of daily prices
returns: new DataFrame of daily prices
"""
fri_or_sat = (daily.index.dayofweek == 4) | (daily.index.dayofweek == 5)
fake = daily.ppg.copy()
fake[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum())
return fake
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
|
axis = [0, 41, -0.2, 0.2]
PlotAutoCorrelation(dailies, add_weekly=False)
thinkplot.Config(axis=axis, loc="lower right", ylabel="correlation", xlabel="lag (day)")
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Here's what it would look like if there were a weekly cycle.
|
PlotAutoCorrelation(dailies, add_weekly=True)
thinkplot.Config(axis=axis, loc="lower right", xlabel="lag (days)")
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
|
def GenerateSimplePrediction(results, years):
"""Generates a simple prediction.
results: results object
years: sequence of times (in years) to make predictions for
returns: sequence of predicted values
"""
n = len(years)
inter = np.ones(n)
d = dict(Intercept=inter, years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict = results.predict(predict_df)
return predict
def PlotSimplePrediction(results, years):
predict = GenerateSimplePrediction(results, years)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.2, label=name)
thinkplot.plot(years, predict, color="#ff7f00")
xlim = years[0] - 0.1, years[-1] + 0.1
thinkplot.Config(
title="Predictions",
xlabel="Years",
xlim=xlim,
ylabel="Price per gram ($)",
loc="upper right",
)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Here's what the prediction looks like for the high quality category, using the linear model.
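The cell below relies on `RunLinearModel`, which is defined earlier in the chapter. For reference, a minimal sketch consistent with how it is used here (an OLS fit of `ppg` as a linear function of `years` with statsmodels) looks like this:

```python
import statsmodels.formula.api as smf

def RunLinearModel(daily):
    """Fits a linear model of price versus time.

    daily: DataFrame of daily prices with a `years` column

    returns: model, results
    """
    model = smf.ols("ppg ~ years", data=daily)
    results = model.fit()
    return model, results
```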
|
name = "high"
daily = dailies[name]
_, results = RunLinearModel(daily)
years = np.linspace(0, 5, 101)
PlotSimplePrediction(results, years)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
|
def SimulateResults(daily, iters=101, func=RunLinearModel):
"""Run simulations based on resampling residuals.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
"""
_, results = func(daily)
fake = daily.copy()
result_seq = []
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
|
def GeneratePredictions(result_seq, years, add_resid=False):
"""Generates an array of predicted values from a list of model results.
When add_resid is False, predictions represent sampling error only.
When add_resid is True, they also include residual error (which is
more relevant to prediction).
result_seq: list of model results
years: sequence of times (in years) to make predictions for
add_resid: boolean, whether to add in resampled residuals
returns: sequence of predictions
"""
n = len(years)
d = dict(Intercept=np.ones(n), years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict_seq = []
for fake_results in result_seq:
predict = fake_results.predict(predict_df)
if add_resid:
predict += thinkstats2.Resample(fake_results.resid, n)
predict_seq.append(predict)
return predict_seq
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
|
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
"""Plots predictions.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
"""
result_seq = SimulateResults(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100 - p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.3, color="gray")
predict_seq = GeneratePredictions(result_seq, years, add_resid=False)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.5, color="gray")
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Here are the results for the high quality category.
|
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years)
xlim = years[0] - 0.1, years[-1] + 0.1
thinkplot.Config(
title="Predictions", xlabel="Years", xlim=xlim, ylabel="Price per gram ($)"
)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
But there is one more source of uncertainty: how much past data should we use to build the model?
The following function generates a sequence of models based on different amounts of past data.
|
def SimulateIntervals(daily, iters=101, func=RunLinearModel):
"""Run simulations based on different subsets of the data.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
"""
result_seq = []
starts = np.linspace(0, len(daily), iters).astype(int)
for start in starts[:-2]:
subset = daily[start:]
_, results = func(subset)
fake = subset.copy()
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
And this function plots the results.
|
def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):
"""Plots predictions based on different intervals.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
"""
result_seq = SimulateIntervals(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100 - p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.2, color="gray")
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
|
name = "high"
daily = dailies[name]
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotIntervals(daily, years)
PlotPredictions(daily, years)
xlim = years[0] - 0.1, years[-1] + 0.1
thinkplot.Config(
title="Predictions", xlabel="Years", xlim=xlim, ylabel="Price per gram ($)"
)
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Exercises
Exercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3.
Use a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs that quadratic model, but after that you should be able to reuse code from the chapter to generate predictions.
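For reference, here is a minimal sketch of the quadratic version, assuming `RunLinearModel` follows the pattern shown above (an OLS fit of `ppg ~ years`); the `years2` column is added on a copy so the original DataFrame is left untouched:

```python
import statsmodels.formula.api as smf

def RunQuadraticModel(daily):
    """Fits a quadratic model of price versus time.

    daily: DataFrame of daily prices with a `years` column

    returns: model, results
    """
    daily = daily.copy()
    daily["years2"] = daily.years**2
    model = smf.ols("ppg ~ years + years2", data=daily)
    results = model.fit()
    return model, results
```

Passing `func=RunQuadraticModel` to `PlotPredictions` then reuses the resampling machinery from the chapter unchanged, since `GeneratePredictions` already builds a `years2` column for the prediction frame.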
Exercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.
Use this class to test whether the serial correlation in raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise) the quadratic model.
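One possible sketch, assuming `HypothesisTest` provides the `TestStatistic`/`RunModel`/`PValue` interface from Section 9.2 and using the `SerialCorr` function defined above:

```python
class SerialCorrelationTest(thinkstats2.HypothesisTest):
    """Tests serial correlation by permutation."""

    def TestStatistic(self, data):
        """Computes the test statistic.

        data: tuple of (series, lag)
        """
        series, lag = data
        return abs(SerialCorr(series, lag))

    def RunModel(self):
        """Shuffles the series to simulate the null hypothesis."""
        series, lag = self.data
        permutation = series.reindex(np.random.permutation(series.index))
        return permutation, lag
```

Constructing `SerialCorrelationTest((series, 1))` and calling its `PValue` method then gives the p-value of the observed lag-1 correlation.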
Bonus Example: There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:
Compute the EWMA of the time series and use the last point as an intercept, inter.
Compute the EWMA of differences between successive elements in the time series and use the last point as a slope, slope.
To predict values at future times, compute inter + slope * dt, where dt is the difference between the time of the prediction and the time of the last observation.
|
name = "high"
daily = dailies[name]
filled = FillMissing(daily)
diffs = filled.ppg.diff()
thinkplot.plot(diffs)
plt.xticks(rotation=30)
thinkplot.Config(ylabel="Daily change in price per gram ($)")
filled["slope"] = diffs.ewm(span=365).mean()
thinkplot.plot(filled.slope[-365:])
plt.xticks(rotation=30)
thinkplot.Config(ylabel="EWMA of diff ($)")
# extract the last inter and the mean of the last 30 slopes
start = filled.index[-1]
inter = filled.ewma[-1]
slope = filled.slope[-30:].mean()
start, inter, slope
# reindex the DataFrame, adding a year to the end
dates = pd.date_range(filled.index.min(), filled.index.max() + np.timedelta64(365, "D"))
predicted = filled.reindex(dates)
# generate predicted values and add them to the end
predicted["date"] = predicted.index
one_day = np.timedelta64(1, "D")
predicted["days"] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
# plot the actual values and predictions
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma, color="#ff7f00")
|
code/chap12ex.ipynb
|
AllenDowney/ThinkStats2
|
gpl-3.0
|
Create fake trajectories
The input trajectories for the one-way shooting version must:
* not include the shooting point (which is shared between the two trajectories)
* be in forward-time order (so reversed paths, which are created as time goes backward, need to be reversed)
|
from openpathsampling.tests.test_helpers import make_1d_traj
traj1 = make_1d_traj([-0.9, 0.1, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1, 9.1, 10.1])
traj2 = make_1d_traj([-0.8, 1.2])
traj3 = make_1d_traj([5.3, 8.3, 11.3])
traj4 = make_1d_traj([-0.6, 1.4, 3.4, 5.4, 7.4])
traj5 = make_1d_traj([-0.5, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5])
|
examples/example_one_way_shooting.ipynb
|
dwhswenson/OPSPiggybacker
|
lgpl-2.1
|
Make list of move data
The input to the pseudo-simulator is a list of data related to the move. For one-way shooting, you need the following information with each move:
* the replica this move applies to (for TPS, just use 0)
* the single-direction trajectory (as described in the previous section)
* the index of the shooting point from the previous full trajectory
* whether the trajectory was accepted
* the direction of the one-way shooting move (forward is +1, backward is -1)
The moves object below is a list of tuples of that information, in the order listed above. This is what you need to create from your previous simulation.
|
moves = [
(0, traj2, 3, True, -1),
(0, traj3, 4, True, +1),
(0, traj4, 6, False, -1),
(0, traj5, 6, True, -1)
]
|
examples/example_one_way_shooting.ipynb
|
dwhswenson/OPSPiggybacker
|
lgpl-2.1
|
From here, you've already done everything that needs to be done to reshape your already-run simulation. Now you just need to create the fake OPS simulations.
Create OPS objects
|
# volumes
cv = paths.FunctionCV("x", lambda snap: snap.xyz[0][0])
left_state = paths.CVDefinedVolume(cv, float("-inf"), 0.0)
right_state = paths.CVDefinedVolume(cv, 10.0, float("inf"))
# network
network = paths.TPSNetwork(left_state, right_state)
ensemble = network.sampling_ensembles[0] # the only one
|
examples/example_one_way_shooting.ipynb
|
dwhswenson/OPSPiggybacker
|
lgpl-2.1
|
Create initial conditions
|
initial_conditions = paths.SampleSet([
paths.Sample(replica=0,
trajectory=traj1,
ensemble=ensemble)
])
|
examples/example_one_way_shooting.ipynb
|
dwhswenson/OPSPiggybacker
|
lgpl-2.1
|
Create OPSPiggybacker objects
Note that the big difference here is that you use pre_joined=False. This is essential for the automated one-way shooting treatment.
|
shoot = oink.ShootingStub(ensemble, pre_joined=False)
sim = oink.ShootingPseudoSimulator(storage=paths.Storage('one_way.nc', 'w'),
initial_conditions=initial_conditions,
mover=shoot,
network=network)
|
examples/example_one_way_shooting.ipynb
|
dwhswenson/OPSPiggybacker
|
lgpl-2.1
|
Run the pseudo-simulator
|
sim.run(moves)
sim.storage.close()
|
examples/example_one_way_shooting.ipynb
|
dwhswenson/OPSPiggybacker
|
lgpl-2.1
|
Analyze with OPS
|
analysis_file = paths.AnalysisStorage("one_way.nc")
scheme = analysis_file.schemes[0]
scheme.move_summary(analysis_file.steps)
import openpathsampling.visualize as ops_vis
from IPython.display import SVG
history = ops_vis.PathTree(
analysis_file.steps,
ops_vis.ReplicaEvolution(replica=0)
)
# switch to the "boxcar" look for the trajectories
history.options.movers['default']['new'] = 'single'
history.options.css['horizontal_gap'] = True
SVG(history.svg())
path_lengths = [len(step.active[0].trajectory) for step in analysis_file.steps]
plt.hist(path_lengths, alpha=0.5);
cv_x = analysis_file.cvs['x']
# load the active trajectory as storage.steps[step_num].active[replica_id]
plt.plot(cv_x(analysis_file.steps[2].active[0]), 'o-');
|
examples/example_one_way_shooting.ipynb
|
dwhswenson/OPSPiggybacker
|
lgpl-2.1
|
The XMM Image Data
Recall that we downloaded some XMM data in the "First Look" notebook.
We downloaded three files, and just looked at one - the "science" image.
|
imfits = pyfits.open('a1835_xmm/P0098010101M2U009IMAGE_3000.FTZ')
im = imfits[0].data
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
im is the image, our observed data, presented after some "standard processing." The numbers in the pixels are counts (i.e. numbers of photoelectrons recorded by the CCD during the exposure).
We display the image on a log scale, which allows us to simultaneously see both the cluster of galaxies in the center, and the much fainter background and other sources in the field.
|
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
A Model for the Cluster of Galaxies
We will use a common parametric model for the surface brightness of galaxy clusters: the azimuthally symmetric beta model:
$S(r) = S_0 \left[1.0 + \left(\frac{r}{r_c}\right)^2\right]^{-3\beta + 1/2}$,
where $r$ is projected distance from the cluster center.
The parameters of this model are:
* $x_0$, the $x$ coordinate of the cluster center
* $y_0$, the $y$ coordinate of the cluster center
* $S_0$, the normalization, in surface brightness units
* $r_c$, a radial scale (called the "core radius")
* $\beta$, which determines the slope of the profile
Note that this model describes a 2D surface brightness distribution, since $r^2 = x^2 + y^2$
Let's draw a cartoon of this model on the whiteboard
Planning an Expected Counts Map
Our data are counts, i.e. the number of times a physical pixel in the camera was activated while pointing at the area of sky corresponding to a pixel in our image. We can think of different sky pixels as having different effective exposure times, as encoded by an exposure map, ex.
We expect to see counts due to a number of sources:
1. X-rays from the galaxy cluster
2. X-rays from other detected sources in the field
3. X-rays from unresolved sources (the Cosmic X-ray Background)
4. Diffuse X-rays from the Galactic halo and the local bubble (the local X-ray foreground)
5. Soft protons from the solar wind, cosmic rays, and other undesirables (the particle background)
Let's go through these in turn.
1. Counts from the Cluster
Since our data are counts in each pixel, our model needs to first predict the expected counts in each pixel. Physical models predict intensity (counts per second per pixel per unit effective area of the telescope). The spatial variation of the effective area relative to the aimpoint is one of the things accounted for in the exposure map, and we can leave the overall area to one side when fitting (although we would need it to turn our results into physically interesting conclusions about, e.g. the luminosity of the cluster).
Since the X-rays from the cluster are transformed according to the exposure map, the units of $S_0$ are counts/s/pixel, and the model prediction for the expected number of counts from the cluster is CL*ex, where CL is an image with pixel values computed from $S(r)$.
2-4. X-ray background model
The X-ray background will be "vignetted" in the same way as X-rays from the cluster. We can lump sources 2-4 together, to extend our model so that it is composed of a galaxy cluster, plus an X-ray background.
The simplest assumption we can make about the X-ray background is that it is spatially uniform, on average. The model must account for the varying effective exposure as a function of position, however. So the model prediction associated with this component is b*ex, where b is a single number with units of counts/s/pixel.
We can circumvent the problem of the other detected sources in the field by masking them out, leaving us with the assumption that any remaining counts are not due to the masked sources. This could be a source of systematic error, so we'll note it down for later.
5. Particle background model
The particle background represents a flux of particles that either do not traverse the telescope optics at all, or follow a different optical path than X-rays - so the exposure map (and its vignetting correction) does not apply.
Instead, we're given, from a black box, a prediction for the expected counts/pixel due to particles, so the extension to our model is simply to add this image, pb.
Full model
Combining these three components, the model (CL+b)*ex + pb gives us an expected number of counts/pixel across the field.
A Look at the Other XMM Products
The "exposure map" and the "particle background map" were supplied to us by the XMM reduction pipeline, along with the science image. Let's take a look at them now.
|
pbfits = pyfits.open('a1835_xmm/P0098010101M2X000BKGMAP3000.FTZ')
pb = pbfits[0].data
exfits = pyfits.open('a1835_xmm/P0098010101M2U009EXPMAP3000.FTZ')
ex = exfits[0].data
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
The "Exposure Map"
The ex image is in units of seconds, and represents the effective exposure time at each pixel position.
This is actually the product of the exposure time that the detector was exposed for, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised.
Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
|
plt.imshow(ex, cmap='gray', origin='lower');
plt.savefig("figures/cluster_expmap.png")
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
The "Particle Background Map"
pb is not data at all, but rather a model for the expected counts/pixel in this specific observation due to the "quiescent particle background."
This map comes out of a blackbox in the processing pipeline. Even though there are surely uncertainties in it, we have no quantitative description of them to work with.
Note that the exposure map above does not apply to the particle background; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in pb.
|
plt.imshow(pb, cmap='gray', origin='lower');
plt.savefig("figures/cluster_pbmap.png")
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
Masking out the other sources
There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment.
A convenient way to do this is by setting the exposure map to zero in these locations - as if a set of tiny little shutters in front of each of those pixels had not been opened. "Not observed" is different from "observed zero counts."
Let's read in a text file encoding a list of circular regions in the image, and set the exposure map pixels within each of those regions to zero.
|
mask = np.loadtxt('a1835_xmm/M2ptsrc.txt')
for reg in mask:
# this is inefficient but effective
for i in np.round(reg[1]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
for j in np.round(reg[0]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
if (i-reg[1])**2 + (j-reg[0])**2 <= reg[2]**2:
ex[int(i-1), int(j-1)] = 0.0
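As a design note, the same mask can be applied without the explicit double loop by building full-image coordinate grids once; a sketch, assuming the region file columns are x, y, radius in the same 1-based pixel convention used above:

```python
# Vectorized alternative: 1-based pixel-coordinate grids, then zero every pixel
# whose center lies inside any of the masked circles.
ny_pix, nx_pix = ex.shape
jj, ii = np.meshgrid(np.arange(1, nx_pix + 1), np.arange(1, ny_pix + 1))
for reg in mask:
    x0, y0, r = reg[0], reg[1], reg[2]
    ex[(ii - y0)**2 + (jj - x0)**2 <= r**2] = 0.0
```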
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
As a sanity check, let's have a look at the modified exposure map.
Compare the location of the "holes" to the science image above.
|
plt.imshow(ex, cmap='gray', origin='lower');
plt.savefig("figures/cluster_expmap_masked.png")
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
A Generative Model for the X-ray Image
All of the discussion above was in terms of predicting the expected number of counts in each pixel, $\mu_k$. This is not what we observe: we observe counts.
To be able to generate a mock dataset, we need to make an assumption about the form of the sampling distribution for the counts $N$ in each pixel, ${\rm Pr}(N_k|\mu_k)$.
Let's assume that this distribution is Poisson, since we expect X-ray photon arrivals to be "rare events."
${\rm Pr}(N_k|\mu_k) = \frac{{\rm e}^{-\mu_k} \mu_k^{N_k}}{N_k !}$
Here, $\mu_k(\theta)$ is the expected number of counts in the $k$th pixel:
$\mu_k(\theta) = \left( S(r_k;\theta) + b \right) \cdot$ ex + pb
Note that writing the sampling distribution like this contains the assumption that the pixels are independent (i.e., there is no cross-talk between the cuboids of silicon that make up the pixels in the CCD chip). (Also note that this assumption is different from the assumption that the expected numbers of counts are independent! They are explicitly not independent: we wrote down a model for a cluster surface brightness distribution that is potentially many pixels in diameter.)
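Written out, the pixel-independence assumption means the joint sampling distribution for the whole image factorizes into a product over pixels:

$${\rm Pr}(\{N_k\}|\theta) = \prod_k {\rm Pr}(N_k|\mu_k(\theta)) = \prod_k \frac{{\rm e}^{-\mu_k(\theta)} \mu_k(\theta)^{N_k}}{N_k!}$$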
At this point we can draw the PGM for a forward model of this dataset, using the exposure and particle background maps supplied, and some choices for the model parameters.
Then, we can go ahead and simulate some mock data and compare with the image we have.
|
# import cluster_pgm
# cluster_pgm.forward()
from IPython.display import Image
Image(filename="cluster_pgm_forward.png")
def beta_model_profile(r, S0, rc, beta):
'''
The fabled beta model, radial profile S(r)
'''
return S0 * (1.0 + (r/rc)**2)**(-3.0*beta + 0.5)
def beta_model_image(x, y, x0, y0, S0, rc, beta):
'''
Here, x and y are arrays ("meshgrids" or "ramps") containing x and y pixel numbers,
and the other arguments are galaxy cluster beta model parameters.
Returns a surface brightness image of the same shape as x and y.
'''
r = np.sqrt((x-x0)**2 + (y-y0)**2)
return beta_model_profile(r, S0, rc, beta)
def model_image(x, y, ex, pb, x0, y0, S0, rc, beta, b):
'''
Here, x, y, ex and pb are images, all of the same shape, and the other args are
cluster model and X-ray background parameters. ex is the (constant) exposure map
and pb is the (constant) particle background map.
'''
return (beta_model_image(x, y, x0, y0, S0, rc, beta) + b) * ex + pb
# Set up the ramp images, to enable fast array calculations:
nx,ny = ex.shape
x = np.outer(np.ones(ny),np.arange(nx))
y = np.outer(np.arange(ny),np.ones(nx))
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(x, cmap='gray', origin='lower')
ax[0].set_title('x')
fig.colorbar(left,ax=ax[0],shrink=0.9)
right = ax[1].imshow(y, cmap='gray', origin='lower')
ax[1].set_title('y')
fig.colorbar(right,ax=ax[1],shrink=0.9)
# Now choose parameters, compute model and plot, compared to data!
x0,y0 = 328,348 # The center of the image is 328,328
S0,b = 0.01,5e-7 # Cluster and background surface brightness, arbitrary units
beta = 2.0/3.0 # Canonical value is beta = 2/3
rc = 4 # Core radius, in pixels
# Realize the expected counts map for the model:
mu = model_image(x,y,ex,pb,x0,y0,S0,rc,beta,b)
# Draw a *sample image* from the Poisson sampling distribution:
mock = np.random.poisson(mu,mu.shape)
# The difference between the mock and the real data should be symmetrical noise if the model
# is a good match...
diff = im - mock
# Plot three panels:
fig,ax = plt.subplots(nrows=1, ncols=3)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(viz.scale_image(mock, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[0].set_title('Mock (log, rescaled)')
fig.colorbar(left,ax=ax[0],shrink=0.6)
center = ax[1].imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[1].set_title('Data (log, rescaled)')
fig.colorbar(center,ax=ax[1],shrink=0.6)
right = ax[2].imshow(diff, vmin=-40, vmax=40, cmap='gray', origin='lower')
ax[2].set_title('Difference (linear)')
fig.colorbar(right,ax=ax[2],shrink=0.6)
fig.savefig("figures/cluster_mock-data-diff.png")
|
examples/XrayImage/Modeling.ipynb
|
enoordeh/StatisticalMethods
|
gpl-2.0
|
Points in SimpleITK
Utility functions
A number of functions that deal with point data in a uniform manner.
|
import numpy as np
def point2str(point, precision=1):
"""
Format a point for printing, based on specified precision with trailing zeros. Uniform printing for vector-like data
(tuple, numpy array, list).
Args:
point (vector-like): nD point with floating point coordinates.
precision (int): Number of digits after the decimal point.
Return:
String representation of the given point "xx.xxx yy.yyy zz.zzz...".
"""
return ' '.join(format(c, '.{0}f'.format(precision)) for c in point)
def uniform_random_points(bounds, num_points):
"""
Generate random (uniform within bounds) nD point cloud. Dimension is based on the number of pairs in the bounds input.
Args:
bounds (list(tuple-like)): list where each tuple defines the coordinate bounds.
num_points (int): number of points to generate.
Returns:
list containing num_points numpy arrays whose coordinates are within the given bounds.
"""
internal_bounds = [sorted(b) for b in bounds]
# Generate rows for each of the coordinates according to the given bounds, stack into an array,
# and split into a list of points.
mat = np.vstack([np.random.uniform(b[0], b[1], num_points) for b in internal_bounds])
return list(mat[:len(bounds)].T)
def target_registration_errors(tx, point_list, reference_point_list):
"""
Distances between points transformed by the given transformation and their
location in another coordinate system. When the points are only used to evaluate
registration accuracy (not used in the registration) this is the target registration
error (TRE).
"""
return [np.linalg.norm(np.array(tx.TransformPoint(p)) - np.array(p_ref))
for p,p_ref in zip(point_list, reference_point_list)]
def print_transformation_differences(tx1, tx2):
"""
Check whether two transformations are "equivalent" in an arbitrary spatial region
either 3D or 2D, [x=(-10,10), y=(-100,100), z=(-1000,1000)]. This is just a sanity check,
as we are just looking at the effect of the transformations on a random set of points in
the region.
"""
if tx1.GetDimension()==2 and tx2.GetDimension()==2:
bounds = [(-10,10),(-100,100)]
elif tx1.GetDimension()==3 and tx2.GetDimension()==3:
bounds = [(-10,10),(-100,100), (-1000,1000)]
else:
raise ValueError('Transformation dimensions mismatch, or unsupported transformation dimensionality')
num_points = 10
point_list = uniform_random_points(bounds, num_points)
tx1_point_list = [ tx1.TransformPoint(p) for p in point_list]
differences = target_registration_errors(tx2, point_list, tx1_point_list)
print(tx1.GetName()+ '-' +
tx2.GetName()+
':\tminDifference: {:.2f} maxDifference: {:.2f}'.format(min(differences), max(differences)))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
In SimpleITK points can be represented by any vector-like data type. In Python these include Tuple, Numpy array, and List. In general Python will treat these data types differently, as illustrated by the print function below.
|
# SimpleITK points represented by vector-like data structures.
point_tuple = (9.0, 10.531, 11.8341)
point_np_array = np.array([9.0, 10.531, 11.8341])
point_list = [9.0, 10.531, 11.8341]
print(point_tuple)
print(point_np_array)
print(point_list)
# Uniform printing with specified precision.
precision = 2
print(point2str(point_tuple, precision))
print(point2str(point_np_array, precision))
print(point2str(point_list, precision))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Global Transformations
All global transformations <i>except translation</i> are of the form:
$$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$
In ITK speak (when printing your transformation):
<ul>
<li>Matrix: the matrix $A$</li>
<li>Center: the point $\mathbf{c}$</li>
<li>Translation: the vector $\mathbf{t}$</li>
<li>Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$</li>
</ul>
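Since $T(\mathbf{0}) = A(\mathbf{0}-\mathbf{c}) + \mathbf{t} + \mathbf{c} = \mathbf{t} + \mathbf{c} - A\mathbf{c}$, the Offset is simply where the origin is mapped. A small numerical check of this relationship (the specific angle, center, and translation below are arbitrary illustrative values):

```python
import numpy as np
import SimpleITK as sitk

# A 2D rigid transformation with a non-trivial center and translation.
tx = sitk.Euler2DTransform()
tx.SetAngle(np.pi / 3)
tx.SetCenter((2.0, 3.0))
tx.SetTranslation((7.0, -1.0))

A = np.array(tx.GetMatrix()).reshape(2, 2)  # the matrix A (row-major)
c = np.array(tx.GetCenter())                # the center c
t = np.array(tx.GetTranslation())           # the translation t

offset = t + c - A.dot(c)                   # the Offset, t + c - Ac
print(offset)
print(tx.TransformPoint((0.0, 0.0)))        # transforming the origin gives the same values
```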
TranslationTransform
|
# A 3D translation. Note that you need to specify the dimensionality, as the sitk TranslationTransform
# represents both 2D and 3D translations.
dimension = 3
offset =(1,2,3) # offset can be any vector-like data
translation = sitk.TranslationTransform(dimension, offset)
print(translation)
# Transform a point and use the inverse transformation to get the original back.
point = [10, 11, 12]
transformed_point = translation.TransformPoint(point)
translation_inverse = translation.GetInverse()
print('original point: ' + point2str(point) + '\n'
'transformed point: ' + point2str(transformed_point) + '\n'
'back to original: ' + point2str(translation_inverse.TransformPoint(transformed_point)))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Euler2DTransform
|
point = [10, 11]
rotation2D = sitk.Euler2DTransform()
rotation2D.SetTranslation((7.2, 8.4))
rotation2D.SetAngle(np.pi/2)
print('original point: ' + point2str(point) + '\n'
'transformed point: ' + point2str(rotation2D.TransformPoint(point)))
# Change the center of rotation so that it coincides with the point we want to
# transform, why is this a unique configuration?
rotation2D.SetCenter(point)
print('original point: ' + point2str(point) + '\n'
'transformed point: ' + point2str(rotation2D.TransformPoint(point)))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
VersorTransform
|
# Rotation only, parametrized by Versor (vector part of unit quaternion),
# quaternion defined by rotation of theta around axis n:
# q = [n*sin(theta/2), cos(theta/2)]
# 180 degree rotation around z axis
# Use a versor:
rotation1 = sitk.VersorTransform([0,0,1,0])
# Use axis-angle:
rotation2 = sitk.VersorTransform((0,0,1), np.pi)
# Use a matrix:
rotation3 = sitk.VersorTransform()
rotation3.SetMatrix([-1, 0, 0, 0, -1, 0, 0, 0, 1]);
point = (10, 100, 1000)
p1 = rotation1.TransformPoint(point)
p2 = rotation2.TransformPoint(point)
p3 = rotation3.TransformPoint(point)
print('Points after transformation:\np1=' + str(p1) +
'\np2='+ str(p2) + '\np3='+ str(p3))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
We applied the "same" transformation to the same point, so why are the results slightly different for the second initialization method?
This is where theory meets practice. Using the axis-angle initialization method involves trigonometric functions which on a fixed precision machine lead to these slight differences. In many cases this is not an issue, but it is something to remember. From here on we will sweep it under the rug (printing with a more reasonable precision).
Translation to Rigid [3D]
Copy the translational component.
|
dimension = 3
t =(1,2,3)
translation = sitk.TranslationTransform(dimension, t)
# Only need to copy the translational component.
rigid_euler = sitk.Euler3DTransform()
rigid_euler.SetTranslation(translation.GetOffset())
rigid_versor = sitk.VersorRigid3DTransform()
rigid_versor.SetTranslation(translation.GetOffset())
# Sanity check to make sure the transformations are equivalent.
bounds = [(-10,10),(-100,100), (-1000,1000)]
num_points = 10
point_list = uniform_random_points(bounds, num_points)
transformed_point_list = [translation.TransformPoint(p) for p in point_list]
# Draw the original and transformed points, include the label so that we
# can modify the plots without requiring explicit changes to the legend.
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
orig = ax.scatter(list(np.array(point_list).T)[0],
list(np.array(point_list).T)[1],
list(np.array(point_list).T)[2],
marker='o',
color='blue',
label='Original points')
transformed = ax.scatter(list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
list(np.array(transformed_point_list).T)[2],
marker='^',
color='red',
label='Transformed points')
plt.legend(loc=(0.0,1.0))
euler_errors = target_registration_errors(rigid_euler, point_list, transformed_point_list)
versor_errors = target_registration_errors(rigid_versor, point_list, transformed_point_list)
print('Euler\tminError: {:.2f} maxError: {:.2f}'.format(min(euler_errors), max(euler_errors)))
print('Versor\tminError: {:.2f} maxError: {:.2f}'.format(min(versor_errors), max(versor_errors)))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Rotation to Rigid [3D]
Copy the matrix or versor and <b>center of rotation</b>.
|
rotationCenter = (10, 10, 10)
rotation = sitk.VersorTransform([0,0,1,0], rotationCenter)
rigid_euler = sitk.Euler3DTransform()
rigid_euler.SetMatrix(rotation.GetMatrix())
rigid_euler.SetCenter(rotation.GetCenter())
rigid_versor = sitk.VersorRigid3DTransform()
rigid_versor.SetRotation(rotation.GetVersor())
#rigid_versor.SetCenter(rotation.GetCenter()) #intentional error
# Sanity check to make sure the transformations are equivalent.
bounds = [(-10,10),(-100,100), (-1000,1000)]
num_points = 10
point_list = uniform_random_points(bounds, num_points)
transformed_point_list = [ rotation.TransformPoint(p) for p in point_list]
euler_errors = target_registration_errors(rigid_euler, point_list, transformed_point_list)
versor_errors = target_registration_errors(rigid_versor, point_list, transformed_point_list)
# Draw the points transformed by the original transformation and after transformation
# using the incorrect transformation, illustrate the effect of center of rotation.
from mpl_toolkits.mplot3d import Axes3D
incorrect_transformed_point_list = [ rigid_versor.TransformPoint(p) for p in point_list]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
orig = ax.scatter(list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
list(np.array(transformed_point_list).T)[2],
marker='o',
color='blue',
label='Rotation around specific center')
transformed = ax.scatter(list(np.array(incorrect_transformed_point_list).T)[0],
list(np.array(incorrect_transformed_point_list).T)[1],
list(np.array(incorrect_transformed_point_list).T)[2],
marker='^',
color='red',
label='Rotation around origin')
plt.legend(loc=(0.0,1.0))
print('Euler\tminError: {:.2f} maxError: {:.2f}'.format(min(euler_errors), max(euler_errors)))
print('Versor\tminError: {:.2f} maxError: {:.2f}'.format(min(versor_errors), max(versor_errors)))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Similarity [2D]
When the center of the similarity transformation is not at the origin the effect of the transformation is not what most of us expect. This is readily visible if we limit the transformation to scaling: $T(\mathbf{x}) = s\mathbf{x}-s\mathbf{c} + \mathbf{c}$. Changing the transformation's center results in scale + translation.
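To see why, group the terms:

$$T(\mathbf{x}) = s\mathbf{x} - s\mathbf{c} + \mathbf{c} = s\mathbf{x} + (1-s)\mathbf{c},$$

so a center away from the origin contributes an effective translation of $(1-s)\mathbf{c}$ in addition to the scaling.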
|
def display_center_effect(x, y, tx, point_list, xlim, ylim):
tx.SetCenter((x,y))
transformed_point_list = [ tx.TransformPoint(p) for p in point_list]
plt.scatter(list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
marker='^',
color='red', label='transformed points')
plt.scatter(list(np.array(point_list).T)[0],
list(np.array(point_list).T)[1],
marker='o',
color='blue', label='original points')
plt.xlim(xlim)
plt.ylim(ylim)
plt.legend(loc=(0.25,1.01))
# 2D square centered on (0,0)
points = [np.array((-1,-1)), np.array((-1,1)), np.array((1,1)), np.array((1,-1))]
# Scale by 2
similarity = sitk.Similarity2DTransform();
similarity.SetScale(2)
interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(similarity), point_list = fixed(points),
xlim = fixed((-10,10)),ylim = fixed((-10,10)));
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Rigid to Similarity [3D]
Copy the translation, center, and matrix or versor.
|
rotation_center = (100, 100, 100)
theta_x = 0.0
theta_y = 0.0
theta_z = np.pi/2.0
translation = (1,2,3)
rigid_euler = sitk.Euler3DTransform(rotation_center, theta_x, theta_y, theta_z, translation)
similarity = sitk.Similarity3DTransform()
similarity.SetMatrix(rigid_euler.GetMatrix())
similarity.SetTranslation(rigid_euler.GetTranslation())
similarity.SetCenter(rigid_euler.GetCenter())
# Apply the transformations to the same set of random points and compare the results
# (see utility functions at top of notebook).
print_transformation_differences(rigid_euler, similarity)
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Similarity to Affine [3D]
Copy the translation, center and matrix.
|
rotation_center = (100, 100, 100)
axis = (0,0,1)
angle = np.pi/2.0
translation = (1,2,3)
scale_factor = 2.0
similarity = sitk.Similarity3DTransform(scale_factor, axis, angle, translation, rotation_center)
affine = sitk.AffineTransform(3)
affine.SetMatrix(similarity.GetMatrix())
affine.SetTranslation(similarity.GetTranslation())
affine.SetCenter(similarity.GetCenter())
# Apply the transformations to the same set of random points and compare the results
# (see utility functions at top of notebook).
print_transformation_differences(similarity, affine)
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Scale Transform
Just as was the case for the similarity transformation above, when the transformation's center is not at the origin, instead of a pure anisotropic scaling we also have translation ($T(\mathbf{x}) = \mathbf{s}^T\mathbf{x}-\mathbf{s}^T\mathbf{c} + \mathbf{c}$).
|
# 2D square centered on (0,0).
points = [np.array((-1,-1)), np.array((-1,1)), np.array((1,1)), np.array((1,-1))]
# Scale by half in x and 2 in y.
scale = sitk.ScaleTransform(2, (0.5,2));
# Interactively change the location of the center.
interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(scale), point_list = fixed(points),
xlim = fixed((-10,10)),ylim = fixed((-10,10)));
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Scale Versor
This is not what you would expect from the name (composition of anisotropic scaling and rigid). This is:
$$T(\mathbf{x}) = (R+S)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]$$
There is no natural way of "promoting" the similarity transformation to this transformation.
|
scales = (0.5,0.7,0.9)
translation = (1,2,3)
axis = (0,0,1)
angle = 0.0
scale_versor = sitk.ScaleVersor3DTransform(scales, axis, angle, translation)
print(scale_versor)
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Scale Skew Versor
Again, not what you expect based on the name, this is not a composition of transformations. This is:
$$T(\mathbf{x}) = (R+S+K)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]\;\; \textrm{and } K = \left[\begin{array}{ccc} 0 & k_0 & k_1 \\ k_2 & 0 & k_3 \\ k_4 & k_5 & 0 \end{array}\right]$$
In practice this is an over-parametrized version of the affine transform, 15 (scale, skew, versor, translation) vs. 12 parameters (matrix, translation).
|
scale = (2,2.1,3)
skew = np.linspace(start=0.0, stop=1.0, num=6) # six equally spaced values in [0,1], an arbitrary choice
translation = (1,2,3)
versor = (0,0,0,1.0)
scale_skew_versor = sitk.ScaleSkewVersor3DTransform(scale, skew, versor, translation)
print(scale_skew_versor)
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Bounded Transformations
SimpleITK supports two types of bounded non-rigid transformations, BSplineTransform (sparse representation) and DisplacementFieldTransform (dense representation).
Transforming a point that is outside the bounds will return the original point - identity transform.
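A quick sketch illustrating that behaviour with a small toy displacement field (the 5x5 field below is an illustrative example, not from the notebook; it uses the default origin (0,0) and unit spacing, so the domain is roughly [0,4] x [0,4]):

```python
import SimpleITK as sitk

# A tiny displacement field: only the central grid point is displaced.
field = sitk.Image([5, 5], sitk.sitkVectorFloat64)
field[2, 2] = (1.0, 2.0)
tx = sitk.DisplacementFieldTransform(field)

print(tx.TransformPoint((2.0, 2.0)))      # inside the domain: moved to (3.0, 4.0)
print(tx.TransformPoint((100.0, 100.0)))  # outside the domain: returned unchanged
```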
|
#
# This function displays the effects of the deformable transformation on a grid of points by scaling the
# initial displacements (either of control points for bspline or the deformation field itself). It
# assumes that all points are contained in the region between (-2.5,-2.5) and (2.5,2.5).
#
def display_displacement_scaling_effect(s, original_x_mat, original_y_mat, tx, original_control_point_displacements):
if tx.GetDimension() !=2:
raise ValueError('display_displacement_scaling_effect only works in 2D')
plt.scatter(original_x_mat,
original_y_mat,
marker='o',
color='blue', label='original points')
pointsX = []
pointsY = []
tx.SetParameters(s*original_control_point_displacements)
for index, value in np.ndenumerate(original_x_mat):
px,py = tx.TransformPoint((value, original_y_mat[index]))
pointsX.append(px)
pointsY.append(py)
plt.scatter(pointsX,
pointsY,
marker='^',
color='red', label='transformed points')
plt.legend(loc=(0.25,1.01))
plt.xlim((-2.5,2.5))
plt.ylim((-2.5,2.5))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
BSpline
Using a sparse set of control points to control a free form deformation.
|
# Create the transformation (when working with images it is easier to use the BSplineTransformInitializer function
# or its object oriented counterpart BSplineTransformInitializerFilter).
dimension = 2
spline_order = 3
direction_matrix_row_major = [1.0,0.0,0.0,1.0] # identity, mesh is axis aligned
origin = [-1.0,-1.0]
domain_physical_dimensions = [2,2]
bspline = sitk.BSplineTransform(dimension, spline_order)
bspline.SetTransformDomainOrigin(origin)
bspline.SetTransformDomainDirection(direction_matrix_row_major)
bspline.SetTransformDomainPhysicalDimensions(domain_physical_dimensions)
bspline.SetTransformDomainMeshSize((4,3))
# Random displacement of the control points.
originalControlPointDisplacements = np.random.random(len(bspline.GetParameters()))
bspline.SetParameters(originalControlPointDisplacements)
# Apply the bspline transformation to a grid of points
# starting the point set exactly at the origin of the bspline mesh is problematic as
# these points are considered outside the transformation's domain,
# remove epsilon below and see what happens.
numSamplesX = 10
numSamplesY = 20
coordsX = np.linspace(origin[0]+np.finfo(float).eps, origin[0] + domain_physical_dimensions[0], numSamplesX)
coordsY = np.linspace(origin[1]+np.finfo(float).eps, origin[1] + domain_physical_dimensions[1], numSamplesY)
XX, YY = np.meshgrid(coordsX, coordsY)
interact(display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),
tx = fixed(bspline), original_control_point_displacements = fixed(originalControlPointDisplacements));
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
DisplacementField
A dense set of vectors representing the displacement inside the given domain. The most generic representation of a transformation.
|
# Create the displacement field.
# When working with images the safer thing to do is use the image based constructor,
# sitk.DisplacementFieldTransform(my_image), all the fixed parameters will be set correctly and the displacement
# field is initialized using the vectors stored in the image. SimpleITK requires that the image's pixel type be
# sitk.sitkVectorFloat64.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-1.0,-1.0]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list
displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
# Set the interpolator, either sitkLinear which is default or nearest neighbor
displacement.SetInterpolator(sitk.sitkNearestNeighbor)
originalDisplacements = np.random.random(len(displacement.GetParameters()))
displacement.SetParameters(originalDisplacements)
coordsX = np.linspace(field_origin[0], field_origin[0]+(field_size[0]-1)*field_spacing[0], field_size[0])
coordsY = np.linspace(field_origin[1], field_origin[1]+(field_size[1]-1)*field_spacing[1], field_size[1])
XX, YY = np.meshgrid(coordsX, coordsY)
interact(display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),
tx = fixed(displacement), original_control_point_displacements = fixed(originalDisplacements));
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Displacement field transform created from an image. Remember that SimpleITK will clear the image you provide, as shown in the cell below.
|
displacement_image = sitk.Image([64,64], sitk.sitkVectorFloat64)
# The only point that has any displacement is (0,0)
displacement = (0.5,0.5)
displacement_image[0,0] = displacement
print('Original displacement image size: ' + point2str(displacement_image.GetSize()))
displacement_field_transform = sitk.DisplacementFieldTransform(displacement_image)
print('After using the image to create a transform, displacement image size: ' + point2str(displacement_image.GetSize()))
# Check that the displacement field transform does what we expect.
print('Expected result: {0}\nActual result:{1}'.format(str(displacement), displacement_field_transform.TransformPoint((0,0))))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Composite transform (Transform)
The generic SimpleITK transform class. This class can represent both a single transformation (global, local), or a composite transformation (multiple transformations applied one after the other). This is the output type returned by the SimpleITK registration framework.
The choice of whether to use a composite transformation or compose transformations on your own has subtle differences in the registration framework.
Below we represent the composite transformation $T_{affine}(T_{rigid}(x))$ in two ways: (1) use a composite transformation to contain the two; (2) combine the two into a single affine transformation. We can use both as initial transforms (SetInitialTransform) for the registration framework (ImageRegistrationMethod). The difference is that in the former case the optimized parameters belong to the rigid transformation and in the latter they belong to the combined affine transformation.
|
# Create a composite transformation: T_affine(T_rigid(x)).
rigid_center = (100,100,100)
theta_x = 0.0
theta_y = 0.0
theta_z = np.pi/2.0
rigid_translation = (1,2,3)
rigid_euler = sitk.Euler3DTransform(rigid_center, theta_x, theta_y, theta_z, rigid_translation)
affine_center = (20, 20, 20)
affine_translation = (5,6,7)
# Matrix is represented as a vector-like data in row major order.
affine_matrix = np.random.random(9)
affine = sitk.AffineTransform(affine_matrix, affine_translation, affine_center)
# Using the composite transformation we just add them in (stack based, first in - last applied).
composite_transform = sitk.Transform(affine)
composite_transform.AddTransform(rigid_euler)
# Create a single transform manually. this is a recipe for compositing any two global transformations
# into an affine transformation, T_0(T_1(x)):
# A = A0*A1
# c = c1
# t = A0*[t1+c1-c0] + t0+c0-c1
A0 = np.asarray(affine.GetMatrix()).reshape(3,3)
c0 = np.asarray(affine.GetCenter())
t0 = np.asarray(affine.GetTranslation())
A1 = np.asarray(rigid_euler.GetMatrix()).reshape(3,3)
c1 = np.asarray(rigid_euler.GetCenter())
t1 = np.asarray(rigid_euler.GetTranslation())
combined_mat = np.dot(A0,A1)
combined_center = c1
combined_translation = np.dot(A0, t1+c1-c0) + t0+c0-c1
combined_affine = sitk.AffineTransform(combined_mat.flatten(), combined_translation, combined_center)
# Check if the two transformations are equivalent.
print('Apply the two transformations to the same point cloud:')
print('\t', end='')
print_transformation_differences(composite_transform, combined_affine)
print('Transform parameters:')
print('\tComposite transform: ' + point2str(composite_transform.GetParameters(),2))
print('\tCombined affine: ' + point2str(combined_affine.GetParameters(),2))
print('Fixed parameters:')
print('\tComposite transform: ' + point2str(composite_transform.GetFixedParameters(),2))
print('\tCombined affine: ' + point2str(combined_affine.GetFixedParameters(),2))
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Composite transforms enable a combination of a global transformation with multiple local/bounded transformations. This is useful if we want to apply deformations only in regions that deform while other regions are only affected by the global transformation.
The following code illustrates this, where the whole region is translated and subregions have different deformations.
|
# Global transformation.
translation = sitk.TranslationTransform(2,(1.0,0.0))
# Displacement in region 1.
displacement1 = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-1.0,-1.0]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement1.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement1.SetParameters(np.ones(len(displacement1.GetParameters())))
# Displacement in region 2.
displacement2 = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [1.0,-3]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] #direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement2.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement2.SetParameters(-1.0*np.ones(len(displacement2.GetParameters())))
# Composite transform which applies the global and local transformations.
composite = sitk.Transform(translation)
composite.AddTransform(displacement1)
composite.AddTransform(displacement2)
# Apply the composite transformation to points in ([-1,-3],[3,1]) and
# display the deformation using a quiver plot.
# Generate points.
numSamplesX = 10
numSamplesY = 10
coordsX = np.linspace(-1.0, 3.0, numSamplesX)
coordsY = np.linspace(-3.0, 1.0, numSamplesY)
XX, YY = np.meshgrid(coordsX, coordsY)
# Transform points and compute deformation vectors.
pointsX = np.zeros(XX.shape)
pointsY = np.zeros(XX.shape)
for index, value in np.ndenumerate(XX):
px,py = composite.TransformPoint((value, YY[index]))
pointsX[index]=px - value
pointsY[index]=py - YY[index]
plt.quiver(XX, YY, pointsX, pointsY);
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Writing and Reading
The SimpleITK.ReadTransform() function returns a SimpleITK.Transform. The content of the file can be any of the SimpleITK transformations or a composite (set of transformations).
|
import os
# Create a 2D rigid transformation, write it to disk and read it back.
basic_transform = sitk.Euler2DTransform()
basic_transform.SetTranslation((1,2))
basic_transform.SetAngle(np.pi/2)
full_file_name = os.path.join(OUTPUT_DIR, 'euler2D.tfm')
sitk.WriteTransform(basic_transform, full_file_name)
# The ReadTransform function returns an sitk.Transform no matter the type of the transform
# found in the file (global, bounded, composite).
read_result = sitk.ReadTransform(full_file_name)
print('Different types: '+ str(type(read_result) != type(basic_transform)))
print_transformation_differences(basic_transform, read_result)
# Create a composite transform then write and read.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-10.0,-100.0]
field_spacing = [20.0/(field_size[0]-1),200.0/(field_size[1]-1)]
field_direction = [1,0,0,1] #direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement.SetParameters(np.random.random(len(displacement.GetParameters())))
composite_transform = sitk.Transform(basic_transform)
composite_transform.AddTransform(displacement)
full_file_name = os.path.join(OUTPUT_DIR, 'composite.tfm')
sitk.WriteTransform(composite_transform, full_file_name)
read_result = sitk.ReadTransform(full_file_name)
print_transformation_differences(composite_transform, read_result)
|
22_Transforms.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Motivating Support Vector Machines
Support Vector Machines (SVMs) are powerful supervised learning algorithms used for classification or regression. SVMs are discriminative classifiers: that is, they draw a boundary between clusters of data.
Let's show a quick example of support vector classification. First we need to create a dataset:
|
from sklearn.datasets import make_blobs  # older scikit-learn exposed this as sklearn.datasets.samples_generator
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50);
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem: the choice of such a line is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:
|
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
These are three very different separators, each of which perfectly discriminates between these samples. Depending on which you choose, a new data point will be classified almost entirely differently!
How can we improve on this?
Support Vector Machines: Maximizing the Margin
Support vector machines are one way to address this.
What support vector machines do is not only draw a line, but also consider a region of some given width around the line. Here's an example of what it might look like:
|
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
Notice here that if we want to maximize this width, the middle fit is clearly the best.
This is the intuition of support vector machines, which optimize a linear discriminant model together with a margin representing the perpendicular distance from the boundary to the nearest points of each class.
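For reference (one standard formulation, not the only one): with labels $y_i \in \{-1, +1\}$ and a linear boundary $w \cdot x + b = 0$, the hard-margin SVM solves

$$\min_{w,\,b} \tfrac{1}{2}\lVert w \rVert^2 \quad \text{subject to} \quad y_i (w \cdot x_i + b) \ge 1 \;\; \text{for all } i,$$

which maximizes the margin width $2 / \lVert w \rVert$ between the two dashed lines.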
Fitting a Support Vector Machine
Now we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the underlying optimization are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task.
|
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
clf.fit(X, y)
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
|
import warnings
warnings.filterwarnings('ignore')
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([[xi, yj]])
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
Notice that the dashed lines touch a couple of the points: these points are the pivotal pieces of this fit, and are known as the support vectors (giving the algorithm its name).
In scikit-learn, these are stored in the support_vectors_ attribute of the classifier:
|
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit.
(This requires a live notebook with widget support, and will not work in a static view)
|
from ipywidgets import interact  # older notebooks used: from IPython.html.widgets import interact
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = SVC(kernel='linear')
clf.fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200]);  # kernel='linear' removed: plot_svm does not accept a kernel argument
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
Notice that a distinctive property of SVMs is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results!
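Here is a minimal sketch of that claim (illustrative code, not part of the original notebook; it regenerates the same blobs so it is self-contained): removing a point that is not a support vector and refitting should leave the fitted boundary unchanged.

```python
import numpy as np
from sklearn.datasets import make_blobs  # older scikit-learn: sklearn.datasets.samples_generator
from sklearn.svm import SVC

X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
clf = SVC(kernel='linear').fit(X, y)

# Pick some point that is not among the support vectors and drop it.
non_sv = next(i for i in range(len(X)) if i not in set(clf.support_))
mask = np.arange(len(X)) != non_sv
clf2 = SVC(kernel='linear').fit(X[mask], y[mask])

# Both fits should describe (essentially) the same separating line.
print(np.allclose(clf.coef_, clf2.coef_), np.allclose(clf.intercept_, clf2.intercept_))
```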
Going further: Kernel Methods
Where SVM gets incredibly exciting is when it is used in conjunction with kernels.
To motivate the need for kernels, let's look at some data which is not linearly separable:
|
from sklearn.datasets import make_circles  # older scikit-learn exposed this as sklearn.datasets.samples_generator
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
# plot_svc_decision_function(clf);
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
Clearly, no linear discriminant will ever be able to separate these data.
One way we can address this is to apply a kernel, which is some functional transformation of the input data.
For example, one simple model we could use is a radial basis function centered on the origin:
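$$r = e^{-(x_1^2 + x_2^2)} = e^{-\lVert x \rVert^2}$$

This is exactly the quantity the next cell computes: it is close to 1 for points near the origin (the inner cluster) and falls off towards 0 for the outer ring.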
|
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
If we plot this along with our data, we can see the effect of it:
|
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
plt.figure(figsize=(8,8))
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50)
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180));
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
We can see that with this additional dimension, the data becomes trivially linearly separable!
This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built into the process. This is accomplished by using kernel='rbf', short for radial basis function:
|
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
|
notebooks/05-SVM.ipynb
|
albahnsen/PracticalMachineLearningClass
|
mit
|
Summer 2015
Data model
Task
Write a query that returns the data of all customers, the number of their orders, the number of trips, and the total distance in kilometers. The output should be sorted by customer postal code in descending order.
Solution
mysql
select k.kd_id,
(select count(a.Au_ID) from auftrag a
where a.au_kd_id = k.kd_id ) as AnzahlAuftr,
(select count(f.`f_id`) from fahrten f, auftrag a
where f.f_au_id = a.au_id and a.`au_kd_id` = k.`kd_id`) as AnzahlFahrt,
(select sum(ts.ts_strecke) from teilstrecke ts, fahrten f, auftrag a
where ts.ts_f_id = f.f_id and a.au_id = f.`f_au_id` and a.`au_kd_id` = k.`kd_id`) as SumStrecke
from kunde k
order by k.kd_plz;
|
%%sql
select k.kd_id, k.kd_plz,
(select count(a.Au_ID) from auftrag a where a.au_kd_id = k.kd_id ) as AnzahlAuftr,
(select count(f.`f_id`) from fahrten f, auftrag a
where f.f_au_id = a.au_id and a.`au_kd_id` = k.`kd_id`) as AnzahlFahrt,
(select sum(ts.ts_strecke) from teilstrecke ts, fahrten f, auftrag a
where ts.ts_f_id = f.f_id and a.au_id = f.`f_au_id` and a.`au_kd_id` = k.`kd_id`) as SumStrecke
from kunde k order by k.kd_plz;
%sql select count(*) as AnzahlFahrten from fahrten
|
jup_notebooks/datenbanken/SubSelects.ipynb
|
steinam/teacher
|
mit
|
Why doesn't a plain join work?
mysql
select k.kd_id, k.`kd_firma`, k.`kd_plz`,
count(a.Au_ID) as AnzAuftrag,
count(f.f_id) as AnzFahrt,
sum(ts.ts_strecke) as SumStrecke
from kunde k left join auftrag a
on k.`kd_id` = a.`au_kd_id`
left join fahrten f
on a.`au_id` = f.`f_au_id`
left join teilstrecke ts
on ts.`ts_f_id` = f.`f_id`
group by k.kd_id
order by k.`kd_plz`
|
%%sql
select k.kd_id, k.`kd_firma`, k.`kd_plz`,
count(distinct a.Au_ID) as AnzAuftrag,
count(distinct f.f_id) as AnzFahrt,
sum(ts.ts_strecke) as SumStrecke
from kunde k left join auftrag a
on k.`kd_id` = a.`au_kd_id`
left join fahrten f
on a.`au_id` = f.`f_au_id`
left join teilstrecke ts
on ts.`ts_f_id` = f.`f_id`
group by k.kd_id
order by k.`kd_plz`
|
jup_notebooks/datenbanken/SubSelects.ipynb
|
steinam/teacher
|
mit
|
The join approach does not work in this form because, by the second join at the latest, the company Trappo gets matched with 2 records from the first join. As a result, the number of trips is doubled as well. The same thing happens again with the third join.
The following query shows, without the aggregate functions, the underlying intermediate result:
mysql
select k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id`
from kunde k left join auftrag a
on k.`kd_id` = a.`au_kd_id`
left join fahrten f
on a.`au_id` = f.`f_au_id`
left join teilstrecke ts
on ts.`ts_f_id` = f.`f_id`
order by k.`kd_plz`
|
%%sql
SELECT kunde.Kd_ID, kunde.Kd_Firma, kunde.Kd_Strasse, kunde.Kd_PLZ,
kunde.Kd_Ort, COUNT(distinct auftrag.Au_ID) AS AnzahlAuftr, COUNT(distinct fahrten.F_ID) AS AnzahlFahrt, SUM(teilstrecke.Ts_Strecke) AS SumStrecke
FROM kunde
LEFT JOIN auftrag ON auftrag.Au_Kd_ID = kunde.Kd_ID
LEFT JOIN fahrten ON fahrten.F_Au_ID = auftrag.Au_ID
LEFT JOIN Teilstrecke ON teilstrecke.Ts_F_ID = fahrten.F_ID
GROUP BY kunde.Kd_ID
ORDER BY kunde.Kd_PLZ desc;
|
jup_notebooks/datenbanken/SubSelects.ipynb
|
steinam/teacher
|
mit
|
It also works with a subselect
mysql
select kd.`Kd_Name`,
    (select COUNT(*) from Rechnung as R
     where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015)
from Kunde kd inner join `zahlungsbedingung`
    on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
    and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
|
%%sql
select kd.`Kd_Name`,
(select COUNT(*) from Rechnung as R
where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015) as Anzahl
from Kunde kd inner join `zahlungsbedingung`
on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
%%sql
-- wortmann und prinz
select
(select count(rechnung.rg_id) from rechnung
where
rechnung.rg_kd_id = kunde.kd_id
and (select zb_skontoprozent from zahlungsbedingung where zahlungsbedingung.zb_id = kunde.kd_zb_id) > 3
and YEAR(rechnung.rg_datum) = 2015
) as AnzRechnungen,
kunde.*
from kunde;
%%sql
SELECT COUNT(r.rg_id) AS AnzRechnung, k.*
FROM kunde AS k
LEFT JOIN rechnung AS r ON k.kd_id = r.Rg_KD_ID
WHERE k.kd_zb_id IN
(SELECT zb_id FROM zahlungsbedingung WHERE zb_skontoprozent > 3) AND YEAR(r.Rg_Datum) = 2015
GROUP BY k.Kd_ID
|
jup_notebooks/datenbanken/SubSelects.ipynb
|
steinam/teacher
|
mit
|
Solution
|
%sql mysql://steinam:steinam@localhost/versicherung_complete
%%sql
select min(`vv`.`Abschlussdatum`) as 'Erster Abschluss', `vv`.`Mitarbeiter_ID`
from `versicherungsvertrag` vv inner join mitarbeiter m
on vv.`Mitarbeiter_ID` = m.`ID`
where vv.`Mitarbeiter_ID` in ( select m.`ID` from mitarbeiter m
inner join Abteilung a
on m.`Abteilung_ID` = a.`ID`)
group by vv.`Mitarbeiter_ID`
%%sql
-- rm
SELECT m.ID, m.Name, m.Vorname, v.*
FROM versicherungsvertrag AS v
JOIN mitarbeiter AS m ON m.ID = v.Mitarbeiter_ID
WHERE v.Abschlussdatum = (SELECT min(v.Abschlussdatum)
FROM versicherungsvertrag AS v WHERE v.Mitarbeiter_ID = m.ID
)
GROUP BY v.Mitarbeiter_ID
%%sql
-- original
SELECT vv.ID as VV, vv.Vertragsnummer, vv.Abschlussdatum, vv.Art,
mi.ID as MI, mi.Name, mi.Vorname
from Versicherungsvertrag vv
right join ( select MIN(vv2.ID) as ID, vv2.Mitarbeiter_ID
from Versicherungsvertrag vv2
group by vv2.Mitarbeiter_id ) Temp
on Temp.ID = vv.ID
right join Mitarbeiter mi on mi.ID = vv.Mitarbeiter_ID
where mi.Abteilung_ID = ( select ID from Abteilung
where Bezeichnung = 'Vertrieb' );
%%sql
-- rm
SELECT m.ID, m.Name, m.Vorname, v.*
FROM versicherungsvertrag AS v
JOIN mitarbeiter AS m ON m.ID = v.Mitarbeiter_ID
GROUP BY v.Mitarbeiter_ID
ORDER BY v.Abschlussdatum ASC
%%sql
-- ruppert_hartmann
Select mitarbeiter.ID, mitarbeiter.Name, mitarbeiter.Vorname,
mitarbeiter.Personalnummer,
abteilung.Bezeichnung,
min(versicherungsvertrag.abschlussdatum),
versicherungsvertrag.vertragsnummer
FROM mitarbeiter
LEFT JOIN abteilung ON Abteilung_ID = Abteilung.ID
LEFT JOIN versicherungsvertrag ON versicherungsvertrag.Mitarbeiter_ID = mitarbeiter.ID
WHERE abteilung.Bezeichnung = 'Vertrieb'
GROUP BY mitarbeiter.ID
result = _
result
|
jup_notebooks/datenbanken/SubSelects.ipynb
|
steinam/teacher
|
mit
|
Simple Model
This section corresponds to the code in the Running Your First Model section of the tutorial.
First, import the base classes we'll use
|
from mesa import Agent, Model
from mesa.time import RandomActivation
import random
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Next, create the agent and model classes:
|
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id):
self.unique_id = unique_id
self.wealth = 1
def step(self, model):
if self.wealth == 0:
return
other_agent = random.choice(model.schedule.agents)
other_agent.wealth += 1
self.wealth -= 1
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N):
self.running = True
self.num_agents = N
self.schedule = RandomActivation(self)
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i)
self.schedule.add(a)
def step(self):
'''Advance the model by one step.'''
self.schedule.step()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Create a model and run it for 10 steps:
|
model = MoneyModel(10)
for i in range(10):
model.step()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
And display a histogram of agent wealths:
|
agent_wealth = [a.wealth for a in model.schedule.agents]
plt.hist(agent_wealth)
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Create and run 100 models, and visualize the wealth distribution across all of them:
|
all_wealth = []
for j in range(100):
# Run the model
model = MoneyModel(10)
for i in range(10):
model.step()
# Store the results
for agent in model.schedule.agents:
all_wealth.append(agent.wealth)
plt.hist(all_wealth, bins=range(max(all_wealth)+1))
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Adding space
This section puts the agents on a grid, corresponding to the Adding Space section of the tutorial.
For this, we need to import the grid class:
|
from mesa.space import MultiGrid
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Create the new model class. (Note that this overwrites the MoneyModel class defined above.)
|
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.running = True
self.num_agents = N
self.grid = MultiGrid(height, width, True)
self.schedule = RandomActivation(self)
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i)
self.schedule.add(a)
# Add the agent to a random grid cell
x = random.randrange(self.grid.width)
y = random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
def step(self):
self.schedule.step()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
And create the agent to go along with it:
|
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id):
self.unique_id = unique_id
self.wealth = 1
def move(self, model):
possible_steps = model.grid.get_neighborhood(self.pos, moore=True, include_center=False)
new_position = random.choice(possible_steps)
model.grid.move_agent(self, new_position)
def give_money(self, model):
cellmates = model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self, model):
self.move(model)
if self.wealth > 0:
self.give_money(model)
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Create a model with 50 agents and a 10x10 grid, and run for 20 steps
|
model = MoneyModel(50, 10, 10)
for i in range(20):
model.step()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Visualize the number of agents on each grid cell:
|
import numpy as np
agent_counts = np.zeros((model.grid.width, model.grid.height))
for cell in model.grid.coord_iter():
cell_content, x, y = cell
agent_count = len(cell_content)
agent_counts[x][y] = agent_count
plt.imshow(agent_counts, interpolation='nearest')
plt.colorbar()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Collecting Data
Add a Data Collector to the model, as explained in the corresponding section of the tutorial.
First, import the DataCollector
|
from mesa.datacollection import DataCollector
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Compute the agents' Gini coefficient, measuring inequality.
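For reference, the function below uses a standard closed form for the Gini coefficient of sorted values $x_1 \le x_2 \le \dots \le x_N$:

$$G = \frac{N + 1}{N} - \frac{2 \sum_{i=1}^{N} (N + 1 - i)\, x_i}{N \sum_{i=1}^{N} x_i}$$

(the `enumerate` in the code starts at zero, so its weight $N - i$ corresponds to $N + 1 - i$ here).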
|
def compute_gini(model):
'''
Compute the current Gini coefficient.
Args:
model: A MoneyModel instance
Returns:
The Gini Coefficient for the model's current step.
'''
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B)
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
This MoneyModel is identical to the one above, except for the self.datacollector = ... line at the end of the __init__ method, and the collection in step.
|
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.running = True
self.num_agents = N
self.grid = MultiGrid(height, width, True)
self.schedule = RandomActivation(self)
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i)
self.schedule.add(a)
# Add the agent to a random grid cell
x = random.randrange(self.grid.width)
y = random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
# New addition: add a DataCollector:
self.datacollector = DataCollector(model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": lambda a: a.wealth})
def step(self):
self.datacollector.collect(self) # Collect the data before the agents run.
self.schedule.step()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Now instantiate a model, run it for 100 steps...
|
model = MoneyModel(50, 10, 10)
for i in range(100):
model.step()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
... And collect and plot the data it generated:
|
gini = model.datacollector.get_model_vars_dataframe()
gini.head()
gini.plot()
agent_wealth = model.datacollector.get_agent_vars_dataframe()
agent_wealth.head()
end_wealth = agent_wealth.xs(99, level="Step")["Wealth"]
end_wealth.hist(bins=range(agent_wealth.Wealth.max()+1))
one_agent_wealth = agent_wealth.xs(14, level="AgentID")
one_agent_wealth.Wealth.plot()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Batch Run
Run a parameter sweep, as explained in the Batch Run tutorial section.
Import the Mesa BatchRunner:
|
from mesa.batchrunner import BatchRunner
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Set up the batch run:
|
parameters = {"height": 10, "width": 10, "N": range(10, 500, 10)}
batch_run = BatchRunner(MoneyModel, parameters, iterations=5, max_steps=100,
model_reporters={"Gini": compute_gini})
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Run the parameter sweep; this step might take a while:
|
batch_run.run_all()
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Export and plot the results:
|
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
plt.scatter(run_data.N, run_data.Gini)
plt.xlabel("Number of agents")
plt.ylabel("Gini Coefficient")
|
examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb
|
projectmesa/mesa-examples
|
apache-2.0
|
Dictionary Comprehensions
Just like List Comprehensions, Dictionary Data Types also support their own version of comprehension for quick creation. It is not as commonly used as List Comprehensions, but the syntax is:
|
{x:x**2 for x in range(10)}
|
Advanced Dictionaries.ipynb
|
jserenson/Python_Bootcamp
|
gpl-3.0
|
One of the reasons it is less common is the difficulty of constructing key names that are not based on the values.
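One common workaround (a small illustrative example, not from the original notebook) is to draw the keys from a separate iterable, for example with zip:

```python
keys = ['a', 'b', 'c']
values = [1, 2, 3]

# The keys come from their own iterable rather than being derived from the values.
{k: v for k, v in zip(keys, values)}
```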
Iteration over keys, values, and items
Dictionaries can be iterated over using their iterator/view methods (Python 2: iterkeys, itervalues, iteritems; Python 3: keys, values, items). For example:
|
d = {x: x**2 for x in range(10)}
# Python 2 used d.iterkeys(), d.itervalues() and d.iteritems();
# in Python 3 the plain methods serve the same purpose.
for k in d.keys():
    print(k)
for v in d.values():
    print(v)
for item in d.items():
    print(item)
|
Advanced Dictionaries.ipynb
|
jserenson/Python_Bootcamp
|
gpl-3.0
|
View items, keys, and values
You can use the view methods to obtain dynamic views of a dictionary's items, keys, and values. For example:
|
# Python 2 used d.viewitems(), d.viewkeys() and d.viewvalues();
# in Python 3 these view objects are returned by the plain methods.
d.items()
d.keys()
d.values()
|
Advanced Dictionaries.ipynb
|
jserenson/Python_Bootcamp
|
gpl-3.0
|
2. Building DFNs with Keras
Reshaping MNIST data
|
# Flatten each image into a vector
X_train = X_train.reshape(X_train.shape[0], np.prod(X_train.shape[1:]))
X_test = X_test.reshape(X_test.shape[0], np.prod(X_test.shape[1:]))
# Sequential is the API that lets us build a model by incrementally adding layers
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import SGD
DFN = Sequential()
DFN.add(Dense(128, input_shape=(28*28,), activation='relu'))
DFN.add(Dense(128, activation='relu'))
DFN.add(Dense(128, activation='relu'))
DFN.add(Dense(10, activation='softmax'))
# optim = SGD(lr=0.01) - you can construct the optimizer separately to set its parameters
DFN.compile(loss='categorical_crossentropy',
optimizer='sgd', # or use the default parameters
metrics=['accuracy'])
DFN.fit(X_train, y_train, batch_size=32, epochs=2,
validation_split=0.2,
verbose=1)
print('\nAccuracy: %.2f' % DFN.evaluate(X_test, y_test, verbose=1)[1])
|
src/Keras Tutorial.ipynb
|
MLIME/12aMostra
|
gpl-3.0
|
3. Building CNNs with Keras
Reshaping MNIST data
|
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
|
src/Keras Tutorial.ipynb
|
MLIME/12aMostra
|
gpl-3.0
|
Compiling and fitting the CNN
|
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import MaxPooling2D
from keras.layers.convolutional import Conv2D
CNN = Sequential()
CNN.add(Conv2D(32, (3, 3), padding='same', activation='relu',
input_shape=(28, 28, 1),))
CNN.add(MaxPooling2D(pool_size=(2, 2)))
CNN.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
CNN.add(MaxPooling2D(pool_size=(2, 2)))
CNN.add(Dropout(0.25))
CNN.add(Flatten())
CNN.add(Dense(256, activation='relu'))
CNN.add(Dropout(0.5))
CNN.add(Dense(10, activation='softmax'))
CNN.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
CNN.fit(X_train, y_train, batch_size=32, epochs=2,
validation_split=0.2,
verbose=1)
print('\nAccuracy: %.2f' % CNN.evaluate(X_test, y_test, verbose=1)[1])
|
src/Keras Tutorial.ipynb
|
MLIME/12aMostra
|
gpl-3.0
|