markdown (string, length 0–37k) | code (string, length 1–33.3k) | path (string, length 8–215) | repo_name (string, length 6–77) | license (15 classes)
---|---|---|---|---|
Transfer weights from caffe to lasagne
Load pretrained caffe model
|
net_caffe = caffe.Net('./ResNet-50-deploy.prototxt', './ResNet-50-model.caffemodel', caffe.TEST)
layers_caffe = dict(zip(list(net_caffe._layer_names), net_caffe.layers))
print 'Number of layers: %i' % len(layers_caffe.keys())
|
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
|
matthijsvk/multimodalSR
|
mit
|
Copy weights
There is one more issue with the BN layer: Caffe stores the variance $\sigma^2$, but Lasagne stores the inverted standard deviation $\dfrac{1}{\sigma}$, so we need to apply a simple transformation to handle it.
Another issue concerns the weights of the dense layer: in Caffe they are transposed, so we should handle that too.
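Concretely, for each BN layer the stored statistic is converted as $\dfrac{1}{\sigma} = \dfrac{1}{\sqrt{\sigma^2}}$; the code below also adds a small constant for numerical stability.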
|
for name, layer in net.items():
    if name not in layers_caffe:
        print name, type(layer).__name__
        continue
    if isinstance(layer, BatchNormLayer):
        layer_bn_caffe = layers_caffe[name]
        layer_scale_caffe = layers_caffe['scale' + name[2:]]
        layer.gamma.set_value(layer_scale_caffe.blobs[0].data)
        layer.beta.set_value(layer_scale_caffe.blobs[1].data)
        layer.mean.set_value(layer_bn_caffe.blobs[0].data)
        layer.inv_std.set_value(1/np.sqrt(layer_bn_caffe.blobs[1].data) + 1e-4)
        continue
    if isinstance(layer, DenseLayer):
        layer.W.set_value(layers_caffe[name].blobs[0].data.T)
        layer.b.set_value(layers_caffe[name].blobs[1].data)
        continue
    if len(layers_caffe[name].blobs) > 0:
        layer.W.set_value(layers_caffe[name].blobs[0].data)
    if len(layers_caffe[name].blobs) > 1:
        layer.b.set_value(layers_caffe[name].blobs[1].data)
|
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
|
matthijsvk/multimodalSR
|
mit
|
Testing
Read ImageNet synset
|
with open('./imagenet_classes.txt', 'r') as f:
    classes = map(lambda s: s.strip(), f.readlines())
|
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
|
matthijsvk/multimodalSR
|
mit
|
Download some image urls for recognition
|
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:100]
|
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
|
matthijsvk/multimodalSR
|
mit
|
Load mean values
|
blob = caffe.proto.caffe_pb2.BlobProto()
data = open('./ResNet_mean.binaryproto', 'rb').read()
blob.ParseFromString(data)
mean_values = np.array(caffe.io.blobproto_to_array(blob))[0]
|
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
|
matthijsvk/multimodalSR
|
mit
|
Image loader
|
def prep_image(url, fname=None):
    if fname is None:
        ext = url.split('.')[-1]
        im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
    else:
        ext = fname.split('.')[-1]
        im = plt.imread(fname, ext)
    h, w, _ = im.shape
    if h < w:
        im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
    else:
        im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
    h, w, _ = im.shape
    im = im[h//2-112:h//2+112, w//2-112:w//2+112]
    rawim = np.copy(im).astype('uint8')
    im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
    im = im[::-1, :, :]
    im = im - mean_values
    return rawim, floatX(im[np.newaxis])
|
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
|
matthijsvk/multimodalSR
|
mit
|
Let's take five images and compare the predictions of Lasagne with Caffe
|
n = 5
m = 5
i = 0
for url in image_urls:
    print url
    try:
        rawim, im = prep_image(url)
    except:
        print 'Failed to download'
        continue
    prob_lasagne = np.array(lasagne.layers.get_output(net['prob'], im, deterministic=True).eval())[0]
    prob_caffe = net_caffe.forward_all(data=im)['prob'][0]
    print 'Lasagne:'
    res = sorted(zip(classes, prob_lasagne), key=lambda t: t[1], reverse=True)[:n]
    for c, p in res:
        print ' ', c, p
    print 'Caffe:'
    res = sorted(zip(classes, prob_caffe), key=lambda t: t[1], reverse=True)[:n]
    for c, p in res:
        print ' ', c, p
    plt.figure()
    plt.imshow(rawim.astype('uint8'))
    plt.axis('off')
    plt.show()
    i += 1
    if i == m:
        break
    print '\n\n'
model = {
'values': lasagne.layers.get_all_param_values(net['prob']),
'synset_words': classes,
'mean_image': mean_values
}
pickle.dump(model, open('./resnet50.pkl', 'wb'), protocol=-1)
|
code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb
|
matthijsvk/multimodalSR
|
mit
|
Now as per part (a), we compute the SVD and use the first two singular values. Recall the model is that
\begin{equation}
\mathbf{x} \sim \mathcal{N}\left(W\mathbf{z},\Psi\right),
\end{equation}
where $\Psi$ is diagonal. If the SVD is $X = UDV^\intercal,$ $W$ will be the first two columns of $V$.
|
X = word_frequencies.as_matrix().astype(np.float64)
U, D, V = np.linalg.svd(X.T) # in matlab the matrix is read in as its transpose
|
chap12/8.ipynb
|
ppham27/MLaPP-solutions
|
mit
|
In this way, we let $Z = UD$, so $X = ZV^\intercal$. Now, let $\tilde{Z}$ be the approximation from using 2 singular values, so $\tilde{X} = \tilde{Z}W^\intercal$ and $\tilde{Z} = \tilde{U}\tilde{D}$. For some reason, the textbook chooses not to scale by $\tilde{D}$, so we just have $\tilde{U}$. Recall that all the variables are messed up because we used the transpose.
|
Z = V.T[:,:2]
Z
|
chap12/8.ipynb
|
ppham27/MLaPP-solutions
|
mit
|
Now, let's plot these results.
|
plt.figure(figsize=(8,8))
def plot_latent_variables(Z, ax=None):
    if ax == None:
        ax = plt.gca()
    ax.plot(Z[:,0], Z[:,1], 'o', markerfacecolor='none')
    for i in range(len(Z)):
        ax.text(Z[i,0] + 0.005, Z[i,1], i,
                verticalalignment='center')
    ax.set_xlabel('$z_1$')
    ax.set_ylabel('$z_2$')
    ax.set_title('PCA with $L = 2$ for Alien Documents')
    ax.grid(True)
plot_latent_variables(Z)
plt.show()
|
chap12/8.ipynb
|
ppham27/MLaPP-solutions
|
mit
|
I respectfully disagree with the book, for the following reason. The optimal latent representation $Z = XW$ (observations are rows here) should be chosen such that
\begin{equation}
J(W,Z) = \frac{1}{N}\left\lVert X - ZW^\intercal\right\rVert^2
\end{equation}
is minimized, where $W$ is orthonormal.
|
U, D, V = np.linalg.svd(X)
V = V.T # python implementation of SVD factors X = UDV (note that V is not transposed)
|
chap12/8.ipynb
|
ppham27/MLaPP-solutions
|
mit
|
By section 12.2.3 of the book, $W$ is the first $2$ columns of $V$. Thus, our actual plot should be below.
|
W = V[:,:2]
Z = np.dot(X, W)
plt.figure(figsize=(8,8))
ax = plt.gca();
plot_latent_variables(Z, ax=ax)
ax.set_aspect('equal')
plt.show()
|
chap12/8.ipynb
|
ppham27/MLaPP-solutions
|
mit
|
Note that this is very similar, just with the $y$-axis flipped. That part does not actually matter. What matters is the scaling by the eigenvalues when computing proximities: before that scaling, the proximity of points may not mean much if the eigenvalue is actually very large.
Now, the second part asks us to see if we can properly identify documents related to abductions by using a document with the single word abducted as a probe.
|
probe_document = np.zeros_like(words, dtype=np.float64)
abducted_idx = (words=='abducted').as_matrix()
probe_document[abducted_idx] = 1
X[0:3,abducted_idx]
|
chap12/8.ipynb
|
ppham27/MLaPP-solutions
|
mit
|
Note that despite the first document being about abductions, it doesn't contain the word abducted.
Let's look at the latent variable representation. We'll use cosine similarity to account for the difference in magnitude.
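Recall that the cosine similarity between the probe's latent vector $z$ and a document's latent vector $z_i$ is
\begin{equation}
\cos(\theta) = \frac{z \cdot z_i}{\lVert z \rVert \, \lVert z_i \rVert},
\end{equation}
and scipy's distance.cosine below returns $1 - \cos(\theta)$, so the code takes one minus that quantity to recover the similarity.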
|
from scipy.spatial import distance
z = np.dot(probe_document, W)
similarities = list(map(lambda i : (i, 1 - distance.cosine(z,Z[i,:])), range(len(Z))))
similarities.sort(key=lambda similarity_tuple : -similarity_tuple[1])
similarities
|
chap12/8.ipynb
|
ppham27/MLaPP-solutions
|
mit
|
Closures (factory functions)
|
def v1_multiply_by(m):
    def multiply(n):
        return n * m
    return multiply

multiply_by_7 = v1_multiply_by(7)
print(multiply_by_7(4))

def v2_multiply_by(m):
    return lambda n: n * m

multiply_by_7 = v2_multiply_by(7)
print(multiply_by_7(4))

def multiplications_between_0_and_9():
    multiply_by = []
    for m in range(10):
        # If "lambda n, m = m: n * m" is replaced by "lambda n: n * m"
        # then all multiplications are by 9
        multiply_by.append(lambda n, m = m: n * m)
    return multiply_by

multiply_by = multiplications_between_0_and_9()
multiply_by_7 = multiply_by[7]
print(multiply_by_7(4))
|
Lectures/Lecture_2/Jupyter_notebook_sheets/functions.ipynb
|
YufeiZhang/Principles-of-Programming-Python-3
|
gpl-3.0
|
Function states
|
from random import randrange
def randomly_odd_or_even_random_digit():
    odd = randrange(2)
    if odd:
        def random_odd_or_random_even_digit():
            return randrange(1, 10, 2)
    else:
        def random_odd_or_random_even_digit():
            return randrange(0, 10, 2)
    random_odd_or_random_even_digit.odd = odd
    return random_odd_or_random_even_digit

for i in range(10):
    random_odd_or_random_even_digit = randomly_odd_or_even_random_digit()
    if random_odd_or_random_even_digit.odd:
        print('Will be a random odd digit.... ', random_odd_or_random_even_digit())
    else:
        print('Will be a random even digit... ', random_odd_or_random_even_digit())
|
Lectures/Lecture_2/Jupyter_notebook_sheets/functions.ipynb
|
YufeiZhang/Principles-of-Programming-Python-3
|
gpl-3.0
|
Function parameters:
first parameters without default values, if any,
then parameters with default values, if any,
then, possibly,
either a starred parameter to
gather values and assign them to parameters of the first and second type beyond the longest initial segment of those that are otherwise assigned an argument, if any, provided none of those parameters is assigned a keyword argument,
and to store an arbitrary number of positional arguments beyond those that have been assigned to a parameter, if any,
or only a star,
if a starred parameter or only a star is present, then parameters for required keyword arguments (so-called "keyword-only arguments"), if any, with or without defaults (actually the defaults make the associated keyword-only arguments not truly required and these parameters could be part of the second group),
then a double starred parameter to store an arbitrary number of keyword arguments, if any.
Function arguments:
positional arguments precede keyword arguments and double starred ones, and
starred arguments precede double starred ones.
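As a compact illustration of the ordering rules above, here is a hypothetical signature (the name demo and its parameters are made up for this sketch) annotated with the parameter kinds:

```python
def demo(a, b=2, *rest, kw_only, kw_def=4, **extra):
    # a        : parameter without a default value
    # b        : parameter with a default value
    # *rest    : starred parameter gathering extra positional arguments
    # kw_only  : keyword-only argument (required, no default)
    # kw_def   : keyword-only argument with a default
    # **extra  : double-starred parameter gathering extra keyword arguments
    print(a, b, rest, kw_only, kw_def, extra)

demo(1, 2, 3, 4, kw_only=5, x=6)   # prints: 1 2 (3, 4) 5 4 {'x': 6}
```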
|
def f1(a, b, c = 3, d = 4, e = 5, f = 6):
    print(a, b, c, d, e, f)
f1(11, 12, 13, 14, 15, 16)
f1(11, 12, 13, *(14, 15, 16))
f1(11, *(12, 13, 14), **{'f': 16, 'e': 15})
f1(11, 12, 13, e = 15)
f1(11, c = 13, b = 12, e = 15)
f1(11, c = 13, *(12,), e = 15)
f1(11, *(12, 13), e = 15)
f1(11, e = 15, *(12, 13))
f1(11, f = 16, e = 15, b = 12, c = 13)
f1(11, f = 16, **{'e': 15, 'b': 12, 'c': 13})
f1(11, *(12, 13), e = 15, **{'f': 16, 'd': 14})
f1(11, e = 15, *(12,), **{'f': 16, 'd': 14})
f1(11, f = 16, *(12, 13), e = 15, **{'d': 14})
def f2(*x):
    print(x)
f2()
f2(11)
f2(11, 12, *(13, 14, 15))
def f3(*x, a, b = -2, c):
    print(x, a, b, c)
f3(c = 23, a = 21)
f3(11, 12, a = 21, **{'b': 22, 'c': 23})
f3(11, *(12, 13), c = 23, a = 21)
f3(11, 12, 13, c = 23, *(14, 15), **{'a': 21})
def f4(*, a, b = -2, c):
    print(a, b, c)
f4(c = 23, a = 21)
f4(**{'a': 21, 'b': 22, 'c': 23})
f4(c = 23, **{'a': 21})
f4(a = 21, **{'c': 23, 'b': 22})
def f5(**x):
    print(x)
f5()
f5(a = 11, b = 12)
f5(**{'a': 11, 'b': 12, 'c': 13})
f5(a = 11, c = 12, e = 15, **{'b': 13, 'd': 14})
def f6(a, b, c, d = 4, e = 5, *x, m, n = -2, o, **z):
    print(a, b, c, d, e, x, m, n, o, z)
# Cannot replace "*(12,)" by "*(12, 21)"
f6(11, t = 40, e = 15, *(12,), o = 33, c = 13, m = 31, u = 41,
**{'v': 42, 'w': 43})
# Cannot replace "*(13, 14)" by "*(13, 14, 21)"
f6(11, 12, u = 41, m = 31, t = 40, e = 15, *(13, 14), o = 33,
**{'v': 42, 'w': 43})
f6(11, u = 41, o = 33, *(12, 13, 14, 15, 21, 22), n = 32, t = 40, m = 31,
**{'v': 42, 'w': 43})
f6(11, 12, 13, n = 32, t = 40, *(14, 15, 21, 22, 23), o = 33, u = 41, m = 31,
**{'v': 42, 'w': 43})
|
Lectures/Lecture_2/Jupyter_notebook_sheets/functions.ipynb
|
YufeiZhang/Principles-of-Programming-Python-3
|
gpl-3.0
|
Function annotations
|
def f(w: str, a: int, b: int = -2, x: float = -3.) -> int:
    if w == 'incorrect_return_type':
        return '0'
    return 0
from inspect import signature
def type_check(function, *args, **kwargs):
    '''Assumes that "function" has nothing but variables possibly with defaults
    as arguments and has type annotations for all arguments and the returned value.
    Checks whether a combination of positional and default arguments is correct,
    and in case it is whether those arguments are of the appropriate types,
    and in case they are whether the returned value is of the appropriate type.
    '''
    good_arguments = True
    argument_type_errors = ''
    parameters = list(reversed(function.__code__.co_varnames))
    if len(args) > len(parameters):
        print('Incorrect sequence of arguments')
        return
    for argument in args:
        parameter = parameters.pop()
        if not isinstance(argument, function.__annotations__[parameter]):
            argument_type_errors += ('{} should be of type {}\n'
                                     .format(parameter, function.__annotations__[parameter]))
            good_arguments = False
    for argument in kwargs:
        if not argument in parameters:
            print('Incorrect sequence of arguments')
            return
        if not isinstance(kwargs[argument], function.__annotations__[argument]):
            argument_type_errors += ('{} should be of type {}\n'
                                     .format(argument, function.__annotations__[argument]))
            good_arguments = False
        parameters.remove(argument)
    # Make sure that all parameters left are given a default value.
    if any([parameter for parameter in parameters
            if signature(function).parameters[parameter].default is
            signature(function).parameters[parameter].empty]):
        print('Incorrect sequence of arguments')
        return
    if good_arguments:
        if isinstance(function(*args, **kwargs), function.__annotations__['return']):
            print('All good')
        else:
            (print('The returned value should be of type {}'
                   .format(function.__annotations__['return'])))
    else:
        print(argument_type_errors, end = '')
for args, kwargs in [(('0', 1, 2, 3.), {}),
(('0', 1, 2), {'x': 3.}),
(('0', 1), {'b': 2, 'x': 3.}),
(('0',), {'x': 3., 'a': 1, 'b': 2}),
((), {'x': 3., 'w': '0', 'a': 1}),
(('0', 1, 2), {}),
(('0',), {}),
(('0'), {'x': 3.}),
(('0', 1, 2, 3., 4), {}),
(('incorrect_return_type', 1, 2, 3.), {'x' : 3}),
(('incorrect_return_type', 1, 2), {'y': 3}),
(('0', 1), {'x': 3, 'c': 2}),
((), {'a': 1, 'b': 2,'x': 3}),
((0, 1, 2, 3.), {}),
(('0', 1., 2, 3), {'w': 'incorrect_return_type'}),
(('incorrect_return_type', 1, 2), {'x': 3}),
((0, 1), {'b': 2., 'x': 3.}),
((0,), {'x': 3, 'a': 1., 'b': 2.}),
((), {'x': 3, 'w': 0, 'a': 1.}),
(('incorrect_return_type', 1, 2, 3.), {})]:
    print('Testing {}, {}:'.format(args, kwargs))
    type_check(f, *args, **kwargs)
    print()
|
Lectures/Lecture_2/Jupyter_notebook_sheets/functions.ipynb
|
YufeiZhang/Principles-of-Programming-Python-3
|
gpl-3.0
|
Mutable versus immutable default values
|
def append_one_v1(L = []):
    L.append(1)
    return L

def append_one_v2(L = None):
    if L == None:
        L = []
    L.append(1)
    return L

for i in range(5):
    print(append_one_v1([0]))
print()
for i in range(5):
    print(append_one_v1())
print()
for i in range(5):
    print(append_one_v2([0]))
print()
for i in range(5):
    print(append_one_v2())

_nothing = object()

def f(x = _nothing):
    if x is _nothing:
        print('Nothing')
    else:
        print('Something')

f(0), f(1), f([]), f([1]), f(None)
print()
f()
|
Lectures/Lecture_2/Jupyter_notebook_sheets/functions.ipynb
|
YufeiZhang/Principles-of-Programming-Python-3
|
gpl-3.0
|
1. Connect girder client and set parameters
|
# APIURL = 'http://demo.kitware.com/histomicstk/api/v1/'
# SAMPLE_SLIDE_ID = '5bbdee92e629140048d01b5d'
APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
SAMPLE_SLIDE_ID = '5d586d76bd4404c6b1f286ae'
# Connect to girder client
gc = girder_client.GirderClient(apiUrl=APIURL)
gc.authenticate(interactive=True)
# gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Let's inspect the ground truth codes file
This contains the ground truth codes and information dataframe. This is a dataframe that is indexed by the annotation group name and has the following columns:
group: group name of annotation (string), e.g. "mostly_tumor"
GT_code: int, desired ground truth code (in the mask). Pixels of this value belong to the corresponding group (class).
color: str, rgb format, e.g. rgb(255,0,0).
NOTE:
Zero pixels have special meaning and do not encode a specific ground truth class. Instead, they simply mean 'Outside ROI' and should be ignored during model training or evaluation.
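As a rough illustration (the group names and codes below are made up, not the contents of sample_GTcodes.csv), such a dataframe could be built like this:

```python
import pandas as pd

# Hypothetical ground-truth codes table; the real values come from sample_GTcodes.csv
GTCodes_example = pd.DataFrame({
    'group': ['mostly_tumor', 'mostly_stroma', 'roi'],
    'GT_code': [1, 2, 254],  # pixel values in the mask (0 means 'outside ROI')
    'color': ['rgb(255,0,0)', 'rgb(0,255,0)', 'rgb(0,0,255)'],
})
GTCodes_example.index = GTCodes_example.loc[:, 'group']  # index by group name, as in the notebook
```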
|
# read GTCodes dataframe
GTCODE_PATH = os.path.join(
CWD, '..', '..', 'tests', 'test_files', 'sample_GTcodes.csv')
GTCodes_df = read_csv(GTCODE_PATH)
GTCodes_df.index = GTCodes_df.loc[:, 'group']
GTCodes_df.head()
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Read and visualize mask
|
# read mask
X_OFFSET = 59206
Y_OFFSET = 33505
MASKNAME = "TCGA-A2-A0YE-01Z-00-DX1.8A2E3094-5755-42BC-969D-7F0A2ECA0F39" + \
"_left-%d_top-%d_mag-BASE.png" % (X_OFFSET, Y_OFFSET)
MASKPATH = os.path.join(CWD, '..', '..', 'tests', 'test_files', 'annotations_and_masks', MASKNAME)
MASK = imread(MASKPATH)
plt.figure(figsize=(7,7))
plt.imshow(MASK)
plt.title(MASKNAME[:23])
plt.show()
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
2. Get contours from mask
This function get_contours_from_mask() generates contours from a mask image. There are many parameters that can be set but most have defaults set for the most common use cases. The only required parameters you must provide are MASK and GTCodes_df, but you may want to consider setting the following parameters based on your specific needs: get_roi_contour, roi_group, discard_nonenclosed_background, background_group, that control behaviour regarding region of interest (ROI) boundary and background pixel class (e.g. stroma).
|
print(get_contours_from_mask.__doc__)
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Extract contours
|
# Let's extract all contours from a mask, including ROI boundary. We will
# be discarding any stromal contours that are not fully enclosed within a
# non-stromal contour since we already know that stroma is the background
# group. This is so things look uncluttered when posted to DSA.
groups_to_get = None
contours_df = get_contours_from_mask(
MASK=MASK, GTCodes_df=GTCodes_df, groups_to_get=groups_to_get,
get_roi_contour=True, roi_group='roi',
discard_nonenclosed_background=True,
background_group='mostly_stroma',
MIN_SIZE=30, MAX_SIZE=None, verbose=True,
monitorPrefix=MASKNAME[:12] + ": getting contours")
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Let's inspect the contours dataframe
The columns that really matter here are group, color, coords_x, and coords_y.
|
contours_df.head()
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
3. Get annotation documents from contours
This method get_annotation_documents_from_contours() generates formatted annotation documents from contours that can be posted to the DSA server.
|
print(get_annotation_documents_from_contours.__doc__)
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
As mentioned in the docs, this function wraps get_single_annotation_document_from_contours()
|
print(get_single_annotation_document_from_contours.__doc__)
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Let's get a list of annotation documents (each is a dictionary). For the purpose of this tutorial,
we separate the documents by group (i.e. each document is composed of polygons from the same
style/group). You could decide to allow heterogeneous groups in the same annotation document by
setting separate_docs_by_group to False. We place 10 polygons in each document for this demo
for illustration purposes. Realistically you would want each document to contain several hundred depending on their complexity. Placing too many polygons in each document can lead to performance issues when rendering in HistomicsUI.
Get annotation documents
|
# get list of annotation documents
annprops = {
'X_OFFSET': X_OFFSET,
'Y_OFFSET': Y_OFFSET,
'opacity': 0.2,
'lineWidth': 4.0,
}
annotation_docs = get_annotation_documents_from_contours(
contours_df.copy(), separate_docs_by_group=True, annots_per_doc=10,
docnamePrefix='demo', annprops=annprops,
verbose=True, monitorPrefix=MASKNAME[:12] + ": annotation docs")
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Let's examine one of the documents.
Limit display to the first two elements (polygons) and cap the vertices for clarity.
|
ann_doc = annotation_docs[0].copy()
ann_doc['elements'] = ann_doc['elements'][:2]
for i in range(2):
    ann_doc['elements'][i]['points'] = ann_doc['elements'][i]['points'][:5]
ann_doc
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Post the annotation to the correct item/slide in DSA
|
# deleting existing annotations in target slide (if any)
existing_annotations = gc.get('/annotation/item/' + SAMPLE_SLIDE_ID)
for ann in existing_annotations:
    gc.delete('/annotation/%s' % ann['_id'])
# post the annotation documents you created
for annotation_doc in annotation_docs:
    resp = gc.post(
        "/annotation?itemId=" + SAMPLE_SLIDE_ID, json=annotation_doc)
|
docs/examples/segmentation_masks_to_annotations.ipynb
|
DigitalSlideArchive/HistomicsTK
|
apache-2.0
|
Create widget object
|
cesium = CesiumWidget()
|
Examples/CesiumWidget Example KML.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Display the widget:
|
cesium
|
Examples/CesiumWidget Example KML.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Cesium is packed with example data. Let's look at some GDP per capita data from 2008.
|
cesium.kml_url = '/nbextensions/CesiumWidget/cesium/Apps/SampleData/kml/gdpPerCapita2008.kmz'
|
Examples/CesiumWidget Example KML.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Example zoomto
|
for lon in np.arange(0, 360, 0.5):
    cesium.zoom_to(lon, 0, 36000000, 0, -90, 0)
cesium._zoomto
|
Examples/CesiumWidget Example KML.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Example flyto
|
cesium.fly_to(14, 90, 20000001)
cesium._flyto
|
Examples/CesiumWidget Example KML.ipynb
|
OSGeo-live/CesiumWidget
|
apache-2.0
|
Explore event-related dynamics for specific frequency bands
The objective is to show you how to explore spectrally localized
effects. For this purpose we adapt the method described in [1]_ and use it on
the somato dataset. The idea is to track the band-limited temporal evolution
of spatial patterns by using the Global Field Power (GFP).
We first bandpass filter the signals and then apply a Hilbert transform. To
reveal oscillatory activity, the evoked response is then subtracted from every
single trial. Finally, we rectify the signals prior to averaging across trials
by taking the magnitude of the Hilbert transform.
Then the GFP is computed as described in [2], using the sum of the squares
but without normalization by the rank.
Baselining is subsequently applied to make the GFPs comparable between
frequencies.
The procedure is then repeated for each frequency band of interest and
all GFPs are visualized. To estimate uncertainty, non-parametric confidence
intervals are computed as described in [3] across channels.
The advantage of this method over summarizing the Space x Time x Frequency
output of a Morlet Wavelet in frequency bands is relative speed and, more
importantly, the clear-cut comparability of the spectral decomposition (the
same type of filter is used across all bands).
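Concretely, with the rank normalization omitted as described above, the GFP at time $t$ is simply the sum of squares of the (rectified, baseline-corrected) signals across the $C$ channels:
\begin{equation}
\mathrm{GFP}(t) = \sum_{c=1}^{C} x_c(t)^2
\end{equation}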
References
.. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic
view through the skull (1997). Trends in Neuroscience 20 (1),
pp. 44-49.
.. [2] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals,
vol. 108, 328-342, NeuroImage.
.. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016).
Cambridge University Press, Chapter 11.2.
|
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import somato
from mne.baseline import rescale
from mne.stats import _bootstrap_ci
|
0.18/_downloads/db126f84a1b5439712a1d57b1be2255c/plot_time_frequency_global_field_power.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Set parameters
|
data_path = somato.data_path()
raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'
# let's explore some frequency bands
iter_freqs = [
('Theta', 4, 7),
('Alpha', 8, 12),
('Beta', 13, 25),
('Gamma', 30, 45)
]
|
0.18/_downloads/db126f84a1b5439712a1d57b1be2255c/plot_time_frequency_global_field_power.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
We create average power time courses for each frequency band
|
# set epoching parameters
event_id, tmin, tmax = 1, -1., 3.
baseline = None
# get the header to extract events
raw = mne.io.read_raw_fif(raw_fname, preload=False)
events = mne.find_events(raw, stim_channel='STI 014')
frequency_map = list()
for band, fmin, fmax in iter_freqs:
    # (re)load the data to save memory
    raw = mne.io.read_raw_fif(raw_fname, preload=True)
    raw.pick_types(meg='grad', eog=True)  # we just look at gradiometers
    # bandpass filter and compute Hilbert
    raw.filter(fmin, fmax, n_jobs=1,  # use more jobs to speed up.
               l_trans_bandwidth=1,  # make sure filter params are the same
               h_trans_bandwidth=1,  # in each band and skip "auto" option.
               fir_design='firwin')
    raw.apply_hilbert(n_jobs=1, envelope=False)
    epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,
                        reject=dict(grad=4000e-13, eog=350e-6), preload=True)
    # remove evoked response and get analytic signal (envelope)
    epochs.subtract_evoked()  # for this we need to construct new epochs.
    epochs = mne.EpochsArray(
        data=np.abs(epochs.get_data()), info=epochs.info, tmin=epochs.tmin)
    # now average and move on
    frequency_map.append(((band, fmin, fmax), epochs.average()))
|
0.18/_downloads/db126f84a1b5439712a1d57b1be2255c/plot_time_frequency_global_field_power.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Now we can compute the Global Field Power
We can track the emergence of spatial patterns compared to baseline
for each frequency band, with a bootstrapped confidence interval.
We see dominant responses in the Alpha and Beta bands.
|
fig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)
colors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))
for ((freq_name, fmin, fmax), average), color, ax in zip(
        frequency_map, colors, axes.ravel()[::-1]):
    times = average.times * 1e3
    gfp = np.sum(average.data ** 2, axis=0)
    gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))
    ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)
    ax.axhline(0, linestyle='--', color='grey', linewidth=2)
    ci_low, ci_up = _bootstrap_ci(average.data, random_state=0,
                                  stat_fun=lambda x: np.sum(x ** 2, axis=0))
    ci_low = rescale(ci_low, average.times, baseline=(None, 0))
    ci_up = rescale(ci_up, average.times, baseline=(None, 0))
    ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)
    ax.grid(True)
    ax.set_ylabel('GFP')
    ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),
                xy=(0.95, 0.8),
                horizontalalignment='right',
                xycoords='axes fraction')
    ax.set_xlim(-1000, 3000)

axes.ravel()[-1].set_xlabel('Time [ms]')
|
0.18/_downloads/db126f84a1b5439712a1d57b1be2255c/plot_time_frequency_global_field_power.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Goals of this Lesson
Present the fundamentals of Linear Regression for Prediction
Notation and Framework
Gradient Descent for Linear Regression
Advantages and Issues
Closed form Matrix Solutions for Linear Regression
Advantages and Issues
Demonstrate Python
Exploratory Plotting
Simple plotting with pyplot from matplotlib
Code Gradient Descent
Code Closed Form Matrix Solution
Perform Linear Regression in scikit-learn
References for Linear Regression
Elements of Statistical Learning by Hastie, Tibshirani, Friedman - Chapter 3
Alex Ihler's Course Notes on Linear Models for Regression - http://sli.ics.uci.edu/Classes/2015W-273a
scikit-learn Documentation - http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares
Linear Regression Analysis By Seber and Lee - http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471415405,subjectCd-ST24.html
Applied Linear Regression by Weisberg - http://onlinelibrary.wiley.com/book/10.1002/0471704091
Wikipedia - http://en.wikipedia.org/wiki/Linear_regression
Linear Regression Notation and Framework
Linear Regression is a supervised learning technique that is interested in predicting a response or target $\mathbf{y}$, based on a linear combination of a set of $D$ predictors or features, $\mathbf{x}= (1, x_1,\dots, x_D)$, such that
\begin{equation}
y = \beta_0 + \beta_1 x_1 + \dots + \beta_D x_D = \mathbf{x_i}^T\mathbf{\beta}
\end{equation}
Data We Observe
\begin{eqnarray}
y &:& \mbox{response or target variable} \\
\mathbf{x} &:& \mbox{set of $D$ predictor or explanatory variables } \mathbf{x}^T = (1, x_1, \dots, x_D)
\end{eqnarray}
What We Are Trying to Learn
\begin{eqnarray}
\beta^T = (\beta_0, \beta_1, \dots, \beta_D) : \mbox{Parameter values for a "best" prediction of } y \rightarrow \hat y
\end{eqnarray}
Outcomes We are Trying to Predict
\begin{eqnarray}
\hat y : \mbox{Prediction for the data that we observe}
\end{eqnarray}
Matrix Notation
\begin{equation}
\mathbf{Y} = \left( \begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_i \\
\vdots \\
y_N
\end{array} \right)
\qquad
\mathbf{X} = \left( \begin{array}{ccccc}
1 & x_{1,1} & x_{1,2} & \dots & x_{1,D} \\
1 & x_{2,1} & x_{2,2} & \dots & x_{2,D} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{i,1} & x_{i,2} & \dots & x_{i,D} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{N,1} & x_{N,2} & \dots & x_{N,D}
\end{array} \right)
\qquad
\beta = \left( \begin{array}{c}
\beta_0 \\
\beta_1 \\
\vdots \\
\beta_j \\
\vdots \\
\beta_D
\end{array} \right)
\end{equation}
Why is it called Linear Regression?
It is often asked why it is called linear regression if we can use polynomial terms and other transformations as the predictors. That is,
\begin{equation}
y = \beta_0 + \beta_1 x_1 + \beta_2 x_1^2 + \beta_3 x_1^3 + \beta_4 \sin(x_1)
\end{equation}
is still a linear regression, though it contains polynomial and trigonometric transformations of $x_1$. This is due to the fact that the term linear applies to the learned coefficients $\beta$ and not the input features $\mathbf{x}$.
How can we Learn $\beta$?
Linear Regression can be thought of as an optimization problem where we want to minimize some loss function of the error between the prediction $\hat y$ and the observed data $y$.
\begin{eqnarray}
error_i &=& y_i - \hat y_i \\
&=& y_i - \mathbf{x_i^T}\beta
\end{eqnarray}
Let's see what these errors look like...
Below we show a simulation where the observed $y$ was generated such that $y= 1 + 0.5 x + \epsilon$ and $\epsilon \sim N(0,1)$. If we assume that we know the truth, $y=1 + 0.5 x$, the red lines demonstrate the error (or residuals) between the observed values and the truth.
|
#############################################################
# Demonstration - What do Residuals Look Like
#############################################################
np.random.seed(33) # Setting a seed allows reproducibility of experiments
beta0 = 1 # Creating an intercept
beta1 = 0.5 # Creating a slope
# Randomly sampling data points
x_example = np.random.uniform(0,5,10)
y_example = beta0 + beta1 * x_example + np.random.normal(0,1,10)
line1 = beta0 + beta1 * np.arange(-1, 6)
f = plt.figure()
plt.scatter(x_example,y_example) # Plotting observed data
plt.plot(np.arange(-1,6), line1) # Plotting the true line
for i, xi in enumerate(x_example):
    plt.vlines(xi, beta0 + beta1 * xi, y_example[i], colors='red') # Plotting Residual Lines
plt.annotate('Error or "residual"', xy = (x_example[5], 2), xytext = (-1.5,2.1),
             arrowprops=dict(width=1,headwidth=7,facecolor='black', shrink=0.01))
f.set_size_inches(10,5)
plt.title('Errors in Linear Regression')
plt.show()
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Choosing a Loss Function to Optimize
Historically Linear Regression has been solved using the method of Least Squares where we are interested in minimizing the mean squared error loss function of the form:
\begin{eqnarray}
Loss(\beta) = MSE &=& \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat y_i)^2 \\
&=& \frac{1}{N} \sum_{i=1}^{N} (y_i - \mathbf{x_i^T}\beta)^2
\end{eqnarray}
where $N$ is the total number of observations. Other loss functions can be used, but using mean squared error (also referred to as the sum of squared residuals in other texts) has very nice properties for closed form solutions. We will use this loss function for both gradient descent and to create a closed form matrix solution.
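As a small numerical sketch (using NumPy directly; the array values are made up for illustration), the MSE loss for a candidate $\beta$ can be computed as:

```python
import numpy as np

# Hypothetical toy data: a column of ones plus one predictor
X_toy = np.array([[1., 0.], [1., 1.], [1., 2.]])
y_toy = np.array([1.1, 1.4, 2.2])
beta_guess = np.array([1.0, 0.5])            # candidate coefficients (beta_0, beta_1)

residuals = y_toy - X_toy.dot(beta_guess)    # y_i - x_i^T beta
mse = np.mean(residuals ** 2)                # (1/N) * sum of squared residuals
print(mse)                                   # 0.02 for these made-up numbers
```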
Before We Present Solutions for Linear Regression: Introducing a Baseball Dataset
We'll use this dataset to investigate Linear Regression. The dataset consists of 337 observations and 18 variables from the set of Major League Baseball players who played at least one game in both the 1991 and 1992
seasons, excluding pitchers. The dataset contains the 1992 salaries for that population, along with performance measures for each player. Four categorical variables indicate how free each player was to move to other teams.
Reference
Pay for Play: Are Baseball Salaries Based on Performance?
http://www.amstat.org/publications/jse/v6n2/datasets.watnik.html
Filename
'baseball.dat.txt'.
Variables
Salary: Thousands of dollars
AVG: Batting average
OBP: On-base percentage
Runs: Number of runs
Hits: Number of hits
Doubles: Number of doubles
Triples: Number of triples
HR: Number of home runs
RBI: Number of runs batted in
Walks: Number of walks
SO: Number of strike-outs
SB: Number of stolen bases
Errs: Number of errors
free agency eligibility: Indicator of "free agency eligibility"
free agent in 1991/2: Indicator of "free agent in 1991/2"
arbitration eligibility: Indicator of "arbitration eligibility"
arbitration in 1991/2: Indicator of "arbitration in 1991/2"
Name: Player's name (in quotation marks)
What we will try to predict
We will attempt to predict the player's salary based upon some predictor variables such as Hits, OBP, Walks, RBIs, etc.
Load The Data
Loading data from csv files in python can be done in a few different ways. The numpy package has a function called 'genfromtxt' that can read csv files, while the pandas library has the 'read_csv' function. Remember that we have imported numpy and pandas as np and pd respectively at the top of this notebook. An example using pandas is as follows:
pd.read_csv(filename, **args)
http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html
<span style="color:red">STUDENT ACTIVITY (2 MINS)</span>
Student Action - Load the 'baseball.dat.txt' file into a variable called 'baseball'. Then use baseball.head() to view the first few entries
|
#######################################################################
# Student Action - Load the file 'baseball.dat.txt' using pd.read_csv()
#######################################################################
baseball = pd.read_csv('data/baseball.dat.txt')
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Crash Course: Plotting with Matplotlib
At the top of this notebook we have imported the package pyplot as plt from the matplotlib library. matplotlib is a great package for creating simple plots in Python. Below is a link to their tutorial for basic plotting.
Tutorials
http://matplotlib.org/users/pyplot_tutorial.html
https://scipy-lectures.github.io/intro/matplotlib/matplotlib.html
Simple Plotting
Step 0: Import the package pyplot from matplotlib for plotting
import matplotlib.pyplot as plt
Step 1: Create a variable to store a new figure object
fig = plt.figure()
Step 2: Create the plot of your choice
Common Plots
plt.plot(x,y) - A line plot
plt.scatter(x,y) - Scatter Plots
plt.hist(x) - Histogram of a variable
Example Plots: http://matplotlib.org/gallery.html
Step 3: Create labels for your plot for better interpretability
X Label
plt.xlabel('String')
Y Label
plt.ylabel('String')
Title
plt.title('String')
Step 4: Change the figure size for better viewing within the iPython Notebook
fig.set_size_inches(width, height)
Step 5: Show the plot
plt.show()
The above command allows the plot to be shown below the cell that you're currently in. This is made possible by the magic command %matplotlib inline.
NOTE: This may not always be the best way to create plots, but it is a quick template to get you started.
Transforming Variables
We'll talk more about numpy later, but to perform the logarithmic transformation use the command
np.log($array$)
|
#############################################################
# Demonstration - Plot a Histogram of Hits
#############################################################
f = plt.figure()
plt.hist(baseball['Hits'], bins=15)
plt.xlabel('Number of Hits')
plt.ylabel('Frequency')
plt.title('Histogram of Number of Hits')
f.set_size_inches(10, 5)
plt.show()
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
<span style="color:red">STUDENT ACTIVITY (7 MINS)</span>
Data Exploration - Investigating Variables
Work in pairs to import the package matplotlib.pyplot and create the following two plots.
A histogram of the $log(Salary)$
hint: np.log()
a scatterplot of $log(Salary)$ vs $Hits$.
|
#############################################################
# Student Action - import matplotlib.pyplot
# - Plot a Histogram of log(Salaries)
#############################################################
f = plt.figure()
plt.hist(np.log(baseball['Salary']), bins = 15)
plt.xlabel('log(Salaries)')
plt.ylabel('Frequency')
plt.title('Histogram of log Salaries')
f.set_size_inches(10, 5)
plt.show()
#############################################################
# Student Action - Plot a Scatter Plot of Salaries vs. Hits
#############################################################
f = plt.figure()
plt.scatter(baseball['Hits'], np.log(baseball['Salary']))
plt.xlabel('Hits')
plt.ylabel('log(Salaries)')
plt.title('Scatter Plot of Salaries vs. Hits')
f.set_size_inches(10, 5)
plt.show()
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Gradient Descent for Linear Regression
In Linear Regression we are interested in optimizing our loss function $Loss(\beta)$ to find the optimal $\beta$ such that
\begin{eqnarray}
\hat \beta &=& \arg \min_{\beta} \frac{1}{N} \sum_{i=1}^{N} (y_i - \mathbf{x_i^T}\beta)^2 \\
&=& \arg \min_{\beta} \frac{1}{N} \mathbf{(Y - X\beta)^T (Y - X\beta)}
\end{eqnarray}
One optimization technique called 'Gradient Descent' is useful for finding an optimal solution to this problem. Gradient descent is a first order optimization technique that attempts to find a local minimum of a function by updating its position by taking steps proportional to the negative gradient of the function at its current point. The gradient at the point indicates the direction of steepest ascent and is the best guess for which direction the algorithm should go.
If we consider $\theta$ to be some parameters we are interested in optimizing, $L(\theta)$ to be our loss function, and $\alpha$ to be our step size proportionality, then we have the following algorithm:
Algorithm - Gradient Descent
Initialize $\theta$
Until $\alpha || \nabla L(\theta) || < tol $:
$\theta^{(t+1)} = \theta^{(t)} - \alpha \nabla_{\theta} L(\theta^{(t)})$
For our problem at hand, we therefore need to find $\nabla L(\beta)$. The derivative of $L(\beta)$ with respect to the $j^{th}$ feature is:
\begin{eqnarray}
\frac{\partial L(\beta)}{\partial \beta_j} = -\frac{2}{N}\sum_{i=1}^{N} (y_i - \mathbf{x_i^T}\beta)\cdot{x_{i,j}}
\end{eqnarray}
In matrix notation this can be written:
\begin{eqnarray}
Loss(\beta) &=& \frac{1}{N}\mathbf{(Y - X\beta)^T (Y - X\beta)} \\
&=& \frac{1}{N}\mathbf{(Y^TY} - 2 \mathbf{\beta^T X^T Y + \beta^T X^T X\beta)} \\
\nabla_{\beta} L(\beta) &=& \frac{1}{N} (-2 \mathbf{X^T Y} + 2 \mathbf{X^T X \beta)} \\
&=& -\frac{2}{N} \mathbf{X^T (Y - X \beta)}
\end{eqnarray}
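Putting the algorithm and the matrix-form gradient together, a bare-bones sketch of the update loop (the data and variable names here are illustrative only, not the ones used in the exercise below) looks like:

```python
import numpy as np

def mse_gradient(X, y, beta):
    """Matrix-form gradient of the MSE loss: -(2/N) X^T (y - X beta)."""
    return -2.0 / len(X) * X.T.dot(y - X.dot(beta))

# Illustrative data and settings (made up for this sketch)
X_demo = np.c_[np.ones(5), np.arange(5.)]
y_demo = np.array([1.0, 1.6, 1.9, 2.6, 3.1])
beta_demo = np.zeros(2)                  # initial parameter vector
alpha, tol = 0.01, 1e-8                  # step size and stopping tolerance

# Step against the gradient until the scaled gradient norm drops below tol
while alpha * np.linalg.norm(mse_gradient(X_demo, y_demo, beta_demo)) > tol:
    beta_demo = beta_demo - alpha * mse_gradient(X_demo, y_demo, beta_demo)
print(beta_demo)
```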
<span style="color:red">STUDENT ACTIVITY (7 MINS)</span>
Create a function that returns the gradient of $L(\beta)$
|
###################################################################
# Student Action - Programming the Gradient
###################################################################
def gradient(X, y, betas):
    #****************************
    # Your code here!
    return -2.0/len(X)*np.dot(X.T, y - np.dot(X, betas))
    #****************************

#########################################################
# Testing your gradient function
#########################################################
np.random.seed(33)
X = pd.DataFrame({'ones':1,
                  'X1':np.random.uniform(0,1,50)})
y = np.random.normal(0,1,50)
betas = np.array([-1,4])
grad_expected = np.array([ 2.98018138, 7.09758971])
grad = gradient(X,y,betas)
try:
    np.testing.assert_almost_equal(grad, grad_expected)
    print "Test Passed!"
except AssertionError:
    print "*******************************************"
    print "ERROR: Something isn't right... Try Again!"
    print "*******************************************"
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
<span style="color:red">STUDENT ACTIVITY (15 MINS)</span>
Student Action - Use your Gradient Function to complete the Gradient Descent for the Baseball Dataset
Code Gradient Descent Here
We have set up all the necessary matrices and starting values. In the designated section below, code the algorithm from the previous section above.
|
# Setting up our matrices
Y = np.log(baseball['Salary'])
N = len(Y)
X = pd.DataFrame({'ones' : np.ones(N),
'Hits' : baseball['Hits']})
p = len(X.columns)
# Initializing the beta vector
betas = np.array([0.015,5.13])
# Initializing Alpha
alph = 0.00001
# Setting a tolerance
tol = 1e-8
###################################################################
# Student Action - Programming the Gradient Descent Algorithm Below
###################################################################
niter = 1.
while (alph*np.linalg.norm(gradient(X,Y,betas)) > tol) and (niter < 20000):
    #****************************
    # Your code here!
    betas -= alph*gradient(X, Y, betas)
    niter += 1
    #****************************
print niter, betas

try:
    beta_expected = np.array([ 0.01513772, 5.13000121])
    np.testing.assert_almost_equal(betas, beta_expected)
    print "Test Passed!"
except AssertionError:
    print "*******************************************"
    print "ERROR: Something isn't right... Try Again!"
    print "*******************************************"
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Comments on Gradient Descent
Advantage: Very General Algorithm $\rightarrow$ Gradient Descent and its variants are used throughout Machine Learning and Statistics
Disadvantage: Highly Sensitive to Initial Starting Conditions
Not guaranteed to find the global optimum
Disadvantage: How do you choose step size $\alpha$?
Too small $\rightarrow$ May never find the minima
Too large $\rightarrow$ May step past the minima
Can we fix it?
Adaptive step sizes
Newton's Method for Optimization
http://en.wikipedia.org/wiki/Newton%27s_method_in_optimization
Each correction obviously comes with its own computational considerations.
See the Supplementary Material for any help necessary with scripting this in Python.
Visualizing Gradient Descent to Understand its Limitations
Let's try to find the value of $X$ that maximizes the following function:
\begin{equation}
f(x) = w \times \frac{1}{\sqrt{2\pi \sigma_1^2}} \exp \left( - \frac{(x-\mu_1)^2}{2\sigma_1^2}\right) + (1-w) \times \frac{1}{\sqrt{2\pi \sigma_2^2}} \exp \left( - \frac{(x-\mu_2)^2}{2\sigma_2^2}\right)
\end{equation}
where $w=0.3$, $\mu_1 = 3, \sigma_1^2=1$ and $\mu_2 = -1, \sigma_2^2=0.5$
Let's visualize this function
|
x1 = np.arange(-10, 15, 0.05)
mu1 = 6.5
var1 = 3
mu2 = -1
var2 = 10
weight = 0.3
def mixed_normal_distribution(x, mu1, var1, mu2, var2):
    pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) / np.sqrt(2 * np.pi * var1)
    pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) / np.sqrt(2 * np.pi * var2)
    return weight * pdf1 + (1-weight)*pdf2
pdf = mixed_normal_distribution(x1, mu1, var1, mu2, var2)
fig = plt.figure()
plt.plot(x1, pdf)
fig.set_size_inches([10,5])
plt.show()
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Now let's visualize what happens for different starting conditions and different step sizes
|
def mixed_gradient(x, mu1, var1, mu2, var2):
    grad_pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) * ((x-mu1)/var1) / np.sqrt(2 * np.pi * var1)
    grad_pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) * ((x-mu2)/var2) / np.sqrt(2 * np.pi * var2)
    return weight * grad_pdf1 + (1-weight)*grad_pdf2

# Initialize X
x = 3.25
# Initializing Alpha
alph = 5
# Setting a tolerance
tol = 1e-8
niter = 1.
results = []
while (alph*np.linalg.norm(mixed_gradient(x, mu1, var1, mu2, var2)) > tol) and (niter < 500000):
    #****************************
    results.append(x)
    x = x - alph * mixed_gradient(x, mu1, var1, mu2, var2)
    niter += 1
    #****************************
print x, niter

if niter < 500000:
    exes = mixed_normal_distribution(np.array(results), mu1, var1, mu2, var2)
    fig = plt.figure()
    plt.plot(x1, pdf)
    plt.plot(results, exes, color='red', marker='x')
    plt.ylim([0,0.1])
    fig.set_size_inches([20,10])
    plt.show()
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Linear Regression Matrix Solution
From the last section, you may have recognized that we could actually solve for $\beta$ directly.
\begin{eqnarray}
Loss(\beta) &=& \frac{1}{N}\mathbf{(Y - X\beta)^T (Y - X\beta)} \\
\nabla_{\beta} L(\beta) &=& \frac{1}{N} (-2 \mathbf{X^T Y} + 2 \mathbf{X^T X \beta})
\end{eqnarray}
Setting to zero
\begin{eqnarray}
-2 \mathbf{X^T Y} + 2 \mathbf{X^T X} \beta &=& 0 \\
\mathbf{X^T X \beta} &=& \mathbf{X^T Y}
\end{eqnarray}
If we assume that the columns $X$ are linearly independent then
\begin{eqnarray}
\hat \beta &=& \mathbf{(X^T X)^{-1}X^T Y}
\end{eqnarray}
This is called the Ordinary Least Squares (OLS) Estimator
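In practice, rather than forming the inverse explicitly, the same estimator is usually computed with a linear solver, which is numerically more stable; a minimal sketch (with made-up data) under that design choice:

```python
import numpy as np

# Illustrative design matrix and response (made up for this sketch)
X_toy = np.c_[np.ones(4), np.array([0., 1., 2., 3.])]
y_toy = np.array([1.0, 1.4, 2.1, 2.5])

# Solve the normal equations (X^T X) beta = X^T Y instead of forming an explicit inverse
beta_hat = np.linalg.solve(X_toy.T.dot(X_toy), X_toy.T.dot(y_toy))
print(beta_hat)
```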
<span style="color:red">STUDENT ACTIVITY (10 MINS)</span>
_Student Action - Solve for $\hat \beta$ directly using OLS on the Baseball Dataset - 10 mins_
Review the Supplementary Materials for help with Linear Algebra
|
# Setting up our matrices
y = np.log(baseball['Salary'])
N = len(Y)
X = pd.DataFrame({'ones' : np.ones(N),
'Hits' : baseball['Hits']})
#############################################################
# Student Action - Program a closed form solution for
# Linear Regression. Compare with Gradient
# Descent.
#############################################################
def solve_linear_regression(X, y):
    #****************************
    return np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
    #****************************

betas = solve_linear_regression(X,y)

try:
    beta_expected = np.array([ 0.01513353, 5.13051682])
    np.testing.assert_almost_equal(betas, beta_expected)
    print "Betas: ", betas
    print "Test Passed!"
except AssertionError:
    print "*******************************************"
    print "ERROR: Something isn't right... Try Again!"
    print "*******************************************"
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Comments on solving the loss function directly
Advantage: Simple solution to code
Disadvantage: The Design Matrix must be Full Rank to invert
Can be corrected with a Generalized Inverse Solution
Disadvantage: Inverting a Matrix can be a computational expensive operation
If we have a design matrix with $N$ observations and $D$ predictors, then $X$ is $(N\times D)$, and it follows that
\begin{eqnarray}
\mathbf{X^TX} \mbox{ is of size } (D \times N) \times (N \times D) = (D \times D)
\end{eqnarray}
If a matrix is of size $(D\times D)$, the computational cost of inverting it is $O(D^3)$.
Thus inverting a matrix is directly related to the number of predictors that are included in the analysis.
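As a hedged sketch of the generalized-inverse correction mentioned above (the data below is made up), NumPy's Moore-Penrose pseudo-inverse can stand in for the plain inverse when the design matrix is not full rank:

```python
import numpy as np

# Rank-deficient design matrix: the third column duplicates the second (made-up example)
X_rd = np.array([[1., 2., 2.],
                 [1., 3., 3.],
                 [1., 4., 4.],
                 [1., 5., 5.]])
y_rd = np.array([1., 2., 2., 3.])

# np.linalg.inv(X_rd.T.dot(X_rd)) would hit a singular matrix here; the
# pseudo-inverse still yields a (minimum-norm) least squares solution
beta_hat = np.linalg.pinv(X_rd.T.dot(X_rd)).dot(X_rd.T).dot(y_rd)
print(beta_hat)
```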
Sci-Kit Learn Linear Regression
As we've shown in the previous two exercises, when coding these algorithms ourselves, we must consider many things, such as selecting step sizes and the computational cost of inverting matrices. For many applications though, packages have been created that take many of these parameter selections into consideration. We now turn our attention to the Python package for Machine Learning called 'scikit-learn'.
http://scikit-learn.org/stable/
Included is the documentation for the scikit-learn implementation of Ordinary Least Squares from their linear models package
Generalized Linear Models Documentation:
http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares
LinearRegression Class Documentation:
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression
From this we see that we'll need to import the module linear_model using the following:
from sklearn import linear_model
Let's examine an example using the LinearRegression class from scikit-learn. We'll continue with the simulated data from the beginning of the exercise.
Example using the variables from the Residual Example
Notes
Calling linear_model.LinearRegression() creates an object of class sklearn.linear_model.base.LinearRegression
Defaults
fit_intercept = True: automatically adds a column vector of ones for an intercept
normalize = False: defaults to not normalizing the input predictors
copy_X = False: defaults to not copying X
n_jobs = 1: The number of jobs to use for the computation. If -1 all CPUs are used. This will only provide speedup for n_targets > 1 and sufficient large problems.
Example
lmr = linear_model.LinearRegression()
To fit a model, the method .fit(X,y) can be used
X must be a column vector for scikit-learn
This can be accomplished by creating a DataFrame using pd.DataFrame()
Example
lmr.fit(X,y)
To predict out of sample values, the method .predict(X) can be used
To see the $\beta$ estimates use .coef_ for the coefficients for the predictors and .intercept_ for $\beta_0$
|
#############################################################
# Demonstration - scikit-learn with Regression Example
#############################################################
from sklearn import linear_model
lmr = linear_model.LinearRegression()
lmr.fit(pd.DataFrame(x_example), pd.DataFrame(y_example))
xTest = pd.DataFrame(np.arange(-1,6))
yHat = lmr.predict(xTest)
f = plt.figure()
plt.scatter(x_example, y_example)
p1, = plt.plot(np.arange(-1,6), line1)
p2, = plt.plot(xTest, yHat)
plt.legend([p1, p2], ['y = 1 + 0.5x', 'OLS Estimate'], loc=2)
f.set_size_inches(10,5)
plt.show()
print lmr.coef_, lmr.intercept_
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
<span style="color:red">STUDENT ACTIVITY (15 MINS)</span>
Final Student Task
Programming Linear Regression using the scikit-learn method. For the ambitious students, plot all results on one plot.
|
#######################################################################
# Student Action - Use scikit-learn to calculate the beta coefficients
#
# Note: You no longer need the intercept column in your X matrix for
# scikit-learn. It will add that column automatically.
#######################################################################
lmr2 = linear_model.LinearRegression(fit_intercept=True)
lmr2.fit(pd.DataFrame(baseball['Hits']), np.log(baseball['Salary']))
xtest = np.arange(0,200)
ytest = lmr2.intercept_ + lmr2.coef_*xtest
f = plt.figure()
plt.scatter(baseball['Hits'], np.log(baseball['Salary']))
plt.plot(xtest, ytest, color='r', linewidth=3)
f.set_size_inches(10,5)
plt.show()
print lmr2.coef_, lmr2.intercept_
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Linear Regression in the Real World
In the real world, Linear Regression for predictive modeling doesn't end once you've fit the model. Models are often fit and used to predict user behavior, used to quantify business metrics, or sometimes used to identify cats' faces for internet points. In that pursuit, it isn't really interesting to fit a model and assess its performance on data that has already been observed. The real interest lies in how it predicts future observations!
Often we may be susceptible to creating a model that is perfected for our observed data but does not generalize well to new data. In order to assess how we perform on new data, we can score the model on both the old and new data, and compare the model's performance with the hope that it generalizes well to the new data. After lunch we'll introduce some techniques and other methods to better our chances of performing well on new data.
Before we break for lunch though, let's take a look at a simulated dataset to see what we mean...
Situation
Imagine that last year a talent management company managed 400 celebrities and tracked how popular they were within the public eye, as well as various predictors for that metric. The company is now interested in managing a few new celebrities, but wants to sign those stars that are above a certain 'popularity' threshold to maintain their image.
Our job is to predict how popular each new celebrity will be over the course of the coming year so that we can make the best decision about whom to manage. For this analysis we'll use a function l2_error to compare our errors on a training set, and on a test set of celebrity data.
The variable celeb_data_old represents things we know about the previous batch of celebrities. Each row represents one celeb. Each column represents some tangible measure about them -- their age at the time, number of Twitter followers, voice squeakiness, etc. The specifics of what each column represents aren't important.
Similarly, popularity_old is a previous measure of the celebrities' popularity.
Finally, celeb_data_new represents the same information that we had from celeb_data_old but for the new batch of internet wonders that we're considering.
How can we predict how popular the NEW batch of celebrities will be ahead of time so that we can decide who to sign? And are these estimates stable from year to year?
|
with np.load('data/mystery_data_old.npz') as data:
    celeb_data_old = data['celeb_data_old']
    popularity_old = data['popularity_old']
    celeb_data_new = data['celeb_data_new']

lmr3 = linear_model.LinearRegression()
lmr3.fit(celeb_data_old, popularity_old)
predicted_popularity_old = lmr3.predict(celeb_data_old)
predicted_popularity_new = lmr3.predict(celeb_data_new)

def l2_error(y_true, y_pred):
    """
    calculate the sum of squared errors (i.e. "L2 error")
    given a vector of true ys and a vector of predicted ys
    """
    diff = (y_true-y_pred)
    return np.sqrt(np.dot(diff, diff))

print "Predicted L2 Error:", l2_error(popularity_old, predicted_popularity_old)
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Checking How We Did
At the end of the year, we tally up the popularity numbers for each celeb and check how well we did on our predictions.
|
with np.load('data/mystery_data_new.npz') as data:
    popularity_new = data['popularity_new']

print "Predicted L2 Error:", l2_error(popularity_new, predicted_popularity_new)
|
Session 1 - Linear_Regression.ipynb
|
dinrker/PredictiveModeling
|
mit
|
Step 2: Create WaveJSON waveform
The pattern to be generated is specified in the WaveJSON format.
The pattern is applied to the Arduino interface; pins D0, D1 and D2 are set to generate a 3-bit count.
To check the generated pattern we loop them back to pins D19, D18 and D17 respectively and use the trace analyzer to view the loopback signals.
The Waveform class is used to display the specified waveform.
|
from pynq.lib.logictools import Waveform
up_counter = {'signal': [
['stimulus',
{'name': 'bit0', 'pin': 'D0', 'wave': 'lh' * 8},
{'name': 'bit1', 'pin': 'D1', 'wave': 'l.h.' * 4},
{'name': 'bit2', 'pin': 'D2', 'wave': 'l...h...' * 2}],
['analysis',
{'name': 'bit2_loopback', 'pin': 'D17'},
{'name': 'bit1_loopback', 'pin': 'D18'},
{'name': 'bit0_loopback', 'pin': 'D19'}]],
'foot': {'tock': 1},
'head': {'text': 'up_counter'}}
waveform = Waveform(up_counter)
waveform.display()
|
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Note: Since there are no captured samples at this moment, the analysis group will be empty.
Step 3: Instantiate the pattern generator and trace analyzer objects
Users can choose whether to use the trace analyzer by calling the trace() method.
The analyzer can be set to trace a specific number of samples using the num_analyzer_samples argument.
|
pattern_generator = logictools_olay.pattern_generator
pattern_generator.trace(num_analyzer_samples=16)
|
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 4: Setup the pattern generator
The pattern generator will work at the default frequency of 10MHz. This can be modified using a frequency argument in the setup() method.
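For example, a minimal sketch of overriding the default frequency might look like the following; the keyword name frequency_mhz is an assumption here (it does not appear in this notebook), so check the PYNQ logictools API for the exact argument before relying on it.
pattern_generator.setup(up_counter,
                        stimulus_group_name='stimulus',
                        analysis_group_name='analysis',
                        frequency_mhz=20)  # assumed keyword; the default is 10 MHz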
|
pattern_generator.setup(up_counter,
stimulus_group_name='stimulus',
analysis_group_name='analysis')
|
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Set the loopback connections using jumper wires on the Arduino Interface
Output pins D0, D1 and D2 are connected to pins D19, D18 and D17 respectively
Loopback/Input pins D19, D18 and D17 are observed using the trace analyzer as shown below
After setup, the pattern generator should be ready to run
Note: Make sure all other pins are disconnected.
Step 5: Run and display waveform
The run() method will execute all the samples; the show_waveform() method is used to display the waveforms.
Alternatively, we can use the step() method to single-step the pattern.
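For instance, a minimal single-stepping sketch (assuming the generator has already been set up as above) would be:
pattern_generator.step()            # advance the pattern by one sample
pattern_generator.show_waveform()   # inspect the waveform captured so far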
|
pattern_generator.run()
pattern_generator.show_waveform()
|
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 6: Stop the pattern generator
Calling stop() will clear the logic values on output pins; however, the waveform will be recorded locally in the pattern generator instance.
|
pattern_generator.stop()
|
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Filtering initIssues
After initializing the snapshot, you often want to look at the <code>initIssues</code> answer. If there are too many issues, you may want to ignore a particular class of issues. We show below how to do that.
|
from typing import List, Optional

# Let's get the initIssues for our snapshot
issues = bf.q.initIssues().answer().frame()
issues
# Ignore all issues whose Line_Text contains one of these as a substring
line_texts_to_ignore = ["transceiver"]
def has_substring(text: Optional[str], substrings: List[str]) -> bool:
    """Returns True if 'text' is not None and contains one of the 'substrings'"""
    return text is not None and any(substr in text for substr in substrings)
issues[
    issues.apply(
        lambda issue: not has_substring(issue["Line_Text"], line_texts_to_ignore),
        axis=1,
    )
]
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
In the code above, we are using the Pandas method <code>apply</code> to map issues to a binary array based on whether the issue has one of the substrings in line_texts_to_ignore. Passing axis=1 makes apply iterate over rows instead of columns. The helper method has_substring makes this determination. It returns True if text is not None and has any of the substrings. The Python method <code>any</code> returns True if any element of the input iterable is True. Using the binary array as a filter for issues produces rows that match our criterion.
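As a toy illustration of this filtering pattern, here is a sketch on made-up data (the DataFrame below is hypothetical and not part of the snapshot):
import pandas as pd
toy = pd.DataFrame({"Line_Text": ["transceiver missing", "bgp neighbor down", None]})
mask = toy.apply(lambda row: not has_substring(row["Line_Text"], ["transceiver"]), axis=1)
toy[mask]  # keeps only the rows whose Line_Text does not mention "transceiver"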
Instead of ignoring some issues, you may want to focus on issues that match certain criteria. That too can be easily accomplished, as follows.
|
# Only show issues whose details match these substrings
focus_details = ["Unrecognized element 'ServiceDetails' in AWS"]
issues[
issues.apply(lambda issue: has_substring(issue["Details"], focus_details), axis=1)
]
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
The code above is similar to the one we used earlier; the only differences are that we pass the focus_details list as the argument to the has_substring helper and we do not invert its result.
Filtering objects
|
# Fetch interface properties and display its first five rows
interfaces = bf.q.interfaceProperties().answer().frame()
interfaces.head(5)
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
To filter based on a column, we need to know its data type. We can learn that in the Batfish documentation or by inspecting the answer we got from Batfish (e.g., using Python's type() method).
We show three examples of filtering based on the Interface and Active columns, which are of type <code>pybatfish.datamodel.primitives.Interface</code> and bool, respectively. The former has hostname and interface properties (which are strings).
|
# Display all interfaces on node 'exitgw'
interfaces[interfaces.apply(lambda row: row["Interface"].hostname == "exitgw", axis=1)]
# Display all GigabitEthernet interfaces on node 'exitgw'
interfaces[
interfaces.apply(
lambda row: row["Interface"].hostname == "exitgw"
and row["Interface"].interface.startswith("GigabitEthernet"),
axis=1,
)
]
# Display all active GigabitEthernet interfaces on node 'exitgw'
interfaces[
interfaces.apply(
lambda row: row["Interface"].hostname == "exitgw"
and row["Interface"].interface.startswith("GigabitEthernet")
and row["Active"],
axis=1,
)
]
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
Filtering columns
When viewing Batfish answers, you may want to view only some of the columns. Pandas makes that easy for both original answers and answers where some rows have been filtered, as both of them are just DataFrames.
|
# Filter interfaces to all active GigabitEthernet interfaces on node exitgw
exitgw_gige_active_interfaces = interfaces[
interfaces.apply(
lambda row: row["Interface"].hostname == "exitgw"
and row["Interface"].interface.startswith("GigabitEthernet")
and row["Active"],
axis=1,
)
]
# Display only the Interface and All_Prefixes columns of the filtered DataFrame
exitgw_gige_active_interfaces[["Interface", "All_Prefixes"]]
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
Counting rows
Often, you will be interested in counting the number of rows in the filtered answer. This is easy because Python's len() function, which we use on other collections, works on DataFrames as well.
|
# Show the number of rows in the filtered DataFrame that we obtained above
len(exitgw_gige_active_interfaces)
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
Grouping rows
For more advanced operations than filtering rows and columns, chances are that you will find the Pandas <code>groupby</code> method pretty handy. This method lets you group rows using a custom criterion and analyze those groups. For instance, if you wanted to group interfaces by node, you could do the following:
|
# Get interfaces grouped by node name
interfaces_by_hostname = interfaces.groupby(
    lambda index: interfaces.loc[index]["Interface"].hostname
)
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
We obtained a Pandas DataFrameGroupBy object above. The groupby method iterates over row indices (whereas apply iterated over rows), calls the lambda on each index, and groups together rows whose indices yield the same value. In our example, the lambda first gets the row using interfaces.loc[index], then gets the interface (which is of type pybatfish.datamodel.primitives.Interface), and finally the hostname.
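A minimal standalone sketch of the same grouping pattern, on a made-up DataFrame (hypothetical data, for illustration only):
import pandas as pd
toy = pd.DataFrame({"host": ["r1", "r1", "r2"], "iface": ["ge-0/0/0", "ge-0/0/1", "ge-0/0/0"]})
by_host = toy.groupby(lambda index: toy.loc[index]["host"])
by_host.count()  # number of rows (here, interfaces) per host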
DataFrameGroupBy objects offer many functions that are useful for analysis. We demonstrate two of them below.
|
# Display the rows corresponding to node 'exitgw' group
interfaces_by_hostname.get_group("exitgw")
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
Here, we used the <code>get_group</code> method to get all information for 'exitgw', thus viewing all interfaces for that node. That is possible with row filtering as well, but grouping also enables operations that row filtering does not, such as:
|
# Display the number of interfaces per node
interfaces_by_hostname.count()[["Interface"]]
|
jupyter_notebooks/Pandas Examples.ipynb
|
batfish/pybatfish
|
apache-2.0
|
Then, we fit the GLM model:
|
mod1 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()
mod1.summary()
|
examples/notebooks/glm_formula.ipynb
|
phobson/statsmodels
|
bsd-3-clause
|
Finally, we define a function to apply a custom data transformation within the formula framework:
|
def double_it(x):
return 2 * x
formula = 'SUCCESS ~ double_it(LOWINC) + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \
PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'
mod2 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()
mod2.summary()
|
examples/notebooks/glm_formula.ipynb
|
phobson/statsmodels
|
bsd-3-clause
|
As expected, the coefficient for double_it(LOWINC) in the second model is half the size of the LOWINC coefficient from the first model:
|
print(mod1.params[1])
print(mod2.params[1] * 2)
|
examples/notebooks/glm_formula.ipynb
|
phobson/statsmodels
|
bsd-3-clause
|
Load and check data
|
exps = ['test_restoration_5']
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
# replace hebbian prune NaNs with 0.0 (left commented out for reference)
# df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
# df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
|
projects/archive/dynamic_sparse/notebooks/ExperimentAnalysis-TestRestoration.ipynb
|
numenta/nupic.research
|
agpl-3.0
|
## Analysis
Experiment Details
|
num_epochs=100
# Did any trials fail?
df[df["epochs"]<num_epochs]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=num_epochs]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<num_epochs
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
|
projects/archive/dynamic_sparse/notebooks/ExperimentAnalysis-TestRestoration.ipynb
|
numenta/nupic.research
|
agpl-3.0
|
Does improved weight pruning outperform regular SET?
|
agg(['on_perc', 'network'])
|
projects/archive/dynamic_sparse/notebooks/ExperimentAnalysis-TestRestoration.ipynb
|
numenta/nupic.research
|
agpl-3.0
|
Tuples as Records
|
lax_coordinates = (33.9425, -118.408056)
city, year, pop, chg, area = ('Tokyo', 2003, 32450, 0.66, 8014)
traveler_ids = [('USA', '31195855'), ('BRA', 'CE342567'), ('ESP', 'XDA205856')]
for passport in sorted(traveler_ids):
print('%s/%s' % passport)
for country, _ in traveler_ids:
print(country)
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Tuple Unpacking
|
import os
_, filename = os.path.split('/home/kyle/afile.txt')
print(filename)
a, b, *rest = range(5)
a, b, rest
a, b, *rest = range(3)
a, b, rest
a, b, *rest = range(2)
a, b, rest
a, *body, c, d = range(5)
a, body, c, d
*head, b, c, d = range(5)
head, b, c, d
metro_areas = [('Tokyo','JP',36.933,(35.689722,139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),
('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),
]
print('{:15} | {:^9} | {:^9}'.format('', 'lat.', 'long.'))
fmt = '{:15} | {:9.4f} | {:9.4f}'
fmt
for name, cc, pop, (latitude, longitude) in metro_areas:
if longitude <= 0:
print(fmt.format(name, latitude, longitude))
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Named tuples
|
from collections import namedtuple
City = namedtuple('City', 'name country population coordinates')
tokyo = City('Tokyo', 'JP', 36.933, (35.689722, 139.691667))
tokyo
tokyo.population
tokyo.name
tokyo.coordinates
tokyo[1]
# a few useful methods on namedtuple
City._fields
LatLong = namedtuple('LatLong', 'lat long')
delhi_data = ('Delhi NCR', 'IN', 21.935, LatLong(28.613889, 77.208889))
delhi = City._make(delhi_data) # instantiate a named tuple from an iterable
delhi._asdict()
for key, value in delhi._asdict().items():
print(key + ':', value)
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Slicing
|
# why slices and range exclude the last item
l = [10,20,30,40,50,60]
l[:2]
l[2:]
# slice objects
s = 'bicycle'
s[::3]
s[::-1]
s[::-2]
invoice = """
0.....6.................................40........52...55........
1909 Pimoroni PiBrella $17.50 3 $52.50
1489 6mm Tactile Switch x20 $4.95 2 $9.90
1510 Panavise Jr. - PV-201 $28.00 1 $28.00
1601 PiTFT Mini Kit 320x240 $34.95 1 $34.95
"""
SKU = slice(0,6)
DESCRIPTION = slice(6, 40)
UNIT_PRICE = slice(40, 52)
QUANTITY = slice(52, 55)
ITEM_TOTAL = slice(55, None)
line_items = invoice.split('\n')[2:]
for item in line_items:
print(item[UNIT_PRICE], item[DESCRIPTION])
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Assigning to Slices
|
l = list(range(10))
l
l[2:5] = [20, 30]
l
del l[5:7]
l
l[3::2] = [11, 22]
l
l[2:5] = 100  # raises TypeError: can only assign an iterable
l
l[2:5] = [100]
l
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Using + and * with Sequences
|
l = [1, 2, 3]
l * 5
5 * 'abcd'
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Building Lists of Lists
|
board = [['_'] *3 for i in range(3)]
board
board[1][2] = 'X'
board
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Augmented Assignment with Sequences
|
l = [1, 2, 3]
id(l)
l *= 2
id(l) # same list
t=(1,2,3)
id(t)
t *= 2
id(t) # new tuple was created
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
A += Assignment Puzzler
|
import dis
dis.dis('s[a] += b')
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
• Putting mutable items in tuples is not a good idea.
• Augmented assignment is not an atomic operation—we just saw it throwing an exception after doing part of its job.
• Inspecting Python bytecode is not too difficult, and is often helpful to see what is going on under the hood.
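A minimal reproduction of that partial-mutation behavior (standard Python semantics, shown here for illustration):
t = (1, 2, [30, 40])
try:
    t[2] += [50, 60]   # raises TypeError: 'tuple' object does not support item assignment
except TypeError as e:
    print(e)
print(t)               # yet the inner list was still extended: (1, 2, [30, 40, 50, 60])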
list.sort and the sorted Built-In Function
sorted() makes a new list and doesn't touch the original.
sort() changes the list in place and returns None.
|
fruits = ['grape', 'raspberry', 'apple', 'banana']
sorted(fruits)
fruits
sorted(fruits, reverse=True)
sorted(fruits, key=len)
sorted(fruits, key=len, reverse=True)
fruits
fruits.sort() # note that sort() returns None
fruits
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Next: use the bisect module to search sorted lists efficiently.
Managing Ordered Sequences with bisect
|
import bisect
breakpoints = [60, 70, 80, 90]
grades='FDCBA'
bisect.bisect(breakpoints, 99)
bisect.bisect(breakpoints, 59)
bisect.bisect(breakpoints, 75)
def grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'):
i = bisect.bisect(breakpoints, score)
return grades[i]
[grade(score) for score in [33, 99, 77, 70, 89, 90, 100]]
grade(4)
grade(93)
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Inserting with bisect.insort
|
import bisect
import random
SIZE = 7
random.seed(1729)
my_list = []
for i in range(SIZE):
new_item = random.randrange(SIZE*2)
bisect.insort(my_list, new_item)
print('%2d ->' % new_item, my_list)
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Arrays
|
from array import array
from random import random
floats = array('d', (random() for i in range(10**7)))
floats[-1]
fp = open('floats.bin', 'wb')
floats.tofile(fp)
fp.close()
floats2 = array('d')
fp = open('floats.bin', 'rb')
floats2.fromfile(fp, 10**7)
fp.close()
floats2[-1]
floats2 == floats
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
To sort an array, use a = array.array(a.typecode, sorted(a)). To keep it sorted while adding to it, use bisect.insort.
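For example, a short sketch of both operations (array contents are arbitrary):
import array, bisect
a = array.array('d', [3.0, 1.0, 2.0])
a = array.array(a.typecode, sorted(a))   # rebuild the array in sorted order
bisect.insort(a, 1.5)                    # insert while keeping it sorted
a                                        # array('d', [1.0, 1.5, 2.0, 3.0])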
Memory Views
The built-in memoryview class is a shared-memory sequence type that lets you handle slices of arrays without copying bytes.
|
# Changing the value of an array item by poking one of its bytes
import array
numbers = array.array('h', [-2, -1, 0, 1, 2])
memv = memoryview(numbers)
len(memv)
memv[0]
memv_oct = memv.cast('B')  # cast the element type to unsigned char
memv_oct.tolist()
memv_oct[5] = 4
numbers
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
NumPy and SciPy
|
import numpy
a = numpy.arange(12)
a
type(a)
a.shape
a.shape = 3, 4  # reshape a into 3 rows of 4 columns
a
a[2]
a[2, 1]
a[:, 1]
a.transpose()
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
Loading, saving, and operating:
Use numpy.loadtxt()
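A minimal sketch of the save/load round trip (the file name here is arbitrary):
import numpy as np
a = np.arange(12).reshape(3, 4)
np.savetxt('a.txt', a)         # write the array as plain text
b = np.loadtxt('a.txt')        # read it back (as floats)
np.array_equal(a, b)           # True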
Deques and Other Queues
Inserting and removing from the left of a list (the 0-index end) is costly. collections.deque is a thread-safe double-ended queue designed for fast inserting and removing from both ends.
|
from collections import deque
dq = deque(range(10), maxlen=10)
dq
dq.rotate(3)
dq
dq.rotate(-4)
dq
dq.appendleft(-1)
dq
dq.extend([11, 22, 33])
dq
dq.extendleft([10, 20, 30, 40])
dq
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
A hidden cost: removing items from the middle of a deque is not as fast.
On using a single type in a list: "we put items in a list to process them later, which implies that all items should support at least some operation in common".
|
# but a workaround with `key`
l = [28, 14, '28', 5, '9', '1', 0, 6, '23', 19]
sorted(l, key=int)
sorted(l, key=str)
|
fluent_python/Chapter 2, An Array of Sequences.ipynb
|
kylepjohnson/notebooks
|
mit
|
A brief note about pseudo-random numbers
When carrying out simulations, it is typical to use random number generators. Most computers cannot generate true random numbers; instead we use algorithms that approximate the generation of random numbers (pseudo-random number generators). One important difference between a true random number generator and a pseudo-random number generator is that a series of pseudo-random numbers can be regenerated if you know the "seed" value that initialized the algorithm. We can explicitly set this seed value, so that two different people evaluating this notebook get the same results, even though we're using (pseudo)random numbers in our simulation.
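For example, a quick illustration that the same seed reproduces the same draws (the seed value is chosen arbitrarily):
import numpy as np
np.random.seed(42)
first = np.random.normal(size=3)
np.random.seed(42)
second = np.random.normal(size=3)
np.allclose(first, second)   # True -- same seed, same pseudo-random sequence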
|
import numpy as np
import matplotlib.pyplot as plt

# set the seed for the pseudo-random number generator
# the seed is any 32 bit integer
# different seeds will generate different results for the
# simulations that follow
np.random.seed(20160208)
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
Generating a population to sample from
We'll start by simulating our "population of interest" -- i.e. the population we want to make inferences about. We'll assume that our variable of interest (e.g. circulating stress hormone levels) is normally distributed with a mean of 10 nM and a standard deviation of 1 nM.
|
popn = np.random.normal(loc=10, scale=1, size=6500)
plt.hist(popn,bins=50)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
print("Mean glucorticoid concentration:", np.mean(popn))
print("Standard deviation of glucocorticoid concentration:", np.std(popn))
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
Take a random sample of the population of interest
We'll use the np.random.choice function to take a sample from our population of interest.
|
sample1 = np.random.choice(popn, size=25)
plt.hist(sample1)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
np.mean(sample1), np.std(sample1,ddof=1)
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
Take a second random sample of size 25
|
sample2 = np.random.choice(popn, size=25)
np.mean(sample2), np.std(sample2,ddof=1)
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
Compare the first and second samples
|
plt.hist(sample1)
plt.hist(sample2,alpha=0.5)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
## Generate a large number of samples of size 25
Every time we take a random sample from our population of interest we'll get a different estimate of the mean and standard deviation (or whatever other statistics we're interested in). To explore how well random samples of size 25 perform, generally, in terms of estimating the mean and standard deviation of the population of interest we need a large number of such samples.
It's tedious to take one sample at a time, so we'll generate 100 samples of size 25, and calculate the mean and standard deviation for each of those samples (storing the means and standard deviations in lists).
|
means25 = []
std25 = []
for i in range(100):
s = np.random.choice(popn, size=25)
means25.append(np.mean(s))
std25.append(np.std(s,ddof=1))
plt.hist(means25,bins=15)
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Frequency")
plt.title("Distribution of estimates of the\n mean glucocorticoid concentration\n for 100 samples of size 25")
plt.vlines(np.mean(popn), 0, 18, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
Relative Frequency Histogram
A relative frequency histogram is like a frequency histogram, except the bin heights are given in fractions of the total sample size (relative frequency) rather than absolute frequency. This is equivalent to adding the constraint that the total height of all the bars in the histogram will add to 1.0.
|
# Relative Frequency Histogram
plt.hist(means25, bins=15, weights=np.ones_like(means25) * (1.0/len(means25)))
plt.xlabel("mean glucocorticoid concentration")
plt.ylabel("Relative Frequency")
plt.vlines(np.mean(popn), 0, 0.20, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
Density histogram
If instead of constraining the total height of the bars, we constrain the total area of the bars to sum to one, we call this a density histogram. When comparing histograms based on different numbers of samples, with different bin width, etc. you should usually use the density histogram.
Passing the argument normed=True to the pyplot.hist function makes it calculate a density histogram instead of the default frequency histogram. (In newer versions of Matplotlib, this argument has been replaced by density=True.)
|
plt.hist(means25,bins=15,normed=True)
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Density")
plt.vlines(np.mean(popn), 0, 2.5, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
How does the spread of our estimates of the mean change as sample size increases?
What happens as we increase the size of our samples? Let's draw 100 random samples of size 50, 100, and 200 observations to compare.
|
means50 = []
std50 = []
for i in range(100):
s = np.random.choice(popn, size=50)
means50.append(np.mean(s))
std50.append(np.std(s,ddof=1))
means100 = []
std100 = []
for i in range(100):
s = np.random.choice(popn, size=100)
means100.append(np.mean(s))
std100.append(np.std(s,ddof=1))
means200 = []
std200 = []
for i in range(100):
s = np.random.choice(popn, size=200)
means200.append(np.mean(s))
std200.append(np.std(s,ddof=1))
# the label arguments get used when we create a legend
plt.hist(means25, normed=True, alpha=0.75, histtype="stepfilled", label="n=25")
plt.hist(means50, normed=True, alpha=0.75, histtype="stepfilled", label="n=50")
plt.hist(means100, normed=True, alpha=0.75, histtype="stepfilled", label="n=100")
plt.hist(means200, normed=True, alpha=0.75, histtype="stepfilled", label="n=200")
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Density")
plt.vlines(np.mean(popn), 0, 7, linestyle='dashed', color='black',label="True Mean")
plt.legend()
pass
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
Standard Error of the Mean
We see from the graph above that our estimates of the mean cluster more tightly about the true mean as our sample size increases. Let's quantify that by calculating the standard deviation of our mean estimates as a function of sample size.
The standard deviation of the sampling distribution of a statistic of interest is called the "Standard Error" of that statistic. Here, through simulation, we are estimating the "Standard Error of the Mean".
|
sm25 = np.std(means25,ddof=1)
sm50 = np.std(means50,ddof=1)
sm100 = np.std(means100,ddof=1)
sm200 = np.std(means200, ddof=1)
x = [25,50,100,200]
y = [sm25,sm50,sm100,sm200]
plt.scatter(x,y)
plt.xlabel("Sample size")
plt.ylabel("Std Dev of Mean Estimates")
pass
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|
You can show mathematically, for normally distributed data, that the expected Standard Error of the Mean as a function of sample size is:
$$
\mbox{Standard Error of Mean} = \frac{\sigma}{\sqrt{n}}
$$
where $\sigma$ is the population standard deviation, and $n$ is the sample size.
Let's compare that theoretical expectation to our simulated estimates.
|
x = [25,50,100,200]
y = [sm25,sm50,sm100,sm200]
theory = [np.std(popn)/np.sqrt(i) for i in range(10,250)]
plt.scatter(x,y, label="Simulation estimates")
plt.plot(range(10,250), theory, color='red', label="Theoretical expectation")
plt.xlabel("Sample size")
plt.ylabel("Std Error of Mean")
plt.legend()
plt.xlim(0,300)
pass
|
Introduction-to-Simulation.ipynb
|
Bio204-class/bio204-notebooks
|
cc0-1.0
|