markdown | code | path | repo_name | license
---|---|---|---|---
Note - I was unable to get the galaxies to cluster using DBSCAN.
Problem 3) Supervised Machine Learning
Supervised machine learning, on the other hand, aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The "supervised" comes from the fact that we are specifying the allowed outputs from the model. As there are labels available for the training set, it is possible to estimate the accuracy of the model (though there are generally important caveats about generalization, which we will explore in further detail later).
We will begin with a simple, but nevertheless, elegant algorithm for classification and regression: $k$-nearest-neighbors ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, distances between sources are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model). We will experiment with the steps necessary to optimize $k$, and other tuning parameters, in the detailed break-out problem.
In scikit-learn, the KNeighborsClassifier algorithm is implemented as part of the sklearn.neighbors module.
Problem 3a
Fit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications? Is there any reason to be suspect of this procedure?
Hint - after you have constructed the model, it is possible to obtain model predictions using the .predict() method, which requires a feature array, including the same features and order as the training set, as input.
Hint that isn't essential, but is worth thinking about - should the features be re-scaled in any way?
|
from sklearn.neighbors import KNeighborsClassifier
KNNclf = KNeighborsClassifier( # complete
preds = KNNclf.predict( # complete
plt.figure()
plt.scatter( # complete
# complete
# complete
# complete
|
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
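One possible way to fill in the cell above (a sketch only; it reloads the iris data so that it runs on its own, though the notebook presumably already has the feature and label arrays in memory):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X, y = iris.data, iris.target

for n_neighbors in (3, 10):
    KNNclf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
    preds = KNNclf.predict(X)

    # predicted classes in the sepal length-sepal width plane
    plt.figure()
    plt.scatter(X[:, 0], X[:, 1], c=preds, edgecolor='k')
    plt.xlabel('sepal length (cm)')
    plt.ylabel('sepal width (cm)')
    plt.title('kNN predictions, k = {}'.format(n_neighbors))
plt.show()
```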
These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error: how accurate are the model predictions on new data?
Without going into too much detail, we will test this using cross validation (CV). In brief, CV provides predictions on the training set using a subset of the data to generate a model that predicts the class of the remaining sources. Using cross_val_predict, we can get a better sense of the model accuracy. Predictions from cross_val_predict are produced in the following manner:
from sklearn.model_selection import cross_val_predict
CVpreds = cross_val_predict(sklearn.model(), X, y)
where sklearn.model() is the desired model, X is the feature array, and y is the label array.
Problem 3b
Produce cross-validation predictions for the iris dataset and a $k$NN with 5 neighbors. Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data. How does this accuracy compare to a $k$NN with 50 neighbors?
|
from sklearn.model_selection import cross_val_predict
CVpreds = cross_val_predict( # complete
plt.scatter( # complete
print("The accuracy of the kNN = 5 model is ~{:.4}".format( # complete
# complete
# complete
# complete
|
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur.
Problem 3c
Calculate the accuracy for each class in the iris set, as determined via CV for the $k$NN = 50 model.
|
# complete
# complete
# complete
|
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
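A sketch of one way to do this (it recomputes the $k$ = 50 cross-validation predictions so the cell stands alone):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X, y = iris.data, iris.target
CVpreds50 = cross_val_predict(KNeighborsClassifier(n_neighbors=50), X, y)

# accuracy within each true class
for class_idx, name in enumerate(iris.target_names):
    in_class = (y == class_idx)
    acc = np.mean(CVpreds50[in_class] == y[in_class])
    print("The accuracy for class {} is ~{:.4f}".format(name, acc))
```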
We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is that some virginica flowers lie far outside the main virginica locus, and within predominantly versicolor "neighborhoods". In addition to knowing the accuracy for the individual classes, it is also useful to know the class predictions for the misclassified sources, or in other words, where there is "confusion" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal.
Like almost everything else we have encountered during this exercise, scikit-learn makes it easy to compute a confusion matrix. This can be accomplished with the following:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
Problem 3d
Calculate the confusion matrix for the iris training set and the $k$NN = 50 model.
|
from sklearn.metrics import confusion_matrix
cm = confusion_matrix( # complete
|
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
From this representation, we see right away that most of the virginica that are being misclassified are being scattered into the versicolor class. However, this representation could still be improved in two ways: it would help to normalize each value relative to the total number of sources in each class, and, better still, a visual representation of the confusion matrix would make the results readily digestible. Let's start by normalizing the confusion matrix.
Problem 3e
Calculate the normalized confusion matrix. Be careful, you have to sum along one axis, and then divide along the other.
Anti-hint: This operation is actually straightforward using some array manipulation that we have not covered up to this point. Thus, we have performed the necessary operations for you below. If you have extra time, you should try to develop an alternate way to arrive at the same normalization.
|
normalized_cm = cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]
normalized_cm
|
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can proceed with a visual representation of the confusion matrix. This is best done using imshow() within pyplot. You will also want to add a colorbar, and labeling the axes will also be helpful.
Problem 3f
Plot the confusion matrix. Be sure to label each of the axes.
Hint - you might find the sklearn confusion matrix tutorial helpful for making a nice plot.
|
# complete
# complete
# complete
|
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
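A sketch of one way to complete the plot (it recomputes the normalized confusion matrix so the cell stands alone; the styling only loosely follows the scikit-learn confusion matrix example mentioned in the hint):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X, y = iris.data, iris.target
cm = confusion_matrix(y, cross_val_predict(KNeighborsClassifier(n_neighbors=50), X, y))
normalized_cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

plt.figure()
plt.imshow(normalized_cm, interpolation='nearest', cmap='Blues')
plt.colorbar()
tick_marks = range(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.xlabel('Predicted class')
plt.ylabel('True class')
plt.tight_layout()
plt.show()
```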
The Source Code of Life: Using Python to Explore Our DNA
Researchers just found the gene responsible for mistakenly thinking we've found the gene for specific things. It's the region between the start and the end of every chromosome, plus a few segments in our mitochondria.
Every good presentation starts with xkcd, right?
Bioinformatics
HINT: If you're viewing this notebook as slides, press the "s" key to see a bunch of extra notes.
Image: http://www.sciencemag.org/sites/default/files/styles/article_main_large/public/images/13%20June%202014.jpg?itok=DPBy5nLZ
Today we're going to talk about part of the wonderful field of study known as bioinformatics. What is bioinformatics? According to Wikipedia, it's "an interdisciplinary field that develops methods and software tools for understanding biological data." Here's another definition that better fits my experience: "The mathematical, statistical and computing methods that aim to solve biological problems using DNA and amino acid sequences and related information." (Fredj Tekaia, Institut Pasteur)
What is DNA?
A long string of A, C, G, and T (bases)
ACGTTCATGG <- Ten bases
ACGTTCATGGATGTGACCAG <- Twenty bases
etc...
So what is it about biology that needs such computationally intensive stuff? I thought biology was a "squishy" science? Well guess what, nature has been playing with big data long before it became the buzzword it is now. Let's start by talking about DNA. DNA is your "source code," but we'll get to more about how that works in a minute. First we're just going to talk about what it is. It's a string of four different types of molecules with long names that we're not going to worry about right now, so we'll just call them by their abbreviations (which is what everyone uses most of the time anyway): A, C, G, and T. And instead of molecules, we'll call them bases, because reasons. So if you have just an A, that's one base, or if you have ACGTTCATGG, that is ten bases.
Well, not really a string...
<img src="images/double_helix.jpg" alt="Double helix" style="width: 200px;"/>
Two strings stuck together? But if you know one you know the other...
Ok, when I said it was a string that was a bit of a simplification, it's actually two strings stuck together. That's why pictures of DNA look like a twisted ladder (that's the "double helix" you may hear about): each side of the ladder is a string, and the "rungs" are where they stick together. But the nice thing is, each base has only one other base it can stick to. A always sticks to T and vice versa, same with C and G. So if you know that one side of the ladder is ACGTTCATGG, then we know the other side is TGCAAGTACC. This is really nice because it cuts the amount of information we actually have to know in half.
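That pairing rule is a one-liner in Python; a tiny illustration (ours, not part of the original talk):

```python
# A pairs with T, C pairs with G, so the second strand follows directly
complement = str.maketrans('ACGT', 'TGCA')

strand = 'ACGTTCATGG'
print(strand.translate(complement))  # TGCAAGTACC
```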
The Human Genome
It's big. How big?
So now that you know everything (just kidding) about DNA, it's time for a question: How many bases long would you guess the human genome (fancy word for all the DNA) is? It ends up being about 3.2 billion base pairs long. (Remember the double helix? That's why it's pairs)
About 3,200,000,000bp (bp is base pairs). Actual file format commonly used in bioinformatics (FASTA):
```
>Sequence0
TTTCTGACTAACACTACAATTACCACTTGATGTTACCGACTAAGTGGTACGACTTGCTAGAACCGACTCTCGTACGTAT
CGCAGACTAGTGCGCGCGCTTAGTGACTATACTAGAATATACCTGGGGCCCAAGGAGTGTCGGGCGATCGTCCTTGAAA
TAAATATCTCAACCATCGTCATCTAGGGGGAACAGAGCGGTGGGCAGGTCCCAACCTGTTTATTTGTGTTGCTAACACT
ACGGCGCAGCTGCTCAAGTAGGTGCGATTATCGAGTAGAGGCTCCACCGGCTCTATGTGCCACGCATCTACTGAACCGA
ATTCTATCCCTGATACTCCAGAAGGTCGCAGGTTTACAGACACGTTTCAGCTCGAGAGGCCATCGATTATCTTAATATA
CCACACTGCCGAATAGCATGCCCGTAGAATCCAAGCCACGAGATAGCGTTACTTAATGAGTACCCAACGCAAATGAGGT
TGATTATCCCTAACCTGCAATCTAGGCCTTGTTCTGGAGGGGGTTATCCTTTATAGTTGATTACTTACACTCACCATGT
TCGTAGTCGGAACTCACCGATTAAGACCGATTTTACTATGGGAAGGCCAGGTTACACCTGTTTCGGGGGGGCCGCGGCG
GGTTACTTTAACCTGTCCATCCATCAGTCACTGGGCGCCAAGATTCTCCTATAGTTATATCCGCCCTTTGATTTAAACC
TAGGCCTACCTCAACGAACTGGGCCATGGGGTTCACACAGAAACAAGGGGGATAGACAGTCTTATTGAGCGCTTCTGAA
CAGCGTGTGTTCACGGTACGGCAATACCACCAGTAAACCGAGAACAGTGTTGAAGGTGATCGAACACGTGTTTTCTTCA
CCGTAGGGCTTCTAGGGAGTATCGCCCCCATATAGGCAGACGAGAAGGACTGTCACGCGCGGAGATCGATAATACGTAT
AACACAAGCACAGTAACTGCCCCGACCGGCTAAAGGACGTGGCCCAGTGTACCCAACGTACGTAATTGCAAGAGGTCTG
TCTGTCATCCCGAGGACTGCTTCTATAACTCGTTGAGGGCACTAGGCTTGAGACAATCAGCTTCGCTCGTCACGATTTT
ACTTTTTTCCTGGAAAAGCCCCCCCACAGACTATCAGGTCGCGCTTACCATACCAGTCCTTCTTGATAAGCCAATCCGT
ATTAGGTAGATTAAGCTGACAGTCGGGGCGACTCTTTGGAAACAGTATTCCCGTTTCGGGCACCTAGGATTCAGGCTTG
TACAACGATCATAGACGTCGCGGAAAGAAATAGCACAGTGTAGGAGCTGGTCGTGACCCGTGCTGTCAAGTTTATTGCA
CGGCTTGCTAAAAGGTACAGTGTAACGTTTCACAAACAAGCGAGACCCATTGTTGGTCTAACGCTATCGTACTTGATAC
CAGCCTGTGACGTCACGCGAAATCGTCTGTATAACTAGTTCTTCCCCGACTGCCACGGTATCCCAAAATTACATACTGA
CAGGACCTCTTCCATATTCATCAGGACTCGACGAAGCGCGCCCCGTGTAGTACGCGAAAATTATACCGTCCGTAGGTAC
```
Now picture this 2 million more times.
If you wrapped it in PEP 8-guideline-adhering lines of 79 characters, it would still be over 40 million lines (although granted, DNA is about as un-Pythonic as code gets). And we're not even all that special when it comes to genome length. A lot of plants have us beat, actually, with Paris japonica being a top contender for longest genome at 150 billion base pairs.
So what does all of that code even do? Well, most of it doesn't appear to "do" much. (How much code in your codebase is like that? Although I'd be willing to bet there's a lot of it that we just don't understand what it does yet) But we'll stick with the ~1.5% of it that we have a pretty good idea about.
https://en.wikipedia.org/wiki/Central_dogma_of_molecular_biology
It's time to learn something called "The Central Dogma of Molecular Biology," since that's a lot cooler-sounding than "what DNA does." Don't worry, we'll keep to the very basics. There are two steps, "transcription" and "translation." Transcription basically involves making a copy of a chunk of DNA, except that you only copy one side of it (it looks like half a strand of DNA). We call this copy RNA. Translation involves reading the pattern of the DNA into protein. What is protein? Basically most of what we're made of is either protein or is made by proteins (excluding water, of course). So that's basically how your source code works: your DNA tells your body how to make proteins, which are like parts of a machine. These little parts make little machines, which are part of bigger machines, etc. etc. until all the parts fit together to make you! (But how do the parts know how to fit together you ask? Umm...great question...that we're not going to cover today. Next slide!)
Sequencing DNA
|
YouTubeVideo('fCd6B5HRaZ8', start=135)
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Ok, now you know something about DNA, so we can start getting into some of the fun puzzles that this leads to. So how do we find out what someone's DNA sequence is? There are several methods, including some newer ones that I'm really excited about, but we'll stick with the most popular here: Illumina's sequencing by synthesis. It's probably so popular because it's fast and keeps getting cheaper. The reason it's so fast is because it's parallelized. It works by breaking the DNA up into little chunks and then looking at all the chunks at the same time. Basically you have a whole bunch of these fragments on a slide and then flood the slide with a whole bunch of one base (say A). The base is modified slightly so that it fluoresces a certain color when illuminated by a certain wavelength and also so that no other bases can attach after it. Then the excess bases are washed off, the slide is imaged to see which fragments got a base added, and then the fluorescent part of the new addition as well as the part that blocks the next base are removed and the process is repeated with another base (say C). Repeat until you've read through most of the fragments.
Sequence Alignment
Now things start to get fun
That's a nice-looking lake...
Until now!
So now you've got a whole bunch of little pieces of DNA that you have to match up to a reference sequence. A quick analogy (credit for the analogy/pics to Aaron Quinlan, who has all his slides for a course in applied computational genomics freely available). What makes this lake puzzle so hard? So much blue and white! So we need to find a way to determine how well the piece we have actually matches the picture at that point. We'll call this aligning sequences.
First step: Assign scores
If a base matches with itself, it gets a score of one. Otherwise it gets a score of zero.
||A|C|G|T|
|-|-|-|-|-|
|A|1|0|0|0|
|C|0|1|0|0|
|G|0|0|1|0|
|T|0|0|0|1|
So these two sequences:
AACTGTGGTAC
ACTTGTGGAAC
10011111011
have an alignment score of eight.
This is going to seem like complete overkill at first, but you'll understand the advantages to doing it this way after a bit. Each base that aligns with itself gets a score of one, any other alignment is zero. But what if these are long segments, could we maybe get a better score by shifting one in reference to another? That's a valid question, but before we can tackle that, I need to introduce you to another unfortunate aspect of sequencing...
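The per-position scoring just described fits in a couple of lines of Python (our own illustration, not code from the talk):

```python
def simple_score(seq1, seq2):
    # one point for every position where the two bases match, zero otherwise
    return sum(1 if a == b else 0 for a, b in zip(seq1, seq2))

print(simple_score('AACTGTGGTAC', 'ACTTGTGGAAC'))  # 8
```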
What about gaps?
Yes, unfortunately there are going to be gaps.
We'll add a "gap penalty" of -3.
So the score for this alignment:
ACTACA-ACGTTGAC
A-TAGAAACGCT-AC
1 1101 11101 11
-3 -3 -3
is just one
Sequencing DNA is great, but it's also kind of messy. You may end up with extra bases or missing bases in your sequences. Also, people don't all have the same DNA (unless you're identical twins!) so you may have bases that are actually missing or extra with respect to the "reference" sequence. But, gaps are problematic because when they're in a place that codes for a protein (remember earlier?) they are pretty good at making the protein not work. So we want to introduce gaps only when it's a lot better than the alternative. For now we'll have a "gap penalty" of -3.
Scoring matrix
||-|A|C|G|T|T|T|G|T|C|G|C|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|-|0|-3|-6|-9|-12|-15|-18|-21|-24|-27|-30|-33|
|A|-3||||||||||||
|C|-6||||||||||||
|T|-9||||||||||||
|T|-12||||||||||||
|T|-15||||||||||||
|C|-18||||||||||||
|T|-21||||||||||||
|G|-24||||||||||||
|C|-27||||||||||||
$$
S_{m,n}=\max\left\{
\begin{array}{ll}
S_{m-1,n} + \mathrm{gap}\\
S_{m,n-1} + \mathrm{gap}\\
S_{m-1,n-1} + B(a,b)
\end{array}
\right.
$$
||-|A|C|G|T|T|T|G|T|C|G|C|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|-|0|-3|-6|-9|-12|-15|-18|-21|-24|-27|-30|-33|
|A|-3|1|-2|-5|-8|-11|-14|-17|-20|-23|-26|-29|
|C|-6||||||||||||
|T|-9||||||||||||
|T|-12||||||||||||
|T|-15||||||||||||
|C|-18||||||||||||
|T|-21||||||||||||
|G|-24||||||||||||
|C|-27||||||||||||
We need to keep track of where that score came from.
||-|A|C|G|T|T|T|G|T|C|G|C|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|-|0|β-3|β-6|β-9|β-12|β-15|β-18|β-21|β-24|β-27|β-30|β-33|
|A|β-3|β1|β-2|β-5|β-8|β-11|β-14|β-17|β-20|β-23|β-26|β-29|
|C|β-6||||||||||||
|T|β-9||||||||||||
|T|β-12||||||||||||
|T|β-15||||||||||||
|C|β-18||||||||||||
|T|β-21||||||||||||
|G|β-24||||||||||||
|C|β-27||||||||||||
||-|A|C|G|T|T|T|G|T|C|G|C|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|-|0|β-3|β-6|β-9|β-12|β-15|β-18|β-21|β-24|β-27|β-30|β-33|
|A|β-3|β1|β-2|β-5|β-8|β-11|β-14|β-17|β-20|β-23|β-26|β-29|
|C|β-6|β-2|β2|β1|β-4|β-7|β-10|β-13|β-16|β-19|β-22|β-25|
|T|β-9|β-5|β-1|β2|β0|β-3|β-6|β-9|β-12|β-15|β-18|β-21|
|T|β-12|β-8|β-4|β-1|β3|β1|β-2|β-5|β-8|β-11|β-14|β-17|
|T|β-15|β-11|β-7|β-4|β0|β4|β2|β-1|β-4|β-7|β-10|β-13|
|C|β-18|β-14|β-10|β-7|β-3|β1|β4|β2|β-1|β-3|β-6|β-9|
|T|β-21|β-17|β-13|β-10|β-6|β-2|β2|β4|β3|β0|β-3|β-6|
|G|β-24|β-20|β-16|β-12|β-9|β-5|β-1|β3|β4|β3|β1|β-2|
|C|β-27|β-23|β-19|β-15|β-12|β-8|β-4|β0|β3|β5|β3|β2|
Now just follow the arrows from that bottom-right corner back to the top-left zero.
```
ACGTTTGTCGC
|| ||| | ||
AC-TTTCT-GC
```
So how do we keep track of where we actually want gaps? We will use a scoring matrix. We start out with one sequence at the top and one on the side, with an extra space added at the beginning of each. The two spaces align with a score of zero and we start from there. Each cell gets filled in with whichever of three possibilities ends up the highest:
1. The score above plus the gap penalty (remember the gap penalty is negative)
2. The score to the left plus the gap penalty
3. The score to the upper left plus the alignment score
We can fill in the top row and left column right away, since they have no cell to their upper left; their scores just accumulate the gap penalty.
But our choice affects the score of everything down and to the right, so besides just the score, we need to keep track of where that score came from.
Once it's all filled out, just follow the arrows from the bottom-right corner to the top-left zero. Every time you go straight up, it's a gap in the top sequence. Every time you go left, it's a gap in the side sequence. Every time you go up-left, the two bases align.
Let's code this!
See Python for Bioinformatics for the inspiration for this demo.
Here is the "substitution matrix" and its corresponding "alphabet":
|
dna_sub_mat = np.array(
[[ 1, 0, 0, 0],
[ 0, 1, 0, 0],
[ 0, 0, 1, 0],
[ 0, 0, 0, 1]])
dbet = 'ACGT'
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Same as we defined above, with dbet being the "dna alphabet" used for this matrix (four-letter alphabet, could be in a different order, this is just the one we chose)
And here is where we calculate the scores and arrows as separate matrices:
|
def nw_alignment(sub_mat, abet, seq1, seq2, gap=-8):
# Get the lengths of the sequences
seq1_len, seq2_len = len(seq1), len(seq2)
# Create the scoring and arrow matrices
score_mat = np.zeros((seq1_len+1, seq2_len+1), int)
arrow_mat = np.zeros((seq1_len+1, seq2_len+1), int)
# Fill first column and row of score matrix with scores based on gap penalty
score_mat[0] = np.arange(seq2_len+1) * gap
score_mat[:,0] = np.arange(seq1_len+1) * gap
# Fill top row of arrow matrix with ones (left arrow)
arrow_mat[0] = np.ones(seq2_len+1)
for seq1_pos, seq1_letter in enumerate(seq1):
for seq2_pos, seq2_letter in enumerate(seq2):
f = np.zeros(3)
# Cell above + gap penalty
f[0] = score_mat[seq1_pos,seq2_pos+1] + gap
# Cell to left + gap penalty
f[1] = score_mat[seq1_pos+1,seq2_pos] + gap
n1 = abet.index(seq1_letter)
n2 = abet.index(seq2_letter)
# Cell to upper-left + alignment score
f[2] = score_mat[seq1_pos,seq2_pos] + sub_mat[n1,n2]
score_mat[seq1_pos+1, seq2_pos+1] = f.max()
arrow_mat[seq1_pos+1, seq2_pos+1] = f.argmax()
return score_mat, arrow_mat
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
I'm calling this nw_alignment after the Needleman-Wunsch algorithm. It's hard to put score and directional information in just one matrix, so we make two matrices. We start with a matrix of zeros for each and then fill them in concurrently as we run through the possibilities.
What does our result look like?
|
s1 = 'ACTTCTGC'
s2 = 'ACGTTTGTCGC'
score_mat, arrow_mat = nw_alignment(dna_sub_mat, dbet, s1, s2, gap=-3)
print(score_mat)
print(arrow_mat)
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Looks good, but not all that useful by itself.
Now we need a way to get the sequences back out of our scoring matrix:
|
def backtrace(arrow_mat, seq1, seq2):
align1, align2 = '', ''
align1_pos, align2_pos = arrow_mat.shape
align1_pos -= 1
align2_pos -= 1
selected = []
while True:
selected.append((align1_pos, align2_pos))
if arrow_mat[align1_pos, align2_pos] == 0:
# Up arrow, add gap to align2
align1 += seq1[align1_pos-1]
align2 += '-'
align1_pos -= 1
elif arrow_mat[align1_pos, align2_pos] == 1:
            # Left arrow, add gap to align1
align1 += '-'
align2 += seq2[align2_pos-1]
align2_pos -= 1
elif arrow_mat[align1_pos, align2_pos] == 2:
            # Diagonal arrow, no gap in either sequence
align1 += seq1[align1_pos-1]
align2 += seq2[align2_pos-1]
align1_pos -= 1
align2_pos -= 1
if align1_pos==0 and align2_pos==0:
break
# reverse the strings
return align1[::-1], align2[::-1], selected
a1, a2, selected = backtrace(arrow_mat, s1, s2)
print(a1)
print(a2)
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Sometimes it's nice to see the scoring matrix, though, so here's a function to visualize it
|
def visual_scoring_matrix(seq1, seq2, score_mat, arrow_mat):
visual_mat = []
for i, row in enumerate(arrow_mat):
visual_mat_row = []
for j, col in enumerate(row):
if col == 0:
arrow = 'β'
elif col == 1:
arrow = 'β'
else:
arrow = 'β'
visual_mat_row.append(arrow + ' ' + str(score_mat[i,j]))
visual_mat.append(visual_mat_row)
visual_mat = np.array(visual_mat, object)
tab = plt.table(
cellText=visual_mat,
        rowLabels=['-'] + list(seq1),
        colLabels=['-'] + list(seq2),
loc='center')
tab.scale(2, 3)
tab.set_fontsize(30)
plt.axis('tight')
plt.axis('off')
align1, align2, selected = backtrace(arrow_mat, seq1, seq2)
for pos in selected:
y, x = pos
tab._cells[(y+1, x)]._text.set_color('green')
tab._cells[(y+1, x)]._text.set_weight(1000)
plt.show()
visual_scoring_matrix(s1, s2, score_mat, arrow_mat)
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Let's generate some sequences and see how fast this is
|
def random_dna_seq(length=1000):
seq = [random.choice(dbet) for x in range(length)]
return ''.join(seq)
def mutate_dna_seq(seq, chance=1/5):
mut_seq_base = [random.choice(dbet) if random.random() < chance else x for x in seq]
mut_seq_indel = [random.choice(('', x + random.choice(dbet))) if random.random() < chance else x for x in mut_seq_base]
return ''.join(mut_seq_indel)
s1 = random_dna_seq()
s2 = mutate_dna_seq(s1)
print(s1)
print(s2)
a = %timeit -o nw_alignment(dna_sub_mat, dbet, s1, s2, gap=-3)
print('{:.1f} years for the whole genome'.format(a.average * 2300000000 / 60 / 60 / 24 / 365.25))
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
If we wanted to shift this one position at a time along the whole genome and check the alignments, how long would it take?
That's a long time!
Let's make it faster! (Just because)
So in reality, we don't actually want to use this algorithm to align our fragments to the whole genome. It's too slow, and there's no good way to decide which alignment is "best." It's still a good introduction to thinking about these types of problems, though. And since it's actually fairly easy and demonstrates how you can improve your code by understanding your algorithm, we'll do something to make it a bit faster.
||-|A|C|G|T|T|T|G|T|C|G|C|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|-|0|β-3|β-6|β-9|β-12|β-15|β-18|β-21|β-24|β-27|β-30|β-33|
|A|β-3|β1|β-2||||||||||
|C|β-6|β-2|||||||||||
|T|β-9||||||||||||
|T|β-12||||||||||||
|T|β-15||||||||||||
|C|β-18||||||||||||
|T|β-21||||||||||||
|G|β-24||||||||||||
|C|β-27||||||||||||
We can't calculate a whole row or column at a time because the values depend on those in the same row/column. But what about diagonals?
If you look at the diagonals, you know the values above and to the left, so you have everything you need to calculate your score.
We'll "get rid of" our nested loop (really just abstract it into a faster numpy "loop")
This is going to take a couple more steps, but it will be worth it in the end.
First we pre-calculate the "upper-left score" for each location.
|
def sub_values(sub_mat, abet, seq1, seq2):
# convert the sequences to numbers
seq1_ind = [abet.index(i) for i in seq1]
seq2_ind = [abet.index(i) for i in seq2]
sub_vals = np.array([[0] * (len(seq2)+1)] + [[0] + [sub_mat[y, x] for x in seq2_ind] for y in seq1_ind], int)
return sub_vals
sub_values(dna_sub_mat, dbet, 'AACGTTA', 'AAGCTTAAAAAAAA')
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Then we get a list of all the diagonals in the matrix.
|
def diags(l1, l2):
ys = np.array([np.arange(l1) + 1 for i in np.arange(l2)])
xs = np.array([np.arange(l2) + 1 for i in np.arange(l1)])
diag_ys = [np.flip(ys.diagonal(i), 0) for i in range(1-l2, l1)]
diag_xs = [xs.diagonal(i) for i in range(1-l1, l2)]
index_list = []
for y, x in zip(diag_ys, diag_xs):
index_list.append([y, x])
return index_list
diags(6, 3)
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
And here's the actual function. It takes the same arguments and produces the same matrices.
|
def FastNW(sub_mat, abet, seq1, seq2, gap=-8):
sub_vals = sub_values(sub_mat, abet, seq1, seq2)
# Get the lengths of the sequences
seq1_len, seq2_len = len(seq1), len(seq2)
# Create the scoring and arrow matrices
score_mat = np.zeros((seq1_len+1, seq2_len+1), int)
arrow_mat = np.zeros((seq1_len+1, seq2_len+1), int)
# Fill first column and row of score matrix with scores based on gap penalty
score_mat[0] = np.arange(seq2_len+1) * gap
score_mat[:,0] = np.arange(seq1_len+1) * gap
# Fill top row of arrow matrix with ones (left arrow)
arrow_mat[0] = np.ones(seq2_len+1)
# Get the list of diagonals
diag_list = diags(seq1_len, seq2_len)
# fill in the matrix
for diag in diag_list:
# Matrix to hold all three possible scores for every element in the diagonal
f = np.zeros((3, len(diag[0])), float)
# Cell above + gap penalty for every cell in the diagonal
x, y = diag[0]-1, diag[1]
f[0] = score_mat[x, y] + gap
# Cell to the left + gap penalty for every cell in the diagonal
x, y = diag[0], diag[1]-1
f[1] = score_mat[x, y] + gap
# Cell to the upper left + alignment score for every cell in the diagonal
x, y = diag[0]-1, diag[1]-1
f[2] = score_mat[x,y] + sub_vals[diag]
max_score = (f.max(0))
max_score_pos = f.argmax(0)
score_mat[diag] = max_score
arrow_mat[diag] = max_score_pos
return score_mat, arrow_mat
FastNW(dna_sub_mat, dbet, s1, s2)
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
So how much faster is it?
|
s1 = random_dna_seq()
s2 = mutate_dna_seq(s1)
a = %timeit -o nw_alignment(dna_sub_mat, dbet, s1, s2)
print('{:.1f} years for the whole genome'.format(a.average * 2300000000 / 60 / 60 / 24 / 365.25))
a = %timeit -o FastNW(dna_sub_mat, dbet, s1, s2)
print('{:.1f} years for the whole genome'.format(a.average * 2300000000 / 60 / 60 / 24 / 365.25))
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Now why did we use the substitution matrix?
Here's how DNA translates to protein (image credit: Wikipedia).
So we can align proteins, too!
|
blosum50 = np.array(
[[ 5,-2,-1,-2,-1,-1,-1, 0,-2,-1,-2,-1,-1,-3,-1, 1, 0,-3,-2, 0],
[-2, 7,-1,-2,-1, 1, 0,-3, 0,-4,-3, 3,-2,-3,-3,-1,-1,-3,-1,-3],
[-1,-1, 7, 2,-2, 0, 0, 0, 1,-3,-4,-0,-2,-4,-2,-1, 0,-4,-2,-3],
[-2,-2, 2, 8,-4, 0, 2,-1,-1,-4,-4,-1,-4,-5,-1, 0,-1,-5,-3,-4],
[-1,-4,-2,-4,13,-3,-3,-3,-3,-2,-2,-3,-2,-2,-4,-1,-1,-5,-3,-1],
[-1,-1, 0, 0,-3, 7, 2,-2, 1,-3,-2, 2, 0,-4,-1,-0,-1,-1,-1,-3],
[-1, 0, 0, 2,-3, 2, 6,-3, 0,-4,-3, 1,-2,-3,-1,-1,-1,-3,-2,-3],
[ 0,-3, 0,-1,-3,-2,-3, 8,-2,-4,-4,-2,-3,-4,-2, 0,-2,-3,-3,-4],
[-2, 0, 1,-1,-3, 1, 0,-2,10,-4,-3, 0,-1,-1,-2,-1,-2,-3,-1, 4],
[-1,-4,-3,-4,-2,-3,-4,-4,-4, 5, 2,-3, 2, 0,-3,-3,-1,-3,-1, 4],
[-2,-3,-4,-4,-2,-2,-3,-4,-3, 2, 5,-3, 3, 1,-4,-3,-1,-2,-1, 1],
[-1, 3, 0,-1,-3, 2, 1,-2, 0,-3,-3, 6,-2,-4,-1, 0,-1,-3,-2,-3],
[-1,-2,-2,-4,-2, 0,-2,-3,-1, 2, 3,-2, 7, 0,-3,-2,-1,-1, 0, 1],
[-3,-3,-4,-5,-2,-4,-3,-4,-1, 0, 1,-4, 0, 8,-4,-3,-2, 1, 4,-1],
[-1,-3,-2,-1,-4,-1,-1,-2,-2,-3,-4,-1,-3,-4,10,-1,-1,-4,-3,-3],
[ 1,-1, 1, 0,-1, 0,-1, 0,-1,-3,-3, 0,-2,-3,-1, 5, 2,-4,-2,-2],
[ 0,-1, 0,-1,-1,-1,-1,-2,-2,-1,-1,-1,-1,-2,-1, 2, 5,-3,-2, 0],
[-3,-3,-4,-5,-5,-1,-3,-3,-3,-3,-2,-3,-1, 1,-4,-4,-3,15, 2,-3],
[-2,-1,-2,-3,-3,-1,-2,-3, 2,-1,-1,-2, 0, 4,-3,-2,-2, 2, 8,-1],
[ 0,-3,-3,-4,-1,-3,-3,-4,-4, 4, 1,-3, 1,-1,-3,-2, 0,-3,-1, 5]])
pbet = 'ARNDCQEGHILKMFPSTWYV'
s1 = [random.choice(pbet) for _ in range(10)]
s2 = [random.choice(pbet) if random.random() < .25 else x for x in s1] + [random.choice(pbet) for _ in range(10)]
score_mat, arrow_mat = FastNW(blosum50, pbet, s1, s2)
visual_scoring_matrix(s1, s2, score_mat, arrow_mat)
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
Other things we can do with this algorithm:
Local alignment (see the sketch below)
Affine gap penalties
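To give a flavor of the first item, here is a minimal sketch of how the scoring changes for local alignment (the Smith-Waterman variant). This is our own illustration, reusing the dna_sub_mat and dbet defined earlier; the only real change from the global fill above is that scores are clamped at zero, and the best local alignment ends at the maximum cell rather than the bottom-right corner.

```python
import numpy as np

def sw_local_score(sub_mat, abet, seq1, seq2, gap=-3):
    """Fill a local-alignment (Smith-Waterman-style) score matrix."""
    score = np.zeros((len(seq1) + 1, len(seq2) + 1), int)
    for i, a in enumerate(seq1):
        for j, b in enumerate(seq2):
            diag = score[i, j] + sub_mat[abet.index(a), abet.index(b)]
            up = score[i, j + 1] + gap      # gap in the top sequence
            left = score[i + 1, j] + gap    # gap in the side sequence
            # the only change from the global version: never drop below zero
            score[i + 1, j + 1] = max(0, diag, up, left)
    # the best local alignment ends wherever the score is largest
    best_end = np.unravel_index(score.argmax(), score.shape)
    return score, best_end

score, best_end = sw_local_score(dna_sub_mat, dbet, 'ACTTCTGC', 'ACGTTTGTCGC')
print(score.max(), best_end)
```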
Other useful bioinformatics Python packages:
|
from pysam import FastaFile
from os.path import getsize
print(getsize('Homo_sapiens.GRCh38.dna.primary_assembly.fasta.gz')/1024**2, 'MiB')
with FastaFile('Homo_sapiens.GRCh38.dna.primary_assembly.fasta.gz') as myfasta:
chr17len = myfasta.get_reference_length('17')
print(chr17len, 'bp')
seq = myfasta.fetch('17', int(chr17len/2), int(chr17len/2)+500)
print(seq)
with FastaFile('Homo_sapiens.GRCh38.dna.primary_assembly.fasta.gz') as myfasta:
chr17len = myfasta.get_reference_length('17')
%timeit myfasta.fetch('17', int(chr17len/2), int(chr17len/2)+500)
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
print(coding_dna.__repr__())
print(coding_dna.translate().__repr__())
|
bioinformatics_intro_python.ipynb
|
mrphyja/bioinfo-intro-python
|
mit
|
The order of the polynomial is given by the number of coefficients (minus one), which is given by len(p_normal)-1.
However, there are many other ways it could be written, which are useful in different contexts. For example, we are often interested in the roots of the polynomial, so would want to express it in the form
$$ p(x) = 2 (x - 1)(x - 2)(x + 3). $$
This allows us to read off the roots directly. We could imagine representing this in Python using a container containing the roots, such as:
|
p_roots = (1, 2, -3)
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
combined with a single variable containing the leading term,
|
p_leading_term = 2
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
We see that the order of the polynomial is given by the number of roots (and hence by len(p_roots)). This form represents the same polynomial but requires two pieces of information (the roots and the leading coefficient).
The different forms are useful for different things. For example, if we want to add two polynomials the standard form makes it straightforward, but the factored form does not. Conversely, multiplying polynomials in the factored form is easy, whilst in the standard form it is not.
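As a quick numpy illustration of that claim (our own aside, using the p and q polynomials that appear later in this section; np.poly and np.convolve handle the coefficient bookkeeping):

```python
import numpy as np

# factored form: a leading term plus a tuple of roots
p_leading, p_roots = 2, (1, 2, -3)       # 2 (x - 1)(x - 2)(x + 3)
q_leading, q_roots = -1, (1, 1, 0, -2)   # -(x - 1)(x - 1) x (x + 2)

# multiplying in factored form is trivial: multiply the leading terms,
# concatenate the roots
pq_leading, pq_roots = p_leading * q_leading, p_roots + q_roots

# multiplying in standard (coefficient) form needs a convolution of the
# coefficient sequences (np.poly converts roots to coefficients)
p_coeffs = p_leading * np.poly(p_roots)
q_coeffs = q_leading * np.poly(q_roots)
pq_coeffs = np.convolve(p_coeffs, q_coeffs)

print(pq_leading, pq_roots)
print(pq_coeffs)
```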
But the key point is that the object - the polynomial - is the same: the representation may appear different, but it's the object itself that we really care about. So we want to represent the object in code, and work with that object.
Classes
Python, and other languages that include object-oriented concepts (which is most modern languages), allow you to define and manipulate your own objects. Here we will define a polynomial object step by step.
|
class Polynomial(object):
explanation = "I am a polynomial"
def explain(self):
print(self.explanation)
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
We have defined a class, which is a single object that will represent a polynomial. We use the keyword class in the same way that we use the keyword def when defining a function. The definition line ends with a colon, and all the code defining the object is indented by four spaces.
The name of the object - the general class, or type, of the thing that we're defining - is Polynomial. The convention is that class names start with capital letters, but this convention is frequently ignored.
The type of object that we are building on appears in brackets after the name of the object. The most basic thing, which is used most often, is the object type as here.
Class variables are defined in the usual way, inside the class definition. Variables that are set outside of functions, such as explanation above, will be common to all instances of the class.
Functions are defined inside classes in the usual way (using the def keyword, indented by four additional spaces). They work in a special way: they are not called directly, but only when you have a member of the class. This is what the self keyword does: it takes the specific instance of the class and uses its data. Class functions are often called methods.
Let's see how this works on a specific example:
|
p = Polynomial()
print(p.explanation)
p.explain()
p.explanation = "I change the string"
p.explain()
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
The first line, p = Polynomial(), creates an instance of the class. That is, it creates a specific Polynomial. It is assigned to the variable named p. We can access class variables using the "dot" notation, so the string can be printed via p.explanation. The method that prints the class variable also uses the "dot" notation, hence p.explain(). The self variable in the definition of the function is the instance itself, p. This is passed through automatically thanks to the dot notation.
Note that we can change class variables in specific instances in the usual way (p.explanation = ... above). This only changes the variable for that instance. To check that, let us define two polynomials:
|
p = Polynomial()
p.explanation = "Changed the string again"
q = Polynomial()
p.explanation = "Changed the string a third time"
p.explain()
q.explain()
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
We can of course make the methods take additional variables. We modify the class (note that we have to completely re-define it each time):
|
class Polynomial(object):
explanation = "I am a polynomial"
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
We then use this, remembering that the self variable is passed through automatically:
|
r = Polynomial()
r.explain_to("Alice")
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
At the moment the class is not doing anything interesting. To do something interesting we need to store (and manipulate) relevant variables. The first thing to do is to add those variables when the instance is actually created. We do this by adding a special function (method) which changes how the variables of type Polynomial are created:
|
class Polynomial(object):
"""Representing a polynomial."""
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
This __init__ function is called when a variable is created. There are a number of special class functions, each of which has two underscores before and after the name. This is another Python convention that is effectively a rule: functions surrounded by two underscores have special effects, and will be called by other Python functions internally. So now we can create a variable that represents a specific polynomial by storing its roots and the leading term:
|
p = Polynomial(p_roots, p_leading_term)
p.explain_to("Alice")
q = Polynomial((1,1,0,-2), -1)
q.explain_to("Bob")
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
It is always useful to have a function that shows what the class represents, and in particular what this particular instance looks like. We can define another method that explicitly displays the Polynomial:
|
class Polynomial(object):
"""Representing a polynomial."""
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def display(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
p = Polynomial(p_roots, p_leading_term)
print(p.display())
q = Polynomial((1,1,0,-2), -1)
print(q.display())
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
Where classes really come into their own is when we manipulate them as objects in their own right. For example, we can multiply together two polynomials to get another polynomial. We can create a method to do that:
|
class Polynomial(object):
"""Representing a polynomial."""
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def display(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def multiply(self, other):
roots = self.roots + other.roots
leading_term = self.leading_term * other.leading_term
return Polynomial(roots, leading_term)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
p = Polynomial(p_roots, p_leading_term)
q = Polynomial((1,1,0,-2), -1)
r = p.multiply(q)
print(r.display())
|
05-classes-oop.ipynb
|
IanHawke/maths-with-python
|
mit
|
Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is what you'll find in the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
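For comparison, a transposed-convolution (deconvolution) layer in the same TensorFlow 1.x layers API looks roughly like the sketch below. It is not used in the solution that follows, which uses resize-plus-convolution instead, precisely to avoid the checkerboard artifacts mentioned above.

```python
import tensorflow as tf

# a 7x7x8 feature map, as a stand-in for some encoder output
x = tf.placeholder(tf.float32, (None, 7, 7, 8))

# one transposed convolution with stride 2 and 'same' padding doubles the
# spatial size, giving a 14x14x8 output
upsampled = tf.layers.conv2d_transpose(x, filters=8, kernel_size=(3, 3),
                                       strides=(2, 2), padding='same',
                                       activation=tf.nn.relu)
```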
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
|
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
|
autoencoder/Convolutional_Autoencoder_Solution.ipynb
|
adukic/nd101
|
mit
|
Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, that is, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
|
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
|
autoencoder/Convolutional_Autoencoder_Solution.ipynb
|
adukic/nd101
|
mit
|
Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
|
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
|
autoencoder/Convolutional_Autoencoder_Solution.ipynb
|
adukic/nd101
|
mit
|
Method 1: Using Boolean Variables
|
# Create variable with TRUE if nationality is USA
american = df['nationality'] == "USA"
# Create variable with TRUE if age is greater than 50
elderly = df['age'] > 50
# Select all cases where nationality is USA and age is greater than 50
df[american & elderly]
|
python/pandas_selecting_rows_on_conditions.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
Method 2: Using variable attributes
|
# Select all cases where the first name is not missing and nationality is USA
df[df['first_name'].notnull() & (df['nationality'] == "USA")]
|
python/pandas_selecting_rows_on_conditions.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
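For reference, the same kind of selection can also be written with DataFrame.query. The tiny stand-in frame below (with invented values for the first_name, nationality, and age columns) is only there so the snippet runs on its own:

```python
import pandas as pd

# small stand-in for the notebook's df
df = pd.DataFrame({'first_name': ['Jason', None, 'Tina'],
                   'nationality': ['USA', 'USA', 'France'],
                   'age': [42, 52, 36]})

# same condition as Method 1, written as a query string
df.query('nationality == "USA" and age > 50')
```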
Exercises
1. Linear regression as a classifier
In the first lab exercise we used the linear regression model for, of course, regression. However, the linear regression model can also be used for classification. Although this may sound somewhat counterintuitive, it is actually quite simple. The goal is to learn a function $f(\mathbf{x})$ that predicts the value $1$ for positive examples and the value $0$ for negative examples. In that case, $f(\mathbf{x})=0.5$ represents the boundary between the classes, i.e. examples for which $h(\mathbf{x})\geq 0.5$ are classified as positive, while the rest are classified as negative.
Classification with linear regression is implemented in the RidgeClassifier class. In the following subtasks, train that model on the given data and plot the resulting decision boundary between the classes. Turn off regularization ($\alpha = 0$, i.e. alpha=0). Also print the accuracy of your classification model (you may use the metrics.accuracy_score function). Visualize the datasets using the helper function plot_2d_clf_problem(X, y, h=None), which is available in the helper package mlutils (you can download the mlutils.py file from the course web page). X and y are the input examples and labels, while h is the model's prediction function (e.g. model.predict).
In this exercise the goal is to examine how the linear regression classification model behaves on linearly separable and non-separable data.
|
from sklearn.linear_model import LinearRegression, RidgeClassifier
from sklearn.metrics import accuracy_score
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
(a)
First, try out the built-in model on the linearly separable dataset seven ($N=7$).
|
seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])
seven_y = np.array([1, 1, 1, 1, 0, 0, 0])
clf = RidgeClassifier().fit(seven_X, seven_y)
predicted_y = clf.predict(seven_X)
score = accuracy_score(y_pred=predicted_y, y_true=seven_y)
print(score)
mlutils.plot_2d_clf_problem(X=seven_X, y=predicted_y, h=None)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
To convince yourself that this implementation is nothing more than ordinary linear regression, write code that arrives at the same solution using only the LinearRegression class. The prediction function, passed as the third argument h to plot_2d_clf_problem, can be defined with a lambda expression: lambda x : model.predict(x) >= 0.5.
|
lr = LinearRegression().fit(seven_X, seven_y)
predicted_y_2 = lr.predict(seven_X)
mlutils.plot_2d_clf_problem(X=seven_X, y=seven_y, h= lambda x : lr.predict(x) >= 0.5)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Q: How would the boundary between the classes be defined if we used the class labels $-1$ and $1$ instead of $0$ and $1$?
(b)
Try the same on the linearly separable dataset outlier ($N=8$):
|
outlier_X = np.append(seven_X, [[12,8]], axis=0)
outlier_y = np.append(seven_y, 0)
lr2 = LinearRegression().fit(outlier_X, outlier_y)
predicted_y_2 = lr2.predict(outlier_X)
mlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : lr2.predict(x) >= 0.5)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Q: Why does the model not achieve full accuracy even though the data are linearly separable?
(c)
Finally, try the same on the linearly non-separable dataset unsep ($N=8$):
|
unsep_X = np.append(seven_X, [[2,2]], axis=0)
unsep_y = np.append(seven_y, 0)
lr3 = LinearRegression().fit(unsep_X, unsep_y)
predicted_y_2 = lr3.predict(unsep_X)
mlutils.plot_2d_clf_problem(X=unsep_X, y=unsep_y, h= lambda x : lr3.predict(x) >= 0.5)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Q: It is obvious why the model cannot achieve full accuracy on this dataset. However, do you think the problem lies in the model or in the data? Justify your view.
2. Multiclass classification
Binary classifiers can be used for multiclass classification in several ways. The most common scheme is the so-called one-vs-rest (OVR) scheme, in which one classifier $h_j$ is trained for each of the $K$ classes. Each classifier $h_j$ is trained to separate the examples of class $j$ from the examples of all other classes, and an example is classified into the class $j$ for which $h_j(\mathbf{x})$ is maximal.
Using the datasets.make_classification function, generate a random two-dimensional dataset with three classes and plot it using the plot_2d_clf_problem function. For simplicity, assume there are no redundant features and that each class is concentrated in exactly one cluster.
|
from sklearn.datasets import make_classification
x, y = sklearn.datasets.make_classification(n_samples=100, n_informative=2, n_redundant=0, n_repeated=0, n_features=2, n_classes=3, n_clusters_per_class=1)
#print(dataset)
mlutils.plot_2d_clf_problem(X=x, y=y, h=None)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Train three binary classifiers, $h_1$, $h_2$ and $h_3$, and plot the decision boundaries between the classes (three plots). Then define $h(\mathbf{x})=\mathrm{argmax}_j h_j(\mathbf{x})$ (write your own predict function that does this) and plot the decision boundaries for that model. Finally, convince yourself that you would obtain an identical result by directly applying the RidgeClassifier model, since for a multiclass problem that model internally implements the one-vs-rest scheme.
Q: An alternative scheme is the one called one-vs-one (OVO). What is the advantage of the OVR scheme over the OVO scheme? And vice versa?
|
fig = plt.figure(figsize=(5,15))
fig.subplots_adjust(wspace=0.2)
y_ovo1 = [ 0 if i == 0 else 1 for i in y]
lrOvo1 = LinearRegression().fit(x, y_ovo1)
fig.add_subplot(3,1,1)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo1, h= lambda x : lrOvo1.predict(x) >= 0.5)
y_ovo2 = [ 0 if i == 1 else 1 for i in y]
lrOvo2 = LinearRegression().fit(x, y_ovo2)
fig.add_subplot(3,1,2)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo2, h= lambda x : lrOvo2.predict(x) >= 0.5)
y_ovo3 = [ 0 if i == 2 else 1 for i in y]
lrOvo3 = LinearRegression().fit(x, y_ovo3)
fig.add_subplot(3,1,3)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo3, h= lambda x : lrOvo3.predict(x) >= 0.5)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
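A minimal sketch of the argmax combination the exercise asks for (our own addition, reusing x, y, np, LinearRegression, and mlutils from the cells above). Note that the cell above codes the target class as 0 and the rest as 1, so its three outputs would be combined with argmin; the sketch below uses the conventional coding (1 for the target class) so that argmax applies directly:

```python
# one regressor per class, trained on labels 1 ("this class") vs 0 ("the rest")
models = [LinearRegression().fit(x, (y == j).astype(int)) for j in range(3)]

def ovr_predict(X_new):
    # stack the real-valued outputs h_j(x) column-wise and take the argmax
    scores = np.column_stack([m.predict(X_new) for m in models])
    return np.argmax(scores, axis=1)

mlutils.plot_2d_clf_problem(X=x, y=y, h=ovr_predict)
```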
3. Logistic regression
This exercise deals with a probabilistic discriminative model, logistic regression, which is, despite its name, a classification model.
Logistic regression is a typical representative of so-called generalized linear models, which have the form $h(\mathbf{x})=f(\mathbf{w}^\intercal\tilde{\mathbf{x}})$. Logistic regression uses the so-called logistic (sigmoid) function $\sigma (x) = \frac{1}{1 + \exp(-x)}$ for the function $f$.
(a)
Define the logistic (sigmoid) function $\mathrm{sigm}(x)=\frac{1}{1+\exp(-\alpha x)}$ and plot it for $\alpha\in\{1,2,4\}$.
|
def sigm(alpha):
def f(x):
return 1 / (1 + exp(-alpha*x))
return f
ax = list(range(-10, 10))
ay1 = list(map(sigm(1), ax))
ay2 = list(map(sigm(2), ax))
ay3 = list(map(sigm(4), ax))
fig = plt.figure(figsize=(5,15))
p1 = fig.add_subplot(3, 1, 1)
p1.plot(ax, ay1)
p2 = fig.add_subplot(3, 1, 2)
p2.plot(ax, ay2)
p3 = fig.add_subplot(3, 1, 3)
p3.plot(ax, ay3)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Q: Why is the sigmoid function a suitable choice for the activation function of a generalized linear model?
Q: What effect does the factor $\alpha$ have on the shape of the sigmoid? What does that mean for the logistic regression model (i.e., how does the model output depend on the norm of the weight vector $\mathbf{w}$)?
(b)
Implement the function
lr_train(X, y, eta=0.01, max_iter=2000, alpha=0, epsilon=0.0001, trace=False)
for training a logistic regression model with gradient descent (the batch version). The function takes a labelled training set (an example matrix X and a label vector y) and returns an $(n+1)$-dimensional weight vector of type ndarray. If trace=True, the function additionally returns a list (or matrix) of the weight vectors $\mathbf{w}^0,\mathbf{w}^1,\dots,\mathbf{w}^k$ generated across all iterations of the optimization, from 0 to $k$. The optimization should run until max_iter iterations are reached, or until the difference in cross-entropy error between two iterations drops below epsilon. The parameter alpha is the regularization factor.
We recommend defining a helper function lr_h(x,w) that gives the prediction for an example x with the given weights w, as well as a function cross_entropy_error(X,y,w) that computes the cross-entropy error of the model on the labelled set (X,y) with those same weights.
NB: Make sure that the way the labels are defined ($\{+1,-1\}$ or $\{1,0\}$) is compatible with how the loss function is computed in the optimization algorithm.
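For reference, a compact statement of the batch update that the reference solution below implements (our reading of that code, not part of the original task text): $\mathbf{w} \leftarrow \mathbf{w} - \eta\sum_{i=1}^{N}\big(h(\mathbf{x}^{(i)})-y^{(i)}\big)\,\tilde{\mathbf{x}}^{(i)}$, where the L2 term additionally shrinks all weights except $w_0$ by the factor $(1-\eta\alpha)$ before the gradient step.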
|
from sklearn.preprocessing import PolynomialFeatures as PolyFeat
from sklearn.metrics import log_loss
def loss_function(h_x, y):
return -y * np.log(h_x) - (1 - y) * np.log(1 - h_x)
def lr_h(x, w):
Phi = PolyFeat(1).fit_transform(x.reshape(1,-1))
return sigm(1)(Phi.dot(w))
def cross_entropy_error(X, y, w):
Phi = PolyFeat(1).fit_transform(X)
return log_loss(y, sigm(1)(Phi.dot(w)))
def lr_train(X, y, eta = 0.01, max_iter = 2000, alpha = 0, epsilon = 0.0001, trace= False):
w = zeros(shape(X)[1] + 1)
N = len(X)
w_trace = [];
error = epsilon**-1
for i in range(0, max_iter):
dw0 = 0; dw = zeros(shape(X)[1]);
new_error = 0
for j in range(0, N):
h = lr_h(X[j], w)
dw0 += h - y[j]
dw += (h - y[j])*X[j]
new_error += loss_function(h, y[j])
if abs(error - new_error) < epsilon:
print('converged (error stagnated) at i = ', i)
break
else: error = new_error
w[0] -= eta*dw0
w[1:] = w[1:] * (1-eta*alpha) - eta*dw
w_trace.extend(w)
if trace:
return w, w_trace
else: return w
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
(c)
Using the lr_train function, train a logistic regression model on the seven dataset, plot the resulting boundary between the classes, and compute the cross-entropy error.
NB: Make sure you give the model a sufficient number of iterations.
|
trained = lr_train(seven_X, seven_y)
print(cross_entropy_error(seven_X, seven_y, trained))
print(trained)
h3c = lambda x: lr_h(x, trained) > 0.5
figure()
mlutils.plot_2d_clf_problem(seven_X, seven_y, h3c)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Q: Which stopping criterion was triggered?
Q: Why is the resulting cross-entropy error not equal to zero?
Q: How would you verify that the optimization procedure has indeed found a hypothesis that minimizes the training error? What does that depend on?
Q: How would you modify the code if you wanted the optimization to be carried out with stochastic gradient descent (online learning)?
(d)
On a single plot, show the cross-entropy error (the expectation of the logistic loss) and the classification error (the expectation of the 0-1 loss) on the seven dataset across the iterations of the optimization procedure. Use the weight trace returned by the lr_train function from task (b) (the trace=True option). On a second plot, show the cross-entropy error as a function of the number of iterations for different learning rates, $\eta\in\{0.005,0.01,0.05,0.1\}$.
|
from sklearn.metrics import zero_one_loss
eta = [0.005, 0.01, 0.05, 0.1]
[w3d, w3d_trace] = lr_train(seven_X, seven_y, trace=True)
Phi = PolyFeat(1).fit_transform(seven_X)
h_3d = lambda x: x >= 0.5
error_unakrs = []
errror_classy = []
errror_eta = []
for k in range(0, len(w3d_trace), 3):
error_unakrs.append(cross_entropy_error(seven_X, seven_y, w3d_trace[k:k+3]))
errror_classy.append(zero_one_loss(seven_y, h_3d(sigm(1)(Phi.dot(w3d_trace[k:k+3])))))
for i in eta:
err = []
[w3, w3_trace] = lr_train(seven_X, seven_y, i, trace=True)
for j in range(0, len(w3_trace), 3):
err.append(cross_entropy_error(seven_X, seven_y, w3_trace[j:j+3]))
errror_eta.append(err)
figure(figsize(12, 15))
subplots_adjust(wspace=0.1)
subplot(2,1,1)
grid()
plot(error_unakrs); plot(errror_classy);
subplot(2,1,2)
grid()
for i in range(0, len(eta)):
plot(errror_eta[i], label = 'eta = ' + str(i))
legend(loc = 'best');
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Q: Why is the cross-entropy error larger than the classification error? Is that always the case with logistic regression, and why?
Q: Which learning rate $\eta$ would you choose and why?
(e)
Familiarize yourself with the linear_model.LogisticRegression class, which implements logistic regression. Compare the result of that model on the seven dataset with the result obtained with your own implementation of the algorithm.
NB: Because the built-in implementation uses more advanced optimization methods, it is very likely that the solutions will not match exactly, but the overall performance of the models should. Again, pay attention to the number of iterations and the regularization strength.
|
from sklearn.linear_model import LogisticRegression
reg3e = LogisticRegression(max_iter=2000, tol=0.0001, C=0.01**-1, solver='lbfgs').fit(seven_X,seven_y)
h3e = lambda x : reg3e.predict(x)
figure(figsize(7, 7))
mlutils.plot_2d_clf_problem(seven_X,seven_y, h3e)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
4. Analysis of Logistic Regression
(a)
Using the built-in implementation of logistic regression, check how logistic regression copes with outlying values. Use the outlier dataset from the first task. Plot the boundary between the classes.
Q: Why does the result differ from the one obtained by the classification-via-linear-regression model from the first task?
|
logReg4 = LogisticRegression(solver='liblinear').fit(outlier_X, outlier_y)
mlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : logReg4.predict(x) >= 0.5)
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
(b)
Train a logistic regression model on the seven dataset and, on two separate plots, show across the iterations of the optimization algorithm (1) the model output $h(\mathbf{x})$ for all seven examples and (2) the values of the weights $w_0$, $w_1$, $w_2$.
|
[w4b, w4b_trace] = lr_train(seven_X, seven_y, trace = True)
w0_4b = []; w1_4b = []; w2_4b = [];
for i in range(0, len(w4b_trace), 3):
w0_4b.append(w4b_trace[i])
w1_4b.append(w4b_trace[i+1])
w2_4b.append(w4b_trace[i+2])
h_gl = []
for i in range(0, len(seven_X)):
h = []
for j in range(0, len(w4b_trace), 3):
h.append(lr_h(seven_X[i], w4b_trace[j:j+3]))
h_gl.append(h)
figure(figsize(7, 14))
subplot(2,1,1)
grid()
for i in range(0, len(h_gl)):
plot(h_gl[i], label = 'x' + str(i))
legend(loc = 'best') ;
subplot(2,1,2)
grid()
plot(w0_4b); plot(w1_4b); plot(w2_4b);
legend(['w0', 'w1', 'w2'], loc = 'best');
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
(c)
Repeat the experiment from subtask (b) using the linearly inseparable dataset unsep from the first task.
Q: Compare the plots for the linearly separable and linearly inseparable cases and comment on the difference.
|
unsep_y = np.append(seven_y, 0)
[w4c, w4c_trace] = lr_train(unsep_X, unsep_y, trace = True)
w0_4c = []; w1_4c = []; w2_4c = [];
for i in range(0, len(w4c_trace), 3):
w0_4c.append(w4c_trace[i])
w1_4c.append(w4c_trace[i+1])
w2_4c.append(w4c_trace[i+2])
h_gl = []
for i in range(0, len(unsep_X)):
h = []
for j in range(0, len(w4c_trace), 3):
h.append(lr_h(unsep_X[i], w4c_trace[j:j+3]))
h_gl.append(h)
figure(figsize(7, 14))
subplots_adjust(wspace=0.1)
subplot(2,1,1)
grid()
for i in range(0, len(h_gl)):
plot(h_gl[i], label = 'x' + str(i))
legend(loc = 'best') ;
subplot(2,1,2)
grid()
plot(w0_4c); plot(w1_4c); plot(w2_4c);
legend(['w0', 'w1', 'w2'], loc = 'best');
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
5. Regularized Logistic Regression
Train a logistic regression model on the seven dataset with different L2-regularization factors, $\alpha\in\{0,1,10,100\}$. On two separate plots, show (1) the cross-entropy error and (2) the L2-norm of the vector $\mathbf{w}$ across the iterations of the optimization algorithm.
Q: Do the curves look as expected, and why?
Q: Which value of $\alpha$ would you choose and why?
|
from numpy.linalg import norm
alpha5 = [0, 1, 10, 100]
err_gl = []; norm_gl = [];
for a in alpha5:
[w5, w5_trace] = lr_train(seven_X, seven_y, alpha = a, trace = True)
err = []; L2_norm = [];
for k in range(0, len(w5_trace), 3):
err.append(cross_entropy_error(seven_X, seven_y, w5_trace[k:k+3]))
L2_norm.append(linalg.norm(w5_trace[k:k+1]))
err_gl.append(err)
norm_gl.append(L2_norm)
figure(figsize(7, 14))
subplot(2,1,1)
grid()
for i in range(0, len(err_gl)):
plot(err_gl[i], label = 'alpha = ' + str(alpha5[i]) )
legend(loc = 'best') ;
subplot(2,1,2)
grid()
for i in range(0, len(err_gl)):
plot(norm_gl[i], label = 'alpha = ' + str(alpha5[i]) )
legend(loc = 'best');
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
6. Logistic Regression with a Feature Mapping
Study the datasets.make_classification function. Generate and plot a two-class dataset with a total of $N=100$ two-dimensional ($n=2$) examples, with two clusters per class (n_clusters_per_class=2). It is unlikely that such a dataset will be linearly separable, but that is not a problem, because we can map the examples into a higher-dimensional feature space using the preprocessing.PolynomialFeatures class, as we did for linear regression in the first lab exercise. Train a logistic regression model using a polynomial feature mapping of degree $d=2$ and of degree $d=3$. Plot the resulting boundaries between the classes. You may use your own implementation, but for speed it is recommended to use linear_model.LogisticRegression. Choose the regularization factor as you wish.
NB: As before, use the plot_2d_clf_problem function to display the boundary between the classes. Pass the original dataset to the function as its arguments, and perform the mapping into the feature space inside the call to the function h that makes the prediction, in the following way:
|
from sklearn.preprocessing import PolynomialFeatures
[x6, y6] = make_classification(n_samples=100, n_features=2, n_redundant=0, n_classes=2, n_clusters_per_class=2)
figure(figsize(7, 5))
mlutils.plot_2d_clf_problem(x6, y6)
d = [2,3]
j = 1
figure(figsize(12, 4))
subplots_adjust(wspace=0.1)
for i in d:
subplot(1,2,j)
poly = PolynomialFeatures(i)
Phi = poly.fit_transform(x6)
model = LogisticRegression(solver='lbfgs')
model.fit(Phi, y6)
h = lambda x : model.predict(poly.transform(x))
mlutils.plot_2d_clf_problem(x6, y6, h)
title('d = ' + str(i))
j += 1
|
STRUCE/2018/SU-2018-LAB02-0036477171.ipynb
|
DominikDitoIvosevic/Uni
|
mit
|
Generally speaking, the procedure for scikit-learn is uniform across all machine-learning algorithms. Models are accessed via the various modules (ensemble, SVM, neighbors, etc.), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, X, with rows representing individual sources and columns representing the corresponding feature values. [In a minority of cases, X represents a similarity or distance matrix where each entry represents the distance to every other source in the data set.] In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array y.
Unsupervised models are fit by calling .fit(X) and supervised models are fit by calling .fit(X, y). In both cases, predictions for new observations, Xnew, can be obtained by calling .predict(Xnew). Those are the basics and beyond that, the details are algorithm specific, but the documentation for essentially everything within scikit-learn is excellent, so read the docs.
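As a small, hedged illustration of this pattern (not from the original notebook; the choice of model and the variable names are arbitrary):
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=5)   # user-defined tuning parameter
model.fit(X, y)                               # supervised fit: feature array + labels
predictions = model.predict(Xnew)             # predictions for new observations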
To further develop our intuition, we will now explore the Iris dataset a little further.
Problem 1a What is the pythonic type of iris?
|
type(iris)
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
You likely haven't encountered a scikit-learn Bunch before. Its functionality is essentially the same as that of a dictionary.
Problem 1b What are the keys of iris?
|
iris.keys()
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Most importantly, iris contains data and target values. These are all you need for scikit-learn, though the feature and target names and description are useful.
Problem 1c What is the shape and content of the iris data?
|
print(np.shape(iris.data))
print(iris.data)
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Problem 1d What is the shape and content of the iris target?
|
print(np.shape(iris.target))
print(iris.target)
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Finally, as a baseline for the exercises that follow, we will now make a simple 2D plot showing the separation of the 3 classes in the iris dataset. This plot will serve as the reference for examining the quality of the clustering algorithms.
Problem 1e Make a scatter plot showing sepal length vs. sepal width for the iris data set. Color the points according to their respective classes.
|
print(iris.feature_names) # shows that sepal length is first feature and sepal width is second feature
plt.scatter(iris.data[:,0], iris.data[:,1], c = iris.target, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Problem 2) Unsupervised Machine Learning
Unsupervised machine learning, sometimes referred to as clustering or data mining, aims to group or classify sources in the multidimensional feature space. The "unsupervised" comes from the fact that there are no target labels provided to the algorithm, so the machine is asked to cluster the data "on its own." The lack of labels means there is no (simple) method for validating the accuracy of the solution provided by the machine (though sometimes simple examination can show the results are terrible).
For this reason [note - this is my (AAM) opinion and there may be many others who disagree], unsupervised methods are not particularly useful for astronomy. Supposing one did find some useful clustering structure, an adversarial researcher could always claim that the current feature space does not accurately capture the physics of the system and as such the clustering result is not interesting or, worse, erroneous. The one potentially powerful exception to this broad statement is outlier detection, which can be a branch of both unsupervised and supervised learning. Finding weirdo objects is an astronomical pastime, and there are unsupervised methods that may help in that regard in the LSST era.
To begin today we will examine one of the most famous, and simple, clustering algorithms: $k$-means. $k$-means clustering looks to identify $k$ convex clusters, where $k$ is a user defined number. And herein lies the rub: if we truly knew the number of clusters in advance, we likely wouldn't need to perform any clustering in the first place. This is the major downside to $k$-means. Operationally, pseudocode for the algorithm can be summarized as the following:
initiate search by identifying k points (i.e. the cluster centers)
loop
assign each point in the data set to the closest cluster center
calculate new cluster centers based on mean position of all points within cluster
if diff(new center - old center) < threshold:
stop (i.e. clusters are defined)
The threshold is defined by the user, though in some cases the total number of iterations is specified instead. An advantage of $k$-means is that the solution will always converge, though the solution may only be a local minimum. Disadvantages include the assumption of convexity, i.e. it is difficult to capture complex geometry, and the curse of dimensionality.
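To make the pseudocode concrete, here is a minimal NumPy sketch of that loop (an illustration added here, not the scikit-learn implementation; it assumes X is a 2D NumPy array and ignores edge cases such as empty clusters):
import numpy as np
def simple_kmeans(X, k, max_iter=100, threshold=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    # initiate the search by picking k points as the cluster centers
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # assign each point in the data set to the closest cluster center
        dists = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # calculate new cluster centers from the mean position of each cluster
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.abs(new_centers - centers).max() < threshold:
            break  # clusters are defined
        centers = new_centers
    return labels, centers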
In scikit-learn the KMeans algorithm is implemented as part of the sklearn.cluster module.
Problem 2a Fit two different $k$-means models to the iris data, one with 2 clusters and one with 3 clusters. Plot the resulting clusters in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications?
|
from sklearn.cluster import KMeans
Kcluster = KMeans(n_clusters = 2)
Kcluster.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
Kcluster = KMeans(n_clusters = 3)
Kcluster.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
With 3 clusters the algorithm does a good job of separating the three classes. However, without the a priori knowledge that there are 3 different types of iris, the 2 cluster solution would appear to be superior.
Problem 2b How do the results change if the 3 cluster model is called with n_init = 1 and init = 'random' options? Use rs for the random state [this allows me to cheat in service of making a point].
Note - the defaults for these two parameters are 10 and k-means++, respectively. Read the docs to see why these choices are likely better than those in 2b.
|
rs = 14
Kcluster1 = KMeans(n_clusters = 3, n_init = 1, init = 'random', random_state = rs)
Kcluster1.fit(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster1.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Petal length has the largest range and standard deviation; thus, it will have the most "weight" when determining the $k$ clusters.
The truth is that the iris data set is fairly small and straightforward. Nevertheless, we will now examine the clustering results after re-scaling the features. [Some algorithms, cough Support Vector Machines cough, are notoriously sensitive to the feature scaling, so it is important to know about this step.] Imagine you are classifying stellar light curves: the data set will include contact binaries with periods of $\sim 0.1 \; \mathrm{d}$ and Mira variables with periods of $\gg 100 \; \mathrm{d}$. Without re-scaling, this feature that covers 4 orders of magnitude may dominate all others in the final model projections.
The two most common forms of re-scaling are to rescale to a Gaussian with mean $= 0$ and variance $= 1$, or to rescale the min and max of the feature to $[0, 1]$. The best normalization is problem dependent. The sklearn.preprocessing module makes it easy to re-scale the feature set. It is essential that the same scaling used for the training set be used for all other data run through the model. The testing, validation, and field observations cannot be re-scaled independently. This would result in meaningless final classifications/predictions.
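As a quick, hedged sketch of those two options (X_train and X_test are placeholder names, not variables defined in this notebook):
from sklearn.preprocessing import StandardScaler, MinMaxScaler
scaler = StandardScaler().fit(X_train)       # mean = 0, variance = 1
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)     # reuse the same scaler, never re-fit it
minmax = MinMaxScaler().fit(X_train)         # min and max rescaled to [0, 1]
X_train_minmax = minmax.transform(X_train)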
Problem 2d Re-scale the features to normal distributions, and perform $k$-means clustering on the iris data. How do the results compare to those obtained earlier?
Hint - you may find 'StandardScaler()' within the sklearn.preprocessing module useful.
|
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(iris.data)
Kcluster = KMeans(n_clusters = 3)
Kcluster.fit(scaler.transform(iris.data))
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.xlabel('sepal length')
plt.ylabel('sepal width')
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
I have used my own domain knowledge to specifically choose features that may be useful when clustering galaxies. If you know a bit about SDSS and can think of other features that may be useful, feel free to add them to the query.
One nice feature of astropy tables is that they can readily be turned into pandas DataFrames, which can in turn easily be turned into a sklearn X array with NumPy. For example:
X = np.array(SDSSgals.to_pandas())
And you are ready to go.
Challenge Problem Using the SDSS dataset above, identify interesting clusters within the data [this is intentionally very open ended, if you uncover anything especially exciting you'll have a chance to share it with the group]. Feel free to use the algorithms discussed above, or any other packages available via sklearn. Can you make sense of the clusters in the context of galaxy evolution?
Hint - don't fret if you know nothing about galaxy evolution (neither do I!). Just take a critical look at the clusters that are identified
|
Xgal = np.array(SDSSgals.to_pandas())
galScaler = StandardScaler().fit(Xgal)
dbs = DBSCAN(eps = .25, min_samples=55)
dbs.fit(galScaler.transform(Xgal))
cluster_members = dbs.labels_ != -1
outliers = dbs.labels_ == -1
plt.figure(figsize = (10,8))
plt.scatter(Xgal[:,0][outliers], Xgal[:,3][outliers],
c = "k",
s = 4, alpha = 0.1)
plt.scatter(Xgal[:,0][cluster_members], Xgal[:,3][cluster_members],
c = dbs.labels_[cluster_members],
alpha = 0.4, edgecolor = "None", cmap = "viridis")
plt.xlim(-1,5)
plt.ylim(-0,3.5)
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Note - I was unable to get the galaxies to cluster using DBSCAN.
Problem 3) Supervised Machine Learning
Supervised machine learning, on the other hand, aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The "supervised" comes from the fact that we are specifying the allowed outputs from the model. As there are labels available for the training set, it is possible to estimate the accuracy of the model (though there are generally important caveats about generalization, which we will explore in further detail later).
We will begin with a simple, but nevertheless, elegant algorithm for classification and regression: $k$-nearest-neighbors ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, distances between sources are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model). We will experiment with the steps necessary to optimize $k$, and other tuning parameters, in the detailed break-out problem.
In scikit-learn the KNeighborsClassifer algorithm is implemented as part of the sklearn.neighbors module.
Problem 3a
Fit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications? Is there any reason to be suspect of this procedure?
Hint - after you have constructed the model, it is possible to obtain model predictions using the .predict() method, which requires a feature array, including the same features and order as the training set, as input.
Hint that isn't essential, but is worth thinking about - should the features be re-scaled in any way?
|
from sklearn.neighbors import KNeighborsClassifier
KNNclf = KNeighborsClassifier(n_neighbors = 3).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
KNNclf = KNeighborsClassifier(n_neighbors = 10).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error: how accurate are the model predictions on new data?
Without going into too much detail, we will test this using cross validation (CV). In brief, CV provides predictions on the training set using a subset of the data to generate a model that predicts the class of the remaining sources. Using cross_val_predict, we can get a better sense of the model accuracy. Predictions from cross_val_predict are produced in the following manner:
from sklearn.cross_validation import cross_val_predict
CVpreds = cross_val_predict(sklearn.model(), X, y)
where sklearn.model() is the desired model, X is the feature array, and y is the label array.
Problem 3b
Produce cross-validation predictions for the iris dataset and a $k$NN with 5 neighbors. Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data. How does this accuracy compare to a $k$NN with 50 neighbors?
|
from sklearn.cross_validation import cross_val_predict
CVpreds = cross_val_predict(KNeighborsClassifier(n_neighbors=5), iris.data, iris.target)
plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
c = CVpreds, cmap = "viridis", s = 30, edgecolor = "None")
print("The accuracy of the kNN = 5 model is ~{:.4}".format( sum(CVpreds == iris.target)/len(CVpreds) ))
CVpreds50 = cross_val_predict(KNeighborsClassifier(n_neighbors=50), iris.data, iris.target)
print("The accuracy of the kNN = 50 model is ~{:.4}".format( sum(CVpreds50 == iris.target)/len(CVpreds50) ))
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur.
Problem 3c
Calculate the accuracy for each class in the iris set, as determined via CV for the $k$NN = 50 model.
|
for iris_type in range(3):
iris_acc = sum( (CVpreds50 == iris_type) & (iris.target == iris_type)) / sum(iris.target == iris_type)
print("The accuracy for class {:s} is ~{:.4f}".format(iris.target_names[iris_type], iris_acc))
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is that some virginica flowers lie far outside the main virginica locus, and within predominantly versicolor "neighborhoods". In addition to knowing the accuracy for the individual classes, it is also useful to know the class predictions for the misclassified sources, or in other words where there is "confusion" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal.
Like almost everything else we have encountered during this exercise, scikit-learn makes it easy to compute a confusion matrix. This can be accomplished with the following:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
Problem 3d
Calculate the confusion matrix for the iris training set and the $k$NN = 50 model.
|
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(iris.target, CVpreds50)
print(cm)
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can proceed with a visual representation of the confusion matrix. This is best done using imshow() within pyplot. You will also need to plot a colorbar, and labeling the axes will be helpful.
Problem 3f
Plot the confusion matrix. Be sure to label each of the axes.
Hint - you might find the sklearn confusion matrix tutorial helpful for making a nice plot.
|
# normalize each row of the confusion matrix by the number of sources in that
# true class (this corresponds to the normalization referred to above, whose
# original cell is not shown in this excerpt)
normalized_cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(normalized_cm, interpolation = 'nearest', cmap = 'bone_r')
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.ylabel('True')
plt.xlabel('Predicted')
plt.colorbar()
plt.tight_layout()
|
Sessions/Session04/Day0/TooBriefMLSolutions.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Load the trained model with its weights
|
from keras.layers import Input, BatchNormalization, LSTM, TimeDistributed, Dense
from keras.models import Model
input_features = Input(batch_shape=(1, 1, 4096,), name='features')
input_normalized = BatchNormalization(mode=1)(input_features)
lstm1 = LSTM(512, return_sequences=True, stateful=True, name='lstm1')(input_normalized)
lstm2 = LSTM(512, return_sequences=True, stateful=True, name='lstm2')(lstm1)
output = TimeDistributed(Dense(201, activation='softmax'), name='fc')(lstm2)
model = Model(input=input_features, output=output)
model.load_weights('../work/scripts/training/lstm_activity_classification/model_snapshot/lstm_activity_classification_02_e100.hdf5')
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
|
notebooks/16 Visualization of Results.ipynb
|
imatge-upc/activitynet-2016-cvprw
|
mit
|
Extract the predictions for each video and print the scoring
|
predictions = []
for v, features in examples:
nb_instances = features.shape[0]
X = features.reshape((nb_instances, 1, 4096))
model.reset_states()
prediction = model.predict(X, batch_size=1)
prediction = prediction.reshape(nb_instances, 201)
class_prediction = np.argmax(prediction, axis=1)
predictions.append((v, prediction, class_prediction))
|
notebooks/16 Visualization of Results.ipynb
|
imatge-upc/activitynet-2016-cvprw
|
mit
|
Print the global classification results
|
from IPython.display import YouTubeVideo, display
for v, prediction, class_prediction in predictions:
print('Video ID: {}\t\tGround truth: {}'.format(v.video_id, v.get_activity()))
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
vid = YouTubeVideo(v.video_id)
display(vid)
print('\n')
|
notebooks/16 Visualization of Results.ipynb
|
imatge-upc/activitynet-2016-cvprw
|
mit
|
Now show the temporal prediction for the activity happening in the video.
|
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
normalize = matplotlib.colors.Normalize(vmin=0, vmax=201)
for v, prediction, class_prediction in predictions:
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Ground Truth')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(class_prediction, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction')
plt.show()
print('\n')
normalize = matplotlib.colors.Normalize(vmin=0, vmax=1)
for v, prediction, class_prediction in predictions:
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
output_index = dataset.get_output_index(v.label)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth/output_index, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Ground Truth')
plt.show()
# print only the positions that predicted the global ground truth category
temp = np.zeros((nb_instances))
temp[class_prediction==output_index] = 1
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(temp, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction of the ground truth class')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(prediction[:,output_index], (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Probability for ground truth')
plt.show()
print('\n')
|
notebooks/16 Visualization of Results.ipynb
|
imatge-upc/activitynet-2016-cvprw
|
mit
|
In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
|
new_list = numbers_str.split(",")
numbers = [int(item) for item in new_list]
max(numbers)
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
|
# slice the last ten elements of the sorted list to get the ten largest values
sorted(numbers)[-10:]
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
|
sorted([item for item in numbers if item % 3 == 0])
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
|
from math import sqrt
# collect the square roots of the values below 100
roots = []
for item in numbers:
    if item < 100:
        roots.append(sqrt(item))
roots
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
|
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
|
[item['name'] for item in planets if item['diameter'] > 2]
#I got one more planet!
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
|
#sum([int(item['mass']) for item in planets])
sum([item['mass'] for item in planets])
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
|
import re
planet_with_giant= [item['name'] for item in planets if re.search(r'\bgiant\b', item['type'])]
planet_with_giant
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
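One possible answer, sketched using the key parameter of sorted() mentioned above (one of several ways to do it):
[planet['name'] for planet in sorted(planets, key=lambda p: p['moons'])]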
Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
|
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
|
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{4}\b \b[a-zA-Z]{4}\b', item)]
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
|
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{5}\b.?$',item)]
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
|
all_lines = " ".join(poem_lines)
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
|
re.findall(r'[I] (\b\w+\b)', all_lines)
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
|
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework
|
menu = []
for item in entrees:
entrees_dictionary= {}
match = re.search(r'(.*) .(\d*\d\.\d{2})\ ?( - v+)?$', item)
if match:
name = match.group(1)
price = float(match.group(2))  # convert to a float so the output matches the expected prices
#vegetarian= match.group(3)
if match.group(3):
entrees_dictionary['vegetarian']= True
else:
entrees_dictionary['vegetarian']= False
entrees_dictionary['name']= name
entrees_dictionary['price']= price
menu.append(entrees_dictionary)
menu
|
databases_hw/db04/Homework_4.ipynb
|
mercybenzaquen/foundations-homework
|
mit
|
Task 1. Compiling Ebola Data
The DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
Use pandas to import these data files into a single Dataframe.
Using this DataFrame, calculate for each country, the daily average per month of new cases and deaths.
Make sure you handle all the different expressions for new cases and deaths that are used in the reports.
|
'''
Functions needed to solve task 1
'''
#function to import all csv files in a folder into a single dataframe
def importdata(path,date):
allpathFiles = glob.glob(DATA_FOLDER+path+'/*.csv')
list_data = []
for file in allpathFiles:
excel = pd.read_csv(file,parse_dates=[date])
list_data.append(excel)
return pd.concat(list_data)
#function to add the month on a new column of a DataFrame
def add_month(df):
copy_df = df.copy()
months = [calendar.month_name[x.month] for x in copy_df.Date]
copy_df['Month'] = months
return copy_df
#function which selects only the rows for a given country, description and month
#return a dataframe
def chooseCountry_month(dataframe,country,descr,month):
df = dataframe.loc[(dataframe['Country']==country) & (dataframe['Description']==descr)]
#df = add_month(df)
df_month = df.loc[(df['Month']==month)]
return df_month
# Create a dataframe with the number of deaths, the new cases and the daily info for a country and a specified month
def getmonthresults(dataframe,country,month):
if country =='Liberia':
descr_kill ='Total death/s in confirmed cases'
descr_cases ='Total confirmed cases'
if country =='Guinea':
descr_kill ='Total deaths of confirmed'
descr_cases ='Total cases of confirmed'
if country == 'Sierra Leone':
descr_kill ='death_confirmed'
descr_cases ='cum_confirmed'
df_kill = chooseCountry_month(dataframe,country,descr_kill,month)
df_cases = chooseCountry_month(dataframe,country,descr_cases,month)
#calculate the number of new cases and of new deaths for the all month
res_kill = int(df_kill.iloc[len(df_kill)-1].Totals)-int(df_kill.iloc[0].Totals)
res_cases = int(df_cases.iloc[len(df_cases)-1].Totals)-int(df_cases.iloc[0].Totals)
#calculate the number of days counted which is last day of register - first day of register
nb_day = df_kill.iloc[len(df_kill)-1].Date.day-df_kill.iloc[0].Date.day
# Sometimes the values in the dataframe are wrong because the input files are not all consistent.
# We then get negative results. Therefore we replace them all by NaN!
if(res_cases < 0)&(res_kill <0):
monthreport = pd.DataFrame({'New cases':[np.nan],'Deaths':[np.nan],'daily average of New cases':[np.nan],'daily average of Deaths':[np.nan],'month':[month],'Country':[country]})
elif(res_cases >= 0) &( res_kill <0):
monthreport = pd.DataFrame({'New cases':[res_cases],'Deaths':[np.nan],'daily average of New cases':[res_cases/nb_day],'daily average of Deaths':[np.nan],'month':[month],'Country':[country]})
elif(res_cases < 0) & (res_kill >= 0):
monthreport = pd.DataFrame({'New cases':[np.nan],'Deaths':[res_kill],'daily average of New cases':[np.nan],'daily average of Deaths':[res_kill/nb_day],'month':[month],'Country':[country]})
elif(nb_day == 0):
monthreport = pd.DataFrame({'New cases':'notEnoughdatas','Deaths':'notEnoughdatas','daily average of New cases':'notEnoughdatas','daily average of Deaths':'notEnoughdatas','month':[month],'Country':[country]})
else:
monthreport = pd.DataFrame({'New cases':[res_cases],'Deaths':[res_kill],'daily average of New cases':[res_cases/nb_day],'daily average of Deaths':[res_kill/nb_day],'month':[month],'Country':[country]})
return monthreport
#check if the month and the country is in the dataframe df
def checkData(df,month,country):
check = df.loc[(df['Country']==country)& (df['Month']== month)]
return check
#return a dataframe with all the infos(daily new cases, daily death) for each month and each country
def getResults(data):
Countries = ['Guinea','Liberia','Sierra Leone']
Months = ['January','February','March','April','May','June','July','August','September','October','November','December']
results=[]
compteur =0
for country in Countries:
for month in Months:
if not(checkData(data,month,country).empty) : #check if the datas for the month and country exist
res = getmonthresults(data,country,month)
results.append(res)
return pd.concat(results)
# import data from guinea
path_guinea = 'Ebola/guinea_data/'
data_guinea = importdata(path_guinea,'Date')
# set the new order / change the columns / keep only the relevant datas / add the name of the country
data_guinea = data_guinea[['Date', 'Description','Totals']]
data_guinea['Country'] = ['Guinea']*len(data_guinea)
#search for New cases and death!!
#descr(newcases): "Total cases of confirmed" // descr(deaths): "Total deaths of confirmed"
data_guinea = data_guinea.loc[(data_guinea.Description=='Total cases of confirmed')|(data_guinea.Description=='Total deaths of confirmed')]
#import data from liberia
path_liberia = 'Ebola/liberia_data/'
data_liberia = importdata(path_liberia,'Date')
# set the new order / change the columns / keep only the relevant datas / add the name of the country
data_liberia = data_liberia[['Date', 'Variable','National']]
data_liberia['Country'] = ['Liberia']*len(data_liberia)
#search for New cases and death!!
#descr(newcases): "Total confirmed cases" // descr(deaths): "Total death/s in confirmed cases"
data_liberia = data_liberia.loc[(data_liberia.Variable=='Total confirmed cases')|(data_liberia.Variable=='Total death/s in confirmed cases')]
#change the name of the columns to be able merge the 3 data sets
data_liberia = data_liberia.rename(columns={'Date': 'Date', 'Variable': 'Description','National':'Totals'})
#import data from sierra leonne
path_sl = 'Ebola/sl_data/'
data_sl = importdata(path_sl,'date')
# set the new order / change the columns / keep only the relevant datas / add the name of the country
data_sl = data_sl[['date', 'variable','National']]
data_sl['Country'] = ['Sierra Leone']*len(data_sl)
#search for new cases and death
#descr(newcases): "cum_confirmed" // descr(deaths): "death_confirmed"
data_sl = data_sl.loc[(data_sl.variable=='cum_confirmed')|(data_sl.variable=='death_confirmed')]
#change the name of the columns to be able merge the 3 data sets
data_sl = data_sl.rename(columns={'date': 'Date', 'variable': 'Description','National':'Totals'})
#merge the 3 dataframe into ONE which we'll apply our analysis
dataFrame = [data_guinea,data_liberia,data_sl]
data = pd.concat(dataFrame)
# Replace the NaN by 0;
data = data.fillna(0)
#add a column with the month
data = add_month(data)
#get the results from the data set -> see the function
results = getResults(data)
#print the results
results
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Task 2. RNA Sequences
In the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each.
Use pandas to import the first 9 spreadsheets into a single DataFrame.
Then, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.
Make sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.
|
Sheet10_Meta = pd.read_excel(DATA_FOLDER +'microbiome/metadata.xls')
allFiles = glob.glob(DATA_FOLDER + 'microbiome' + "/MID*.xls")
allFiles
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Creating and filling the DataFrame
In order to iterate over the data folder only once, we will attach the metadata to each excel spreadsheet right after creating a DataFrame from it. This keeps the code shorter and clearer, and it also means each file is read and processed a single time, which is more efficient.
|
#Creating an empty DataFrame to store our data
Combined_data = pd.DataFrame()
for K, file in enumerate(allFiles):
    #Creating a DataFrame and filling it with the spreadsheet's data
    df = pd.read_excel(file, header=None)
    #Getting the metadata of the corresponding spreadsheet
    df['BARCODE'] = Sheet10_Meta.at[K, 'BARCODE']
    df['GROUP'] = Sheet10_Meta.at[K, 'GROUP']
    df['SAMPLE'] = Sheet10_Meta.at[K, 'SAMPLE']
    #Append the recently created DataFrame to our combined one
    Combined_data = Combined_data.append(df)
#Renaming the columns with meaningfull names
Combined_data.columns = ['Name', 'Value','BARCODE','GROUP','SAMPLE']
Combined_data.head()
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
3. Cleaning and reindexing
First we deal with the NaN values: we replace them with the tag "unknown". Then, in order to have a more meaningful, unique index, we reset the index to be the name of the RNA sequence.
|
#Replacing the NaN values with unknwown
Combined_data = Combined_data.fillna('unknown')
#Reseting the index
Combined_data = Combined_data.set_index('Name')
#Showing the result
Combined_data
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Task 3. Class War in Titanic
Use pandas to import the data file Data/titanic.xls. It contains data on all the passengers that travelled on the Titanic.
For each of the following questions state clearly your assumptions and discuss your findings:
Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.
Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals.
Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.
Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.
Question 3.1
Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.
Assumptions:
- "For each exercise, please provide both a written explanation of the steps you will apply to manipulate the data, and the corresponding code." We assume that "written explanation can come in the form of commented code as well as text"
- We assume that we must not describe the value range of attributes that contain string as we dont feel the length of strings or ASCI-values don't give any insight
|
'''
Here is a sample of the information in the titanic dataframe
'''
# Importing titanic.xls info with Pandas
titanic = pd.read_excel('Data/titanic.xls')
# printing only the first and last rows of information (pandas truncates the display)
print(titanic)
'''
To describe the INTENDED values and types of the data we will show you the titanic.html file that was provided to us
Notice:
- 'age' is of type double, so someone can be 17.5 years old, mostly used with babies that are 0.x years old
- 'cabin' is stored as integer, but its values contain letters as well as numbers
- By this model, embarked is stored as an integer, which has to be interpreted as the 3 different embarkation ports
- It says that 'boat' is stored as an integer even though it has spaces and letters; it should be stored as a string
PS: it might be that the information stored as integer is supposed to be categorical data,
...because they have a "small" number of valid options
'''
# Display html info in Jupyter Notebook
from IPython.core.display import display, HTML
htmlFile = 'Data/titanic.html'
display(HTML(htmlFile))
'''
The default types of the data after import:
Notice:
- the strings and characters are imported as objects
- 'survived' is imported as int instead of double (which is in our opinion better since it's only 0 and 1)
- 'sex' is imported as object not integer because it is a string
'''
titanic.dtypes
'''
Below you can see the value range of the different numerical values.
name, sex, ticket, cabin, embarked, boat and home.dest are not included because they can't be quantified numerically.
'''
titanic.describe()
'''
Additional information that is important to remember when manipulating the data
is whether and where there are NaN values in the dataset.
'''
# This displays the number of NaN there is in different attributes
print(pd.isnull(titanic).sum())
'''
Some of these values are genuinely missing, while others effectively mean 'No' or something similar.
Example:
The cabin column has 1014 NaNs. It could be that every passenger had a cabin and the data is simply
missing, or that most passengers did not have a cabin, or a mix of both. The displayed titanic.html
file gives some insight: it lists 0 NaNs for cabin, which indicates that 1014 people simply did not
have a cabin. Boat likewise has 823 NaNs while titanic.html lists 0; most of those who died were
never in a boat.
'''
'''
What attributes should be stored as categorical information?
Categorical data stores each distinct value once and represents rows with small integer codes
(8-bit when there are few categories, wider otherwise). The benefit is lower memory usage and a performance increase in many calculations.
'''
print('Number of unique values in... :')
for attr in titanic:
print(" {attr}: {u}".format(attr=attr, u=len(titanic[attr].unique())))
'''
We think it is sensible to categorize 'pclass', 'survived', 'sex', 'cabin', 'embarked' and 'boat'
because they have few distinct values and don't carry a strong numerical meaning like 'age'.
'survived' is a borderline case because it might be more practical to work with integers in some settings.
'''
# changing the attributes to categorical data
titanic.pclass = titanic.pclass.astype('category')
titanic.survived = titanic.survived.astype('category')
titanic.sex = titanic.sex.astype('category')
titanic.cabin = titanic.cabin.astype('category')
titanic.embarked = titanic.embarked.astype('category')
titanic.boat = titanic.boat.astype('category')
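# Optional sanity check (not part of the original task, a minimal sketch): comparing the memory
# footprint of one column as a categorical versus plain object strings illustrates the saving
# described above. Exact byte counts will vary with the pandas version.
print(titanic.sex.memory_usage(deep=True), 'bytes as category')
print(titanic.sex.astype(object).memory_usage(deep=True), 'bytes as plain object strings')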
#Illustrate the change by printing out the new types
titanic.dtypes
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Question 3.2
"Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals. "
assumptions:
|
#Plotting the distribution of travel classes (1st, 2nd and 3rd class) among the passengers
pc = titanic.pclass.value_counts().sort_index().plot(kind='bar')
pc.set_title('Travel classes')
pc.set_ylabel('Number of passengers')
pc.set_xlabel('Travel class')
pc.set_xticklabels(('1st class', '2nd class', '3rd class'))
plt.show(pc)
#Plotting the number of people that embarked from the different ports (C=Cherbourg, Q=Queenstown, S=Southampton)
em = titanic.embarked.value_counts().sort_index().plot(kind='bar')
em.set_title('Ports of embarkation')
em.set_ylabel('Number of passengers')
em.set_xlabel('Port of embarkation')
em.set_xticklabels(('Cherbourg', 'Queenstown', 'Southampton'))
plt.show(em)
#Plotting the gender distribution of the passengers
# sort_index() keeps the bars in alphabetical order (female, male) so they match the tick labels below
sex = titanic.sex.value_counts().sort_index().plot(kind='bar')
sex.set_title('Gender of the passengers')
sex.set_ylabel('Number of Passengers')
sex.set_xlabel('Gender')
sex.set_xticklabels(('Female', 'Male'))
plt.show(sex)
#Plotting the age groups of the passengers in decade intervals
bins = [0,10,20,30,40,50,60,70,80]
age_grouped = pd.DataFrame(pd.cut(titanic.age, bins))
ag = age_grouped.age.value_counts().sort_index().plot.bar()
ag.set_title('Age of Passengers ')
ag.set_ylabel('Number of passengers')
ag.set_xlabel('Age groups')
plt.show(ag)
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Question 3.3
Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
assumptions:
- Because we are tasked with categorizing passengers by the floor of their cabin, entries like "F E57" and "F G63" are problematic. There were only 7 such instances with conflicting cabin floors, so we treat them as ambiguous and drop them. We also assume there was a floor "T", even though there was only one instance, so it might have been a typo.
- We assume that passengers without a cabin floor should not be included.
|
'''
Parsing the cabin values into floors A, B, C, D, E, F, G, T and displaying the result in a pie chart
'''
#Dropping NaN (People without cabin)
cabin_floors = titanic.cabin.dropna()
# removes digits and spaces ("C23 C25 C27" -> "CCC")
cabin_floors = cabin_floors.str.replace(r'[\d ]+', '', regex=True)
# removes duplicate letters and leaves the unique ones ("CCC" -> "C")
cabin_floors = cabin_floors.str.replace(r'(.)(?=.*\1)', '', regex=True)
# marks ambiguous multi-floor entries ("FE" -> "NaN") ("FG" -> "NaN")
cabin_floors = cabin_floors.str.replace(r'([A-Z]{1})\w+', 'NaN', regex=True)
# Recategorizing (since we altered the entries, the old categories no longer apply)
cabin_floors = cabin_floors.astype('category')
# Removing the 'NaN' placeholder (in this case the ambiguous entries)
cabin_floors = cabin_floors.cat.remove_categories('NaN')
cabin_floors = cabin_floors.dropna()
# Preparing data for plt.pie
numberOfCabinPlaces = cabin_floors.count()
grouped = cabin_floors.groupby(cabin_floors).count()
sizes = np.array(grouped)
labels = np.array(grouped.index)
# Plotting the pie chart
plt.pie(sizes, labels=labels, autopct='%1.1f%%', pctdistance=0.75, labeldistance=1.1)
print("There are {cabin} passengers that have cabins and {nocabin} passengers without a cabin"
.format(cabin=numberOfCabinPlaces, nocabin=(len(titanic) - numberOfCabinPlaces)))
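# Tiny illustration of the regex chain above on hypothetical cabin strings (these example values
# are made up for demonstration, not taken from the dataset).
example = pd.Series(['C23 C25 C27', 'F G63', 'B5'])
example = example.str.replace(r'[\d ]+', '', regex=True)            # 'CCC', 'FG', 'B'
example = example.str.replace(r'(.)(?=.*\1)', '', regex=True)       # 'C',   'FG', 'B'
example = example.str.replace(r'([A-Z]{1})\w+', 'NaN', regex=True)  # 'C',  'NaN', 'B'
print(example.tolist())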
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Question 3.4
For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
assumptions:
|
# function that returns the number of passengers that survived and died in a given travel class
def survivedPerClass(pclass):
    in_class = titanic[titanic.pclass == pclass]
    survived = int((in_class.survived == 1).sum())
    died = int((in_class.survived == 0).sum())
    return [survived, died]
# Arranging the three pie charts horizontally in a grid
the_grid = plt.GridSpec(1, 3)
labels = ["Survived", "Died"]
# Each iteration plots a pie chart for one travel class
for p in titanic.pclass.unique():
    sizes = survivedPerClass(p)
    plt.subplot(the_grid[0, p-1], aspect=1)
    plt.title('Class {}'.format(p))
    plt.pie(sizes, labels=labels, autopct='%1.1f%%')
plt.show()
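# A hedged cross-check (not in the original notebook): pd.crosstab with normalize='index'
# produces the same survived/died proportions per travel class in a single table.
print(pd.crosstab(titanic.pclass, titanic.survived, normalize='index'))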
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Question 3.5
"Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram."
assumptions:
1. By "proportions" We assume it is a likelyhood-percentage of surviving
|
# group by the selected columns and count each (class, sex, survived) combination
survivalrate = titanic.groupby(['pclass', 'sex', 'survived']).size()
# calculate the percentage within each (class, sex) group
survivalpercentage = survivalrate / survivalrate.groupby(level=['pclass', 'sex']).transform('sum') * 100
# keep only the survived == 1 entries and plot them in a histogram
histogram = survivalpercentage.xs(1, level='survived').plot(kind='bar')
histogram.set_title('Proportion of the passengers that survived by travel class and sex')
histogram.set_ylabel('Percent likelihood of surviving the Titanic')
histogram.set_xlabel('class/gender group')
plt.show(histogram)
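# Hedged cross-check (not part of the original notebook): since 'survived' is 0/1, the mean per
# (pclass, sex) group equals the survival proportion; multiplying by 100 gives a percentage.
print(titanic.survived.astype(int).groupby([titanic.pclass, titanic.sex]).mean() * 100)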
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
Question 3.6
"Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index."
assumptions:
1. By "proportions" we assume it is a likelyhood-percentage of surviving
2. To create 2 equally populated age categories; we will find the median and round up from the median to nearest whole year difference before splitting.
|
#drop NaN rows
age_without_nan = titanic.age.dropna()
#categorizing
age_categories = pd.qcut(age_without_nan, 2, labels=["Younger", "Older"])
#Numbers to explain difference
median = int(age_without_nan.median())
amount = int((age_without_nan == median).sum())
print("The Median age is {median} years old".format(median = median))
print("and there are {amount} passengers that are {median} year old \n".format(amount=amount, median=median))
print(age_categories.groupby(age_categories).count())
print("\nAs you can see the pd.qcut does not cut into entirely equal sized bins, because the age is of a discreet nature")
# imported for the sake of suppressing some warnings
import warnings
warnings.filterwarnings('ignore')
# extract the relevant attributes (using .copy() so later edits do not touch the original frame)
csas = titanic[['pclass', 'sex', 'age', 'survived']].copy()
csas.dropna(subset=['age'], inplace=True)
# Defining the categories
csas['age_group'] = csas.age > csas.age.median()
csas['age_group'] = csas['age_group'].map(lambda age_category: 'older' if age_category else "younger")
# Converting to int to make it able to aggregate and give percentage
csas.survived = csas.survived.astype(int)
g_categories = csas.groupby(['pclass', 'age_group', 'sex'])
result = pd.DataFrame(g_categories.survived.mean()).rename(columns={'survived': 'survived proportion'})
# reset the current index and specify the unique index
result.reset_index(inplace=True)
unique_index = result.pclass.astype(str) + ': ' + result.age_group.astype(str) + ' ' + result.sex.astype(str)
# Finalize the unique index dataframe
result_w_unique = result[['survived proportion']].copy()
result_w_unique.set_index(unique_index, inplace=True)
print(result_w_unique)
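# Hedged alternative view (not in the original notebook): pivot_table with its default 'mean'
# aggregation reproduces the same survival proportions from the csas frame built above.
print(csas.pivot_table(values='survived', index=['pclass', 'age_group', 'sex']))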
|
Backup (not final delivery)/Homework 1.ipynb
|
hbjornoy/DataAnalysis
|
apache-2.0
|
It works on modules to list the available methods and variables. Take the math module, for example:
|
import math
# math.is # Try completion on this
help(math.isinf)
# try math.isinf() and hit shift-tab while the cursor is between the parentheses
# you should see the same help pop up.
# math.isinf()
|
Notebooks/Introduction/2 - Introduction to ipython.ipynb
|
lmoresi/UoM-VIEPS-Intro-to-Python
|
mit
|
It works on functions that take special arguments and tells you what you need to supply.
Try this, and try hitting tab inside the parentheses when you use this function yourself:
|
import string
string.capwords("the quality of mercy is not strained")
# string.capwords()
|
Notebooks/Introduction/2 - Introduction to ipython.ipynb
|
lmoresi/UoM-VIEPS-Intro-to-Python
|
mit
|
It also provides special operations that allow you to drill down into the underlying shell / filesystem (but these are not standard python code any more).
|
# execute simple unix shell commands
!ls
!echo ""
!pwd
|
Notebooks/Introduction/2 - Introduction to ipython.ipynb
|
lmoresi/UoM-VIEPS-Intro-to-Python
|
mit
|
Another way to do this is to use the cell magic functionality to change how the whole cell is interpreted (here everything in the cell is passed to the unix shell).
|
%%sh
ls -l
echo ""
pwd
|
Notebooks/Introduction/2 - Introduction to ipython.ipynb
|
lmoresi/UoM-VIEPS-Intro-to-Python
|
mit
|
I don't advise using this too often as the code becomes more difficult to convert to python.
A % introduces a one-line magic function that can appear on any line of the cell.
A %% introduces a cell-wide magic function and must be on the first line of its cell.
|
%magic # to see EVERYTHING in the magic system !
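# A minimal illustration (assuming this runs in IPython/Jupyter): a single-% line magic applies
# to one line, e.g. timing a single expression:
%timeit sum(range(1000))
# A cell-wide %% magic such as %%timeit would instead have to be on the first line of its own cell.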
|
Notebooks/Introduction/2 - Introduction to ipython.ipynb
|
lmoresi/UoM-VIEPS-Intro-to-Python
|
mit
|