markdown (stringlengths 0-37k) | code (stringlengths 1-33.3k) | path (stringlengths 8-215) | repo_name (stringlengths 6-77) | license (stringclasses, 15 values)
---|---|---|---|---|
If you had wanted to split on the linebreaks only (possibly followed by e.g. spaces), you could have used the following pattern:
|
s = """This is a text on three lines
with multiple instances of
double spaces."""
whitespace = re.compile(r"\s*\n\s*")
print(whitespace.split(s))
|
Chapter 6 - Regular Expressions.ipynb
|
mikekestemont/ghent1516
|
mit
|
If we want to correct the double spaces, we could now do:
|
ds = re.compile(r" +")
for line in whitespace.split(s):
    print(ds.sub(" ", line))
|
Chapter 6 - Regular Expressions.ipynb
|
mikekestemont/ghent1516
|
mit
|
One final feature we should mention is the [^...] syntax: this will match any character that is NOT between the brackets. Remember the vowel_pattern above? Using the caret symbol we can quickly 'invert' this pattern, so that it matches everything except vowels (note that this includes spaces and punctuation, not just consonants):
|
s = "these are vowels and consonants"
consonants = re.compile(r"[^aeuoi]")
print(consonants.sub("X", s))
|
Chapter 6 - Regular Expressions.ipynb
|
mikekestemont/ghent1516
|
mit
|
Regular expressions are really useful, but they can get tricky and difficult to read because of the many different options that exist. There is a whole range of special symbols which you can use to match nearly everything in a text, from word boundaries (\b) to digits (\d) etc. Don't learn these by heart but look up a good reference list online (like http://www.tutorialspoint.com/python/python_reg_expressions.htm). As usual, Stack Overflow will prove really useful when you search for information online.
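For instance, a quick sketch of two of those special symbols, \d (digits) and \b (word boundaries), could look like this:

```python
import re

s = "The quick brown fox jumps over 2 lazy dogs in 2015."
print(re.findall(r"\d+", s))        # runs of digits: ['2', '2015']
print(re.findall(r"\b\w{5}\b", s))  # five-letter words: ['quick', 'brown', 'jumps']
```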
Final Exercises Chapter 6
Ex. 1 - Write Python code that loads data items from a file that has the format below. Use regular expressions to parse the lines and the data fields: take care of the multiple whitespace characters that might occur. Fill a dictionary using the two data fields. Use regular expressions as much as possible!
Example data:
color = red
number =7
name= joe
age = 9
...
Ex. 2 - In the scientific community you will often find data online that has been stored in '.csv' format. Each data item in these files is represented on a separate line. Write a function that takes a csv-filename as its only input parameter and returns a list of lists, containing the data fields for each item.
Example data:
Mike, 28, scientist, Belgium
Lars, 49, research director, Luxemburg
Matt, 52, rockstar, US
Example output:
[["Mike","28","scientist","Belgium"],["Lars","49","research director","Luxemburg"], ...]
Ex. 3 - Expand the previous exercise (don't throw away the original version!). Assume that the first line of your csv-file is not a real data entry, but a so-called header line that contains the names of the data fields stored in your csv-file. Now, have your function return a list of dictionaries: one per data item, containing the value of each data field for that item.
Example data:
name, age, profession, country
Mike, 28, scientist, Belgium
Lars, 49, research director, Luxemburg
Matt, 52, rockstar, US
...
Example output:
[{"name": "Mike", "age": "28", "profession":"scientist", "country":"Belgium"}, {"name": "Lars", "age": "49", "profession":"research director", "country":"Luxemburg"]}, ...]
Ex. 4 - Write a function that reads a random text file, splits it on whitespace and returns a set containing all words that contain at least two characters. Use regular expressions where possible!
Ex. 5 - Come up with a regular expression that matches time-of-day strings (such as 9:14 am or 11:20 pm).
Ex. 6 - Write a function that can validate email addresses: a valid email address contains at least one dot, one (and only one!) at-symbol. It should not contain other punctuation symbols and it should end in a common extension like ".com", ".net" or ".org". Again, use regular expressions where possible!
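As a starting point for Ex. 5, one possible (deliberately simple) pattern is sketched below; treat it as a suggestion rather than the definitive answer:

```python
import re

# Hours 1-12, minutes 00-59, optional space before am/pm (case-insensitive)
time_pattern = re.compile(r"\b(1[0-2]|[1-9]):[0-5][0-9] ?(am|pm)\b", re.IGNORECASE)
print(bool(time_pattern.search("Meet me at 9:14 am")))      # True
print(bool(time_pattern.search("It ends at 11:20 pm")))     # True
print(bool(time_pattern.search("13:70 zz is not a time")))  # False
```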
You've reached the end of Chapter 6! Ignore the code below, it's only here to make the page pretty:
|
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
|
Chapter 6 - Regular Expressions.ipynb
|
mikekestemont/ghent1516
|
mit
|
Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
|
class NeuralNetwork(object):
@staticmethod
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
self.activation_function = NeuralNetwork.sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
### Forward pass ###
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
### Backward pass ###
output_errors = targets - final_outputs
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
hidden_grad = hidden_outputs * (1 - hidden_outputs)
self.weights_hidden_to_output += self.lr * (output_errors * hidden_outputs).T
self.weights_input_to_hidden += self.lr * np.dot((hidden_errors * hidden_grad), inputs.T)
def run(self, inputs_list):
inputs = np.array(inputs_list, ndmin=2).T
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
|
py3/project-1/dlnd-your-first-neural-network.ipynb
|
jjonte/udacity-deeplearning-nd
|
unlicense
|
Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
More hidden nodes give the network more capacity to fit the training data, but that does not automatically translate into better predictions. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
|
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.1
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
|
py3/project-1/dlnd-your-first-neural-network.ipynb
|
jjonte/udacity-deeplearning-nd
|
unlicense
|
Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
It does pretty well up until Dec 22, then the accuracy drops dramatically - it thinks the demand would be higher than it really is for the last 10 days of the year. I would guess the impact of the Christmas holiday and the time people take off work would cause this.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
|
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
|
py3/project-1/dlnd-your-first-neural-network.ipynb
|
jjonte/udacity-deeplearning-nd
|
unlicense
|
Exercise 2: Information Extraction via Syntactic Analysis
This exercise is about a practical way of further processing the results of a syntactic analysis. From the syntactic dependencies of a text, a semantic representation of the information contained in it is to be derived (with the help of a few normalization steps).
The DependencyParser of the Stanford CoreNLP suite is to be used for the syntactic analysis. The semantic representation of a sentence is a two-place logical predicate whose arguments are filled by the subject and the object. (If either of the two elements is missing, None should be written instead.)
The following example illustrates the desired result:
Input:
I shot an elephant in my pajamas.
The elephant was seen by a giraffe in the desert.
The bird I need is a raven.
The man who saw the raven laughed out loud.
Output:
shot(I, elephant)
seen(giraffe, elephant)
need(I, bird)
raven(bird, None)
saw(man, raven)
laughed(man, None)
Note that PATH_TO_CORE in the following code cell must be adjusted to match your system!
|
from nltk.parse.stanford import StanfordDependencyParser
PATH_TO_CORE = "/pfad/zu/stanford-corenlp-full-2017-06-09"
jar = PATH_TO_CORE + '/' + "stanford-corenlp-3.8.0.jar"
model = PATH_TO_CORE + '/' + "stanford-corenlp-3.8.0-models.jar"
dep_parser = StanfordDependencyParser(
jar, model,
model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz"
)
from collections import defaultdict
def generate_predicates_for_sentence(sentence):
verbs = set()
sbj = {}
obj = {}
sbj_candidates = defaultdict(list)
case = {}
relcl_triples = []
for result in dep_parser.raw_parse(sentence):
for triple in result.triples():
# print(*triple)
if triple[1] == "nsubj":
# whenever we find a subject, its head can be called verb
# if something is added twice it does not matter --> sets
# so it is better to add too often than not enough !
# remember that nouns can be "verbs" in that sense together with copula
verbs.add(triple[0])
sbj[triple[0]] = triple[2]
if triple[1] == "dobj" or triple[1] == "nsubjpass":
# everything that has a direct object should be called a verb as well
verbs.add(triple[0])
obj[triple[0]] = triple[2]
if triple[0][1].startswith('V'):
# everything with a 'verb' as part of speech can be called a verb
verbs.add(triple[0])
if triple[1] == "nmod":
sbj_candidates[triple[0]].append(triple[2])
if triple[1] == "case":
case[triple[0]] = triple[2][0]
if triple[1] == "acl:relcl":
relcl_triples.append(triple)
for triple in relcl_triples:
if triple[2] not in sbj or sbj[triple[2]][1] in ["WP", "WDT"]:
sbj[triple[2]] = triple[0]
else:
obj[triple[2]] = triple[0]
for v in verbs:
if v not in sbj:
if v in sbj_candidates:
for cand in sbj_candidates[v]:
if case[cand] == "by":
sbj[v] = cand
predicates = []
for v in verbs:
if v in sbj:
subject = sbj[v]
else:
subject = ("None",)
if v in obj:
object = obj[v]
else:
object = ("None",)
predicates.append(
v[0] + "(" + subject[0] + ", " + object[0] + ")"
)
return predicates
for pred in generate_predicates_for_sentence(
"The man who saw the raven laughed out loud."
):
print(pred)
def generate_predicates_for_text(text):
predicates = []
for sent in nltk.tokenize.sent_tokenize(text):
predicates.extend(generate_predicates_for_sentence(sent))
return predicates
text = """
I shot an elephant in my pajamas.
The elephant was seen by a giraffe.
The bird I need is a raven.
The man who saw the raven laughed out loud.
"""
for pred in generate_predicates_for_text(text):
print(pred)
|
11-notebook-after-class.ipynb
|
mnschmit/LMU-Syntax-nat-rlicher-Sprachen
|
apache-2.0
|
Homework
Exercise 3: Parent Annotation
Parent annotation can substantially improve the performance of a CFG. Write a function that applies this optimization to a given syntax tree. Trees transformed in this way can then in turn be used for grammar induction.
parentHistory is the number of ancestors to be taken into account in addition to the direct parent node. (It may also be ignored when solving this exercise.)
parentChar is a separator character that is inserted in the new node labels between the original node label and the list of ancestors.
|
def parent_annotation(tree, parentHistory=0, parentChar="^"):
pass
test_tree = nltk.Tree(
"S",
[
nltk.Tree("NP", [
nltk.Tree("DET", []),
nltk.Tree("N", [])
]),
nltk.Tree("VP", [
nltk.Tree("V", []),
nltk.Tree("NP", [
nltk.Tree("DET", []),
nltk.Tree("N", [])
])
])
]
)
parent_annotation(
test_tree
)
|
11-notebook-after-class.ipynb
|
mnschmit/LMU-Syntax-nat-rlicher-Sprachen
|
apache-2.0
|
Exercise 4: More Semantics for IE
In addition to the constructions handled in Exercise 2, negated sentences and complex sentences with conjunctions should now also be processed in a sensible way.
Input:
I see an elephant.
You didn't see the elephant.
Peter saw the elephant and drank wine.
Desired output:
see(I, elephant)
not_see(You, elephant)
saw(Peter, elephant)
drank(Peter, wine)
It is best to copy your current version from above and then add your extensions here.
|
def generate_predicates_for_sentence(sentence):
pass
def generate_predicates_for_text(text):
pass
text = """
I see an elephant.
You didn't see the elephant.
Peter saw the elephant and drank wine.
"""
|
11-notebook-after-class.ipynb
|
mnschmit/LMU-Syntax-nat-rlicher-Sprachen
|
apache-2.0
|
2. Print all the numbers from 0 to 4:
|
for x in range(5):
print(x)
for x in range(3, 6):
print(x)
|
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
|
dostrebel/working_place_ds_17
|
mit
|
4. Build a for loop that prints all the even numbers lower than 237.
|
numbers = [
951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,
615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,
386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,
399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,
815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,
958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,
743, 527
]
# Your code goes here:
new_lst = []
for elem in numbers:
if elem < 238 and elem % 2 == 0:
new_lst.append(elem)
else:
continue
print(new_lst)
# Solution:
|
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
|
dostrebel/working_place_ds_17
|
mit
|
6. Add up only the numbers that are even
|
evennumber = []
for elem in numbers:
if elem % 2 == 0:
evennumber.append(elem)
sum(evennumber)
|
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
|
dostrebel/working_place_ds_17
|
mit
|
7. Use a for loop to print Hello World 5 times in a row
|
Satz = ['Hello World', 'Hello World','Hello World','Hello World','Hello World']
for elem in Satz:
print(elem)
# Solution
|
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
|
dostrebel/working_place_ds_17
|
mit
|
8. Develop a program that finds all numbers between 2000 and 3200 that are divisible by 7 but not by 5. The result should be printed on a single line. Tip: have a look at Python's comparison operators.
|
l=[]
for i in range(2000, 3201):
if (i % 7==0) and (i % 5!=0):
l.append(str(i))
print(','.join(l))
|
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
|
dostrebel/working_place_ds_17
|
mit
|
9. Write a for loop that converts the numbers in the following list from int to str.
|
lst = range(45,99)
newlst = []
for i in lst:
i = str(i)
newlst.append(i)
print(newlst)
|
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
|
dostrebel/working_place_ds_17
|
mit
|
10. Now write a program that replaces every digit 4 with the letter A and every digit 5 with the letter B.
|
newnewlist = []
for elem in newlst:
if '4' in elem:
elem = elem.replace('4', 'A')
if '5' in elem:
elem = elem.replace('5', 'B')
newnewlist.append(elem)
newnewlist
|
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
|
dostrebel/working_place_ds_17
|
mit
|
Note: The R code and the results in this notebook have been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and lme4 1.1.
ipython
%load_ext rpy2.ipython
ipython
%R library(lme4)
array(['lme4', 'Matrix', 'tools', 'stats', 'graphics', 'grDevices',
'utils', 'datasets', 'methods', 'base'], dtype='<U9')
Comparing R lmer to statsmodels MixedLM
The statsmodels implementation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA 1988). This is also the approach followed in the R package LME4. Other packages such as Stata, SAS, etc. should also be consistent with this approach, as the basic techniques in this area are mostly mature.
Here we show how linear mixed models can be fit using the MixedLM procedure in statsmodels. Results from R (LME4) are included for comparison.
Here are our import statements:
Growth curves of pigs
These are longitudinal data from a factorial experiment. The outcome variable is the weight of each pig, and the only predictor variable we will use here is "time". First we fit a model that expresses the mean weight as a linear function of time, with a random intercept for each pig. The model is specified using formulas. Since the random effects structure is not specified, the default random effects structure (a random intercept for each group) is automatically used.
|
data = sm.datasets.get_rdataset('dietox', 'geepack').data
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"])
mdf = md.fit(method=["lbfgs"])
print(mdf.summary())
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
Here is the same model fit in R using LMER:
ipython
%%R
data(dietox, package='geepack')
ipython
%R print(summary(lmer('Weight ~ Time + (1|Pig)', data=dietox)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 | Pig)
Data: dietox
REML criterion at convergence: 4809.6
Scaled residuals:
Min 1Q Median 3Q Max
-4.7118 -0.5696 -0.0943 0.4877 4.7732
Random effects:
Groups Name Variance Std.Dev.
Pig (Intercept) 40.39 6.356
Residual 11.37 3.371
Number of obs: 861, groups: Pig, 72
Fixed effects:
Estimate Std. Error t value
(Intercept) 15.72352 0.78805 19.95
Time 6.94251 0.03339 207.94
Correlation of Fixed Effects:
(Intr)
Time -0.275
```
Note that in the statsmodels summary of results, the fixed effects and random effects parameter estimates are shown in a single table. The random effect for animal is labeled "Intercept RE" in the statsmodels output above. In the LME4 output, this effect is the pig intercept under the random effects section.
There has been a lot of debate about whether the standard errors for random effect variance and covariance parameters are useful. In LME4, these standard errors are not displayed, because the authors of the package believe they are not very informative. While there is good reason to question their utility, we elected to include the standard errors in the summary table, but do not show the corresponding Wald confidence intervals.
Next we fit a model with two random effects for each animal: a random intercept, and a random slope (with respect to time). This means that each pig may have a different baseline weight, as well as growing at a different rate. The formula specifies that "Time" is a covariate with a random coefficient. By default, formulas always include an intercept (which could be suppressed here using "0 + Time" as the formula).
|
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"], re_formula="~Time")
mdf = md.fit(method=["lbfgs"])
print(mdf.summary())
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
Here is the same model fit using LMER in R:
ipython
%R print(summary(lmer("Weight ~ Time + (1 + Time | Pig)", data=dietox)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 + Time | Pig)
Data: dietox
REML criterion at convergence: 4434.1
Scaled residuals:
Min 1Q Median 3Q Max
-6.4286 -0.5529 -0.0416 0.4841 3.5624
Random effects:
Groups Name Variance Std.Dev. Corr
Pig (Intercept) 19.493 4.415
Time 0.416 0.645 0.10
Residual 6.038 2.457
Number of obs: 861, groups: Pig, 72
Fixed effects:
Estimate Std. Error t value
(Intercept) 15.73865 0.55012 28.61
Time 6.93901 0.07982 86.93
Correlation of Fixed Effects:
(Intr)
Time 0.006
```
The random intercept and random slope are only weakly correlated $(0.294 / \sqrt{19.493 * 0.416} \approx 0.1)$. So next we fit a model in which the two random effects are constrained to be uncorrelated:
|
.294 / (19.493 * .416)**.5
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"],
re_formula="~Time")
free = sm.regression.mixed_linear_model.MixedLMParams.from_components(np.ones(2),
np.eye(2))
mdf = md.fit(free=free, method=["lbfgs"])
print(mdf.summary())
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
The likelihood drops by 0.3 when we fix the correlation parameter to 0. Comparing 2 x 0.3 = 0.6 to the chi^2 1 df reference distribution suggests that the data are very consistent with a model in which this parameter is equal to 0.
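As a quick sanity check (a sketch using scipy, not part of the original notebook), the corresponding p-value can be computed directly:

```python
from scipy.stats import chi2

# Likelihood-ratio statistic: twice the drop in the log-likelihood
lr_stat = 2 * 0.3
p_value = chi2.sf(lr_stat, df=1)  # roughly 0.44, far from significant
print(p_value)
```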
Here is the same model fit using LMER in R (note that here R is reporting the REML criterion instead of the likelihood; the REML criterion is minus twice the restricted log likelihood):
ipython
%R print(summary(lmer("Weight ~ Time + (1 | Pig) + (0 + Time | Pig)", data=dietox)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 | Pig) + (0 + Time | Pig)
Data: dietox
REML criterion at convergence: 4434.7
Scaled residuals:
Min 1Q Median 3Q Max
-6.4281 -0.5527 -0.0405 0.4840 3.5661
Random effects:
Groups Name Variance Std.Dev.
Pig (Intercept) 19.8404 4.4543
Pig.1 Time 0.4234 0.6507
Residual 6.0282 2.4552
Number of obs: 861, groups: Pig, 72
Fixed effects:
Estimate Std. Error t value
(Intercept) 15.73875 0.55444 28.39
Time 6.93899 0.08045 86.25
Correlation of Fixed Effects:
(Intr)
Time -0.086
```
Sitka growth data
This is one of the example data sets provided in the LMER R library. The outcome variable is the size of the tree, and the covariate used here is a time value. The data are grouped by tree.
|
data = sm.datasets.get_rdataset("Sitka", "MASS").data
endog = data["size"]
data["Intercept"] = 1
exog = data[["Intercept", "Time"]]
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
Here is the statsmodels LME fit for a basic model with a random intercept. We are passing the endog and exog data directly to the LME init function as arrays. Also note that exog_re is specified explicitly in argument 4 as a random intercept (although this would also be the default if it were not specified).
|
md = sm.MixedLM(endog, exog, groups=data["tree"], exog_re=exog["Intercept"])
mdf = md.fit()
print(mdf.summary())
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
Here is the same model fit in R using LMER:
ipython
%R
data(Sitka, package="MASS")
print(summary(lmer("size ~ Time + (1 | tree)", data=Sitka)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: size ~ Time + (1 | tree)
Data: Sitka
REML criterion at convergence: 164.8
Scaled residuals:
Min 1Q Median 3Q Max
-2.9979 -0.5169 0.1576 0.5392 4.4012
Random effects:
Groups Name Variance Std.Dev.
tree (Intercept) 0.37451 0.612
Residual 0.03921 0.198
Number of obs: 395, groups: tree, 79
Fixed effects:
Estimate Std. Error t value
(Intercept) 2.2732443 0.0878955 25.86
Time 0.0126855 0.0002654 47.80
Correlation of Fixed Effects:
(Intr)
Time -0.611
```
We can now try to add a random slope. We start with R this time. From the code and output below we see that the REML estimate of the variance of the random slope is nearly zero.
ipython
%R print(summary(lmer("size ~ Time + (1 + Time | tree)", data=Sitka)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: size ~ Time + (1 + Time | tree)
Data: Sitka
REML criterion at convergence: 153.4
Scaled residuals:
Min 1Q Median 3Q Max
-2.7609 -0.5173 0.1188 0.5270 3.5466
Random effects:
Groups Name Variance Std.Dev. Corr
tree (Intercept) 2.217e-01 0.470842
Time 3.288e-06 0.001813 -0.17
Residual 3.634e-02 0.190642
Number of obs: 395, groups: tree, 79
Fixed effects:
Estimate Std. Error t value
(Intercept) 2.273244 0.074655 30.45
Time 0.012686 0.000327 38.80
Correlation of Fixed Effects:
(Intr)
Time -0.615
convergence code: 0
Model failed to converge with max|grad| = 0.793203 (tol = 0.002, component 1)
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?
```
If we run this in statsmodels LME with defaults, we see that the variance estimate is indeed very small, which leads to a warning about the solution being on the boundary of the parameter space. The regression slopes agree very well with R, but the likelihood value is much higher than that returned by R.
|
exog_re = exog.copy()
md = sm.MixedLM(endog, exog, data["tree"], exog_re)
mdf = md.fit()
print(mdf.summary())
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
We can further explore the random effects structure by constructing plots of the profile likelihoods. We start with the random intercept, generating a plot of the profile likelihood from 0.1 units below to 0.1 units above the MLE. Since each optimization inside the profile likelihood generates a warning (due to the random slope variance being close to zero), we turn off the warnings here.
|
import warnings
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
likev = mdf.profile_re(0, 're', dist_low=0.1, dist_high=0.1)
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
Here is a plot of the profile likelihood function. We multiply the log-likelihood difference by 2 to obtain the usual $\chi^2$ reference distribution with 1 degree of freedom.
|
import matplotlib.pyplot as plt
plt.figure(figsize=(10,8))
plt.plot(likev[:,0], 2*likev[:,1])
plt.xlabel("Variance of random slope", size=17)
plt.ylabel("-2 times profile log likelihood", size=17)
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
Here is a plot of the profile likelihood function. The profile likelihood plot shows that the MLE of the random slope variance parameter is a very small positive number, and that there is low uncertainty in this estimate.
|
re = mdf.cov_re.iloc[1, 1]
from statsmodels.tools.sm_exceptions import ConvergenceWarning

with warnings.catch_warnings():
# Parameter is often on the boundary
warnings.simplefilter("ignore", ConvergenceWarning)
likev = mdf.profile_re(1, 're', dist_low=.5*re, dist_high=0.8*re)
plt.figure(figsize=(10, 8))
plt.plot(likev[:,0], 2*likev[:,1])
plt.xlabel("Variance of random slope", size=17)
lbl = plt.ylabel("-2 times profile log likelihood", size=17)
|
examples/notebooks/mixed_lm_example.ipynb
|
jseabold/statsmodels
|
bsd-3-clause
|
2. Use nested DataStructs
Nesting DataStructs allows for much more complex parsing patterns.
|
from dstruct import DataStruct, DataField, datafield  # DataField and datafield are used below
from datetime import datetime
class Transaction(DataStruct):
amount = DataField()
@datafield(path=None)
def time(self, data):
s = data['utc-unix']
tz = data['time-zone']
if 'UTC' not in tz:
raise ValueError("Unknow time-zone standard: '%s'" % tz)
else:
dif = tz.replace('UTC', '')
# trick for adding time-zone
s = eval(str(s)+dif+'*60*60')
dt = datetime.fromtimestamp(s)
# return the parsed epoch time
# for the given time-zone
return dt.strftime('%Y-%m-%d %H:%M:%S')
@datafield('source')
def source(self, data):
t = data['type']
# we use the transaction type
# to identify which kind of
# `DataStruct` should be used
if '-' in t:
s = ''
for sub in t.split('-'):
s += sub.capitalize()
else:
s = t.capitalize()
# we use `eval` to grab
# the appropriate struct
cls = eval(s)
return cls(data)
class TransactionSource(DataStruct):
ref = DataField()
class Purchase(TransactionSource):
type = DataField()
at = DataField('name')
card = DataField(parser=lambda s: 'X'*len(s[:-4])+s[-4:])
class MobileDeposit(TransactionSource):
type = DataField()
note = DataField()
check_number = DataField('check-number')
class DetailedAccountSummary(AccountSummaryFromFile):
last_withdraw = DataField('account', 'withdrawn', '0', parser=Transaction)
last_deposit = DataField('account', 'deposited', '0', parser=Transaction)
|
examples/advanced.ipynb
|
rmorshea/dstruct
|
mit
|
The Parsed Account Summary
|
print(DetailedAccountSummary('data_files/bank_data.json'))
|
examples/advanced.ipynb
|
rmorshea/dstruct
|
mit
|
<br>
|
# Import wine quality data from a local CSV file
wine = h2o.import_file("winequality-white.csv")
wine.head(5)
# Define features (or predictors)
features = list(wine.columns) # we want to use all the information
features.remove('quality') # we need to exclude the target 'quality' (otherwise there is nothing to predict)
features
# Split the H2O data frame into training/test sets
# so we can evaluate out-of-bag performance
wine_split = wine.split_frame(ratios = [0.8], seed = 1234)
wine_train = wine_split[0] # using 80% for training
wine_test = wine_split[1] # using the rest 20% for out-of-bag evaluation
wine_train.shape
wine_test.shape
|
py_03c_regression_ensembles.ipynb
|
woobe/odsc_h2o_machine_learning
|
apache-2.0
|
<br>
Step 1: Build GBM Models using Random Grid Search and Extract the Best Model
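Note: the grid searches in this and the following steps rely on a search_criteria dictionary that was defined in an earlier cell of the original notebook and is not shown in this excerpt. As an assumption, a typical random-search configuration might look like the sketch below (the exact values are illustrative):

```python
# Hypothetical search_criteria for the random grid searches below;
# the original notebook defines this in an earlier, unshown cell.
search_criteria = {'strategy': 'RandomDiscrete',  # sample the grid instead of trying every combination
                   'max_models': 9,               # e.g. 9 of the 27 combinations
                   'seed': 1234}
```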
|
# define the range of hyper-parameters for GBM grid search
# 27 combinations in total
hyper_params = {'sample_rate': [0.7, 0.8, 0.9],
'col_sample_rate': [0.7, 0.8, 0.9],
'max_depth': [3, 5, 7]}
# Set up GBM grid search
# Add a seed for reproducibility
gbm_rand_grid = H2OGridSearch(
H2OGradientBoostingEstimator(
model_id = 'gbm_rand_grid',
seed = 1234,
ntrees = 10000,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True, # needed for stacked ensembles
stopping_metric = 'mse',
stopping_rounds = 15,
score_tree_interval = 1),
search_criteria = search_criteria,
hyper_params = hyper_params)
# Use .train() to start the grid search
gbm_rand_grid.train(x = features,
y = 'quality',
training_frame = wine_train)
# Sort and show the grid search results
gbm_rand_grid_sorted = gbm_rand_grid.get_grid(sort_by='mse', decreasing=False)
print(gbm_rand_grid_sorted)
# Extract the best model from random grid search
best_gbm_model_id = gbm_rand_grid_sorted.model_ids[0]
best_gbm_from_rand_grid = h2o.get_model(best_gbm_model_id)
best_gbm_from_rand_grid.summary()
|
py_03c_regression_ensembles.ipynb
|
woobe/odsc_h2o_machine_learning
|
apache-2.0
|
<br>
Step 2: Build DRF Models using Random Grid Search and Extract the Best Model
|
# define the range of hyper-parameters for DRF grid search
# 27 combinations in total
hyper_params = {'sample_rate': [0.5, 0.6, 0.7],
'col_sample_rate_per_tree': [0.7, 0.8, 0.9],
'max_depth': [3, 5, 7]}
# Set up DRF grid search
# Add a seed for reproducibility
drf_rand_grid = H2OGridSearch(
H2ORandomForestEstimator(
model_id = 'drf_rand_grid',
seed = 1234,
ntrees = 200,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True), # needed for stacked ensembles
search_criteria = search_criteria,
hyper_params = hyper_params)
# Use .train() to start the grid search
drf_rand_grid.train(x = features,
y = 'quality',
training_frame = wine_train)
# Sort and show the grid search results
drf_rand_grid_sorted = drf_rand_grid.get_grid(sort_by='mse', decreasing=False)
print(drf_rand_grid_sorted)
# Extract the best model from random grid search
best_drf_model_id = drf_rand_grid_sorted.model_ids[0]
best_drf_from_rand_grid = h2o.get_model(best_drf_model_id)
best_drf_from_rand_grid.summary()
|
py_03c_regression_ensembles.ipynb
|
woobe/odsc_h2o_machine_learning
|
apache-2.0
|
<br>
Step 3: Build DNN Models using Random Grid Search and Extract the Best Model
|
# define the range of hyper-parameters for DNN grid search
# 81 combinations in total
hyper_params = {'activation': ['tanh', 'rectifier', 'maxout'],
'hidden': [[50], [50,50], [50,50,50]],
'l1': [0, 1e-3, 1e-5],
'l2': [0, 1e-3, 1e-5]}
# Set up DNN grid search
# Add a seed for reproducibility
dnn_rand_grid = H2OGridSearch(
H2ODeepLearningEstimator(
model_id = 'dnn_rand_grid',
seed = 1234,
epochs = 20,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True), # needed for stacked ensembles
search_criteria = search_criteria,
hyper_params = hyper_params)
# Use .train() to start the grid search
dnn_rand_grid.train(x = features,
y = 'quality',
training_frame = wine_train)
# Sort and show the grid search results
dnn_rand_grid_sorted = dnn_rand_grid.get_grid(sort_by='mse', decreasing=False)
print(dnn_rand_grid_sorted)
# Extract the best model from random grid search
best_dnn_model_id = dnn_rand_grid_sorted.model_ids[0]
best_dnn_from_rand_grid = h2o.get_model(best_dnn_model_id)
best_dnn_from_rand_grid.summary()
|
py_03c_regression_ensembles.ipynb
|
woobe/odsc_h2o_machine_learning
|
apache-2.0
|
<br>
Model Stacking
|
# Define a list of models to be stacked
# i.e. best model from each grid
all_ids = [best_gbm_model_id, best_drf_model_id, best_dnn_model_id]
# Set up Stacked Ensemble
ensemble = H2OStackedEnsembleEstimator(model_id = "my_ensemble",
base_models = all_ids)
# use .train to start model stacking
# GLM as the default metalearner
ensemble.train(x = features,
y = 'quality',
training_frame = wine_train)
|
py_03c_regression_ensembles.ipynb
|
woobe/odsc_h2o_machine_learning
|
apache-2.0
|
<br>
Comparison of Model Performance on Test Data
|
print('Best GBM model from Grid (MSE) : ', best_gbm_from_rand_grid.model_performance(wine_test).mse())
print('Best DRF model from Grid (MSE) : ', best_drf_from_rand_grid.model_performance(wine_test).mse())
print('Best DNN model from Grid (MSE) : ', best_dnn_from_rand_grid.model_performance(wine_test).mse())
print('Stacked Ensembles (MSE) : ', ensemble.model_performance(wine_test).mse())
|
py_03c_regression_ensembles.ipynb
|
woobe/odsc_h2o_machine_learning
|
apache-2.0
|
Purpose
The purpose of this notebook is to work out the data structure for saving the computed results for a single session. Here we are using the xarray package to structure the data, because:
It is built to handle large multi-dimensional data (originally for earth sciences data).
It allows you to call dimensions by name (time, frequency, etc).
The plotting functions are convenient for multi-dimensional data (e.g. built-in heatmap plotting).
It can output to HDF5 (via the netcdf format, a geosciences data format), which is built for handling large data in a descriptive way (i.e. you can label units, add information about how the data was constructed, etc.).
It lazily loads data, so datasets that are too big for memory can still be handled (via dask).
Previously, I was using the pandas package in python and this wasn't handling the loading and combining of time-frequency data well. In particular, the size of the data was problematic even on the cluster and this was frustrating to debug. pandas now recommends the usage of xarray for multi-dimensional data.
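As a minimal illustration of the named-dimension point above (toy data, not the session data used in this notebook):

```python
import numpy as np
import xarray as xr

# Name the dimensions so selections read as science, not axis bookkeeping
arr = xr.DataArray(
    np.random.rand(3, 4),
    dims=("time", "frequency"),
    coords={"time": [0.0, 0.1, 0.2], "frequency": [10, 20, 30, 40]},
)
print(arr.sel(frequency=slice(10, 30)).mean("time"))
```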
|
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import xarray as xr
from src.data_processing import (get_LFP_dataframe, make_tetrode_dataframe,
make_tetrode_pair_info, reshape_to_segments)
from src.parameters import (ANIMALS, SAMPLING_FREQUENCY,
MULTITAPER_PARAMETERS, FREQUENCY_BANDS,
RIPPLE_COVARIATES, ALPHA)
from src.analysis import (decode_ripple_clusterless,
detect_epoch_ripples, is_overlap,
_subtract_event_related_potential)
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Go through the steps to get the ripple triggered connectivity
|
epoch_key = ('HPa', 6, 2)
ripple_times = detect_epoch_ripples(
epoch_key, ANIMALS, sampling_frequency=SAMPLING_FREQUENCY)
tetrode_info = make_tetrode_dataframe(ANIMALS)[epoch_key]
tetrode_info = tetrode_info[
~tetrode_info.descrip.str.endswith('Ref').fillna(False)]
tetrode_pair_info = make_tetrode_pair_info(tetrode_info)
lfps = {tetrode_key: get_LFP_dataframe(tetrode_key, ANIMALS)
for tetrode_key in tetrode_info.index}
from copy import deepcopy
from functools import partial, wraps
multitaper_parameter_name = '4Hz_Resolution'
multitaper_params = MULTITAPER_PARAMETERS[multitaper_parameter_name]
num_lfps = len(lfps)
num_pairs = int(num_lfps * (num_lfps - 1) / 2)
params = deepcopy(multitaper_params)
window_of_interest = params.pop('window_of_interest')
reshape_to_trials = partial(
reshape_to_segments,
sampling_frequency=params['sampling_frequency'],
window_offset=window_of_interest, concat_axis=1)
ripple_locked_lfps = pd.Panel({
lfp_name: _subtract_event_related_potential(
reshape_to_trials(lfps[lfp_name], ripple_times))
for lfp_name in lfps})
from src.spectral.connectivity import Connectivity
from src.spectral.transforms import Multitaper
m = Multitaper(
np.rollaxis(ripple_locked_lfps.values, 0, 3),
**params,
start_time=ripple_locked_lfps.major_axis.min())
c = Connectivity(
fourier_coefficients=m.fft(),
frequencies=m.frequencies,
time=m.time)
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Make an xarray dataset for coherence and pairwise spectral granger
|
n_lfps = len(lfps)
ds = xr.Dataset(
{'coherence_magnitude': (['time', 'frequency', 'tetrode1', 'tetrode2'], c.coherence_magnitude()),
'pairwise_spectral_granger_prediction': (['time', 'frequency', 'tetrode1', 'tetrode2'], c.pairwise_spectral_granger_prediction())},
coords={'time': c.time + np.diff(c.time)[0] / 2,
'frequency': c.frequencies + np.diff(c.frequencies)[0] / 2,
'tetrode1': tetrode_info.tetrode_id.values,
'tetrode2': tetrode_info.tetrode_id.values,
'brain_area1': ('tetrode1', tetrode_info.area.tolist()),
'brain_area2': ('tetrode2', tetrode_info.area.tolist()),
'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),
}
)
ds
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Show that it is easy to select two individual tetrodes and plot a subset of their frequencies for coherence.
|
ds.sel(
tetrode1='HPa621',
tetrode2='HPa624',
frequency=slice(0, 30)).coherence_magnitude.plot(x='time', y='frequency');
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Show the same thing for spectral granger.
|
ds.sel(
tetrode1='HPa621',
tetrode2='HPa6220',
frequency=slice(0, 30)
).pairwise_spectral_granger_prediction.plot(x='time', y='frequency');
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Now show that we can plot all tetrode pairs in a dataset
|
ds['pairwise_spectral_granger_prediction'].sel(
frequency=slice(0, 30)).plot(x='time', y='frequency', col='tetrode1', row='tetrode2', robust=True);
ds['coherence_magnitude'].sel(
frequency=slice(0, 30)).plot(x='time', y='frequency', col='tetrode1', row='tetrode2');
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
It is also easy to select a subset of tetrode pairs (in this case all CA1-PFC tetrode pairs).
|
(ds.sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude
.plot(x='time', y='frequency', col='tetrode1', row='tetrode2'));
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
xarray also makes it easy to compare the difference of a connectivity measure from its baseline (in this case, the baseline is the first time bin)
|
((ds - ds.isel(time=0)).sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude
.plot(x='time', y='frequency', col='tetrode1', row='tetrode2'));
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
It is also easy to average over the tetrode pairs
|
(ds.sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude.mean(['tetrode1', 'tetrode2'])
.plot(x='time', y='frequency'));
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
And also average over the difference
|
((ds - ds.isel(time=0)).sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude.mean(['tetrode1', 'tetrode2'])
.plot(x='time', y='frequency'));
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Test saving as netcdf file
|
import os
path = '{0}_{1:02d}_{2:02d}.nc'.format(*epoch_key)
group = '{0}/'.format(multitaper_parameter_name)
write_mode = 'a' if os.path.isfile(path) else 'w'
ds.to_netcdf(path=path, group=group, mode=write_mode)
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Show that we can open the saved dataset and recover the data
|
with xr.open_dataset(path, group=group) as da:
da.load()
print(da)
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Make data structure for group delay
|
n_bands = len(FREQUENCY_BANDS)
# Allocate three separate arrays (a single shared array would make delay, slope and r_value identical)
delay, slope, r_value = (
    np.zeros((c.time.size, n_bands, m.n_signals, m.n_signals)) for _ in range(3))
for band_ind, frequency_band in enumerate(FREQUENCY_BANDS):
(delay[:, band_ind, ...],
slope[:, band_ind, ...],
r_value[:, band_ind, ...]) = c.group_delay(
FREQUENCY_BANDS[frequency_band], frequency_resolution=m.frequency_resolution)
coordinate_names = ['time', 'frequency_band', 'tetrode1', 'tetrode2']
ds = xr.Dataset(
{'delay': (coordinate_names, delay),
'slope': (coordinate_names, slope),
'r_value': (coordinate_names, r_value)},
coords={'time': c.time + np.diff(c.time)[0] / 2,
'frequency_band': list(FREQUENCY_BANDS.keys()),
'tetrode1': tetrode_info.tetrode_id.values,
'tetrode2': tetrode_info.tetrode_id.values,
'brain_area1': ('tetrode1', tetrode_info.area.tolist()),
'brain_area2': ('tetrode2', tetrode_info.area.tolist()),
'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),
}
)
ds['delay'].sel(frequency_band='beta', tetrode1='HPa621', tetrode2='HPa622').plot();
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Make data structure for canonical coherence
|
canonical_coherence, area_labels = c.canonical_coherence(tetrode_info.area.tolist())
dimension_names = ['time', 'frequency', 'brain_area1', 'brain_area2']
data_vars = {'canonical_coherence': (dimension_names, canonical_coherence)}
coordinates = {
'time': c.time + np.diff(c.time)[0] / 2,
'frequency': c.frequencies + np.diff(c.frequencies)[0] / 2,
'brain_area1': area_labels,
'brain_area2': area_labels,
'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),
}
ds = xr.Dataset(data_vars, coords=coordinates)
ds.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Now after adding this code into the code base, test if we can compute, save, and load
|
from src.analysis import ripple_triggered_connectivity
for parameters_name, parameters in MULTITAPER_PARAMETERS.items():
ripple_triggered_connectivity(
lfps, epoch_key, tetrode_info, ripple_times, parameters,
FREQUENCY_BANDS,
multitaper_parameter_name=parameters_name,
group_name='all_ripples')
with xr.open_dataset(path, group='2Hz_Resolution/all_ripples/canonical_coherence') as da:
da.load()
print(da)
da.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')
with xr.open_dataset(path, group='10Hz_Resolution/all_ripples/canonical_coherence') as da:
da.load()
print(da)
da.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')
|
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
|
edeno/Jadhav-2016-Data-Analysis
|
gpl-3.0
|
Identifying the rotation between 2 images
Compute the Fourier transform of the 2 images to be compared;
Convert the resulting images to polar coordinates
Compute the phase correlation using the phasecorr function
Find the maximum point of the resulting correlation map
|
f = mpimg.imread('../data/cameraman.tif')
# Insert a border of zeros to allow the image to be rotated
t = np.zeros(np.array(f.shape)+100,dtype=np.uint8)
t[50:f.shape[0]+50,50:f.shape[1]+50] = f
f = t
t1 = np.array([
[1,0,-f.shape[0]/2.],
[0,1,-f.shape[1]/2.],
[0,0,1]]);
t2 = np.array([
[1,0,f.shape[0]/2.],
[0,1,f.shape[1]/2.],
[0,0,1]]);
# Rotate the image by 30 degrees
theta = np.radians(30)
r1 = np.array([
[np.cos(theta),-np.sin(theta),0],
[np.sin(theta),np.cos(theta),0],
[0,0,1]]);
T = t2.dot(r1).dot(t1)
f_rot = ia.affine(f,T,0)
plt.figure(1,(10,10))
plt.subplot(1,2,1)
plt.imshow(f, cmap='gray')
plt.title('Original image')
plt.subplot(1,2,2)
plt.imshow(f_rot, cmap='gray')
plt.title('Rotated image')
W,H = f.shape
f_polar = ia.polar(f,(150,200),2*np.pi)
f_rot_polar = ia.polar(f_rot,(150,200),2*np.pi)
plt.figure(1,(10,10))
plt.subplot(1,2,1)
plt.imshow(f_polar, cmap='gray')
plt.title('Original image (polar coords)')
plt.subplot(1,2,2)
plt.imshow(f_rot_polar, cmap='gray')
plt.title('Rotated image (polar coords)')
# Compute the phase correlation
g = ia.phasecorr(f_polar,f_rot_polar)
# Find the point of maximum correlation
i = np.argmax(g)
corr = np.unravel_index(i,g.shape)
# Calculate the angle
ang = (float(corr[1])/g.shape[1])*360
print('Estimated rotation angle: ', ang)
|
2S2018/13 Correlacao de fase.ipynb
|
robertoalotufo/ia898
|
mit
|
<table class="bq-notebook-buttons" align="left">
<td>
<a target="_blank" href="#"><img src="./images/bigquery_32px.png" />View on BigQuery Docs</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/bigquery-notebooks/blob/main/notebooks/official/notebook_template.ipynb"><img src="./images/colab_32px.png" />Run in Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/GoogleCloudPlatform/bigquery-notebooks/blob/main/notebooks/official/notebook_template.ipynb.ipynb"><img src="./images/github_32px.png" />View source on GitHub</a>
</td>
</table>
Overview
{TODO: Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.}
Dataset
{TODO: Include a paragraph with Dataset information and where to obtain it.}
{TODO: Make sure the dataset is accessible to the public. Googlers: Add your dataset to the public samples bucket within gs://cloud-samples-data/ai-platform-unified, if it doesn't already exist there.}
Objective
In this notebook, you will learn how to {TODO: Complete the sentence explaining briefly what you will learn from the notebook, such as
training, hyperparameter tuning, or serving}:
* {TODO: Add high level bullets for the steps of what you will perform in the notebook}
Costs
{TODO: Update the list of billable products that your tutorial uses.}
This tutorial uses billable components of Google Cloud:
BigQuery
Cloud Storage
{TODO: Include links to pricing documentation for each product you listed above.}
Learn about BigQuery
pricing, BigQuery ML pricing, and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install additional package dependencies not installed in your notebook environment, such as {XGBoost, AdaNet, or TensorFlow Hub TODO: Replace with relevant packages for the tutorial}. Use the latest major GA version of each package.
|
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade tensorflow
|
notebooks/official/notebook_template.ipynb
|
GoogleCloudPlatform/bigquery-notebooks
|
apache-2.0
|
Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the BigQuery API. {TODO: Update the APIs needed for your tutorial. Edit the API names, and update the link to append the API IDs, separating each one with a comma. For example, container.googleapis.com,cloudbuild.googleapis.com}
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
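For example, a hypothetical cell combining both behaviors (FILE_PATH is only for illustration and is not used elsewhere in this notebook):

```python
# '!' runs a shell command; '$FILE_PATH' interpolates the Python variable into it
FILE_PATH = "."
! ls -l $FILE_PATH
```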
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
|
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
|
notebooks/official/notebook_template.ipynb
|
GoogleCloudPlatform/bigquery-notebooks
|
apache-2.0
|
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
{TODO: Adjust wording in the first paragraph to fit your use case - explain how your tutorial uses the Cloud Storage bucket. The example below shows how Vertex AI uses the bucket for training.}
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
|
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
|
notebooks/official/notebook_template.ipynb
|
GoogleCloudPlatform/bigquery-notebooks
|
apache-2.0
|
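Once BUCKET_NAME and REGION are set, you can typically create the bucket and verify that you have access to it with gsutil. A hedged sketch (skip the mb step if the bucket already exists):

# Create the Cloud Storage bucket in the chosen region
! gsutil mb -l $REGION $BUCKET_NAME

# Validate access by listing the bucket's (initially empty) contents
! gsutil ls -al $BUCKET_NAME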
Import libraries and define constants
{TODO: Put all your imports and installs up into a setup section.}
|
import os
import sys
import numpy as np
import tensorflow as tf
|
notebooks/official/notebook_template.ipynb
|
GoogleCloudPlatform/bigquery-notebooks
|
apache-2.0
|
General style examples
Notebook heading
Include the collapsed license at the top (this uses Colab's "Form" mode to hide the cells).
Only include a single H1 title.
Include the button-bar immediately under the H1.
Check that the Colab and GitHub links at the top are correct.
Notebook sections
Use H2 (##) and H3 (###) titles for notebook section headings.
Use sentence case to capitalize titles and headings. ("Train the model" instead of "Train the Model")
Include a brief text explanation before any code cells.
Use short titles/headings: "Download the data", "Build the model", "Train the model".
Writing style
Use present tense. ("You receive a response" instead of "You will receive a response")
Use active voice. ("The service processes the request" instead of "The request is processed by the service")
Use second person and an imperative style.
Correct examples: "Update the field", "You must update the field"
Incorrect examples: "Let's update the field", "We'll update the field", "The user should update the field"
Googlers: Please follow our branding guidelines.
Code
Put all your installs and imports in a setup section.
Save the notebook with the Table of Contents open.
Write Python 3 compatible code.
Follow the Google Python Style guide and write readable code.
Keep cells small (max ~20 lines).
TensorFlow code style
Use the highest level API that gets the job done (unless the goal is to demonstrate the low level API). For example, when using TensorFlow:
Use tf.keras.Sequential > Keras functional API > Keras model subclassing > ...
Use model.fit > model.train_on_batch > manual GradientTapes.
Use eager-style code.
Use tensorflow_datasets and tf.data where possible.
Notebook code style examples
Notebooks are for people. Write code optimized for clarity.
Demonstrate small parts before combining them into something more complex. Like below:
|
# Build the model
model = tf.keras.Sequential(
[
tf.keras.layers.Dense(10, activation="relu", input_shape=(None, 5)),
tf.keras.layers.Dense(3),
]
)
# Run the model on a single batch of data, and inspect the output.
result = model(tf.constant(np.random.randn(10, 5), dtype=tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
# Compile the model for training
model.compile(
optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.categorical_crossentropy
)
|
notebooks/official/notebook_template.ipynb
|
GoogleCloudPlatform/bigquery-notebooks
|
apache-2.0
|
Keep examples quick. Use small datasets, or small slices of datasets. You don't need to train to convergence, train until it's obvious it's making progress.
For a large example, don't try to fit all the code in the notebook. Add python files to tensorflow examples, and in the notebook run:
! pip3 install git+https://github.com/tensorflow/examples
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
{TODO: Include commands to delete individual resources below}
|
# Delete endpoint resource
! gcloud ai endpoints delete $ENDPOINT_NAME --quiet --region $REGION_NAME
# Delete model resource
! gcloud ai models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $JOB_DIR
|
notebooks/official/notebook_template.ipynb
|
GoogleCloudPlatform/bigquery-notebooks
|
apache-2.0
|
Although this works, it is visually unappealing. We can improve on this using styles and themes.
|
import tkinter as tk
from tkinter import ttk
class Application(ttk.Frame):
def __init__(self, master=None):
super().__init__(master, padding="3 3 12 12")
self.grid(column=0, row=0, )
self.createWidgets()
self.master.title('Test')
def createWidgets(self):
self.hi_there = ttk.Button(self)
self.hi_there["text"] = "Hello World\n(click me)"
self.hi_there["command"] = self.say_hi
self.QUIT = ttk.Button(self, text="QUIT", style='Alert.TButton', command=root.destroy)
for child in self.winfo_children():
child.grid_configure(padx=10, pady=10)
def say_hi(self):
print("hi there, everyone!")
root = tk.Tk()
app = Application(master=root)
s = ttk.Style()
s.configure('TButton', font='helvetica 24')
s.configure('Alert.TButton', foreground='red')
root.mainloop()
|
Wk04/Wk04-GUI.ipynb
|
briennakh/BIOF509
|
mit
|
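Beyond per-widget styles, ttk also ships several built-in themes that restyle every widget at once. A small sketch (theme availability varies by platform):

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
style = ttk.Style()
print(style.theme_names())   # e.g. ('clam', 'alt', 'default', 'classic')
style.theme_use('clam')      # switch the whole application to another theme
ttk.Button(root, text="Themed button").grid(padx=10, pady=10)
root.mainloop()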
As our applications get more complicated we must give greater thought to the layout. The following example comes from the TkDocs site.
|
from tkinter import *
from tkinter import ttk
def calculate(*args):
try:
value = float(feet.get())
meters.set(int(0.3048 * value * 10000.0 + 0.5)/10000.0)  # round to 4 decimal places
except ValueError:
pass
root = Tk()
root.title("Feet to Meters")
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
feet = StringVar()
meters = StringVar()
feet_entry = ttk.Entry(mainframe, width=7, textvariable=feet)
feet_entry.grid(column=2, row=1, sticky=(W, E))
ttk.Label(mainframe, textvariable=meters).grid(column=2, row=2, sticky=(W, E))
ttk.Button(mainframe, text="Calculate", command=calculate).grid(column=3, row=3, sticky=W)
ttk.Label(mainframe, text="feet").grid(column=3, row=1, sticky=W)
ttk.Label(mainframe, text="is equivalent to").grid(column=1, row=2, sticky=E)
ttk.Label(mainframe, text="meters").grid(column=3, row=2, sticky=W)
for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5)
feet_entry.focus()
root.bind('<Return>', calculate)
root.mainloop()
|
Wk04/Wk04-GUI.ipynb
|
briennakh/BIOF509
|
mit
|
Matplotlib
For simple programs, displaying data and taking basic input, often a command line application will be much faster to implement than a GUI. The times when I have moved away from the command line it has been to interact with image data and plots. Here, matplotlib often works very well. Either it can be embedded in a larger application or it can be used directly.
There are a number of examples on the matplotlib site.
Here is one stripped down example of one recent GUI I have used.
|
"""
Do a mouseclick somewhere, move the mouse to some destination, release
the button. This class gives click- and release-events and also draws
a line or a box from the click-point to the actual mouseposition
(within the same axes) until the button is released. Within the
method 'self.ignore()' it is checked whether the button from eventpress
and eventrelease are the same.
"""
from matplotlib.widgets import RectangleSelector
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
def line_select_callback(eclick, erelease):
'eclick and erelease are the press and release events'
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
print ("(%3.2f, %3.2f) --> (%3.2f, %3.2f)" % (x1, y1, x2, y2))
print (" The button you used were: %s %s" % (eclick.button, erelease.button))
def toggle_selector(event):
print (' Key pressed.')
if event.key in ['Q', 'q'] and toggle_selector.RS.active:
print (' RectangleSelector deactivated.')
toggle_selector.RS.set_active(False)
if event.key in ['A', 'a'] and not toggle_selector.RS.active:
print (' RectangleSelector activated.')
toggle_selector.RS.set_active(True)
image_file = cbook.get_sample_data('grace_hopper.png')
image = plt.imread(image_file)
fig, current_ax = plt.subplots()
plt.imshow(image)
toggle_selector.RS = RectangleSelector(current_ax,
line_select_callback,
drawtype='box', useblit=True,
button=[1,3], # don't use middle button
minspanx=5, minspany=5,
spancoords='pixels')
plt.connect('key_press_event', toggle_selector)
plt.show()
|
Wk04/Wk04-GUI.ipynb
|
briennakh/BIOF509
|
mit
|
Load and prepare the data
Import data from static tumbling csv file
|
import pandas as pd
static_tumbling = pd.read_csv('static-tumbling.csv')
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Separate the data into features and targets
|
elements, score = static_tumbling['elements'], static_tumbling['score']
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Generate global vocabulary
|
from itertools import product
#Main vocabulary, based on the data set elements
main_vocab = set()
for line in elements:
for element in line.split(" "):
main_vocab.add(element)
main_vocab = list(main_vocab)
#Expanded vocabulary based on the 49 (7 x 7) permutations of the possible transitions
vocab = list(main_vocab)
for roll in product(main_vocab, repeat = 2 ):
vocab.append("{} {}".format(roll[0],roll[1]))
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Create dictionary to map each element to an index
|
word2idx = {word: i for i, word in enumerate(vocab)}
word2idx
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Text to vector function
It converts an elements string into a vector of word counts over the vocabulary, including transition pairs
|
import numpy as np
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
text_vector = text.split(' ')
#basic vocab matching
for element in text_vector:
idx = word2idx.get(element, None)
if idx is None:
continue
else:
word_vector[idx] += 1
#Check for transition order
for x in range(len(text_vector) -1 ):
pair = "{} {}".format(text_vector[x],text_vector[x+1])
idx2 = word2idx.get(pair, None)
if idx2 is None:
continue
else:
word_vector[idx2]+=1
return np.array(word_vector)
text_to_vector("flick flick flick mortal")
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
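As a quick sanity check of the call above (a sketch, assuming "flick" and "mortal" appear in the data set as the later cells suggest; the exact indices depend on set iteration order, so they are looked up through word2idx), the pass "flick flick flick mortal" should count three flick elements, one mortal, two flick-to-flick transitions and one flick-to-mortal transition:

v = text_to_vector("flick flick flick mortal")
assert v[word2idx["flick"]] == 3        # single elements
assert v[word2idx["mortal"]] == 1
assert v[word2idx["flick flick"]] == 2  # transition pairs
assert v[word2idx["flick mortal"]] == 1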
Convert all static tumbling passes to vectors
|
word_vectors = np.zeros((len(elements), len(vocab)), dtype=np.int_)
for ii, text in enumerate(elements):
word_vectors[ii] = text_to_vector(text)
word_vectors
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Train, validation, test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data.
|
Y = (score).astype(np.float_)
records = len(score)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9
#Y values are a one-dimensional array of shape (1, N); to take the dot product we need
#shape (N, 1), which is why `Y.values[train_split,None]` is used below
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], Y.values[train_split,None]
testX, testY = word_vectors[test_split,:], Y.values[test_split,None]
trainX
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Building the network
|
import tensorflow as tf
import tflearn
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#Input layer sized to the vocabulary (7 elements + 49 transition pairs = 56)
net = tflearn.input_data([None, 56])
#Hidden
net = tflearn.fully_connected(net, 350, activation='sigmoid')
net = tflearn.fully_connected(net, 150, activation='sigmoid')
net = tflearn.fully_connected(net, 25, activation='sigmoid')
#output layer as a linear activation function
net = tflearn.fully_connected(net, 1, activation='linear')
net = tflearn.regression(net, optimizer='sgd', loss='mean_square',metric='R2', learning_rate=0.01)
model = tflearn.DNN(net)
return model
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Initializing the model
|
model = build_model()
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Training the network
|
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=2000)
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
The total loss is still too high, but not bad for a first round of hyperparameters; there is still room for improvement.
Saving the model
|
# Load model
# model.load('Checkpoints/model-with-transitions-with-3-layers.tfl')
# Manually save model
model.save("Checkpoints/model-with-transitions-with-3-layers.tfl")
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Testing
|
# Helper function that uses our model to get the score for the static tumbling pass
def test_score(sentence):
score = model.predict([text_to_vector(sentence.lower())])
print('Gym pass: {}'.format(sentence))
print('Score: {}'.format(score))
print()
return score
# Helper function that uses our model to compare static tumbling passes
def test_compare(pass1, pass2):
score1 = test_score(pass1)
score2 = test_score(pass2)
if score1 > score2:
print('Gym pass 1: {}'.format(pass1))
elif score2 > score1:
print('Gym pass 2: {}'.format(pass2))
else:
print('same difficulty')
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Now we check the accuracy of the model. These tests check which static tumbling line is more difficult; the second test even uses a pass that is not in the training data.
First we compare two static tumbling passes that contain the same elements but differ in transition cost or effort:
of flick mortal and mortal flick, mortal flick is harder to execute.
|
element1 = "flick mortal"
element2 = "mortal flick"
test_compare(element1,element2)
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Now test the model with data that wasn't in the data set.
In this more complex example, the second pass is a lot harder to execute.
|
test_element1 = "flick flick flick flick flick mortal giro giro giro2"
test_element2 = "mortal flick giro flick giro mortal giro2 giro2 giro2"
test_compare(test_element1,test_element2)
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Test data validation
Now the test values we separated at the beginning are compared with the actual values to check model accuracy.
|
fig, ax = plt.subplots(figsize=(15,6))
predictions = model.predict(testX)
ax.plot(predictions,label='Prediction')
ax.plot(testY, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
|
Static_tumbling_nn.ipynb
|
steelcolosus/static.tumbling.neural.network
|
mit
|
Hyperparameters
|
# data
num_epochs = 10 # train for 400 epochs for good results
image_size = 64
# resolution of Kernel Inception Distance measurement, see related section
kid_image_size = 75
padding = 0.25
dataset_name = "caltech_birds2011"
# adaptive discriminator augmentation
max_translation = 0.125
max_rotation = 0.125
max_zoom = 0.25
target_accuracy = 0.85
integration_steps = 1000
# architecture
noise_size = 64
depth = 4
width = 128
leaky_relu_slope = 0.2
dropout_rate = 0.4
# optimization
batch_size = 128
learning_rate = 2e-4
beta_1 = 0.5 # not using the default value of 0.9 is important
ema = 0.99
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Data pipeline
In this example, we will use the
Caltech Birds (2011) dataset for
generating images of birds, which is a diverse natural dataset containing fewer than 6000
images for training. When working with so little data, one has to take extra
care to keep data quality as high as possible. In this example, we use the provided
bounding boxes of the birds to cut them out with square crops while preserving their
aspect ratios when possible.
|
def round_to_int(float_value):
return tf.cast(tf.math.round(float_value), dtype=tf.int32)
def preprocess_image(data):
# unnormalize bounding box coordinates
height = tf.cast(tf.shape(data["image"])[0], dtype=tf.float32)
width = tf.cast(tf.shape(data["image"])[1], dtype=tf.float32)
bounding_box = data["bbox"] * tf.stack([height, width, height, width])
# calculate center and length of longer side, add padding
target_center_y = 0.5 * (bounding_box[0] + bounding_box[2])
target_center_x = 0.5 * (bounding_box[1] + bounding_box[3])
target_size = tf.maximum(
(1.0 + padding) * (bounding_box[2] - bounding_box[0]),
(1.0 + padding) * (bounding_box[3] - bounding_box[1]),
)
# modify crop size to fit into image
target_height = tf.reduce_min(
[target_size, 2.0 * target_center_y, 2.0 * (height - target_center_y)]
)
target_width = tf.reduce_min(
[target_size, 2.0 * target_center_x, 2.0 * (width - target_center_x)]
)
# crop image
image = tf.image.crop_to_bounding_box(
data["image"],
offset_height=round_to_int(target_center_y - 0.5 * target_height),
offset_width=round_to_int(target_center_x - 0.5 * target_width),
target_height=round_to_int(target_height),
target_width=round_to_int(target_width),
)
# resize and clip
# for image downsampling, area interpolation is the preferred method
image = tf.image.resize(
image, size=[image_size, image_size], method=tf.image.ResizeMethod.AREA
)
return tf.clip_by_value(image / 255.0, 0.0, 1.0)
def prepare_dataset(split):
# the validation dataset is shuffled as well, because data order matters
# for the KID calculation
return (
tfds.load(dataset_name, split=split, shuffle_files=True)
.map(preprocess_image, num_parallel_calls=tf.data.AUTOTUNE)
.cache()
.shuffle(10 * batch_size)
.batch(batch_size, drop_remainder=True)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
train_dataset = prepare_dataset("train")
val_dataset = prepare_dataset("test")
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
After preprocessing the training images look like the following:
Kernel inception distance
Kernel Inception Distance (KID) was proposed as a
replacement for the popular
Frechet Inception Distance (FID)
metric for measuring image generation quality.
Both metrics measure the difference in the generated and training distributions in the
representation space of an InceptionV3
network pretrained on
ImageNet.
According to the paper, KID was proposed because FID has no unbiased estimator: its
expected value is higher when it is measured on fewer images. KID is more suitable for
small datasets because its expected value does not depend on the number of samples it is
measured on. In my experience it is also computationally lighter, numerically more
stable, and simpler to implement because it can be estimated in a per-batch manner.
In this example, the images are evaluated at the minimal possible resolution of the
Inception network (75x75 instead of 299x299), and the metric is only measured on the
validation set for computational efficiency.
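Concretely, the per-batch estimate implemented below is the squared maximum mean discrepancy with a cubic polynomial kernel $k(x, y) = (x^\top y / d + 1)^3$ (with $d$ the feature dimension), using diagonal-free averages for the real-real and generated-generated terms:

$$\mathrm{KID} \approx \frac{1}{n(n-1)}\sum_{i \neq j} k(x_i, x_j) + \frac{1}{n(n-1)}\sum_{i \neq j} k(y_i, y_j) - \frac{2}{n^2}\sum_{i,j} k(x_i, y_j),$$

where $x_i$ are InceptionV3 features of real images and $y_j$ those of generated images.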
|
class KID(keras.metrics.Metric):
def __init__(self, name="kid", **kwargs):
super().__init__(name=name, **kwargs)
# KID is estimated per batch and is averaged across batches
self.kid_tracker = keras.metrics.Mean()
# a pretrained InceptionV3 is used without its classification layer
# transform the pixel values to the 0-255 range, then use the same
# preprocessing as during pretraining
self.encoder = keras.Sequential(
[
layers.InputLayer(input_shape=(image_size, image_size, 3)),
layers.Rescaling(255.0),
layers.Resizing(height=kid_image_size, width=kid_image_size),
layers.Lambda(keras.applications.inception_v3.preprocess_input),
keras.applications.InceptionV3(
include_top=False,
input_shape=(kid_image_size, kid_image_size, 3),
weights="imagenet",
),
layers.GlobalAveragePooling2D(),
],
name="inception_encoder",
)
def polynomial_kernel(self, features_1, features_2):
feature_dimensions = tf.cast(tf.shape(features_1)[1], dtype=tf.float32)
return (features_1 @ tf.transpose(features_2) / feature_dimensions + 1.0) ** 3.0
def update_state(self, real_images, generated_images, sample_weight=None):
real_features = self.encoder(real_images, training=False)
generated_features = self.encoder(generated_images, training=False)
# compute polynomial kernels using the two sets of features
kernel_real = self.polynomial_kernel(real_features, real_features)
kernel_generated = self.polynomial_kernel(
generated_features, generated_features
)
kernel_cross = self.polynomial_kernel(real_features, generated_features)
# estimate the squared maximum mean discrepancy using the average kernel values
batch_size = tf.shape(real_features)[0]
batch_size_f = tf.cast(batch_size, dtype=tf.float32)
mean_kernel_real = tf.reduce_sum(kernel_real * (1.0 - tf.eye(batch_size))) / (
batch_size_f * (batch_size_f - 1.0)
)
mean_kernel_generated = tf.reduce_sum(
kernel_generated * (1.0 - tf.eye(batch_size))
) / (batch_size_f * (batch_size_f - 1.0))
mean_kernel_cross = tf.reduce_mean(kernel_cross)
kid = mean_kernel_real + mean_kernel_generated - 2.0 * mean_kernel_cross
# update the average KID estimate
self.kid_tracker.update_state(kid)
def result(self):
return self.kid_tracker.result()
def reset_state(self):
self.kid_tracker.reset_state()
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Adaptive discriminator augmentation
The authors of StyleGAN2-ADA propose to change the
augmentation probability adaptively during training. Though it is explained differently
in the paper, they use integral control on the augmentation
probability to keep the discriminator's accuracy on real images close to a target value.
Note, that their controlled variable is actually the average sign of the discriminator
logits (r_t in the paper), which corresponds to 2 * accuracy - 1.
This method requires two hyperparameters:
target_accuracy: the target value for the discriminator's accuracy on real images. I
recommend selecting its value from the 80-90% range.
integration_steps:
the number of update steps required for an accuracy error of 100% to transform into an
augmentation probability increase of 100%. To give an intuition, this defines how slowly
the augmentation probability is changed. I recommend setting this to a relatively high
value (1000 in this case) so that the augmentation strength is only adjusted slowly.
The main motivation for this procedure is that the optimal value of the target accuracy
is similar across different dataset sizes (see figure 4 and 5 in the paper),
so it does not have to be retuned, because the
process automatically applies stronger data augmentation when it is needed.
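Expressed with the two hyperparameters, the update implemented below adjusts the augmentation probability $p$ after every training step as

$$p \leftarrow \operatorname{clip}\!\left(p + \frac{\mathrm{accuracy} - \mathrm{target\_accuracy}}{\mathrm{integration\_steps}},\ 0,\ 1\right),$$

so the probability drifts up while the discriminator's real accuracy stays above the target, and drifts down otherwise.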
|
# "hard sigmoid", useful for binary accuracy calculation from logits
def step(values):
# negative values -> 0.0, positive values -> 1.0
return 0.5 * (1.0 + tf.sign(values))
# augments images with a probability that is dynamically updated during training
class AdaptiveAugmenter(keras.Model):
def __init__(self):
super().__init__()
# stores the current probability of an image being augmented
self.probability = tf.Variable(0.0)
# the corresponding augmentation names from the paper are shown above each layer
# the authors show (see figure 4), that the blitting and geometric augmentations
# are the most helpful in the low-data regime
self.augmenter = keras.Sequential(
[
layers.InputLayer(input_shape=(image_size, image_size, 3)),
# blitting/x-flip:
layers.RandomFlip("horizontal"),
# blitting/integer translation:
layers.RandomTranslation(
height_factor=max_translation,
width_factor=max_translation,
interpolation="nearest",
),
# geometric/rotation:
layers.RandomRotation(factor=max_rotation),
# geometric/isotropic and anisotropic scaling:
layers.RandomZoom(
height_factor=(-max_zoom, 0.0), width_factor=(-max_zoom, 0.0)
),
],
name="adaptive_augmenter",
)
def call(self, images, training):
if training:
augmented_images = self.augmenter(images, training)
# during training either the original or the augmented images are selected
# based on self.probability
augmentation_values = tf.random.uniform(
shape=(batch_size, 1, 1, 1), minval=0.0, maxval=1.0
)
augmentation_bools = tf.math.less(augmentation_values, self.probability)
images = tf.where(augmentation_bools, augmented_images, images)
return images
def update(self, real_logits):
current_accuracy = tf.reduce_mean(step(real_logits))
# the augmentation probability is updated based on the discriminator's
# accuracy on real images
accuracy_error = current_accuracy - target_accuracy
self.probability.assign(
tf.clip_by_value(
self.probability + accuracy_error / integration_steps, 0.0, 1.0
)
)
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Network architecture
Here we specify the architecture of the two networks:
generator: maps a random vector to an image, which should be as realistic as possible
discriminator: maps an image to a scalar score, which should be high for real and low
for generated images
GANs tend to be sensitive to the network architecture. I implemented a DCGAN architecture
in this example because it is relatively stable during training while being simple to
implement. We use a constant number of filters throughout the network, use a sigmoid
instead of tanh in the last layer of the generator, and use default initialization
instead of random normal as further simplifications.
As a good practice, we disable the learnable scale parameter in the batch normalization
layers: on the one hand, the following ReLU + convolutional layers make it redundant
(as noted in the
documentation),
and on the other hand, theory suggests it should be disabled when using spectral
normalization (section 4.1), which is not used here but is common
in GANs. We also disable the bias in the fully connected and convolutional layers, because
the following batch normalization makes it redundant.
|
# DCGAN generator
def get_generator():
noise_input = keras.Input(shape=(noise_size,))
x = layers.Dense(4 * 4 * width, use_bias=False)(noise_input)
x = layers.BatchNormalization(scale=False)(x)
x = layers.ReLU()(x)
x = layers.Reshape(target_shape=(4, 4, width))(x)
for _ in range(depth - 1):
x = layers.Conv2DTranspose(
width, kernel_size=4, strides=2, padding="same", use_bias=False,
)(x)
x = layers.BatchNormalization(scale=False)(x)
x = layers.ReLU()(x)
image_output = layers.Conv2DTranspose(
3, kernel_size=4, strides=2, padding="same", activation="sigmoid",
)(x)
return keras.Model(noise_input, image_output, name="generator")
# DCGAN discriminator
def get_discriminator():
image_input = keras.Input(shape=(image_size, image_size, 3))
x = image_input
for _ in range(depth):
x = layers.Conv2D(
width, kernel_size=4, strides=2, padding="same", use_bias=False,
)(x)
x = layers.BatchNormalization(scale=False)(x)
x = layers.LeakyReLU(alpha=leaky_relu_slope)(x)
x = layers.Flatten()(x)
x = layers.Dropout(dropout_rate)(x)
output_score = layers.Dense(1)(x)
return keras.Model(image_input, output_score, name="discriminator")
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
GAN model
|
class GAN_ADA(keras.Model):
def __init__(self):
super().__init__()
self.augmenter = AdaptiveAugmenter()
self.generator = get_generator()
self.ema_generator = keras.models.clone_model(self.generator)
self.discriminator = get_discriminator()
self.generator.summary()
self.discriminator.summary()
def compile(self, generator_optimizer, discriminator_optimizer, **kwargs):
super().compile(**kwargs)
# separate optimizers for the two networks
self.generator_optimizer = generator_optimizer
self.discriminator_optimizer = discriminator_optimizer
self.generator_loss_tracker = keras.metrics.Mean(name="g_loss")
self.discriminator_loss_tracker = keras.metrics.Mean(name="d_loss")
self.real_accuracy = keras.metrics.BinaryAccuracy(name="real_acc")
self.generated_accuracy = keras.metrics.BinaryAccuracy(name="gen_acc")
self.augmentation_probability_tracker = keras.metrics.Mean(name="aug_p")
self.kid = KID()
@property
def metrics(self):
return [
self.generator_loss_tracker,
self.discriminator_loss_tracker,
self.real_accuracy,
self.generated_accuracy,
self.augmentation_probability_tracker,
self.kid,
]
def generate(self, batch_size, training):
latent_samples = tf.random.normal(shape=(batch_size, noise_size))
# use ema_generator during inference
if training:
generated_images = self.generator(latent_samples, training)
else:
generated_images = self.ema_generator(latent_samples, training)
return generated_images
def adversarial_loss(self, real_logits, generated_logits):
# this is usually called the non-saturating GAN loss
real_labels = tf.ones(shape=(batch_size, 1))
generated_labels = tf.zeros(shape=(batch_size, 1))
# the generator tries to produce images that the discriminator considers as real
generator_loss = keras.losses.binary_crossentropy(
real_labels, generated_logits, from_logits=True
)
# the discriminator tries to determine if images are real or generated
discriminator_loss = keras.losses.binary_crossentropy(
tf.concat([real_labels, generated_labels], axis=0),
tf.concat([real_logits, generated_logits], axis=0),
from_logits=True,
)
return tf.reduce_mean(generator_loss), tf.reduce_mean(discriminator_loss)
def train_step(self, real_images):
real_images = self.augmenter(real_images, training=True)
# use persistent gradient tape because gradients will be calculated twice
with tf.GradientTape(persistent=True) as tape:
generated_images = self.generate(batch_size, training=True)
# gradient is calculated through the image augmentation
generated_images = self.augmenter(generated_images, training=True)
# separate forward passes for the real and generated images, meaning
# that batch normalization is applied separately
real_logits = self.discriminator(real_images, training=True)
generated_logits = self.discriminator(generated_images, training=True)
generator_loss, discriminator_loss = self.adversarial_loss(
real_logits, generated_logits
)
# calculate gradients and update weights
generator_gradients = tape.gradient(
generator_loss, self.generator.trainable_weights
)
discriminator_gradients = tape.gradient(
discriminator_loss, self.discriminator.trainable_weights
)
self.generator_optimizer.apply_gradients(
zip(generator_gradients, self.generator.trainable_weights)
)
self.discriminator_optimizer.apply_gradients(
zip(discriminator_gradients, self.discriminator.trainable_weights)
)
# update the augmentation probability based on the discriminator's performance
self.augmenter.update(real_logits)
self.generator_loss_tracker.update_state(generator_loss)
self.discriminator_loss_tracker.update_state(discriminator_loss)
self.real_accuracy.update_state(1.0, step(real_logits))
self.generated_accuracy.update_state(0.0, step(generated_logits))
self.augmentation_probability_tracker.update_state(self.augmenter.probability)
# track the exponential moving average of the generator's weights to decrease
# variance in the generation quality
for weight, ema_weight in zip(
self.generator.weights, self.ema_generator.weights
):
ema_weight.assign(ema * ema_weight + (1 - ema) * weight)
# KID is not measured during the training phase for computational efficiency
return {m.name: m.result() for m in self.metrics[:-1]}
def test_step(self, real_images):
generated_images = self.generate(batch_size, training=False)
self.kid.update_state(real_images, generated_images)
# only KID is measured during the evaluation phase for computational efficiency
return {self.kid.name: self.kid.result()}
def plot_images(self, epoch=None, logs=None, num_rows=3, num_cols=6, interval=5):
# plot random generated images for visual evaluation of generation quality
if epoch is None or (epoch + 1) % interval == 0:
num_images = num_rows * num_cols
generated_images = self.generate(num_images, training=False)
plt.figure(figsize=(num_cols * 2.0, num_rows * 2.0))
for row in range(num_rows):
for col in range(num_cols):
index = row * num_cols + col
plt.subplot(num_rows, num_cols, index + 1)
plt.imshow(generated_images[index])
plt.axis("off")
plt.tight_layout()
plt.show()
plt.close()
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Training
One should see from the metrics during training that if the real accuracy
(the discriminator's accuracy on real images) is below the target accuracy, the augmentation
probability is decreased, and vice versa. In my experience, during a healthy GAN
training, the discriminator accuracy should stay in the 80-95% range. Below that, the
discriminator is too weak; above that, it is too strong.
Note that we track the exponential moving average of the generator's weights, and use that
for image generation and KID evaluation.
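The EMA tracking is the standard update applied to every generator weight after each training step, with the ema rate of 0.99 set in the hyperparameters:

$$\theta_{\mathrm{EMA}} \leftarrow \mathrm{ema} \cdot \theta_{\mathrm{EMA}} + (1 - \mathrm{ema}) \cdot \theta.$$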
|
# create and compile the model
model = GAN_ADA()
model.compile(
generator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),
discriminator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),
)
# save the best model based on the validation KID metric
checkpoint_path = "gan_model"
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
save_weights_only=True,
monitor="val_kid",
mode="min",
save_best_only=True,
)
# run training and plot generated images periodically
model.fit(
train_dataset,
epochs=num_epochs,
validation_data=val_dataset,
callbacks=[
keras.callbacks.LambdaCallback(on_epoch_end=model.plot_images),
checkpoint_callback,
],
)
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Inference
|
# load the best model and generate images
model.load_weights(checkpoint_path)
model.plot_images()
|
examples/generative/ipynb/gan_ada.ipynb
|
keras-team/keras-io
|
apache-2.0
|
Wood products DF with and without logging slash utilization
Using studies from Sathre and O'Connor in the US.
|
HWu = pd.read_sql('''SELECT *
FROM so4
WHERE harvestslash = "X"
AND processingresidues = "X"
AND "post-usewoodproduct" = "X"
AND stumps is null''', sqdb['cx'],index_col = 'index')
HWo = pd.read_sql('''SELECT *
FROM so4
WHERE harvestslash is null
AND processingresidues = "X"
AND stumps is null
AND "post-usewoodproduct" = "X"''', sqdb['cx'], index_col = 'index')
#HWo
print(tabulate(HWo[['reference','df']], headers = ['index','reference','displacement factor'],tablefmt="pipe"))
#HWu
print(tabulate(HWu[['reference','df']], headers = ['index','reference','displacement factor'],tablefmt="pipe"))
constants = {'me' : {'value':0.5,
'desc': 'Mill Efficiency'},
'DFu' : {'value': np.average(HWu.df),
'desc': 'Displacement factor with logging residual utilization',
'source': '''\cite{Sathre2010}'''},
'DFo' : {'value': np.average(HWo.df),
'desc': 'Displacement factor without logging residual utilization'},
'wDens' : {'value': sum(wood_dens.pct/100 * wood_dens.density_lbscuft),
'units' : 'lbs/cuft',
'desc': 'average harvested wood density weighted by species harvested',
'source': '\cite{Mciver2012}'}
}
constants['wDens']['value']
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Timber Products Output
The TPO estimates logging residues produced from commercial timber harvesting operations. The following is in million cubic feet (MCF).
|
tpoData = ut.gData('1GDdquzrCoq2cxVN2fbCpP4gwi2yrMnONNrWbfhZKZu4', 872275354, hrow=1)
tpoData.to_sql('tpo', sqdb['cx'], if_exists = 'replace')
print(tabulate(tpoData, headers = ['Ownership','Roundwood Products','Logging Residues', 'Year'],tablefmt="pipe"))
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Board of Equalization Data
The Board of Equalization (BOE) data is loaded from a local CSV into the database below.
|
pd.read_csv('boe_hh.csv').to_sql('boe',sqdb['cx'], if_exists = 'replace')
pd.read_sql('select * from boe', sqdb['cx'])
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
McIver and Morgan annual harvest in cubic feet
Figure 2 from McIver and Morgan presents total roundwood harvest from 1947 through 2012 in MMBF. To convert MMBF to MCF we use a sawlog conversion factor of 5.44. This is an approximation, as the actual conversion varies with log size, which has changed over time on average.
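As a quick illustration of the unit conversion (a sketch using the approximate factor of 5.44 board feet per cubic foot noted above):

SAWLOG_BF_PER_CF = 5.44  # approximate board feet per cubic foot of sawlog

def mmbf_to_mcf(mmbf):
    """Convert million board feet (MMBF) to million cubic feet (MCF)."""
    return mmbf / SAWLOG_BF_PER_CF

mmbf_to_mcf(2720.0)  # -> 500.0 MCF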
|
mm_histHarvest = ut.gData('13UQtRfNBSJ81PXxbYSnB2LrjHePNcvhJhrsxRBjHpoY', 2081880100).fillna(value=0)
mm_histHarvest.to_sql('mm_hist', sqdb['cx'], if_exists = 'replace')
mm_histHarvest
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Bioenergy consumption
To apply the appropriate DF for harvested wood we need to know what fraction of the logging residues were utilized as bioenergy feedstock. McIver and Morgan (Table 6) report bioenergy consumption from 2000 forward. For earlier years, we use the average bioenergy consumption from 2000 -- 2012.
|
bioEnergy = ut.gData('138FWlGeW57MKdcz2UkWxtWV4o50SZO8sduB1R6JOFp8', 529610043)
bioEnergy.set_index('producttype').transpose().to_sql('mciver_bio', sqdb['cx'], if_exists = 'replace')
bio_pct = pd.read_sql('select "index" as year,"Bioenergy"/100 as biopct from mciver_bio where "Bioenergy" is not null', sqdb['cx'])
bio_dict = bio_pct.set_index('year').to_dict('index')
print(tabulate(bio_pct, headers = ['year', 'bioenergy % of harvest'],tablefmt="pipe"))
def bioPct(year):
# if year < 1980:
# return 0
if year in bio_dict.keys():
return bio_dict[year]['biopct']
else:
return np.average(bio_pct.biopct)
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Logging residuals
The BOE data does not specifically estimate logging residuals; it simply reports harvested roundwood. To accurately ascribe a fate to the harvested roundwood, an estimate of logging residuals must be made.
Calculating emissions reductions
The following functions calculate the displaced emissions resulting from wood harvested with and without logging residue utilization. They return estimates in metric tons of CO2 equivalents.
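The unit chain implemented in both functions below is, schematically (volume in MCF, density in lb/ft³, 2204.62 lb per metric ton, 0.5 tC per t of wood):

$$\mathrm{ER} = V_{\mathrm{HW}} \times e_{\mathrm{mill}} \times \rho \times \frac{10^{6}}{2204.62} \times 0.5 \times \mathrm{DF},$$

where $V_{\mathrm{HW}}$ is the roundwood volume attributed to the pathway (with or without residue utilization) and DF is the corresponding displacement factor.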
|
def WPu (rw_harvest, lr, year, mill_efficiency = constants['me']['value'], wdens = constants['wDens']['value'], df = constants['DFu']['value']):
'''
Calculates the emissions reduction resulting from harvested wood with utilization of logging residuals for bioenergy
'''
# establish the appropriate bioenergy consumption; if no data on bioenergy consumption exists, use the average from 2000-2012
if year in bio_dict.keys():
bioe_pct = bio_dict[year]['biopct']
else:
bioe_pct = np.average(bio_pct.biopct)
#Calculate total volume used in bioenergy
bioevol = bioe_pct * rw_harvest
#Establish utilization ratio for bioenergy
lrUsed = bioevol/lr
#Calculate roundwood harvest volume from which logging residues were utilized
HWu = lrUsed * rw_harvest
#Calculate volume of final wood product produced using mill efficiency. S&O use volume of wood product not sawlogs for DF
WPu = HWu * mill_efficiency * wdens * 1000000 / 2204.62 * 0.5 * df
#Per comment from Roger Sathre, the mass needs to be reduced by 50% before applying the DF, as the DF applies to tC, not t of wood
#This is in MT
return WPu
def WPo (rw_harvest, lr, year, mill_efficiency = constants['me']['value'], wdens = constants['wDens']['value'], df = constants['DFo']['value']):
'''
Calculates the emissions reduction resulting from harvested wood without utilization of
logging residuals for bioenergy
'''
# establish the appropriate bioenergy consumption level for a given year; if no data on bioenergy consumption exists, use the average from 2000-2012
if year in bio_dict.keys():
bioe_pct = bio_dict[year]['biopct']
else:
bioe_pct = np.average(bio_pct.biopct)
#Calculate total volume used in bioenergy
bioevol = bioe_pct * rw_harvest
#Establish utilization ratio for bioenergy
lrUsed = bioevol/lr
#Calculate roundwood harvest volume from which logging residues were not utilized
HWo = (1-lrUsed) * rw_harvest
#Calculate volume of final wood product produced using mill efficiency. S&O use volume of wood product not sawlogs for DF
WPo = HWo * mill_efficiency * wdens * 1000000 / 2204.62 * 0.5 * df
#This is in MT
return WPo
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Emissions reduction from harvested wood with LR utilized
Emissions reductions resulting from harvested roundwood with logging residue utilized in bioenergy
|
erWPu = []
for row in tpoData.index:
rw,lr,yr = tpoData.iloc[row][['roundwoodproducts','loggingresidues', 'year']].tolist()
erWPu.append(WPu(rw,lr,yr))
tpoData['erWPu'] = erWPu
erWPo = []
for row in tpoData.index:
rw,lr,yr = tpoData.iloc[row][['roundwoodproducts','loggingresidues', 'year']].tolist()
erWPo.append(WPo(rw,lr,yr))
tpoData['erWPo'] = erWPo
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Emissions reduction from harvested wood without LR utilization
Emissions reductions resulting from harvested roundwood without logging residue utilized in bioenergy. Though wood with LR utilization has a higher displacement factor, the majority of logging residues were not utilized.
|
tpoData['erTotal'] = tpoData.erWPo+tpoData.erWPu
tpoData.to_sql('tpo_emreduc', sqdb['cx'], if_exists='replace')
tpoData['bioe_pct'] = tpoData.year.apply(bioPct)
tpoData['bioe_t'] = tpoData.bioe_pct * tpoData.loggingresidues * 1e6* constants['wDens']['value']/2204.62
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Using M&M Historical data
|
erWPo = []
for row in mm_histHarvest.index:
r = mm_histHarvest.iloc[row]
yr = r['year'] ## year
rw = (r.state+r.private+r.tribal+r.blm+r.nat_forest)/5.44
qry = 'select avg(loggingresidues/roundwoodproducts) lr from tpo where year = {}'.format(yr)
if yr in tpoData.year.tolist():
lr = pd.read_sql(qry, sqdb['cx'])*rw
else:
lr = pd.read_sql('select avg(loggingresidues/roundwoodproducts) lr from tpo', sqdb['cx'])*rw
erWPo.append(WPo(rw,lr,yr).lr[0])
mm_histHarvest['erWPo'] = erWPo
erWPu = []
lrVect = []
tHarv = []
for row in mm_histHarvest.index:
r = mm_histHarvest.iloc[row]
yr = r['year'] ## year
rw = (r.state+r.private+r.tribal+r.blm+r.nat_forest)/5.44
qry = 'select avg(loggingresidues/roundwoodproducts) lr from tpo where year = {}'.format(yr)
if yr in tpoData.year.tolist():
lr = pd.read_sql(qry, sqdb['cx'])*rw
else:
lr = pd.read_sql('select avg(loggingresidues/roundwoodproducts) lr from tpo', sqdb['cx'])*rw
lrVect.append(lr.lr[0])
tHarv.append(rw)
erWPu.append(WPu(rw,lr,yr).lr[0])
mm_histHarvest['erWPu'] = erWPu
mm_histHarvest['loggingresidues'] = lrVect
mm_histHarvest['totalharvest'] = tHarv
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Total emissions reduction from harvested wood products
Sum of emissions reductions from harvested wood with and without LR utilization
|
mm_histHarvest['erTotal'] = mm_histHarvest.erWPo+mm_histHarvest.erWPu
mm_histHarvest.to_sql('mm_emreduc', sqdb['cx'], if_exists='replace')
mm_histHarvest['bioe_pct'] = mm_histHarvest.year.apply(bioPct)
mm_histHarvest['bioe_t'] = mm_histHarvest.bioe_pct * mm_histHarvest.loggingresidues * 1e6* constants['wDens']['value']/2204.62
mm_histHarvest
sns.set_style("whitegrid")
fig2, ax2 = plt.subplots(figsize=(12, 10))
ax2 = sns.barplot(x ='year', y='erTotal', data=mm_histHarvest.sort_values('year'))
ax2.set_ylabel('Emissions reduction (MT CO2e)')
ax2.set_title('Emissions reductions resulting \nfrom roundwood harvest in CA')
ax2.set_xticklabels(ax2.get_xticklabels(),rotation=90)
[fig2.savefig('graphics/ann_hh_em_reduc.{}'.format(i)) for i in ['pdf','png']]
sns.set_style("whitegrid")
fig, ax = plt.subplots()
ax = sns.barplot(x ='year', y='erTotal', hue="ownership", data=tpoData.sort_values('year'))
ax.set_ylabel('Emissions reduction (MT CO2e)')
ax.set_title('Emissions reductions resulting from roundwood harvest in CA')
[fig.savefig('graphics/harv_em_reductions.{}'.format(i)) for i in ['pdf','png']]
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Total emissions reductions from roundwood harvesting in CA, 2012
|
pd.read_sql('select sum("erTotal") from tpo_emreduc where year = "2012"', sqdb['cx'])
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Emissions from un-utilized logging residuals
From logging residuals not used in bioenergy, emissions are produced either from combustion of the residual material or from its decomposition over time. To calculate the ratio of burned to decomposed logging residues, I begin with the CARB estimate of PM2.5 produced from forest management:
|
tName = 'cpe_allyears'
sqdb['crs'].executescript('drop table if exists {0};'.format(tName))
for y in [2000, 2005, 2010, 2012, 2015]:
url = 'http://www.arb.ca.gov/app/emsinv/2013/emsbyeic.csv?F_YR={0}&F_DIV=0&F_SEASON=A&SP=2013&SPN=2013_Almanac&F_AREA=CA'
df = pd.read_csv(url.format(y))
df.to_sql(tName, sqdb['cx'], if_exists = 'append')
pmAnn = pd.read_sql('''
select year,
eicsoun,
"PM2_5"*365 an_pm25_av
from cpe_allyears
where eicsoun = 'FOREST MANAGEMENT';
''', sqdb['cx'])
pmAnn
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Estimate biomass, CO2, CH4 and BC from PM2.5
To estimate total biomass from PM2.5 I assume 90% consumption of biomass in piles and use the relationship of pile tonnage to PM emissions calculated using the Piled Fuels Biomass and Emissions Calculator provided by the Washington State Department of Natural Resources. This calculator is based on the Consume fire behavior model published by the US Forest Service.
|
pfbec = pd.read_csv('fera_pile_cemissions.csv', header=1)
ward = ut.gData('13UQtRfNBSJ81PXxbYSnB2LrjHePNcvhJhrsxRBjHpoY', 475419971)
def sp2bio(pm, species = 'PM2.5 (tons)'):
return pm * (pfbec[species]/pfbec['Pile Biomass (tons)'])
def bioPm(pm):
return pm * (pfbec['Pile Biomass (tons)']/pfbec['PM2.5 (tons)'])
co2t = lambda x: sp2bio(x,'CO2 (tons)')
ch4t = lambda x: sp2bio(x,'CH4 (tons)')
pmAnn['biomass_t']=pmAnn.an_pm25_av.apply(bioPm)
pmAnn['co2_t'] = pmAnn.biomass_t.apply(co2t)
pmAnn['ch4_t'] = pmAnn.biomass_t.apply(ch4t)
pmAnn['ch4_co2e'] = pmAnn.ch4_t * 56
pmAnn['bc_co2e']= pmAnn.an_pm25_av.apply(ut.pm2bcgwpPiles)
pmAnn['t_co2e']= pmAnn.co2_t + pmAnn.ch4_co2e + pmAnn.bc_co2e
print(tabulate(pmAnn[['YEAR','EICSOUN','co2_t','ch4_co2e','bc_co2e','t_co2e']], headers = ['Year','Emissions source','CO2 (t)', 'CH4 (tCO2e)', 'BC (tCO2e)', 'Pile Burn Total (tCO2e)'],tablefmt="pipe"))
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Estimating GHG emissions from decomposition of unutilized logging slash
To provide a full picture of the emissions from residual material produced from commercial timber harvesting in California, decomposition of unutilized logging residuals left on-site that are not burned must be accounted for. To establish the fraction of logging residue that is left to decompose, residues burned and used in bioenergy are subtracted from the total reported by the TPO:
To calculate the GHG emissions from decomposition of piles we apply the following calculation:
|
annLrAvg = pd.read_sql('''with ann as (select sum(loggingresidues) lr
from tpo
group by year)
select avg(lr) foo
from ann;''', sqdb['cx'])['foo'][0]
pctLR_bio = (np.average(pmAnn.biomass_t)/1e6)/annLrAvg
annLrAvg
pmAnn
lr_t = 1e6*tpoData.loggingresidues*constants['wDens']['value']/2204.62
tpoData['unused_lr'] = 1e6*(lr_t-(pctLR_bio*lr_t))
tpoData['burned_lr'] = 1e6*lr_t*(np.average(pmAnn.biomass_t)/(annLrAvg*1e6))
tpoData['unburned_lr'] = (lr_t*1e6) - tpoData.bioe_t - tpoData.burned_lr
tpoData['unburned_lr_co2e'] = tpoData.unburned_lr.apply(ut.co2eDecomp)
tpoData
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Biomass residuals from non-commercial management activities
Data from TPO does not account for forest management activities that do not result in commercial products (timber sales, biomass sales). To estimate the amount of residual material produced from non-commercial management activities we use data from the US Forest Service (FACTS) and from CalFire's timber harvest plan data.
Forest Service Activity Tracking System (FACTS)
We use a range of 10-35 BDT/acre to convert acres reported in FACTS to volume.
|
pd.read_excel('lf/FACTS_Tabular_092115.xlsx', sheetname = 'CategoryCrosswalk').to_sql('facts_cat', sqdb['cx'], if_exists = 'replace')
pd.read_csv('pd/facts_notimber.csv').to_sql('facts_notimber', sqdb['cx'], if_exists='replace')
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Querying FACTS
The USFS reports Hazardous Fuels Treatment (HFT) activities as well as Timber Sales (TS) derived from the FACTS database. We use these two datasets to estimate the number of acres treated that did not produce commercial material (sawlogs or biomass) and where burning was not used. The first step is to eliminate all treatments in the HFT dataset that included timber sales. We accomplish this by eliminating all rows in the HFT dataset that have identical FACTS_ID fields in the TS dataset. We further filter the HFT dataset by removing any planned but not executed treatments (nbr_units1 > 0 below -- nbr_units1 references NBR_UNITS_ACCOMPLISHED in the USFS dataset, see metadata for HFT here), and use text matching in the 'ACTIVITY' and 'METHOD' fields to remove any rows that contain reference to 'burning' or 'fire'. Finally, we remove all rows that reference 'Biomass' in the method category, as it is assumed that this means material was removed for bioenergy.
|
usfs_acres = pd.read_sql('''select
sum(nbr_units1) acres,
method,
strftime('%Y',date_compl) year,
cat."ACTIVITY" activity,
cat."TENTATIVE_CATEGORY" r5_cat
from facts_notimber n
join facts_cat cat
on (n.activity = cat."ACTIVITY")
where date_compl is not null
and nbr_units1 > 0
and cat."TENTATIVE_CATEGORY" != 'Burning'
and cat."ACTIVITY" not like '%ire%'
and method not like '%Burn%'
and method != 'Biomass'
group by cat."ACTIVITY",
year,
method,
cat."TENTATIVE_CATEGORY"
order by year;''', con = sqdb['cx'])
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
Converting acres to cubic feet
FACTS reports in acres. To estimate the production of biomass from acres treated we use a range of 10-35 BDT/acre. We assume that actual biomass residuals per acre are normally distributed with a mean of 22.5 and a standard deviation of (35-10)/4 = 6.25
|
def sumBDT(ac, maxbdt = 35, minbdt = 10):
av = (float(maxbdt) + float(minbdt))/2
stdev = (float(maxbdt) - float(minbdt))/4
d_frac = (ac-np.floor(ac))*np.random.normal(av, stdev, 1).clip(min=0)[0]
t_bdt = np.sum(np.random.normal(av,stdev,int(np.floor(ac))).clip(min=0))
return d_frac+t_bdt
usfs_acres['bdt'] = usfs_acres['acres'].apply(sumBDT)
usfs_an_bdt = usfs_acres.groupby(['year']).sum()
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|
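As a rough check on the distributional assumption above (a sketch), the per-acre draws should average close to 22.5 BDT, since clipping at zero removes only a negligible tail:

import numpy as np

np.random.seed(0)
draws = np.random.normal(22.5, 6.25, 100000).clip(min=0)
print(draws.mean())  # approximately 22.5 BDT/acre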
Weighted average wood density
Average wood density weighted by harvested species percent. Derived from McIver and Morgan, Table 4
|
wood_dens = ut.gData('138FWlGeW57MKdcz2UkWxtWV4o50SZO8sduB1R6JOFp8', 1297253755)
wavg_dens =sum(wood_dens.pct/100 * wood_dens.density_lbscuft)
|
wood_fates.ipynb
|
peteWT/fcat_biomass
|
mit
|