markdown | code | path | repo_name | license
---|---|---|---|---|
7. Set up the FeatureUnion with the desired features, then fit it to the train reviews and transform them
|
#Make a FeatureUnion object with the desired features then fit to train reviews
comb_features = yml.make_featureunion()
comb_features.fit(train_reviews)
train_features = comb_features.transform(train_reviews)
train_lsi = yml.get_lsi_features(train_reviews, lsi, topics, dictionary)
train_features = sparse.hstack((train_features, train_lsi))
train_features = train_features.todense()
#fit each model in turn
model_runs = [(True, False, False, False, False), (False, True, False, False, False),
(False, False, True, False, False), (False, False, False, True, False),
(False, False, False, False, True)]
test_results = {}
for i in tqdm.tqdm(range(0, len(model_runs))):
    clf = yml.fit_model(train_features, train_labels, svm_clf=model_runs[i][0],
                        RandomForest=model_runs[i][1], nb=model_runs[i][2])
    threshold = 0.7
    error = yml.test_user_set(test_set, clf, restaurant_df, user_df, comb_features, threshold, lsi, topics, dictionary)
    test_results[clf] = error
#Get the log loss for each fitted model
for key in test_results:
    results = test_results[key]
    log_loss = yml.get_log_loss()
    print(log_loss)
|
machine_learning/User_Sample_test_draft_ed.ipynb
|
georgetown-analytics/yelp-classification
|
mit
|
Now let's choose different degrees of approximation to the original image using a "for" loop:
|
for i in range(5, 85, 15):
    matreconstruida = np.matrix(U[:, :i]) * np.diag(S[:i]) * np.matrix(V[:i, :])
    plt.imshow(matreconstruida, cmap='gray')
    title = "Approximation of degree k = %s" % i
    plt.title(title)
    plt.show()
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
What does this project have to do with image compression?
A: By decomposing the image we keep only its relevant information, so we can later reconstruct it using only a percentage of the vectors of U, Sigma and V, depending on the fidelity we want. As the exercise shows, with a low percentage of vectors we can reconstruct the image without degrading its fidelity much, so instead of storing the whole matrix it is enough to store that percentage of vectors and thereby save memory.
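A minimal sketch of the storage argument (function name hypothetical): a rank-k approximation keeps k columns of U, k singular values, and k rows of V, i.e. k(m+n+1) numbers instead of m*n.
import numpy as np

def svd_storage_ratio(m, n, k):
    # fraction of the original m*n entries kept by a rank-k SVD
    return k * (m + n + 1) / (m * n)

print(svd_storage_ratio(512, 512, 50))  # ~0.20, i.e. roughly 80% savings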
Exercise 2: Computing the pseudoinverse and solving systems of equations
Write a function that, given any matrix, returns its pseudoinverse using the SVD decomposition. Write another function that solves a system of equations of the form Ax=b using the pseudoinverse.
|
import numpy as np
from copy import deepcopy

def pseudoinversa(A):
    # np.linalg.svd returns V already transposed (i.e. V^T)
    U, S, V = np.linalg.svd(A)
    m, n = A.shape
    D = np.zeros([m, n])
    for k in range(n):
        D[k, k] = 1
    S = D * S  # turn S back into an m x n diagonal matrix
    pseudo = deepcopy(S)
    for i in range(n):  # compute the pseudoinverse of sigma
        if pseudo[i, i] != 0:
            pseudo[i, i] = 1 / pseudo[i, i]
    pseudo = pseudo.transpose()
    VT = V.transpose()
    UT = U.transpose()
    w = np.dot(VT, pseudo)
    pseudo = np.dot(w, UT)
    return pseudo

def resuelve(A, b):
    y = pseudoinversa(A)
    x = np.dot(y, b)
    return x
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
An example to check that the function solves the system of equations correctly:
|
A = np.array([[2, 1, 3], [4, -1, 3], [-2, 5, 5]])
b = np.array([[17],[31],[-5]])
resuelve(A,b)
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
Experiment with the function, letting b take different values with A=[[1,1],[0,0]]:
|
A = np.array([[1,1],[0,0]])
b= np.array([[5],[0]])
resuelve(A,b)
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
a) If b is in the image of A (the image is [x,0]) it returns the correct solution to the system. If b is not in the image (e.g. b=[1,1]) it returns the solution of the system restricted to the image, which is the closest solution; in the example b=[1,1] it returns the solution of the system taking b=[1,0].
b) Is the resulting solution unique? No: different values of b can yield the same value of x. This happens because the matrix is singular.
c) Change to A=[[1,1],[0,1e-32]]. Is the solution unique? Yes: for each different value of b1 and b2 it returns a unique value of x1 and x2. Does the returned x change for each possible b from the previous point? Yes, because this matrix is invertible under the pseudoinverse method, even though it is practically the same matrix as in the previous point.
|
A = np.array([[1,1],[0,1e-32]])
b= np.array([[5],[0]])
resuelve(A,b)
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
Exercise 3: Least-squares fit
|
import pandas as pd
z = pd.read_csv("https://raw.githubusercontent.com/mauriciogtec/PropedeuticoDataScience2017/master/Tarea/study_vs_sat.csv", index_col=False)
m, n = z.shape
SX = z.iloc[0][0]
SY = z.iloc[0][1]
SXX = z.iloc[0][0] ** 2
SYY = z.iloc[0][1] ** 2
SXY = z.iloc[0][0] * z.iloc[0][1]
for i in range(1, m):
    SX += z.iloc[i][0]
    SY += z.iloc[i][1]
    SXX += z.iloc[i][0] ** 2
    SYY += z.iloc[i][1] ** 2
    SXY += z.iloc[i][0] * z.iloc[i][1]
Beta = (m*SXY - SX*SY) / (m*SXX - SX**2)
Alpha = (1/m)*SY - Beta*(1/m)*SX
funcion = "Sat_score ~ " + str(Alpha) + " + " + str(Beta) + " * Study_hours"
print(z, "\n ", "\n ", funcion)
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
<li> What is the gradient of the function we want to optimize? A: The vector [1, Study_hours]
Write a function that receives the values alpha, beta and the vector Study_hours and returns a numpy array of predictions alpha + beta * Study_hours_i, with one value per individual.
|
def sat_score(Alpha, Beta, Study_hours):
    # Return an (m, 1) column vector of predictions Alpha + Beta * Study_hours_i
    return Alpha + Beta * np.asarray(Study_hours).reshape(-1, 1)

SH = z.iloc[:, 0]
sat_s = sat_score(353.164879499, 25.3264677779, SH)
plt.scatter(SH, sat_s)
plt.title('Scatter: Study hours vs Sat Score')
plt.xlabel('Study_hours')
plt.ylabel('Sat_score')
plt.show()
sat_s
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
<li><strong>(Advanced)</strong> Use the <code>matplotlib</code> library to visualize the predictions from the alpha and beta solution against the real sat_score values.
|
SS = z.iloc[:, 1]
g1 = (SH, SS)
g2 = (SH, sat_s)
groups_data = (g1, g2)
colors = ("green", "red")
groups = ("Real", "Forecast: Alpha + Beta * Study_Hours")
fig, ax = plt.subplots()
for data, color, group in zip(groups_data, colors, groups):
    x, y = data
    ax.scatter(x, y, alpha=0.8, c=color, edgecolors='none', s=30, label=group)
plt.title('Real vs Forecast')
plt.legend(loc=0)
plt.show()
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
<li> Define a numpy array X with two columns, the first with ones in all its entries and the second with the variable Study_hours. Note that <code>X*[alpha,beta]</code> returns <code>alpha + beta*study_hours_i</code> in each entry, so the problem becomes <code>sat_score ~ X*[alpha,beta]</code>
|
m = z.shape[0]
# First column of ones, second column with Study_hours
X = np.column_stack((np.ones(m), z.iloc[:, 0]))
alpha = 353.164879499
beta = 25.3264677779
ab = np.array([[alpha], [beta]])
R = np.dot(X, ab)
R
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
<li>Compute the pseudoinverse X^+ of X and evaluate <code>(X^+)*sat_score</code> to obtain the alpha and beta solutions.</li>
|
Xpseudo= pseudoinversa(X)
Sscore= z.iloc[:,1]
ab=np.dot(Xpseudo,Sscore)
ab
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
<li>Compare the previous solution with the direct exact-solution formula <code>(alpha,beta)=(X^t*X)^(-1)*X^t*sat_score</code>.</li>
|
SH= z.iloc[:,0]
Sscore= z.iloc[:,1]
XT = X.transpose()
XT2 = np.dot(XT,X)
XTI = np.linalg.inv(XT2)
w= np.dot(XTI,XT)
ab = np.dot(w,Sscore)
ab
|
Alumnos/Victor Quintero/Tarea 2.ipynb
|
mauriciogtec/PropedeuticoDataScience2017
|
mit
|
The first step in any data analysis is acquiring and munging the data.
Our starting data set can be found here:
http://jakecoltman.com in the pyData post
It is designed to be roughly similar to the output from DCM's path to conversion.
Download the file and transform it into something with the columns:
id, lifetime, age, male, event, search, brand
where lifetime is the total time for which we observed someone not converting, and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints.
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
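A minimal munging sketch under stated assumptions: the raw file name and its column names (start/conversion timestamps plus the covariates) are hypothetical, and non-converters are censored at end_date.
import datetime
import pandas as pd

end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
raw = pd.read_csv("conversions.csv", parse_dates=["start", "conversion"])  # hypothetical file and columns
df = pd.DataFrame({"id": raw["id"]})
converted = raw["conversion"].notnull()
df["event"] = converted.astype(int)  # 1 if we saw a conversion
observed_end = raw["conversion"].where(converted, end_date)
df["lifetime"] = (observed_end - raw["start"]).dt.days.astype(int)
for col in ["age", "male", "search", "brand"]:
    df[col] = raw[col].astype(int)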
|
####Data munging here
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
import numpy as np
import pymc as pm

N = 2500
##Generate some random data
lifetime = pm.rweibull(2, 5, size=N)
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)

@pm.observed
def survival(value=lifetime_, alpha=alpha, beta=beta):
    # Weibull log-likelihood with right censoring: censored points only
    # contribute the log-survival term -(t/beta)**alpha
    return np.sum((1 - censor) * (np.log(alpha / beta) + (alpha - 1) * np.log(value / beta)) - (value / beta) ** alpha)

mcmc = pm.MCMC([alpha, beta, survival])
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
|
Basic Presentation.ipynb
|
JakeColtman/BayesianSurvivalAnalysis
|
mit
|
Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$\beta(\log 2)^{1/\alpha}$$
|
#### Fit to your data here
#### Plot the distribution of the median
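# A minimal sketch (assuming the mcmc object from the toy model above):
# push each posterior (alpha, beta) sample through the median formula
# beta * (log 2)**(1/alpha) and plot the resulting distribution.
import numpy as np
import matplotlib.pyplot as plt
alphas = mcmc.trace("alpha")[:]
betas = mcmc.trace("beta")[:]
medians = betas * np.log(2) ** (1.0 / alphas)
plt.hist(medians, bins=50)
plt.title("Posterior distribution of the median lifetime")
plt.show()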
|
Basic Presentation.ipynb
|
JakeColtman/BayesianSurvivalAnalysis
|
mit
|
Problems:
4 - Try adjusting the number of samples for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
|
#### Adjust burn and thin, both parameters of the mcmc sample function
#### Narrow and broaden the prior
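# A minimal sketch (pymc2's MCMC.sample takes iter, burn and thin):
mcmc.sample(50000, burn=30000, thin=20)
# For problem 5, swap the Uniform(0, 20) priors above for narrower or
# broader ones, e.g. pm.Uniform('alpha', 0, 5), and re-run the model.
pm.Matplot.plot(mcmc)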
|
Basic Presentation.ipynb
|
JakeColtman/BayesianSurvivalAnalysis
|
mit
|
Problems:
7 - Try testing whether the median is greater than a different value
|
#### Hypothesis testing
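# A minimal sketch (threshold m0 is hypothetical): the posterior probability
# that the median exceeds m0 is just the fraction of posterior samples that do.
import numpy as np
m0 = 5.0
alphas = mcmc.trace("alpha")[:]
betas = mcmc.trace("beta")[:]
medians = betas * np.log(2) ** (1.0 / alphas)
print("P(median > %.1f) = %.3f" % (m0, (medians > m0).mean()))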
|
Basic Presentation.ipynb
|
JakeColtman/BayesianSurvivalAnalysis
|
mit
|
If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit it in Python we use the lifelines module:
http://lifelines.readthedocs.io/en/latest/
|
### Fit a Cox proportional hazards model
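# A minimal sketch with lifelines (assuming df has the columns built in
# section 1: id, lifetime, age, male, event, search, brand):
from lifelines import CoxPHFitter
cph = CoxPHFitter()
cph.fit(df.drop('id', axis=1), duration_col='lifetime', event_col='event')
cph.print_summary()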
|
Basic Presentation.ipynb
|
JakeColtman/BayesianSurvivalAnalysis
|
mit
|
Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the survival functions for a particular set of features
3 - Plot the survival functions for two different sets of features
4 - For your results in part 3, calculate how much more likely a death event is for one than for the other over a given period of time
|
#### Plot baseline hazard function
#### Predict
#### Plot survival functions for different covariates
#### Plot some odds
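# A minimal sketch (cph from the previous cell; the covariate rows are hypothetical):
import pandas as pd
cph.baseline_survival_.plot()
x = pd.DataFrame({'age': [30, 60], 'male': [1, 1], 'search': [1, 0], 'brand': [0, 1]})
surv = cph.predict_survival_function(x)  # one survival curve per row of x
surv.plot()
# Relative chance of a death event by time t: compare 1 - S(t) between the rows
t = surv.index[len(surv) // 2]
print((1 - surv.loc[t, 1]) / (1 - surv.loc[t, 0]))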
|
Basic Presentation.ipynb
|
JakeColtman/BayesianSurvivalAnalysis
|
mit
|
Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
|
#### BMA Coefficient values
#### Different priors
|
Basic Presentation.ipynb
|
JakeColtman/BayesianSurvivalAnalysis
|
mit
|
List all Positions for an Account.
|
r = positions.PositionList(accountID=accountID)
client.request(r)
print(r.response)
|
Oanda v20 REST-oandapyV20/06.00 Position Management.ipynb
|
anthonyng2/FX-Trading-with-Python-and-Oanda
|
mit
|
List all open Positions for an Account.
|
r = positions.OpenPositions(accountID=accountID)
client.request(r)
|
Oanda v20 REST-oandapyV20/06.00 Position Management.ipynb
|
anthonyng2/FX-Trading-with-Python-and-Oanda
|
mit
|
Get the details of a single instrument’s position in an Account
|
instrument = "AUD_USD"
r = positions.PositionDetails(accountID=accountID, instrument=instrument)
client.request(r)
|
Oanda v20 REST-oandapyV20/06.00 Position Management.ipynb
|
anthonyng2/FX-Trading-with-Python-and-Oanda
|
mit
|
Close out the open Position for an instrument in an Account.
|
data = {
"longUnits": "ALL"
}
r = positions.PositionClose(accountID=accountID,
instrument=instrument,
data=data)
client.request(r)
|
Oanda v20 REST-oandapyV20/06.00 Position Management.ipynb
|
anthonyng2/FX-Trading-with-Python-and-Oanda
|
mit
|
Define resources
To execute Parsl apps we first need to define a set of resources on which they can run. Here we use a pool of threads.
|
# Imports assumed from the tutorial setup (the parsl 0.x API exposed these at top level)
from parsl import App, DataFlowKernel, ThreadPoolExecutor

workers = ThreadPoolExecutor(max_workers=4)
# We pass the workers to the DataFlowKernel which will execute our Apps over the workers.
dfk = DataFlowKernel(executors=[workers])
|
Bash-Tutorial.ipynb
|
Parsl/parsl_demos
|
apache-2.0
|
Defining Bash Apps
To demonstrate how to run apps written as Bash scripts, we use two mock science applications: simulate.sh and stats.sh. The simulate.sh script serves as a trivial proxy for any more complex scientific simulation application. It generates and prints a set of one or more random integers in the range [0-2^62) as controlled by its command line arguments. The stats.sh script serves as a trivial model of an "analysis" program. It reads N files each containing M integers and prints the average of all those numbers to stdout. Like simulate.sh it logs environmental information to stderr.
The following cell shows how apps can be composed from arbitrary Bash scripts. The simulate signature shows how variables can be passed to the Bash script (e.g., "sim_steps") as well as how standard Parsl parameters are managed (e.g., "stdout").
|
@App('bash', dfk)
def simulate(sim_steps=1, sim_range=100, sim_values=5, outputs=[], stdout=None, stderr=None):
    # A bash app returns its command line as a string. Positional and keyword
    # arguments to the function are substituted into that string via format(),
    # so every argument is available when the cmd_line string is built.
    # Here we compose the command-line call to simulate.sh with keyword arguments
    # to simulate() and redirect stdout to the first file listed in the outputs list.
    return '''echo "sim_steps: {sim_steps}\nsim_range: {sim_range}\nsim_values: {sim_values}"
echo "Starting run at $(date)"
$PWD/bin/simulate.sh --timesteps {sim_steps} --range {sim_range} --nvalues {sim_values} > {outputs[0]}
echo "Done at $(date)"
ls
'''
|
Bash-Tutorial.ipynb
|
Parsl/parsl_demos
|
apache-2.0
|
Running Bash Apps
Now that we've defined an app, let's run 10 parallel instances of it using a for loop. Each run will write 100 random numbers, each between 0 and 99, to the output file.
In order to track files created by Bash apps, a list of data futures (one for each file in the outputs[] list) is made available as an attribute of the AppFuture returned upon calling the decorated app fn.
<App_Future> = App_Function(... , outputs=['x.txt', 'y.txt'...])
[<DataFuture> ... ] = <App_Future>.outputs
|
simulated_results = []
# Launch 10 parallel runs of simulate() and put the futures in a list
for sim_index in range(10):
    sim_fut = simulate(sim_steps=1,
                       sim_range=100,
                       sim_values=100,
                       outputs=['stdout.{0}.txt'.format(sim_index)],
                       stderr='stderr.{0}.txt'.format(sim_index))
    simulated_results.append(sim_fut)
|
Bash-Tutorial.ipynb
|
Parsl/parsl_demos
|
apache-2.0
|
Handling Futures
The variable "simulated_results" contains a list of AppFutures, each corresponding to a running bash app.
Now let's print the status of the 10 jobs by checking if the app futures are done.
Note: you can re-run this step until all the jobs complete (all statuses are True), or move on, as a later step will block until all the jobs are complete.
|
print ([i.done() for i in simulated_results])
|
Bash-Tutorial.ipynb
|
Parsl/parsl_demos
|
apache-2.0
|
Retrieving Results
Each app returns one DataFuture. Here we gather all of these (the data futures of the file outputs) into a list (simulation_outputs). This is done by iterating over the AppFutures and taking the first and only DataFuture in each one's outputs list.
|
# Grab just the data futures for the output files from each simulation
simulation_outputs = [i.outputs[0] for i in simulated_results]
|
Bash-Tutorial.ipynb
|
Parsl/parsl_demos
|
apache-2.0
|
Defining a Second Bash App
We now explore how Parsl can be used to block on results. Let's define another app, analyze(), that calls stats.sh to find the average of the numbers in a set of files.
|
@App('bash', dfk)
def analyze(inputs=[], stdout=None, stderr=None):
    # Compose the command line for stats.sh, which takes a list of filenames
    # as arguments. Since we want a space-separated string rather than a
    # Python list (e.g. ['x.txt', 'y.txt']), join the input filenames and
    # format them into the cmd_line explicitly.
    input_files = ' '.join(inputs)
    return '$PWD/bin/stats.sh {0}'.format(input_files)
|
Bash-Tutorial.ipynb
|
Parsl/parsl_demos
|
apache-2.0
|
Blocking on Results
We call analyze with the list of data futures as inputs. This will block until all the simulate runs have completed and the data futures have 'resolved'. Finally, we print the result when it is ready.
|
results = analyze(inputs=simulation_outputs,
stdout='analyze.out',
stderr='analyze.err')
results.result()
with open('analyze.out', 'r') as f:
print(f.read())
|
Bash-Tutorial.ipynb
|
Parsl/parsl_demos
|
apache-2.0
|
Association (USES-A)
Passing an object of class 1 as an argument to a method of class 2
|
class A():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
    def addNums(self):
        return self.b + self.c

class B():
    def __init__(self, d, e):
        self.d = d
        self.e = e
    def addAllNums(self, arg):
        x = self.d + self.e + arg.b + arg.c
        return x

objA = A("hi", 2, 6)
objB = B(5, 9)
objB.addAllNums(objA)
|
tutorials/python/Ipython files/py basics/OOPS basics.ipynb
|
vivekec/datascience
|
gpl-3.0
|
Composition (PART-OF)
An object of class 1 is created inside the constructor of class 2
|
class A():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
    def addNums(self):
        return self.b + self.c

class B():
    def __init__(self, d, e):
        self.d = d
        self.e = e
        self.objA = A("hi", 2, 6)
    def addAllNums(self):
        x = self.d + self.e + self.objA.b + self.objA.c
        return x

objB = B(5, 9)
objB.addAllNums()
|
tutorials/python/Ipython files/py basics/OOPS basics.ipynb
|
vivekec/datascience
|
gpl-3.0
|
Inheritance (IS-A)
|
class A():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
    def addNums(self):
        return self.b + self.c

class B(A):
    def __init__(self, a, b, c, d, e):
        # A.__init__(self, a, b, c)
        super().__init__(a, b, c)
        self.d = d
        self.e = e
    def addAllNums(self):
        x = self.a + self.b + self.c + self.d + self.e
        return x

objB = B(1, 2, 3, 5, 9)
objB.addAllNums()
|
tutorials/python/Ipython files/py basics/OOPS basics.ipynb
|
vivekec/datascience
|
gpl-3.0
|
Function overriding
|
class A():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
    def addNums(self):
        return self.b * self.c

class B(A):
    def __init__(self, a, b, c, d, e):
        super().__init__(a, b, c)
        self.d = d
        self.e = e
    def addNums(self):
        return self.d + self.e
    def check(self):
        print("Class B func:", self.addNums())
        print("Class A func:", super().addNums())

objB = B(1, 2, 3, 5, 9)
objB.check()
|
tutorials/python/Ipython files/py basics/OOPS basics.ipynb
|
vivekec/datascience
|
gpl-3.0
|
There is no function overloading in Python
Redefining a method raises no error, but only the last defined version is executed
|
class A():
    def f1(self, x):
        return x
    def f1(self, x, y):
        return x, y

objA = A()
objA.f1(8, 5)
# objA.f1(8) # Gives an error: only the two-argument f1 survives
# How to work around the lack of function overloading
class A():
    def f1(self, name=None):
        if name is None:
            return 5
        else:
            return name

objA = A()
print(objA.f1())
objA.f1(8)
|
tutorials/python/Ipython files/py basics/OOPS basics.ipynb
|
vivekec/datascience
|
gpl-3.0
|
Open a Terminal through Jupyter Notebook
(Menu Bar -> Terminal -> New Terminal)
Create Queue and Feed Tensorflow Graph
Run the Next Cell to Display the Code
|
%%bash
cat /root/src/main/python/queue/tensorflow_hdfs.py
|
oreilly.ml/high-performance-tensorflow/notebooks/02_Feed_Queue_HDFS.ipynb
|
shareactorIO/pipeline
|
apache-2.0
|
I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will carry over from batch to batch.
|
def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size/num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]

def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
              learning_rate=0.001, grad_clip=5, sampling=False):
    if sampling == True:
        batch_size, num_steps = 1, 1
    tf.reset_default_graph()
    # Declare placeholders we'll feed into the graph
    with tf.name_scope('inputs'):
        inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
        x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
    with tf.name_scope('targets'):
        targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
        y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
        y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    # Build the RNN layers
    with tf.name_scope("RNN_cells"):
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
    with tf.name_scope("RNN_init_state"):
        initial_state = cell.zero_state(batch_size, tf.float32)
    # Run the data through the RNN layers
    with tf.name_scope("RNN_forward"):
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
    final_state = state
    # Reshape output so it's a bunch of rows, one row for each cell output
    with tf.name_scope('sequence_reshape'):
        seq_output = tf.concat(outputs, axis=1, name='seq_output')
        output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
    # Now connect the RNN outputs to a softmax layer and calculate the cost
    with tf.name_scope('logits'):
        softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
                                name='softmax_w')
        softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
        logits = tf.matmul(output, softmax_w) + softmax_b
        tf.summary.histogram('softmax_w', softmax_w)
        tf.summary.histogram('softmax_b', softmax_b)
    with tf.name_scope('predictions'):
        preds = tf.nn.softmax(logits, name='predictions')
        tf.summary.histogram('predictions', preds)
    with tf.name_scope('cost'):
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
        cost = tf.reduce_mean(loss, name='cost')
        tf.summary.scalar('cost', cost)
    # Optimizer for training, using gradient clipping to control exploding gradients
    with tf.name_scope('train'):
        tvars = tf.trainable_variables()
        grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
        train_op = tf.train.AdamOptimizer(learning_rate)
        optimizer = train_op.apply_gradients(zip(grads, tvars))
    merged = tf.summary.merge_all()
    # Export the nodes
    export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
                    'keep_prob', 'cost', 'preds', 'optimizer', 'merged']
    Graph = namedtuple('Graph', export_nodes)
    local_dict = locals()
    graph = Graph(*[local_dict[each] for each in export_nodes])
    return graph
|
tensorboard/Anna_KaRNNa_Summaries.ipynb
|
hvillanua/deep-learning
|
mit
|
Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
|
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=lstm_size,
                  num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)
    test_writer = tf.summary.FileWriter('./logs/2/test')
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/anna20.ckpt')
    n_batches = int(train_x.shape[1]/num_steps)
    iterations = n_batches * epochs
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
            iteration = e*n_batches + b
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: 0.5,
                    model.initial_state: new_state}
            summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost,
                                                          model.final_state, model.optimizer],
                                                         feed_dict=feed)
            loss += batch_loss
            end = time.time()
            print('Epoch {}/{} '.format(e+1, epochs),
                  'Iteration {}/{}'.format(iteration, iterations),
                  'Training loss: {:.4f}'.format(loss/b),
                  '{:.4f} sec/batch'.format((end-start)))
            train_writer.add_summary(summary, iteration)
            if (iteration%save_every_n == 0) or (iteration == iterations):
                # Check performance, notice dropout has been set to 1
                val_loss = []
                new_state = sess.run(model.initial_state)
                for x, y in get_batch([val_x, val_y], num_steps):
                    feed = {model.inputs: x,
                            model.targets: y,
                            model.keep_prob: 1.,
                            model.initial_state: new_state}
                    summary, batch_loss, new_state = sess.run([model.merged, model.cost,
                                                               model.final_state], feed_dict=feed)
                    val_loss.append(batch_loss)
                test_writer.add_summary(summary, iteration)
                print('Validation loss:', np.mean(val_loss),
                      'Saving checkpoint!')
                #saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
|
tensorboard/Anna_KaRNNa_Summaries.ipynb
|
hvillanua/deep-learning
|
mit
|
Constants
|
_DEFAULT_IMAGE_SIZE = 224
_NUM_CHANNELS = 3
_LABEL_CLASSES = 1001
RESNET_SIZE = 50 # We're loading a resnet-50 saved model.
# Model directory
MODEL_DIR='resnet_model_checkpoints'
VIS_DIR='visualization'
# RIEMANN_STEPS is the number of steps in a Riemann sum.
# This is used to compute an approximation of the integral of gradients by
# supplying images on the path from a blank image to the original image.
RIEMANN_STEPS = 30
# Return the top k classes and probabilities, so we can also visualize model inference
# against other contending classes besides the most likely class.
TOP_K = 5
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Download model checkpoint
The next step is to load the researcher's saved checkpoint into our estimator. We will download it from
http://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz using the following commands.
|
import os
import urllib.request
from subprocess import call

urllib.request.urlretrieve("http://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz", "resnet.tar.gz")
#unzip the file into a directory called resnet
call(["mkdir", MODEL_DIR])
call(["tar", "-zxvf", "resnet.tar.gz", "-C", MODEL_DIR])
# Make sure you see model checkpoint files in this directory
os.listdir(MODEL_DIR)
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Import the Model Architecture
In order to reconstruct the Resnet neural network used to train the Imagenet model, we need to load the architecture pieces. During the setup step, we checked out https://github.com/tensorflow/models/tree/v1.4.0/official/resnet. We can now load functions and constants from resnet_model.py into the notebook.
|
%run ../models/official/resnet/resnet_model.py #TODO: modify directory based on where you git cloned the TF models.
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Image preprocessing functions
Note that preprocessing functions are called during training as well (see https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py and https://github.com/tensorflow/models/blob/master/official/resnet/vgg_preprocessing.py), so we will need to extract relevant logic from these functions. Below is a simplified preprocessing code that normalizes the image's pixel values.
For simplicity, we assume the client provides properly-sized images 224 x 224 x 3 in batches. It will become clear later that sending images over ip in protobuf format can be more easily handled by storing a 4d tensor. The only preprocessing required here is to subtract the mean.
|
def preprocess_images(images):
    """Preprocesses images by scaling pixel values to [-0.5, 0.5].
    Args:
        images: A 4D `Tensor` representing a batch of images.
    Returns:
        image pixels normalized to be between -0.5 and 0.5
    """
    return tf.to_float(images) / 255 - 0.5
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Resnet Model Functions
We are going to create two estimators here since we need to run two model predictions.
The first prediction computes the top labels for the image by returning the argmax_k top logits.
The second prediction returns a sequence of gradients along the straight-line path from a purely grey image (127.5, 127.5, 127.5) to the final image. We use grey here because the resnet model transforms this pixel value to all 0s.
Below is the resnet model function.
|
def resnet_model_fn(features, labels, mode):
    """Our model_fn for ResNet to be used with our Estimator."""
    # Preprocess images as necessary for resnet
    features = preprocess_images(features['images'])
    # This network must be IDENTICAL to that used to train.
    network = imagenet_resnet_v2(RESNET_SIZE, _LABEL_CLASSES)
    # tf.estimator.ModeKeys.TRAIN will be false since we are predicting.
    logits = network(
        inputs=features, is_training=(mode == tf.estimator.ModeKeys.TRAIN))
    # Instead of the top 1 result, we can now return top k!
    top_k_logits, top_k_classes = tf.nn.top_k(logits, k=TOP_K)
    top_k_probs = tf.nn.softmax(top_k_logits)
    predictions = {
        'classes': top_k_classes,
        'probabilities': top_k_probs
    }
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
    )
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Gradients Model Function
The gradients model function takes as input a single image (a 4d tensor of dimension [1, 224, 224, 3]) and expands it to a series of images (tensor dimension [RIEMANN_STEPS + 1, 224, 224, 3]), where each image is a "fractional" image: image 0 is pure grey and image RIEMANN_STEPS is the original image. The gradients are then computed for each of these images, and various outputs are returned.
Note: each step is a single inference that returns an entire gradient pixel map.
The total gradient map evaluation can take a couple of minutes!
|
def gradients_model_fn(features, labels, mode):
    """Our model_fn for ResNet to be used with our Estimator."""
    # The most likely class from the features dict determines which logit
    # we compute gradients along the path for.
    most_likely_class = features['most_likely_class']
    # Features here is a 4d tensor of ONE image. Normalize it as in training and serving.
    features = preprocess_images(features['images'])
    # This network must be IDENTICAL to that used to train.
    network = imagenet_resnet_v2(RESNET_SIZE, _LABEL_CLASSES)
    # path_features should have dim [RIEMANN_STEPS + 1, 224, 224, 3]
    path_features = tf.zeros([1, 224, 224, 3])
    for i in range(1, RIEMANN_STEPS + 1):
        path_features = tf.concat([path_features, features * i / RIEMANN_STEPS], axis=0)
    # Path logits should evaluate logits for each path feature and return a 2d array for all path images and classes
    path_logits = network(inputs=path_features, is_training=(mode == tf.estimator.ModeKeys.TRAIN))
    # The logit we care about is only that pertaining to the most likely class.
    # The most likely class contains only a single integer, so retrieve it.
    target_logits = path_logits[:, most_likely_class[0]]
    # Compute gradients for each image with respect to each logit
    gradients = tf.gradients(target_logits, path_features)
    # Multiply elementwise to the original image to get weighted gradients for each pixel.
    gradients = tf.squeeze(tf.multiply(gradients, features))
    predictions = {
        'path_features': path_features,  # for debugging and for displaying path images
        'path_logits': path_logits,  # for debugging
        'target_logits': target_logits,  # use this to verify that the riemann integral works out
        'gradients': gradients  # for displaying gradient images and computing integrated gradient
    }
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,  # This is the returned value
    )
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Estimators
Load in the model_fn using the checkpoints from MODEL_DIR. This will initialize our weights which we will then use to run backpropagation to find integrated gradients.
|
# Load this model into our estimator
resnet_estimator = tf.estimator.Estimator(
model_fn=resnet_model_fn, # Call our generate_model_fn to create model function
model_dir=MODEL_DIR, # Where to look for model checkpoints
#config not needed
)
gradients_estimator = tf.estimator.Estimator(
model_fn=gradients_model_fn, # Call our generate_model_fn to create model function
model_dir=MODEL_DIR, # Where to look for model checkpoints
#config not needed
)
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Create properly sized image in numpy
Load whatever image you would like (local or from a URL), and resize and pad it to 224 x 224 x 3 using PIL.
|
def resize_and_pad_image(img, output_image_dim):
    """Resize the image to make it IMAGE_DIM x IMAGE_DIM pixels in size.
    If an image is not square, it will pad the top/bottom or left/right
    with black pixels to ensure the image is square.
    Args:
        img: the input 3-color image
        output_image_dim: resized and padded output length (and width)
    Returns:
        resized and padded image
    """
    old_size = img.size  # old_size[0] is in (width, height) format
    ratio = float(output_image_dim) / max(old_size)
    new_size = tuple([int(x * ratio) for x in old_size])
    # use thumbnail() or resize() method to resize the input image
    # thumbnail is an in-place operation
    # im.thumbnail(new_size, Image.ANTIALIAS)
    scaled_img = img.resize(new_size, Image.ANTIALIAS)
    # create a new image and paste the resized one on it
    padded_img = Image.new("RGB", (output_image_dim, output_image_dim))
    padded_img.paste(scaled_img, ((output_image_dim - new_size[0]) // 2,
                                  (output_image_dim - new_size[1]) // 2))
    return padded_img

IMAGE_PATH = 'https://www.popsci.com/sites/popsci.com/files/styles/1000_1x_/public/images/2017/09/depositphotos_33210141_original.jpg?itok=MLFznqbL&fc=50,50'
IMAGE_NAME = os.path.splitext(os.path.basename(IMAGE_PATH))[0]
print(IMAGE_NAME)
image = None
if 'http' in IMAGE_PATH:
    resp = requests.get(IMAGE_PATH)
    image = Image.open(BytesIO(resp.content))
else:
    image = Image.open(IMAGE_PATH)  # Parse the image from your local disk.
# Resize and pad the image
image = resize_and_pad_image(image, _DEFAULT_IMAGE_SIZE)
feature = np.asarray(image)
feature = np.array([feature])
# Display the image to validate
imgplot = plt.imshow(feature[0])
plt.show()
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Prediction Input Function
Since we are analyzing the model using the estimator api, we need to provide an input function for prediction. Fortunately, there are built-in input functions that can read from numpy arrays, e.g. tf.estimator.inputs.numpy_input_fn.
|
label_predictions = resnet_estimator.predict(
    tf.estimator.inputs.numpy_input_fn(
        x={'images': feature},
        shuffle=False
    )
)
label_dict = next(label_predictions)
# Print out probabilities and class names
classval = label_dict['classes']
probsval = label_dict['probabilities']
labels = []
with open('client/imagenet1000_clsid_to_human.txt', 'r') as f:
    label_reader = csv.reader(f, delimiter=':', quotechar='\'')
    for row in label_reader:
        labels.append(row[1][:-1])
# The served model uses 0 as the miscellaneous class, and so starts indexing
# the imagenet images from 1. Subtract 1 to reference the text correctly.
classval = [labels[x - 1] for x in classval]
class_and_probs = [str(p) + ' : ' + c for c, p in zip(classval, probsval)]
for j in range(0, TOP_K):
    print(class_and_probs[j])
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Computing Gradients
Run the gradients estimator to retrieve a generator of metrics and gradient pictures, and pickle the images.
|
# make the visualization directory
IMAGE_DIR = os.path.join(VIS_DIR, IMAGE_NAME)
call(['mkdir', '-p', IMAGE_DIR])
# Get one of the top classes. 0 picks out the best, 1 picks out second best, etc...
best_label = label_dict['classes'][0]
# Compute gradients with respect to this class
gradient_predictions = gradients_estimator.predict(
    tf.estimator.inputs.numpy_input_fn(
        x={'images': feature, 'most_likely_class': np.array([best_label])},
        shuffle=False
    )
)
# Start computing the sum of gradients (to be used for integrated gradients)
int_gradients = np.zeros((224, 224, 3))
gradients_and_logits = []
# Print gradients along the path, and pickle them
for i in range(0, RIEMANN_STEPS + 1):
    gradient_dict = next(gradient_predictions)
    gradient_map = gradient_dict['gradients']
    print('Path image %d: gradient: %f, logit: %f' % (i, np.sum(gradient_map), gradient_dict['target_logits']))
    # Gradient visualization output pickles
    pickle.dump(gradient_map, open(os.path.join(IMAGE_DIR, 'path_gradient_' + str(i) + '.pkl'), "wb"))
    int_gradients = np.add(int_gradients, gradient_map)
    gradients_and_logits.append((np.sum(gradient_map), gradient_dict['target_logits']))
pickle.dump(int_gradients, open(os.path.join(IMAGE_DIR, 'int_gradients.pkl'), "wb"))
pickle.dump(gradients_and_logits, open(os.path.join(IMAGE_DIR, 'gradients_and_logits.pkl'), "wb"))
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Visualization
If you simply want to play around with visualization, unpickle the result from above so you do not have to rerun the prediction. The following visualizes the gradients with different amplifications of the pixel values, and also prints their derivatives and logits to show where the biggest differentiators lie. You can also modify the INTERPOLATION flag to increase the "fatness" of pixels.
Below are two examples of visualization methods: one computing the gradient value normalized to between 0 and 1, and another visualizing absolute deviation from the median.
Plotting individual image gradients along path
First, let us plot the individual gradient value for all gradient path images. Pay special attention to the images with a large positive gradient (i.e. in the direction of increasing logit for the most likely class). Do the pixel gradients resemble the image class you are trying to detect?
|
AMPLIFICATION = 2.0
INTERPOLATION = 'none'
gradients_and_logits = pickle.load(open(os.path.join(IMAGE_DIR, 'gradients_and_logits.pkl'), "rb"))
for i in range(0, RIEMANN_STEPS + 1):
    gradient_map = pickle.load(open(os.path.join(IMAGE_DIR, 'path_gradient_' + str(i) + '.pkl'), "rb"))
    min_grad = np.ndarray.min(gradient_map)
    max_grad = np.ndarray.max(gradient_map)
    median_grad = np.median(gradient_map)
    gradient_and_logit = gradients_and_logits[i]
    plt.figure(figsize=(10,10))
    plt.subplot(121)
    plt.title('Image %d: grad: %.2f, logit: %.2f' % (i, gradient_and_logit[0], gradient_and_logit[1]))
    imgplot = plt.imshow((gradient_map - min_grad) / (max_grad - min_grad),
                         interpolation=INTERPOLATION)
    plt.subplot(122)
    plt.title('Image %d: grad: %.2f, logit: %.2f' % (i, gradient_and_logit[0], gradient_and_logit[1]))
    imgplot = plt.imshow(np.abs(gradient_map - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),
                         interpolation=INTERPOLATION)
    plt.show()
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Plot the Integrated Gradient
When integrating over all gradients along the path, the result is an image that captures larger signals from pixels with the large gradients. Is the integrated gradient a clear representation of what it is trying to detect?
|
AMPLIFICATION = 2.0
INTERPOLATION = 'none'
# Plot the integrated gradients
int_gradients = pickle.load(open(os.path.join(IMAGE_DIR, 'int_gradients.pkl'), "rb" ))
min_grad = np.ndarray.min(int_gradients)
max_grad = np.ndarray.max(int_gradients)
median_grad = np.median(int_gradients)
plt.figure(figsize=(15,15))
plt.subplot(131)
imgplot = plt.imshow((int_gradients - min_grad) / (max_grad - min_grad),
interpolation=INTERPOLATION)
plt.subplot(132)
imgplot = plt.imshow(np.abs(int_gradients - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),
interpolation=INTERPOLATION)
plt.subplot(133)
imgplot = plt.imshow(feature[0])
plt.show()
# Verify that the average of gradients is equal to the difference in logits
print('total logit diff: %f' % (gradients_and_logits[RIEMANN_STEPS][1] - gradients_and_logits[0][1]))
print('sum of integrated gradients: %f' % (np.sum(int_gradients) / (RIEMANN_STEPS + 1)))
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Plot the integrated gradients for each channel
We can also visualize individual pixel contributions from different RGB channels.
Can you think of any other visualization ideas to try out?
|
AMPLIFICATION = 2.0
INTERPOLATION = 'none'
# Show red-green-blue channels for integrated gradients
for channel in range(0, 3):
    gradient_channel = int_gradients[:,:,channel]
    min_grad = np.ndarray.min(gradient_channel)
    max_grad = np.ndarray.max(gradient_channel)
    median_grad = np.median(gradient_channel)
    plt.figure(figsize=(10,10))
    plt.subplot(121)
    imgplot = plt.imshow((gradient_channel - min_grad) / (max_grad - min_grad),
                         interpolation=INTERPOLATION,
                         cmap='gray')
    plt.subplot(122)
    imgplot = plt.imshow(np.abs(gradient_channel - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),
                         interpolation=INTERPOLATION,
                         cmap='gray')
    plt.show()
|
jupyter/resnet_model_understanding.ipynb
|
google-aai/tf-serving-k8s-tutorial
|
apache-2.0
|
Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual. In case multiple capacity curves are input, a spectral shape also needs to be defined.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Please also provide a spectral shape using the parameter input_spectrum if multiple capacity curves are used.
|
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Vb-dfloor.csv"
input_spectrum = "../../../../../../rmtk_data/FEMAP965spectrum.txt"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
Sa_ratios = utils.get_spectral_ratios(capacity_curves, input_spectrum)
utils.plot_capacity_curves(capacity_curves)
|
notebooks/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb
|
GEMScienceTools/rmtk
|
agpl-3.0
|
Idealise pushover curves
In order to use this methodology the pushover curves need to be idealised. Please choose an idealised shape using the parameter idealised_type. The valid options for this methodology are "bilinear" and "quadrilinear". Idealised curves can also be directly provided as input by setting the field Idealised to TRUE in the input file defining the capacity curves.
|
idealised_type = "quadrilinear"
idealised_capacity = utils.idealisation(idealised_type, capacity_curves)
utils.plot_idealised_capacity(idealised_capacity, capacity_curves, idealised_type)
|
notebooks/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb
|
GEMScienceTools/rmtk
|
agpl-3.0
|
Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below. Currently only the interstorey drift damage model type is supported.
|
damage_model_file = "../../../../../../rmtk_data/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
|
notebooks/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb
|
GEMScienceTools/rmtk
|
agpl-3.0
|
Calculate fragility functions
The damage threshold dispersion is calculated and integrated with the record-to-record dispersion through Monte Carlo simulations. Please enter the number of Monte Carlo samples to be performed using the parameter montecarlo_samples in the cell below.
|
montecarlo_samples = 50
fragility_model = SPO2IDA_procedure.calculate_fragility(capacity_curves, idealised_capacity, damage_model, montecarlo_samples, Sa_ratios, 1)
|
notebooks/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb
|
GEMScienceTools/rmtk
|
agpl-3.0
|
Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
|
minIML, maxIML = 0.01, 2
utils.plot_fragility_model(fragility_model, minIML, maxIML)
print(fragility_model)
|
notebooks/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb
|
GEMScienceTools/rmtk
|
agpl-3.0
|
Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
|
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
|
notebooks/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb
|
GEMScienceTools/rmtk
|
agpl-3.0
|
Step 2: Initialize Webcam and HDMI Out
|
# monitor configuration: 640*480 @ 60Hz
Mode = VideoMode(640,480,24)
hdmi_out = base.video.hdmi_out
hdmi_out.configure(Mode,PIXEL_BGR)
hdmi_out.start()
# monitor (output) frame buffer size
frame_out_w = 1920
frame_out_h = 1080
# camera (input) configuration
frame_in_w = 640
frame_in_h = 480
# initialize camera from OpenCV
import cv2
videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);
print("Capture device is open: " + str(videoIn.isOpened()))
|
boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 3: Show input frame on HDMI output
|
# Capture webcam image
import numpy as np
ret, frame_vga = videoIn.read()
# Display webcam image via HDMI Out
if (ret):
    outframe = hdmi_out.newframe()
    outframe[0:480,0:640,:] = frame_vga[0:480,0:640,:]
    hdmi_out.writeframe(outframe)
else:
    raise RuntimeError("Failed to read from camera.")
|
boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 4: Now use matplotlib to show image inside notebook
|
# Output webcam image as JPEG
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
plt.imshow(frame_vga[:,:,[2,1,0]])
plt.show()
|
boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 5: Apply the face detection to the input
|
import cv2
np_frame = frame_vga
face_cascade = cv2.CascadeClassifier(
'/home/xilinx/jupyter_notebooks/base/video/data/'
'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(
'/home/xilinx/jupyter_notebooks/base/video/data/'
'haarcascade_eye.xml')
gray = cv2.cvtColor(np_frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
    cv2.rectangle(np_frame,(x,y),(x+w,y+h),(255,0,0),2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = np_frame[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex,ey,ew,eh) in eyes:
        cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
|
boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 6: Show results on HDMI output
|
# Output OpenCV results via HDMI
outframe[0:480,0:640,:] = frame_vga[0:480,0:640,:]
hdmi_out.writeframe(outframe)
|
boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 7: Now use matplotlib to show image inside notebook
|
# Output OpenCV results via matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
plt.imshow(np_frame[:,:,[2,1,0]])
plt.show()
|
boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
Step 8: Release camera and HDMI
|
videoIn.release()
hdmi_out.stop()
del hdmi_out
|
boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb
|
cathalmccabe/PYNQ
|
bsd-3-clause
|
2. Read in the hanford.csv file
|
df = pd.read_csv("data/hanford.csv")
df
|
class6/donow/Skinner_Barnaby_DoNow_6.ipynb
|
ledeprogram/algorithms
|
gpl-3.0
|
3. Calculate the basic descriptive statistics on the data
|
df['Exposure'].mean()
df['Exposure'].describe()
|
class6/donow/Skinner_Barnaby_DoNow_6.ipynb
|
ledeprogram/algorithms
|
gpl-3.0
|
4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
|
df.corr()
df.plot(kind='scatter', x='Mortality', y='Exposure')
|
class6/donow/Skinner_Barnaby_DoNow_6.ipynb
|
ledeprogram/algorithms
|
gpl-3.0
|
5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
|
import statsmodels.formula.api as smf
lm = smf.ols(formula='Mortality~Exposure', data=df).fit()
lm.params
intercept, Exposure = lm.params
Mortality = Exposure*10 + intercept
Mortality
|
class6/donow/Skinner_Barnaby_DoNow_6.ipynb
|
ledeprogram/algorithms
|
gpl-3.0
|
6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
|
intercept, Exposure = lm.params
Mortality = Exposure*10+intercept
Mortality
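# A minimal sketch for part 6 (assuming the lm fit above): overlay the
# fitted line on the scatter and report the coefficient of determination.
ax = df.plot(kind='scatter', x='Exposure', y='Mortality')
ax.plot(df['Exposure'], lm.predict(df), c='red')  # fitted regression line
print(lm.rsquared)  # r^2 for the OLS fit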
|
class6/donow/Skinner_Barnaby_DoNow_6.ipynb
|
ledeprogram/algorithms
|
gpl-3.0
|
Text classification of movie reviews
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: These documents were translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest state of the official English documentation. If you have suggestions for improving this translation, please send a pull request to the tensorflow/docs GitHub repository. To volunteer to write or review community translations, contact the docs-ja@tensorflow.org mailing list.
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary (two-class) classification, an important and widely applicable kind of machine learning problem.
We'll use the IMDB dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
This notebook uses tf.keras, a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide.
|
# keras.datasets.imdb is broken in TF 1.13 and 1.14 when used with numpy 1.16.3
!pip install tf_nightly
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed so that the reviews (sequences of words) have been converted to arrays of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
|
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
The argument num_words=10000 keeps the 10,000 most frequently occurring words in the training data. Rare words are discarded to keep the size of the data manageable.
Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer of 0 or 1, where 0 indicates a negative review and 1 indicates a positive review.
|
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
The review text has been converted to integers, each representing a specific word in a dictionary. Here's what the first review looks like:
|
print(train_data[0])
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Movie reviews can have different lengths. The code below shows the number of words in the first and second reviews. Since the inputs to a neural network must all be the same length, we'll need to resolve this later.
|
len(train_data[0]), len(train_data[1])
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Convert the integers back to words
It's useful to know how to convert integers back to text. Here we define a helper function that queries a dictionary object mapping integers to strings:
|
# A dictionary mapping words to integer indices
word_index = imdb.get_word_index()
# The first few indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Using decode_review, we can display the text of the first review:
|
decode_review(train_data[0])
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Prepare the data
The reviews (arrays of integers) must be converted to tensors before being fed into the neural network. There are two ways to do this:
One option is to one-hot encode the arrays into vectors of 0s and 1s indicating word occurrence. For example, the array [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5. Then the first layer of the network could be a Dense layer that can handle floating-point vector data. This approach is memory-intensive, though, requiring a word-count x review-count matrix.
Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape num_examples * max_length. An Embedding layer able to handle this shape can then be used as the first layer of the network.
In this tutorial we take the second approach; a minimal sketch of the first is shown below for contrast.
Since the movie reviews must all be the same length, we'll use the pad_sequences function to standardize the lengths.
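A minimal sketch of the multi-hot option described above (helper name hypothetical; not used in the rest of this tutorial):
import numpy as np

def multi_hot(sequences, dimension=10000):
    # one row per review; set the positions of the words that occur to 1.0
    results = np.zeros((len(sequences), dimension))
    for i, seq in enumerate(sequences):
        results[i, seq] = 1.0
    return results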
|
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Let's look at the length of the examples now:
|
len(train_data[0]), len(train_data[1])
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Next, inspect the (now padded) first example:
|
print(train_data[0])
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Build the model
A neural network is created by stacking layers. This requires two main architectural decisions:
How many layers should the model have?
How many hidden units should each layer use?
In this example, the input data consists of arrays of word indices. The labels to predict are 0 or 1. Let's build a model for this problem.
|
# The input shape is the vocabulary size used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
The layers are stacked sequentially to build the classifier:
The first layer is an Embedding layer. It takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. They add a dimension to the output array, so the resulting dimensions are (batch, sequence, embedding).
Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle variable-length input in the simplest way possible.
This fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
The last layer is densely connected to a single output node. Using the sigmoid activation function, its output is a floating-point value between 0 and 1, representing a probability or confidence level.
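As a quick sanity check (our own addition, assuming TF 2.x-style eager execution rather than the r1 graph mode used elsewhere in this tutorial), you can verify the shapes the first two layers produce:

import numpy as np
from tensorflow import keras

sample = np.array([[1, 2, 3, 0], [4, 5, 0, 0]])  # toy batch: 2 padded reviews of length 4
embedded = keras.layers.Embedding(10000, 16)(sample)
pooled = keras.layers.GlobalAveragePooling1D()(embedded)
print(embedded.shape)  # (2, 4, 16): (batch, sequence, embedding)
print(pooled.shape)    # (2, 16): one fixed-length vector per example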
Hidden units
The above model has two intermediate or "hidden" layers between the input and the output. The number of outputs (units, nodes, or neurons) is the dimensionality of the layer's internal representation, in other words, the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional internal representation space), and/or more layers, the network can learn more complex internal representations. However, it also becomes more computationally expensive and may learn unwanted patterns: ones that improve performance on the training data but not on the test data. This problem is called overfitting, and we will investigate it later.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we will use the binary_crossentropy loss function.
This is not the only possible choice of loss function: you could, for instance, use mean_squared_error. But binary_crossentropy is generally better suited for dealing with probabilities; it measures the "distance" between probability distributions, in our case between the true distribution and the predictions.
Later, when we explore regression problems (say, predicting the price of a house), we will see how to use the other loss function, mean_squared_error.
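To make the "distance" intuition concrete, here is a small NumPy computation of binary cross-entropy (our own illustration, not part of the original tutorial):

import numpy as np

y_true = np.array([1.0, 0.0, 1.0])  # ground-truth labels
y_pred = np.array([0.9, 0.1, 0.6])  # predicted probabilities

# Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p))
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce)  # ~0.24: confident, correct predictions keep the loss low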
Now, configure the model with an optimizer and a loss function:
|
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='binary_crossentropy',
metrics=['accuracy'])
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Create a validation set
When training, we want to check the accuracy of the model on data it has not seen before. Create a validation set by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune the model using only the training data, and then use the test data just once to evaluate the accuracy.)
|
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train the model
Train the model for 40 epochs in mini-batches of 512 samples, that is, 40 iterations over all the samples in x_train and y_train. While training, monitor the model's loss and accuracy on the 10,000 samples of the validation set:
|
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Evaluate the model
Let's see how the model performs. Two values are returned: loss (a number representing the error; lower is better) and accuracy.
|
results = model.evaluate(test_data, test_labels, verbose=2)
print(results)
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model's accuracy could approach 95%.
Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with a record of everything that happened during training.
|
history_dict = history.history
history_dict.keys()
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
There are four entries, one for each metric monitored during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
|
import matplotlib.pyplot as plt
%matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" は青いドット
plt.plot(epochs, loss, 'bo', label='Training loss')
# ”b" は青い実線
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()   # clear figure (acc and val_acc from above are plotted directly below)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
|
site/ja/r1/tutorials/keras/basic_text_classification.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Then you can import the necessary modules ...
|
from clyngor.as_pyasp import TermSet,Atom
from urllib.request import urlopen
from meneco.meneco import query, utils, sbml
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
Next, you can load a draft network from an SBML file ...
|
draft_sbml= urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/ectocyc.sbml')
draftnet = sbml.readSBMLnetwork(draft_sbml, 'draft')
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
load the seeds ...
|
seeds_sbml = urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/seeds.sbml')
seeds = sbml.readSBMLseeds(seeds_sbml)
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
and load the targets ...
|
targets_sbml = urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/targets.sbml')
targets = sbml.readSBMLtargets(targets_sbml)
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
Then you can check the draft network for unproducible targets ...
|
model = query.get_unproducible(draftnet, targets, seeds)
unproducible = set(a[0] for pred in model if pred == 'unproducible_target' for a in model[pred])
print('{0} unproducible targets:\n\t{1}\n'.format(len(unproducible), '\n\t'.join(unproducible)))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
You can load another reaction network, such as the MetaCyc repair database ...
|
repair_sbml = urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/metacyc_16-5.sbml')
repairnet = sbml.readSBMLnetwork(repair_sbml, 'repairnet')
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
and combine the draft network with the repair database ...
|
combinet = draftnet
combinet = TermSet(combinet.union(repairnet))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
and then check for which targets producibility cannot be restored even with the combined network ...
|
model = query.get_unproducible(combinet, targets, seeds)
never_producible = set(a[0] for pred in model if pred == 'unproducible_target' for a in model[pred])
print('{0} unreconstructable targets:\n\t{1}\n'.format(len(never_producible), '\n\t'.join(never_producible)))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
and for which targets the production paths are repairable ...
|
reconstructable_targets = unproducible.difference(never_producible)
print('{0} reconstructable targets:\n\t{1}\n'.format(len(reconstructable_targets), '\n\t'.join(reconstructable_targets)))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
You can compute the essential reactions for each reconstructable target ...
|
essential_reactions = set()
for t in reconstructable_targets:
single_target = TermSet()
single_target.add(Atom('target("' + t + '")'))
print('\nComputing essential reactions for', t,'... ', end=' ')
model = query.get_intersection_of_completions(draftnet, repairnet, seeds, single_target)
print(' done.')
essentials_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])
print('{0} essential reactions for target {1}:\n\t{2}'.format(len(essentials_lst), t, '\n\t'.join(essentials_lst)))
essential_reactions = essential_reactions.union(essentials_lst)
print('Overall {0} essential reactions found:\n\t{1}\n'.format(len(essential_reactions), '\n\t'.join(essential_reactions)))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
You can compute a completion of minimal size suitable to produce all reconstructable targets ...
|
targets = TermSet(Atom('target("' + t+'")') for t in reconstructable_targets)
model = query.get_minimal_completion_size(draftnet, repairnet, seeds, targets)
one_min_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])
optimum = len(one_min_sol_lst)
print('minimal size =',optimum)
print('One minimal completion of size {0}:\n\t{1}\n'.format(
optimum, '\n\t'.join(one_min_sol_lst)))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
We can compute the reactions common to all completions of a given size ...
|
model = query.get_intersection_of_optimal_completions(draftnet, repairnet, seeds, targets, optimum)
cautious_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])
print('Intersection of all solutions of size {0}:\n\t{1}\n'.format(
optimum, '\n\t'.join(cautious_sol_lst)))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
We can compute the union of all completions of a given size ...
|
model = query.get_union_of_optimal_completions(draftnet, repairnet, seeds, targets, optimum)
brave_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])
print('Union of all solutions of size {0}:\n\t{1}\n'.format(
optimum, '\n\t'.join(brave_sol_lst)))
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|
And finally, compute all completions of a given size (for this notebook we print only the first three) ...
|
models = query.get_optimal_completions(draftnet, repairnet, seeds, targets, optimum)
count = 0
for model in models:
one_min_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])
count += 1
print('Completion {0}:\n\t{1}\n'.format(
str(count), '\n\t'.join(one_min_sol_lst)))
if count == 3: break
|
meneco.ipynb
|
bioasp/meneco
|
gpl-3.0
|