st118268
|
I want to plot the weights over time. Let's say I have a model like nn.Linear(2, 2); I would like to plot all the weights over time. I plan to transform the weights into a list of size four, so that I can just call something like the pseudocode below:
for i in range(weights.size(0)):
    for j in range(weights.size(1)):
        grid.plot(weights[i, j])
where weights[i, j] is a list of the values of weights[i, j] collected during training.
Or does a feature like this already exist in PyTorch?
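A minimal sketch of one way to do the collection (my own suggestion, not a built-in PyTorch feature, and it assumes matplotlib for the plotting): record the flattened weights after every optimizer step, then plot one curve per weight.
import torch.nn as nn
import matplotlib.pyplot as plt

model = nn.Linear(2, 2)
history = [[] for _ in range(model.weight.data.numel())]   # one list per scalar weight

for step in range(100):                                    # stand-in for the real training loop
    # ... forward / backward / optimizer.step() would go here ...
    for k, w in enumerate(model.weight.data.view(-1).tolist()):
        history[k].append(w)

for k, values in enumerate(history):
    plt.plot(values, label='w%d' % k)
plt.legend()
plt.show()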
|
st118269
|
I am trying to use loss terms with the output of intermediary layers, but I get an error that “it cannot compute the gradients with respect to labels”. To be more explicit:
Say you have an architecture like
layer1 = self.layer1(input)
layer1 = F.relu(layer1)
layer2 = self.layer2(input)
layer2 = F.relu(layer2)
layer3 = self.layer3(input)
layer3 = F.relu(layer3)
And I would want to use a loss term like
criterion = nn.MSELoss()
loss_term = criterion(layer2, layer1)
And I get the error mentioned above: “cannot compute gradients with respect to labels. Either mention requires_gradients = False or set the variable as volatile”. (The error message is approximate, since I don’t have PyTorch and my code at hand to quickly reproduce it.) I need to implement something like the above snippet; can anyone please help?
|
st118270
|
nn.MSELoss expects the second input to be the “target”. In your case layer1 is a Variable which requires its gradients to be computed (according to your snippet) and hence the error.
You can compute your loss like this:
loss = torch.pow(layer2 - layer1, 2).mean()
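Another option (a sketch on my part, not from the original reply, assuming your PyTorch version has Variable.detach()): treat layer1 as a constant target by detaching it, so MSELoss no longer tries to backpropagate into it:
loss_term = criterion(layer2, layer1.detach())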
|
st118271
|
Is it possible to implement cross-entropy like this (the equivalent of BCELoss)? I know it’s a sum over p * log(q), but I’m not sure how to implement it by myself. If it works, this could be a solution to my problem (I need both the MSE and BCE losses). Thanks for your answer.
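For reference, a minimal sketch of what a hand-written cross-entropy could look like (my own illustration, assuming q holds predicted probabilities and p the target distribution, both of shape batch x classes):
import torch
import torch.nn.functional as F
from torch.autograd import Variable

logits = Variable(torch.randn(2, 3), requires_grad=True)
q = F.softmax(logits)                                    # predicted probabilities
p = Variable(torch.Tensor([[0, 1, 0], [1, 0, 0]]))       # target distribution

eps = 1e-8                                               # avoid log(0)
cross_entropy = -(p * torch.log(q + eps)).sum(1).mean()  # sum over p * log(q), averaged over the batch
cross_entropy.backward()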
|
st118272
|
I get the above-mentioned error when I try to run the following code:
import torch.distributed
import ipdb
if __name__ == '__main__':
    ipdb.set_trace()
    torch.distributed.init_process_group(backend='mpi')
    rank = torch.distributed.collectives.get_rank()
    print('My rank is ' + str(rank))
I am running this code on command line as follows:
mpiexec -n 2 python main.py 4
|
st118273
|
torch.distributed is still alpha quality and you can expect weird broken builds.
|
st118274
|
Thank you for your reply.
Does it mean that I should update my installed package to get a working version? Or is the torch.distributed package not functional yet, in which case I will wait for your announcement?
|
st118275
|
If I want to look at some intermediate states of LSTM forwarding, like the values of forget gates, input gates etc, are there any simple ways to do it?
(The only solution I can come up with is to rewrite a LSTM)
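One alternative sketch (my own, assuming a single-layer nn.LSTM and the gate ordering documented for it: input, forget, cell, output): recompute the gate activations for one time step from the layer's own weights instead of rewriting the LSTM.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=1)
x_t = Variable(torch.randn(1, 10))   # input at one time step (batch of 1)
h_t = Variable(torch.zeros(1, 20))   # previous hidden state

# gate pre-activations: W_ih * x + b_ih + W_hh * h + b_hh, laid out as [i, f, g, o]
pre = F.linear(x_t, lstm.weight_ih_l0, lstm.bias_ih_l0) + \
      F.linear(h_t, lstm.weight_hh_l0, lstm.bias_hh_l0)
i_gate, f_gate, g_gate, o_gate = pre.chunk(4, 1)
print(i_gate.sigmoid(), f_gate.sigmoid(), g_gate.tanh(), o_gate.sigmoid())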
|
st118276
|
Hi everyone,
I built my own loss function to calculate the log-likelihood for a Mixture Density Network, connected to LSTM. However, there seems to be a problem somewhere (the loss goes to infinity, and the whole thing collapses).
While I am debugging, I found that after I get the loss, and do the backward() step, the loss doesn’t have any grad information
loss.grad = None
Is this normal?
Thank you
|
st118277
|
Yes, only the gradients w.r.t. explicitly created Variables (called leaf Variables) are saved. If you want to get the grad w.r.t. some intermediate values (i.e. computed from leaf Variables), you need to use hooks (search for register_hook in the docs).
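A short sketch of that (my own example): register_hook lets you capture the gradient flowing into an intermediate (non-leaf) Variable during backward().
import torch
from torch.autograd import Variable

x = Variable(torch.randn(3), requires_grad=True)
y = (x * 2).sum()                                    # y is an intermediate (non-leaf) Variable

grads = {}
y.register_hook(lambda grad: grads.update(y=grad))   # called with dL/dy during backward()

y.backward()
print(grads['y'])                                    # the gradient w.r.t. y (here just 1)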
|
st118278
|
Hey guys,
I’ve been trying to play around with some gradients and came across an error while running this command:
return torch.inverse(torch.mm(x, x.t()))
This gave me the error:
TypeError: Type Variable doesn’t implement stateless method inverse
Is there an issue with my syntax, or is the inverse not implemented? Thanks!
Abhimanyu
|
st118279
|
torch.inverse takes a tensor as an argument, not a Variable. You can get the tensor out of a Variable using .data.
|
st118280
|
Thanks! I will try it out, however I need it to be differentiable. Will converting it to a tensor preserve the gradients?
|
st118281
|
Hey, I tried your suggestion - doesn’t work. The .data call is:
return torch.inverse(torch.mm(x, x.t()).data)
And I get the error:
AttributeError: 'FloatTensor' object has no attribute 'data'
|
st118282
|
The gradient of the inverse is not implemented, and apparently won’t be implemented in the near future, see https://github.com/pytorch/pytorch/issues/440
|
st118283
|
That’s what the error was suggesting, but apparently torch.mm returns a tensor and not a Variable.
I don’t know why you are getting an error in the first place, because I tried the following code and it didn’t give any error:
mat1 = torch.randn(3, 3)
torch.inverse(torch.mm(mat1, mat1.t()))
|
st118284
|
Thanks for the link @fmassa! In my case, all eigenvalues of the variable are going to be <1 without exception, so I’ll just approximate the inverse with a power series I think.
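A sketch of that idea (my own, under the stated assumption that the eigenvalues keep the series convergent): a truncated Neumann series A⁻¹ ≈ Σₖ (I − A)ᵏ only uses mm and add, so it stays differentiable.
import torch
from torch.autograd import Variable

def approx_inverse(A, n_terms=20):
    I = Variable(torch.eye(A.size(0)))
    R = I - A                      # the series converges when the eigenvalues of R have magnitude < 1
    result = I
    power = I
    for _ in range(n_terms):
        power = torch.mm(power, R)
        result = result + power
    return result

x = Variable(torch.randn(4, 4) * 0.1, requires_grad=True)
A = torch.mm(x, x.t()) + 0.5 * Variable(torch.eye(4))   # keep eigenvalues safely inside (0, 2)
inv_approx = approx_inverse(A)
inv_approx.sum().backward()                             # gradients flow back to x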
|
st118285
|
Hi guys,
I’m new to PyTorch. I wonder if PyTorch supports indexing specific dimensions. For example, for a tensor a:
a = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
I want to select the first and third columns as a subtensor:
[[1, 3]
[4, 6]]
I checked documents and discussion but still don’t know how to do it. I need some guidance.
Thanks!
|
st118286
|
We are modifying our indexing to be on par with numpy.
In the meanwhile, you can do this:
a.index_select(1, torch.LongTensor([0, 2]))
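A quick usage check of that (my own snippet):
import torch

a = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
sub = a.index_select(1, torch.LongTensor([0, 2]))   # columns 0 and 2
print(sub)                                          # [[1, 3], [4, 6]]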
|
st118287
|
Hi all,
I started to use PyTorch yesterday and it works pretty well.
Today I tried to use data_parallel but there are some errors.
I tried to reproduce the error with this simple code:
import torch
import torch.utils.data
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import sys
import numpy as np
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.const = Variable(torch.from_numpy(np.zeros((5,5),dtype=np.float32))).cuda(0)
    def forward(self, x, y):
        bat = x.size(0)
        return self.const.unsqueeze(0).expand(bat, 5, 5)+x+y
model=Test()
model = torch.nn.DataParallel(model, device_ids=range(int(sys.argv[1])))
inp1 = Variable(torch.from_numpy(np.zeros((6,5,5),dtype=np.float32))).cuda()
inp2 = Variable(torch.from_numpy(np.zeros((6,5,5),dtype=np.float32))).cuda()
print inp1
print model(inp1, inp2)
The error msg is:
RuntimeError: arguments are located on different GPUs at /b/wheel/pytorch-src/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:214
|
st118288
|
Should I specify the GPU id in this case? I tried setting both to 0 and leaving both blank. Neither of them works.
|
st118289
|
I found the solution. I need to specify self.const as a parameter.
import torch
import torch.utils.data
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import sys
import numpy as np
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.const = nn.Parameter(torch.from_numpy(np.zeros((5,5),dtype=np.float32)), requires_grad=False)
    def forward(self, x, y):
        bat = x.size(0)
        return self.const.unsqueeze(0).expand(bat, 5, 5)+x+y
model=Test()
model = torch.nn.DataParallel(model, device_ids=range(int(sys.argv[1])))
model = model.cuda()
inp1 = Variable(torch.from_numpy(np.ones((6,5,5),dtype=np.float32))).cuda()
inp2 = Variable(torch.from_numpy(np.ones((6,5,5),dtype=np.float32))).cuda()
print inp1
print model(inp1, inp2)
|
st118290
|
When you want to optimize the network you need to specify:
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
|
st118291
|
Hello guys. According to this link, if we want to implement a loss function using autograd, we should not unpack Variables. Something like this is forbidden in autograd-based backprop:
var.data[0, :]
I would like to know: is indexing var[0, :] the same kind of unpacking as var.data[0, :]? (var is a Variable.)
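A short illustration of the difference (my own, under the assumption that “unpacking” here means leaving the autograd graph): indexing a Variable returns another Variable that autograd still tracks, while indexing .data returns a plain tensor that autograd cannot see.
import torch
from torch.autograd import Variable

var = Variable(torch.randn(2, 3), requires_grad=True)
print(type(var[0, :]))        # Variable -> gradients still flow through it
print(type(var.data[0, :]))   # FloatTensor -> detached from the graph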
|
st118292
|
Thanks for your response. As you mentioned, I tried the following code:
output = net(images)  # batches*95*S*S
for ind in range(B):
    output[:,2+(1+coords)*ind,:,:] = torch.sqrt(output[:,2+(1+coords)*ind,:,:])
    output[:,3+(1+coords)*ind,:,:] = torch.sqrt(output[:,3+(1+coords)*ind,:,:])
But the error below occurred:
Traceback (most recent call last):
File “Main_v3.py”, line 200, in
train(epoch)
File “Main_v3.py”, line 193, in train
cost.backward()
File “/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py”, line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File “/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/autograd/_functions/pointwise.py”, line 130, in backward
i, = self.saved_tensors
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Could you please tell me what is the problem?
|
st118293
|
Just to confirm, what is your pytorch version? (run torch.__version__ in the python interpreter).
One thing, tensor assignment is an inplace operation, so that might indicate where the problem is.
|
st118294
|
So is there any way to manipulate some parts of my output Variable and avoid this error? Because, as you can see in my loss function, I need to take the square root of some parts of the output Variable!
|
st118295
|
The problem seems to be that you are trying to apply an in-place operation that can’t be in-place because the backprop wouldn’t work (sqrt).
First idea that I had was to perform the operations out of place in different tensors and then concatenate them, something like
out1 = output[:, 2::1+coords, :, :].sqrt()
out2 = output[:, 3::1+coords, :, :].sqrt()
but that still requires concatenating and shuffling around the elements in the tensor, which doesn’t sound like a good solution.
I have no better ideas now given the information I have about what you want to do, sorry
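One more workaround sketch (my own, not from the original reply, reusing the names from the earlier snippet with stand-in sizes): write the square roots into a clone of the output, so the tensor autograd saved for backward is never modified in place.
import torch
from torch.autograd import Variable

B, coords = 2, 4
output = Variable(torch.rand(1, 95, 7, 7), requires_grad=True)   # stand-in for net(images)

out = output.clone()            # clone first, so `output` itself is never modified in place
for ind in range(B):
    out[:, 2 + (1 + coords) * ind, :, :] = output[:, 2 + (1 + coords) * ind, :, :].sqrt()
    out[:, 3 + (1 + coords) * ind, :, :] = output[:, 3 + (1 + coords) * ind, :, :].sqrt()

cost = out.sum()                # stand-in for the real loss; use `out` instead of `output`
cost.backward()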
|
st118296
|
I have to convert the following code from Lua/Torch to PyTorch
function GramMatrix()
    local net = nn.Sequential()
    net:add(nn.View(-1):setNumInputDims(2))
    local concat = nn.ConcatTable()
    concat:add(nn.Identity())
    concat:add(nn.Identity())
    net:add(concat)
    net:add(nn.MM(false, true))
    return net
end
So far I have tried this:
class GramMatrix(nn.Module):
    def forward(self, input):
        a, b, c, d = input.size()  # a = batch size (=1)
        features = input.view(a * b, c * d)  # resize F_XL into \hat F_XL
        G = torch.mm(features, features.t())  # compute the gram product
        # we 'normalize' the values of the gram matrix
        # by dividing by the number of elements in each feature map
        return G.div(a * b * c * d)
But the output of GramMatrix().forward() has requires_grad = True, which later on causes problems with nn.MSELoss().forward().
What should I do?
|
st118297
|
In a loss function (Loss(input, target)), target.requires_grad must be False.
If you want to calculate the grad for the target, maybe simply use
loss = (predict - target)**2 / predict.size(0)
It can’t take advantage of the backend, but I don’t think it will cost much time either.
I think mse_loss should be available in torch.nn.functional.
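Alternatively (a sketch of my own, assuming the Gram matrix of the style features is meant to be a fixed target), detaching the target keeps nn.MSELoss happy:
import torch
import torch.nn as nn
from torch.autograd import Variable

gram = GramMatrix()                                        # the class defined above
feat = Variable(torch.randn(1, 8, 4, 4), requires_grad=True)
target = gram(Variable(torch.randn(1, 8, 4, 4))).detach()  # detach -> requires_grad is False

criterion = nn.MSELoss()
loss = criterion(gram(feat), target)
loss.backward()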
|
st118298
|
Hi!
I need to implement a module which is essentially a 2D grid of independent torch.nn.Linear instances. Is there an elegant PyTorch way to write the following code:
out = np.zeros(w.shape[1:])
for c in xrange(out.shape[0]):
    for i in xrange(out.shape[1]):
        for j in xrange(out.shape[2]):
            out[c, i, j] = x[:, i, j].dot(w[:, c, i, j])
Also known as Einstein summation in numpy:
out = np.einsum('kij,kmij->mij', x, w)
|
st118299
|
An obvious solution here is just reshape it all and use a huge Linear layer, which is probably good enough for starters.
|
st118300
|
You could use torch.bmm or torch.baddbmm here, I think, which do batched matrix multiplies.
So:
view your 2D grid as a 1D list of matrix multiplies to do,
do a batch matrix multiply,
then unview back to the 2D grid.
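A sketch of that recipe (my own, assuming x has shape (k, i, j) and w has shape (k, m, i, j), matching the einsum 'kij,kmij->mij' above):
import torch

k, m, i, j = 5, 7, 3, 4
x = torch.randn(k, i, j)
w = torch.randn(k, m, i, j)

# 1. view the 2D grid (i, j) as a batch of i*j independent matrix multiplies
x_b = x.permute(1, 2, 0).contiguous().view(i * j, 1, k)      # (i*j, 1, k)
w_b = w.permute(2, 3, 0, 1).contiguous().view(i * j, k, m)   # (i*j, k, m)

# 2. batch matrix multiply
out_b = torch.bmm(x_b, w_b)                                  # (i*j, 1, m)

# 3. unview back to the 2D grid
out = out_b.view(i, j, m).permute(2, 0, 1)                   # (m, i, j)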
|
st118301
|
That’s even better, thanks. I overlooked it for some reason. That should do the trick better.
|
st118302
|
Hi,
In Theano or TensorFlow there are two different options for the convolution operation: ‘valid’ and ‘same’. I want to produce a same-size convolution output when I am using an even-sized filter.
However, with the current convolution layer, there is no way I can produce the same output dimension if the kernel size is an even number.
For example: [1 2 3] convolved with kernel [a b] — if I use no padding, I will get 2 outputs; if I use padding 1, I will get 4 outputs. I can’t get 3 outputs.
Is there any solution for this ?
Thanks
|
st118303
|
Note that you can add an extra F.pad before the convolution to adjust the input so that the output has the same size.
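A sketch of that with asymmetric padding for an even kernel (my own example, using a 2D convolution): padding one extra row/column on only one side keeps the output the same size as the input.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(1, 1, 5, 5))     # (batch, channels, H, W)
conv = nn.Conv2d(1, 1, kernel_size=2)     # even kernel size

x_padded = F.pad(x, (1, 0, 1, 0))         # (left, right, top, bottom): pad only left and top
out = conv(x_padded)
print(out.size())                         # (1, 1, 5, 5) -> "same" output size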
|
st118304
|
I see. For now I will use it. Thanks
IMHO it would be nice if the convolution operator gave those choices, so we could avoid if-else logic for even kernel sizes.
|
st118305
|
My code runs fine on the cpu but when I try to run it on the GPU, I get the following stack trace:
Traceback (most recent call last):
File "run_experiments.py", line 123, in <module>
train_model(model, train_loader, nb_batches, optimizer, criterion, **vars(args))
File "run_experiments.py", line 54, in train_model
predictions, _ = model.forward(inputs, targets[:, :-1, :])
File "/net/if1/ab3cb/grad_stuff/vislang/project/Visual-Story-Telling/source_code/model_zoo/seq2seq.py", line 41, in forward
_, context_vec = self.encoder(inputs, hidden_init)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/net/if1/ab3cb/grad_stuff/vislang/project/Visual-Story-Telling/source_code/model_zoo/encoder.py", line 36, in forward
output, hidden_state = self.gru(inputs, hidden_init)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/modules/rnn.py", line 91, in forward
output, hidden = func(input, self.all_weights, hx)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/_functions/rnn.py", line 327, in forward
return func(input, *fargs, **fkwargs)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/autograd/function.py", line 202, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/autograd/function.py", line 224, in forward
result = self.forward_extended(*nested_tensors)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/_functions/rnn.py", line 269, in forward_extended
cudnn.rnn.forward(self, input, hx, weight, output, hy)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/backends/cudnn/rnn.py", line 239, in forward
fn.hx_desc = cudnn.descriptor(hx)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/backends/cudnn/__init__.py", line 304, in descriptor
descriptor.set(tensor)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/backends/cudnn/__init__.py", line 110, in set
self, _typemap[tensor.type()], tensor.dim(),
KeyError: 'torch.FloatTensor'
My training code is as follows:
def train_model(model, train_loader, nb_batches, optimizer, criterion, **kwargs):
    running_loss = 0
    for epoch in range(kwargs["epochs"]):
        iters = 0
        for inputs, targets in tqdm(train_loader, total=nb_batches):
            # process the inputs from the data loader to make them compatible with
            # the pytorch graph
            inputs, targets = torch.from_numpy(inputs).float(), torch.from_numpy(targets).float()
            # convert to cuda tensors if cuda is available
            if torch.cuda.is_available():
                inputs, targets = inputs.cuda(), targets.cuda()
            inputs, targets = Variable(inputs), Variable(targets)
            # clear out the gradients buffer
            optimizer.zero_grad()
            predictions, _ = model(inputs, targets[:, :-1, :])
            loss = criterion(predictions, targets[:, 1:, :])
            loss.backward()
            optimizer.step()
            running_loss += loss.data[0]
            '''if iters % 10 == 0:
                print("Loss at {} iteration: {}".format(iters+1, running_loss/(iters+1)))'''
            if iters > nb_batches:
                break
            iters += 1

# define the model, optimizer and criterion
model = Seq2Seq(args.embed_size, args.hidden_size)
if torch.cuda.is_available():
    model = model.cuda()
optimizer = optim.SGD(model.parameters(), lr=args.lr)
criterion = nn.KLDivLoss()
train_model(model, train_loader, nb_batches, optimizer, criterion, **vars(args))
I have the model definition in this gist. Additionally, I recently upgraded my PyTorch version due to slow loading of the GPU, by following this topic. I have just started using PyTorch on the GPU, so any help in figuring this out would be appreciated.
|
st118306
|
Did you check the type of hidden_init before executing the following line in the encoder?
output, hidden_state = self.gru(inputs, hidden_init)
I can see you have converted the inputs to cuda tensors, so the only thing that could cause the problem might be hidden_init (I guess).
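In case it helps, a sketch (my own, with hypothetical names) of building the initial hidden state with the same tensor type as the model's parameters, so it is automatically a cuda tensor whenever the model is:
import torch
from torch.autograd import Variable

def make_hidden_init(module, num_layers, batch_size, hidden_size):
    # allocate the initial hidden state with the same type (CPU or CUDA) as the
    # module's parameters, so the GRU never sees a mismatched tensor type
    weight = next(module.parameters()).data
    return Variable(weight.new(num_layers, batch_size, hidden_size).zero_())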
|
st118307
|
Thanks for that tip. That gets rid of the KeyError, but now I receive the following error (stack trace below):
Traceback (most recent call last):
File "run_experiments.py", line 118, in <module>
train_model(model, train_loader, nb_batches, optimizer, criterion, **vars(args))
File "run_experiments.py", line 47, in train_model
loss.backward()
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/_functions/linear.py", line 22, in backward
grad_input = torch.mm(grad_output, weight)
TypeError: torch.mm received an invalid combination of arguments - got (torch.FloatTensor, torch.cuda.FloatTensor), but expected one of:
* (torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
didn't match because some of the arguments have invalid types: (torch.FloatTensor, torch.cuda.FloatTensor)
* (torch.FloatTensor source, torch.FloatTensor mat2)
didn't match because some of the arguments have invalid types: (torch.FloatTensor, torch.cuda.FloatTensor)
I don’t understand why the loss.backward() operation expects anything to be a non-cuda tensor when I have converted my model and associated parameters to cuda tensors. Am I missing something here?
PS: I have updated the gists to reflect the new code.
|
st118308
|
Maybe you typecast a Variable to cuda somewhere, rather than the Variable.data.
|
st118309
|
I have changed my code so that I convert all tensors to cuda before wrapping them in Variables but now I am getting a different error (stack trace below):
Traceback (most recent call last):
File "run_experiments.py", line 122, in <module>
train_model(model, train_loader, nb_batches, optimizer, criterion, **vars(args))
File "run_experiments.py", line 50, in train_model
loss = criterion(predictions, targets[:, 1:, :])
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 36, in forward
return backend_fn(self.size_average, weight=self.weight)(input, target)
File "/if1/ab3cb/miniconda3/envs/fast_torch/lib/python3.5/site-packages/torch/nn/_functions/thnn/auto.py", line 41, in forward
output, *self.additional_args)
TypeError: FloatDistKLDivCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.cuda.FloatTensor, torch.FloatTensor, bool), but expected (int state, torch.FloatTensor input, torch.FloatTensor target, torch.FloatTensor output, bool sizeAverage)
I have edited the code to reflect the new changes
|
st118310
|
One quick question: when we typecast a Variable to cuda, I thought Variable.data was also cast to cuda, but from your statement it doesn’t seem so. Can you explain why? Since a Variable is like a wrapper around a tensor, if we cast a Variable, everything wrapped by the Variable should be cast as well (I guess).
|
st118311
|
I think you need:
criterion = nn.KLDivLoss().cuda()
(I have to check, but I think it has weights.)
|
st118312
|
If you have a script I can run (preferably small – 30 lines), I can debug it for you. Otherwise, just go into pdb, break at places and see where the FloatTensor (not cuda.FloatTensor) is coming from…
|
st118313
|
@smth I have updated the gist with a relatively small driver script to reproduce the error. Let me know if you run into any problems. Appreciate your help!
|
st118314
|
I tried to take a look at this today (sorry for the delay), but your gist is still missing model_utils.
|
st118315
|
Yes — I even forgot to update the thread. I have solved the problem. I had a time-distributed wrapper in the model_utils script, and I had forgotten to port the tensors to cuda over there. Thanks everyone for all the help. Appreciate it!
|
st118316
|
Hi, I simply make a big embedding layer (10M vocabulary), as in the code below.
optimizer.step() is very slow (less than 100 samples/second).
I tried CPU, CPU sparse, and GPU (cuda), but all of them are very slow. CPU non-sparse is the fastest.
Can I get a reason? If I remove loss.backward() and optimizer.step(), it’s 10000+ samples/second (I mean, the data generator is not a bottleneck).
class Model(nn.Module):
    def __init__(self, n_words=10000000, dim_word=64):
        super(Model, self).__init__()
        self.n_words = n_words
        self.dim_word = dim_word
        self.embedding = nn.Embedding(self.n_words, self.dim_word, sparse=False)
    def forward(self, indices):
        y = self.embedding(indices)
        return y

def train():
    model = Model(10000000, 64)
    criterion = loss.TripletMarginLoss()
    optimizer = optim.Adagrad(model.parameters(), lr=0.1)
    ...
|
st118317
|
Hello!
It seems the time complexity of training an Embedding of N words is not O(1) (maybe O(N) or more).
Thus I found training a 5M-word embedding is much slower than training a 1M-word embedding. So I tried to split it into smaller ones; it was better, although not perfect.
Which size of embedding are you trying?
|
st118318
|
Hello,
I hope this might be helpful. It’s just a coding of my simple approach; it was OK for my purpose.
https://github.com/thnkim/LargeEmbedding
|
st118319
|
Hi guys,
PyTorch is not considering the sign with which I calculate my error. The gradient update should be a subtraction if the error is calculated as hypothesis - target, and an addition if the error is target - hypothesis. In numpy we would change the sign of the gradient update (w += or w -=), and it is pretty intuitive. But with PyTorch, in either case we do w -=, no matter how we calculate the error. Is PyTorch designed for this behavior?
Below is my code with target - hypothesis.
import torch as th
from torch.autograd import Variable
epochs = 501
lr = 1
XOR_X = [[0, 0], [0, 1], [1, 0], [1, 1]]
XOR_Y = [[0, 1], [1, 0], [1, 0], [0, 1]]
if th.cuda.is_available():
    dtype = th.cuda.FloatTensor
else:
    dtype = th.FloatTensor
x_ = Variable(th.FloatTensor(XOR_X).type(dtype), requires_grad=False)
y_ = Variable(th.FloatTensor(XOR_Y).type(dtype), requires_grad=False)
w1 = Variable(th.randn(2, 5).type(dtype), requires_grad=True)
w2 = Variable(th.randn(5, 2).type(dtype), requires_grad=True)
b1 = Variable(th.zeros(5).type(dtype), requires_grad=True)
b2 = Variable(th.zeros(2).type(dtype), requires_grad=True)
def forward(x):
    a2 = x.mm(w1)
    # pytorch didn't have numpy like broadcasting when i wrote this script
    # expand_as make the tensor as similar size as the other tensor
    a2 = a2.add(b1.expand_as(a2))
    h2 = a2.sigmoid()
    a3 = h2.mm(w2)
    a3 = a3.add(b2.expand_as(a3))
    hyp = a3.sigmoid()
    return hyp

for epoch in range(epochs):
    hyp = forward(x_)
    cost = y_ - hyp
    cost = cost.pow(2).sum()
    if epoch % 500 == 0:
        print(cost.data[0])
    cost.backward()
    # why negative
    w1.data -= lr * w1.grad.data
    w2.data -= lr * w2.grad.data
    b1.data -= lr * b1.grad.data
    b2.data -= lr * b2.grad.data
    w1.grad.data.zero_()
    w2.grad.data.zero_()

for x in XOR_X:
    hyp = forward(Variable(th.FloatTensor([x])))
    print(x, hyp.max(1)[1].data)
|
st118320
|
cost = cost.pow(2).sum()
This makes the loss invariant to sign. Hence what you see.
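A tiny check of that statement (my own snippet): squaring makes (target − hyp)² and (hyp − target)² the exact same function, so the gradients match and the same w -= lr * grad update applies either way.
import torch
from torch.autograd import Variable

w = Variable(torch.Tensor([2.0]), requires_grad=True)
target = Variable(torch.Tensor([5.0]))

cost1 = (target - w).pow(2).sum()
cost1.backward()
g1 = w.grad.data.clone()

w.grad.data.zero_()
cost2 = (w - target).pow(2).sum()
cost2.backward()
g2 = w.grad.data.clone()

print(g1, g2)   # identical gradients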
|
st118321
|
Hi all,
I was reading about fractional max-pooling a few days back (https://arxiv.org/pdf/1412.6071.pdf).
Then I found out that it was implemented in the old Torch version, but it’s not available in PyTorch. Are we planning to add this? I’m ready to try it out. @smth?
|
st118322
|
We haven’t exposed this at the Python level. What’s missing is that we need to write a Python wrapper for this, similar to:
github.com/torch/nn/blob/master/SpatialFractionalMaxPooling.lua
github.com/pytorch/pytorch/blob/master/torch/nn/modules/pooling.py
It’s a low priority task, but we’ll eventually expose it.
If you want to give a hand at implementing the wrapper, we’d love a contribution
|
st118323
|
Hi, I’m using Python 3.6.0 (Anaconda/macOS) and would like to upgrade to Python 3.6.1.0 because 3.6.0.0 has some bugs with visdom.
But the latest PyTorch still seems to conflict with the upgrade to Python 3.6.1.0, at least on Anaconda/macOS.
|
st118324
|
Finally, an update for Python 3.6.1 has been released, so now you can use both PyTorch and visdom. Thank you.
|
st118325
|
Hi Team,
Below is the XOR NN using PyTorch. It looks like the output given by max()[1] is wrong. Please review my output as a proof of concept.
import torch as th
from torch.autograd import Variable
epochs = 2000
lr = 1
XOR_X = [[0, 0], [0, 1], [1, 0], [1, 1]]
XOR_Y = [[0, 1], [1, 0], [1, 0], [0, 1]]
x_ = Variable(th.FloatTensor(XOR_X), requires_grad=False)
y_ = Variable(th.FloatTensor(XOR_Y), requires_grad=False)
w1 = Variable(th.randn(2, 3), requires_grad=True)
w2 = Variable(th.randn(3, 2), requires_grad=True)
b1 = Variable(th.zeros(3), requires_grad=True)
b2 = Variable(th.zeros(2), requires_grad=True)
def forward(x):
    a2 = x.mm(w1)
    # pytorch didn't have numpy like broadcasting when i wrote this script
    # expand_as make the tensor as similar size as the other tensor
    a2 = a2.add(b1.expand_as(a2))
    h2 = a2.sigmoid()
    a3 = h2.mm(w2)
    a3 = a3.add(b2.expand_as(a3))
    hyp = a3.sigmoid()
    return hyp

for epoch in range(epochs):
    hyp = forward(x_)
    cost = y_ - hyp
    cost = cost.pow(2).sum()
    if epoch % 500 == 0:
        print(cost.data[0])
    cost.backward()
    w1.data -= lr * w1.grad.data
    w2.data -= lr * w2.grad.data
    b1.data -= lr * b1.grad.data
    b2.data -= lr * b2.grad.data
    w1.grad.data.zero_()
    w2.grad.data.zero_()

for x in XOR_X:
    hyp = forward(Variable(th.FloatTensor([x])))
    values, indices = hyp.max(0)
    print('==========================\nX is: ', x)
    print('==========================\n hyp is: ', hyp)
    print('==========================\n indices from argmax: ', indices)
==========================
X is: [0, 0]
hyp is: Variable containing:
0.0166 0.9810
[torch.FloatTensor of size 1x2]
==========================
indices from argmax: Variable containing:
0 0
[torch.LongTensor of size 1x2]
|
st118326
|
I found the same issue with getting the max indices - it would consistently return all zeros.
|
st118327
|
The issue is that your Tensor is of size (1x2), and you are taking the max over dimension 0 (which has only one element). Take the max over dimension 1 instead
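For the 1x2 output shown above, that looks like this (my own check):
import torch
from torch.autograd import Variable

hyp = Variable(torch.Tensor([[0.0166, 0.9810]]))   # shape (1, 2), as in the output above
values, indices = hyp.max(1)                        # max over dimension 1 (the two outputs)
print(indices)                                      # 1, the argmax you expected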
|
st118328
|
It would include components and abstractions (and maybe recent models) commonly used in RL (similar to the pytorch/text and pytorch/vision repos).
We could start by forking the ChainerRL repo:
https://github.com/pfnet/chainerrl
|
st118329
|
There was a proof-of-concept RL repo under pytorch during alpha phase, but it wasn’t ready before the release.
@Soumith_Chintala Any updates on this?
|
st118330
|
@fmassa @Soumith_Chintala @jekbradbury
ChainerRL (muupan is its maintainer) says they want to support pytorch as a backend and are asking for suggestions on how to do so:
github.com/chainer/chainerrl — “PyTorch as an additional backend” (opened Apr 22, 2017 by muupan, labels: enhancement, prio:low): “I’m curious about whether ChainerRL can support PyTorch as an additional NN backend. Its interface is similar to Chainer’s, but I’m…”
ChainerRL has a lot of the most recent RL architectures/components/abstractions implemented, so having it support pytorch would be very useful.
|
st118331
|
RL, unlike say vision or text, is not easy to get the API design right for.
There are several competing RL packages for PyTorch, but none is generic enough to be a core package – this is why we removed the official RL package before release.
|
st118332
|
Ok, so I have a model split up into multiple files. I am running the show from a Jupyter notebook. The first time I ran everything, it was converging just fine, but started to overfit. I killed the kernel, told it to train for only 3 epochs, and started everything again. I DID NOT TOUCH THE CODE! Now it’s not converging.
Has this ever happened to anyone?
Edit: Never mind… I accidentally set requires_grad = False somewhere.
|
st118333
|
Hi, all,
I am totally new here. I have a basic question: if I have an image list and a label list, how do I fit them into data utils?
|
st118334
|
Just create a Dataset for that. Something like
class MyDataset(torch.utils.data.Dataset):
    def __init__(self, list_of_images, list_of_labels):
        self.list_of_images = list_of_images
        self.list_of_labels = list_of_labels
        assert len(list_of_images) == len(list_of_labels)
    def __getitem__(self, idx):
        return self.list_of_images[idx], self.list_of_labels[idx]
    def __len__(self):
        return len(self.list_of_labels)
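A usage sketch (my own, with dummy data standing in for the real image and label lists): wrap the dataset in a DataLoader to get shuffled mini-batches.
import torch

dataset = MyDataset(list_of_images=[torch.randn(3, 32, 32) for _ in range(10)],
                    list_of_labels=list(range(10)))
loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)
for images, labels in loader:
    print(images.size(), labels.size())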
|
st118335
|
I just installed PyTorch on Windows Linux Subsystem (WSL):
root@TESLA:~# conda install pytorch torchvision -c soumith
Fetching package metadata ...........
Solving package specifications: .
Package plan for installation in environment /root/miniconda2:
The following NEW packages will be INSTALLED:
pytorch: 0.1.11-py27_5 soumith
torchvision: 0.1.8-py27_2 soumith
Proceed ([y]/n)? y
root@TESLA:~#
root@TESLA:~# python
Python 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:09:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>>
>>>
>>>
>>> import torch
>>>
>>> import torch.nn as nn
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named nn
>>>
>>>
>>>
>>>
>>> exit()
root@TESLA:~#
root@TESLA:~# ll /root/miniconda2/pkgs/pytorch-0.1.11-py27_5/lib/python2.7/site-packages/torch
total 63M
drwxrwxrwx 2 root 0 Apr 26 08:55 autograd
drwxrwxrwx 2 root 0 Apr 26 08:55 backends
drwxrwxrwx 2 root 0 Apr 26 08:55 cuda
drwxrwxrwx 2 root 0 Apr 26 08:55 distributed
drwxrwxrwx 2 root 0 Apr 26 08:55 legacy
drwxrwxrwx 2 root 0 Apr 26 08:55 lib
drwxrwxrwx 2 root 0 Apr 26 08:55 multiprocessing
drwxrwxrwx 2 root 0 Apr 26 08:55 nn
drwxrwxrwx 2 root 0 Apr 26 08:55 optim
drwxrwxrwx 2 root 0 Apr 26 08:55 sparse
drwxrwxrwx 2 root 0 Apr 26 08:55 _thnn
drwxrwxrwx 2 root 0 Apr 26 08:55 utils
-rwxrwxr-x 1 root 63M Mar 31 09:53 _C.so
-rwxrwxr-x 1 root 18K Mar 31 09:53 _dl.so
-rw-rw-r-- 2 root 3.6K Mar 31 09:32 functional.py
-rw-rw-r-- 2 root 4.9K Mar 31 09:53 functional.pyc
-rw-rw-r-- 2 root 8.4K Mar 31 09:32 __init__.py
-rw-rw-r-- 2 root 11K Mar 31 09:53 __init__.pyc
-rw-rw-r-- 2 root 14K Mar 31 09:32 serialization.py
-rw-rw-r-- 2 root 14K Mar 31 09:53 serialization.pyc
-rw-rw-r-- 2 root 3.3K Mar 31 09:32 storage.py
-rw-rw-r-- 2 root 5.8K Mar 31 09:53 storage.pyc
-rw-rw-r-- 2 root 34K Mar 31 09:32 _tensor_docs.py
-rw-rw-r-- 2 root 30K Mar 31 09:53 _tensor_docs.pyc
-rw-rw-r-- 2 root 15K Mar 31 09:32 tensor.py
-rw-rw-r-- 2 root 19K Mar 31 09:53 tensor.pyc
-rw-rw-r-- 2 root 11K Mar 31 09:32 _tensor_str.py
-rw-rw-r-- 2 root 11K Mar 31 09:53 _tensor_str.pyc
-rw-rw-r-- 2 root 101K Mar 31 09:32 _torch_docs.py
-rw-rw-r-- 2 root 100K Mar 31 09:53 _torch_docs.pyc
-rw-rw-r-- 2 root 3.5K Mar 31 09:32 _utils.py
-rw-rw-r-- 2 root 4.0K Mar 31 09:53 _utils.pyc
-rw-rw-r-- 2 root 31 Mar 31 09:50 version.py
-rw-rw-r-- 2 root 172 Mar 31 09:53 version.pyc
root@TESLA:~#
Looks like I can import torch fine, but not any of its modules. Can anyone help me with this?
|
st118336
|
Just tried it on a standalone Ubuntu 14.04 machine: same problem.
Here’s my PATH:
root@Pascal:/home/michael/NN/PyTorch# echo $PATH
/root/miniconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
|
st118337
|
I noticed an interesting thing: being able to import torch modules depends on my current directory. Also, it depends on the presence of a pytorch source code file in the current directory. For example, I have a torch.py file containing just two lines:
import torch
import torch.nn
If I try to execute this file with “python torch.py”, it will fail to import the torch.nn module. If I remove this file from the current directory, watch:
root@Pascal:/home/michael/NN#
root@Pascal:/home/michael/NN# python
>>> import torch
>>> import torch.nn
>>> exit()
root@Pascal:/home/michael/NN#
root@Pascal:/home/michael/NN# cp PyTorch/torch.py .
root@Pascal:/home/michael/NN#
root@Pascal:/home/michael/NN#
root@Pascal:/home/michael/NN# python
>>> import torch
>>> import torch.nn
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named nn
>>>
>>> exit()
root@Pascal:/home/michael/NN#
root@Pascal:/home/michael/NN# rm torch.py*
root@Pascal:/home/michael/NN#
root@Pascal:/home/michael/NN# python
>>> import torch
>>> import torch.nn
>>>
>>>
Weird, right?
|
st118338
|
This is expected; it’s how Python works.
If you have a file torch.py in your current directory, it’ll take precedence for the import.
You can debug these issues by printing which torch got imported:
>>> import torch
>>> print(torch)
|
st118339
|
Hi all,
I created a new SuperModule class which allows for a lot of great high-level functionality without sacrificing ANY model flexibility. You define your models exactly as you would with nn.Module, except now you have access to fit(), evaluate(), and predict() functions, can use a ton of nice Callbacks, Constraints, and Regularizers - and there’s a sweet tqdm progress bar.
It inherits directly from nn.Module, so you can still do manual training if necessary and access all of its members. Also, there is a fit_loader() function to fit directly on DataLoader objects.
The code is available at the 3rd-party torchsample repository. My motivation is that people can take this code and tailor it to their liking or expand on it.
Here’s a small example of the main functionality (full example in the torchsample README):
from torchsample.modules import SuperModule
class Network(SuperModule):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3)
        self.fc1 = nn.Linear(1600, 128)
        self.fc2 = nn.Linear(128, 10)
    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 1600)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)
# constraints
# -> Nonneg on Conv layers applied at end of every epoch
# -> UnitNorm on FC layers applied every 4 batches
from torchsample.constraints import NonNeg, UnitNorm
constraints = [NonNeg(frequency=1, unit='batch', module_filter='*conv*'),
UnitNorm(frequency=4, unit='batch', module_filter='*fc*')]
# regularizers
# -> L1 on Conv layers
# -> L2 on FC layers
from torchsample.regularizers import L1Regularizer, L2Regularizer
regularizers = [L1Regularizer(scale=1e-6, module_filter='*conv*'),
L2Regularizer(scale=1e-6, module_filter='*fc*')]
# callbacks
# lambda callback
from torchsample.callbacks import LambdaCallback
callbacks = [LambdaCallback(on_train_end=lambda logs: print('TRAINING FINISHED'))]
model = Network()
model.set_loss(F.nll_loss)
model.set_optimizer(optim.Adadelta, lr=1.0)
model.set_regularizers(regularizers)
model.set_constraints(constraints)
model.set_callbacks(callbacks)
# fit model
model.fit(x_train, y_train,
validation_data=(x_test, y_test),
nb_epoch=5,
batch_size=128,
verbose=1)
# evaluate on test data
val_loss = model.evaluate(x_test, y_test)
# predict on input data
y_pred = model.predict(x_test)
Example of the progress bar:
|
st118340
|
Just as an update, I implemented the following callbacks which I know people have been asking for:
ModelCheckpoint - saves model weights during training (here)
EarlyStopping - terminates training if the loss doesn’t improve (here)
LearningRateScheduler - schedules the LR according to the current epoch/LR/loss (here)
ReduceLROnPlateau - reduces the LR if the loss doesn’t improve (here)
CSVLogger - logs train/val loss and other metrics to a csv file during training (here)
To note, these can all be used in manual training by instantiating the class and calling the appropriate function - e.g. callback.on_batch_begin() or callback.on_epoch_begin().
Happy to answer any questions or take requests.
Examples of all the callbacks:
from torchsample.callbacks import ModelCheckpoint
callbacks = [ModelCheckpoint(file='/users/ncullen/desktop/test/model_{epoch}_{loss}.pt',
monitor='val_loss',
save_best_only=False,
max_checkpoints=3)]
from torchsample.callbacks import CSVLogger
callbacks = [CSVLogger(file='/users/ncullen/desktop/test/logger.csv',append=True)]
from torchsample.callbacks import EarlyStopping
callbacks = [EarlyStopping(monitor='val_loss',
min_delta=0,
patience=2)]
from torchsample.callbacks import LearningRateScheduler
save_lrs = []
def lr_schedule(epoch, lr, **kwargs):
"""exponential decay"""
new_lr = lr[0] * 1e-5**(epoch / 200)
save_lrs.append(new_lr)
return new_lr
callbacks = [LearningRateScheduler(lr_schedule)]
from torchsample.callbacks import ReduceLROnPlateau
callbacks = [ReduceLROnPlateau(monitor='val_loss',
factor=0.1,
patience=1,
cooldown=0,
min_lr=1e-3,
verbose=1)]
|
st118341
|
Hi Nick,
Thanks for your contribution! The code is pretty neat!
Have you chosen a License? Would love to contribute to it.
If I may suggest some features I’d like to have re: model trainer:
GPU support
Ability to use PyTorch pretrained models (SuperModel can’t be used as a subclass for official VGG, Resnet, etc models)
Accuracy logging
I’ve already implemented 1 and 2 in my fork. I’d be happy to send a PR if you’re OK with those features.
Rodrigo
|
st118342
|
Rodrigo,
Thanks, yes haha i’ll add a license right now.
For features:
GPU support is easy, I’ll implement it now, but I’ll look at yours if you already did it.
Pretrained models are not my area, but I’m happy to accept your code.
Yes, accuracy logging and other metrics are definitely on the to-do list… will probably implement accuracy in the next few days.
Note that a lot of the code is changing rapidly (for instance, I just removed th_gather2d/th_gather3d in favor of th_gather_nd, and made th_meshgrid work for any number of dimensions, e.g. th_meshgrid(2, 3, 4)), and I have been adding a lot to the transforms. Hopefully the SuperModule will remain quite stable though, so I’m definitely happy to have contributions there.
|
st118343
|
For step 2, I meant being able to use models written by someone else, including PyTorch’s pretrained models. In this case, I don’t think it is possible to use inheritance, as that would involve changing PyTorch’s code. I’ve replaced the inheritance strategy with object composition by adding a class field _model to SuperModel/Trainer which can be used for training and inference.
I’ll push my code to Github so you can have a look at my changes. Let me know what you think.
Happy to see the code evolve so quickly
Thanks again!
|
st118344
|
This is what I meant by object composition: https://github.com/recastrodiaz/torchsample/commit/879fae8b22ebe4d6bd75697c0a879edbd76a3bb9
PS: I’ve also replaced the DataLoader and TensorDataset imports with PyTorch’s implementations (I manually create DataLoaders with pin_memory=True). Not sure whether this affects something else in your repo, though. And I’m not sure either what the differences are between PyTorch’s DataLoader and yours.
Edit 2: I’ve implemented accuracy logging here. I haven’t entirely finished testing it, but I’d love to know if this is what you had in mind?
Happy to use a different approach that doesn’t break backwards compatibility.
|
st118345
|
If you look, I’ve changed a lot of the code - it now includes support for multiple inputs and targets, optional targets, and CUDA. I also made it so fit doesn’t default to fit_loader.
I LOVE the metrics class… I’m thinking about how to integrate it and how to deal with the History callback.
Also, breaking compatibility is not an issue haha… this is mostly code for my own shit that I hacked together in a few days about two weeks ago… compatibility SHOULD be broken.
For ModuleTrainer I don’t like the way you have to pass in a model to the trainer… It just adds another layer of composition (like Dataset and DataLoader) which I think is cumbersome. I’d be willing to add ModuleTrainer as an ADDITIONAL class, so people could either choose SuperModule as a drop-in for nn.Module, or use ModuleTrainer as an extra layer on their actual nn.Module class. Do you think that would add additional benefit? Happy to merge ModuleTrainer and let you develop it separately and concurrently for now.
|
st118346
|
I rather see the code changing fast than not at all
Thanks for adding CUDA support. I’ll try it out as soon as a model I’m working on finishes training.
Happy you like the Metrics class! I saw that you’ve implemented it, so I guess no need for a PR? It also looks like you’re handling the interaction with the History callback?
Everything moves so fast in ML that I bet all I do now will be outdated a year from now, so that kinda validates your point about backwards compatibility, haha.
I don’t think object composition is wrong per se (I’m actually a big fan haha). But I see your point about it being cumbersome to pass a model to the trainer. The issue I have, though, is that I can’t use SuperModule as I’m using pretrained models from the PyTorch vision repo that do not extend SuperModule. Thus, passing a model to ModuleTrainer is my only option. I’ve also thought of merging SuperModule and Module methods during runtime, but this seems like a hacky idea and probably not worth it.
Adding an extra class sounds like a good compromise. However, if we go that way, I’d love to have all training code in a single class e.g. TrainerModule implements all fit, evaluate, etc methods and SuperModule simply forwards those calls to a TrainerModule (maybe stored as class attribute of SuperModule). Less room for errors. Is this what you had in mind?
I’d also like to add some tests to the metrics classes. Do you have any thoughts on how you’d structure them?
|
st118347
|
yeah that makes sense that the engine should only be implemented once and I get that use case. The class you’re proposing might also make training GANs a lot easier as well.
Tests are obviously much needed… The metrics are sort of their own thing and only interact with the TQDM class - not History . Here’s how to use them:
from torchsample.metrics import CategoricalAccuracy
# get a prediction -> shape = (samples, classes)
y_pred = model.predict(x_train)
# y_train is ground truth -> shape = (samples,)
acc = CategoricalAccuracy(top_k=2)
score = acc(y_pred, y_train)
So, they act just like a loss function… testing them should be straightforward then.
Right now, I’m focusing on finishing the predict and evaluate functions, as well as what to do with the *_loader() functions because it gets a lot more complex to handle multiple inputs/targets and especially to handle an optional target.
Another reason I like your idea is because it might make hyper-param optimization (something like a MetaModelTrainer) much easier … some way to train a model over multiple datasets/splits, or train multiple models over a single dataset, and keep track of those experiments through a CSV-type logger would be great.
|
st118348
|
I’ll review how other python projects structure their test code and infrastructure, there may be a few ideas there that could be useful to this project.
Maybe non *_loader() functions are not needed? To simplify usage there could be some method helpers like:
def loader_from_tensor(tensor, batch_size=64, shuffle=False, pin_memory=False):
    y_unused = torch.Tensor(tensor.size(0))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(tensor, y_unused),
        batch_size=batch_size, shuffle=shuffle, pin_memory=pin_memory)
    return loader

loader = loader_from_tensor(tensor)
loss = trainer.evaluate_loader(loader)
y = trainer.predict_loader(loader)
I like your MetaModelTrainer idea. Combined with KFold, sounds like a sweet thing to have! The CSV-like logger sounds pretty cool too.
Unrelated question: do you have any plans to merge your image transforms into pytorch/vision? I think this would avoid the need to have two implementations of ImageFolder and TensorDataset, and would make your code more accessible to PyTorch users.
|
st118349
|
I’ll make an issue on github to discuss this stuff further.
Re: transforms and the datasets, no plans right now unfortunately. I would but I don’t know what the general plans are for torch.utils.data and torchvision.transforms
|
st118350
|
Hello,
I am new to PyTorch. After reading the PyTorch docs, I haven’t understood the difference between Function and Module. The Function class includes forward and backward, while the Module class includes __init__ and forward. So if I want to define a custom layer in a network which needs a custom backward algorithm (the gradient I get from autograd is not what I want), how do I do it? Use Function.backward?
Thanks for your help!
|
st118351
|
Neither; you should use autograd.Function, see the docs.
By the way, register_hook may be a better choice if it works for your case. You should try to make sure backward gives the true grad.
examples of register_hook
and more by
https://discuss.pytorch.org/search?q=register%20hook
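For reference, a minimal custom autograd.Function sketch (my own example, using the old-style instance forward/backward API from this era): the forward behaves like an identity, and backward returns a hand-written gradient instead of the one autograd would compute.
import torch
from torch.autograd import Function, Variable

class MyIdentityWithCustomGrad(Function):
    def forward(self, input):
        self.save_for_backward(input)
        return input.clone()

    def backward(self, grad_output):
        input, = self.saved_tensors
        # replace this with whatever gradient rule you actually want
        return grad_output * 2

x = Variable(torch.randn(3), requires_grad=True)
y = MyIdentityWithCustomGrad()(x)
y.sum().backward()
print(x.grad)   # twice the usual identity gradient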
|
st118352
|
Thanks, chenyuntc.
Actually, I want to design a new layer; for some reasons, I update the weight (W):
Should I define a new autograd.Function?
|
st118353
|
But in my model only a few layers need to be updated following formula 2; the other layers update their weights following formula 1. I think if I change the optim optimizer, all layers will update their weights following formula 2, which is not what I want. So I think designing a custom layer is a possible way to do it. I tried to use autograd.Function, but failed.
|
st118354
|
You can specify that some layers use optimizer1 and others use optimizer2:
optimizer1 = optim.Adam(list(lay1.parameters()) + list(lay2.parameters()), lr=0.0001)
optimizer2 = optim.CustomizeOptimizer(list(lay3.parameters()) + list(lay4.parameters()), lr=0.0001)
|
st118355
|
Hi,
I am wondering why OpenNMT-py defines a class StackedLSTM, given that torch’s native LSTM has a num_layers param.
What justifies this choice?
Is it in order to apply dropout between layers?
|
st118356
|
torch.nn.LSTM doesn’t support an attention mechanism, so attention has to be implemented manually based on torch.nn.LSTMCell. However, torch.nn.LSTMCell doesn’t have a num_layers param. Thus, OpenNMT-py defines a StackedLSTM which supports attention and multiple layers, as an extension of torch.nn.LSTMCell.
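A rough sketch of what such a StackedLSTM looks like (my own illustration, not the actual OpenNMT-py code): a stack of LSTMCells stepped one time step at a time, with dropout between layers, so extra inputs such as a context vector can be concatenated at each step.
import torch
import torch.nn as nn

class StackedLSTMSketch(nn.Module):
    def __init__(self, num_layers, input_size, hidden_size, dropout=0.0):
        super(StackedLSTMSketch, self).__init__()
        self.dropout = nn.Dropout(dropout)
        self.layers = nn.ModuleList()
        for _ in range(num_layers):
            self.layers.append(nn.LSTMCell(input_size, hidden_size))
            input_size = hidden_size

    def forward(self, x, hidden):
        h_all, c_all = hidden                 # each (num_layers, batch, hidden_size)
        new_h, new_c = [], []
        for i, cell in enumerate(self.layers):
            h_i, c_i = cell(x, (h_all[i], c_all[i]))
            x = h_i
            if i + 1 != len(self.layers):
                x = self.dropout(x)           # dropout between layers, not after the last
            new_h.append(h_i)
            new_c.append(c_i)
        return x, (torch.stack(new_h), torch.stack(new_c))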
|
st118357
|
Hey, thanks for your reply. The post had been flagged by error.
The thing is, the attention mechanism isn’t part of the LSTM anyway (both conceptually and in the code). In ONMT-py, the decoder LSTM feeds its output into the attention layer at each timestep, see https://github.com/pltrdy/OpenNMT-py/blob/master/onmt/Models.py#L108
Therefore, I don’t think that the difference comes from attention.
|
st118358
|
Oh, my bad. I meant that the context vector, the output of the attention mechanism, is used as additional input to the LSTM (Bahdanau et al.). But OpenNMT uses another attention strategy, from Luong et al., and I think the goal is to support input_feed (feeding the context vector at each time step as additional input).
|
st118359
|
I’m having a pretty interesting error around the backward function on a Variable from a very simple network in PyTorch.
When I run the following simple program using pytorch, I get some strange behaviour, where the program appears to continue after the Variable.backwards() call, but the program does not actually close, I must manually close the program myself (in this case by using ctrl+c to send SIGTERM). This might be desired behaviour but I’m not sure what I should be doing to prevent it from happening then.
$ cat net_test.py
import torch
import sys
import torch.utils
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(100, 75)
        self.fc2 = nn.Linear(75, 25)
        self.fc3 = nn.Linear(25, 1)
    def forward(self, x):
        x = F.elu(self.fc1(x))
        x = F.elu(self.fc2(x))
        x = self.fc3(x)
        return x

if __name__ == '__main__':
    net = Net()
    inp = Variable(torch.randn(1, 100))
    out = net(inp)
    net.zero_grad()
    out.backward(torch.randn(1, 1))
    print('done')
    sys.exit()
$ python
Python 3.6.0 |Anaconda custom (64-bit)| (default, Dec 23 2016, 12:22:00)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.version.__version__
'0.1.11+8aa1cef'
>>>
$ time python net_test.py
done
^C
real 0m17,992s
user 0m0,223s
sys 0m0,037s
Strangely, I was able to get similar code to function in an iPython notebook perfectly fine, and I can actually get code to work that trains a network even with this Variable.backwards call in the code, but the program still shows the same behaviour wherein it does not close on its own.
$ cat net.py
import torch
import numpy as np
import random
import sys
import torch.utils
import torch.optim as optim
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(100, 75)
        self.fc2 = nn.Linear(75, 25)
        self.fc3 = nn.Linear(25, 1)
    def forward(self, x):
        x = F.elu(self.fc1(x))
        x = F.elu(self.fc2(x))
        x = self.fc3(x)
        return x

if __name__ == '__main__':
    net = Net()
    inp = Variable(torch.randn(1, 100))
    out = net(inp)
    net.zero_grad()
    out.backward(torch.randn(1, 1))
    alpha = 0.01
    optimizer = optim.SGD(net.parameters(), lr=alpha)
    optimizer.zero_grad()
    good_vecs = [np.random.randn(100).astype('float32') for _ in range(0, 20)]
    bad_vecs = [np.random.randn(100).astype('float32') for _ in range(0, 20)]
    bad_set = [(vec, [-1.0]) for vec in good_vecs]
    good_set = [(vec, [1.0]) for vec in bad_vecs]
    shuffled_data = bad_set + good_set
    random.shuffle(shuffled_data)
    vectors = []
    values = []
    for vector, value in shuffled_data:
        vectors.append(torch.from_numpy(vector))
        values.append(torch.Tensor(value))
    vectors = torch.stack(vectors)
    values = torch.stack(values)
    running_loss = 0.0
    loss = nn.MSELoss()
    shuffled_data = bad_set + good_set
    random.shuffle(shuffled_data)
    for epoch in range(3):
        running_loss = 0.0
        for i in range(0, len(shuffled_data), 4):
            inp = vectors[i:i+5]
            label = values[i:i+5]
            inp, label = Variable(inp), Variable(label)
            optimizer.zero_grad()
            outputs = net(inp)
            this_loss = loss(outputs, label)
            this_loss.backward()
            optimizer.step()
            running_loss += this_loss.data
        print(running_loss)
    print('done')
    sys.exit()
$ time python net.py
9.9317
[torch.FloatTensor of size 1]
8.5297
[torch.FloatTensor of size 1]
7.2956
[torch.FloatTensor of size 1]
done
^C
real 19m41,483s
user 0m0,240s
sys 0m0,037s
Sorry if the numpy interchange stuff is odd, it is just analogous to how I’m using pytorch in a project that is using numpy.
|
st118360
|
Hi,
Running your code does not cause any problem on my side.
There may be something weird with your python/numpy/pytorch install.
|
st118361
|
Do you know the best way to introspect this? I find it odd too since I know this can’t be normal behaviour. I installed pytorch through the conda instructions on the pytorch front page, but I will try uninstalling and reinstalling it to see if that clears this up.
|
st118362
|
I tried the following and after each, I still see the same behaviour:
Updating pytorch
Uninstalling and reinstalling pytorch
using conda update --all to update all my packages
Uninstalling anaconda using anaconda-clean and then reinstalling it, installing pytorch via conda install pytorch torchvision cuda80 -c soumith
|
st118363
|
One thing you could try is to run this script inside gdb, interrupt it after it prints “done”, and see if you get any info from the stack trace.
|
st118364
|
@albanD unless you suspect that there’s something weird going on with threading, there doesn’t appear to be a whole lot:
$ gdb python
GNU gdb (GDB) 7.12.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...done.
(gdb) run net_test.py
Starting program: /home/clemente/anaconda3/bin/python net_test.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7fffb7b78700 (LWP 18149)]
[New Thread 0x7fffb7377700 (LWP 18150)]
[New Thread 0x7fffb6b76700 (LWP 18151)]
done
^C
Thread 1 "python" received signal SIGINT, Interrupt.
0x00007ffff76c4299 in pthread_cond_destroy@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
but I’d be interested as to why this is happening to me and not anyone else…
|
st118365
|
sorry:
$ gdb python
GNU gdb (GDB) 7.12.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...done.
(gdb) run net_test.py
Starting program: /home/clemente/anaconda3/bin/python net_test.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7fffb7b78700 (LWP 18437)]
[New Thread 0x7fffb7377700 (LWP 18438)]
[New Thread 0x7fffaffff700 (LWP 18439)]
done
^C
Thread 1 "python" received signal SIGINT, Interrupt.
0x00007ffff76c4299 in pthread_cond_destroy@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
(gdb) bt
#0 0x00007ffff76c4299 in pthread_cond_destroy@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007fffedeaa75e in torch::autograd::ReadyQueue::~ReadyQueue (this=0x112bf20, __in_chrg=<optimized out>)
from /home/clemente/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#2 std::default_delete<torch::autograd::ReadyQueue>::operator() (this=<optimized out>, __ptr=0x112bf20) at torch/csrc/autograd/engine.cpp:67
#3 std::unique_ptr<torch::autograd::ReadyQueue, std::default_delete<torch::autograd::ReadyQueue> >::~unique_ptr (this=0x112bec0, __in_chrg=<optimized out>)
at torch/csrc/autograd/engine.cpp:184
#4 std::_Destroy<std::unique_ptr<torch::autograd::ReadyQueue> > (__pointer=0x112bec0) at torch/csrc/autograd/engine.cpp:93
#5 std::_Destroy_aux<false>::__destroy<std::unique_ptr<torch::autograd::ReadyQueue>*> (__last=0x112bed0, __first=0x112bec0) at torch/csrc/autograd/engine.cpp:103
#6 std::_Destroy<std::unique_ptr<torch::autograd::ReadyQueue>*> (__last=0x112bed0, __first=<optimized out>) at torch/csrc/autograd/engine.cpp:126
#7 std::_Destroy<std::unique_ptr<torch::autograd::ReadyQueue>*, std::unique_ptr<torch::autograd::ReadyQueue> > (__last=0x112bed0, __first=<optimized out>)
at torch/csrc/autograd/engine.cpp:151
#8 std::vector<std::unique_ptr<torch::autograd::ReadyQueue, std::default_delete<torch::autograd::ReadyQueue> >, std::allocator<std::unique_ptr<torch::autograd::ReadyQueue,std::default_delete<torch::autograd::ReadyQueue> > > >::~vector (this=0x7fffee727ce8 <engine+8>, __in_chrg=<optimized out>) at torch/csrc/autograd/engine.cpp:415
#9 torch::autograd::Engine::~Engine (this=0x7fffee727ce0 <engine>, __in_chrg=<optimized out>) at torch/csrc/autograd/engine.cpp:21
#10 0x00007ffff6a276c0 in __run_exit_handlers () from /usr/lib/libc.so.6
#11 0x00007ffff6a2771a in exit () from /usr/lib/libc.so.6
#12 0x00007ffff7a4ba19 in Py_Exit (sts=0) at Python/pylifecycle.c:1541
#13 0x00007ffff7a4ee82 in handle_system_exit () at Python/pythonrun.c:602
#14 0x00007ffff7a4f12d in PyErr_PrintEx (set_sys_last_vars=1) at Python/pythonrun.c:612
#15 0x00007ffff7a4fa1d in PyRun_SimpleFileExFlags (fp=<optimized out>, filename=<optimized out>, closeit=<optimized out>, flags=0x7fffffffdf70) at Python/pythonrun.c:401
#16 0x00007ffff7a6aa41 in run_file (p_cf=0x7fffffffdf70, filename=0x604110 L"net_test.py", fp=0x66ef70) at Modules/main.c:320
#17 Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:781
#18 0x0000000000400c1d in main (argc=2, argv=<optimized out>) at ./Programs/python.c:69
|
st118366
|
It looks like a deadlock when destroying the autograd Engine
I am not sure what is causing this though… @apaszke will have to step in here.
It’s weird indeed that it happens only to you.
|
st118367
|
Did you install from source or are you using the binaries? What system are you on?
|