st116068
|
r = torch.randn(10)
print ((r<-1) | (r>1))
(& also works for and)
Best regards
Thomas
|
st116069
|
Hi,
Before the update, I could pass numpy.int64 values to specify the size of nn.Linear weights, but now it only accepts Python ints. I was wondering why this was changed, and whether it speeds things up.
|
st116070
|
Sure,
import numpy as np
import torch.nn as nn
arr = np.arange(1,10)
layer = nn.Linear(arr[-1],arr[5])
Error: expected (int, …), didn’t match because some of the arguments have invalid types: (numpy.int64, numpy.int64) torch.FloatTensor viewed_tensor …
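A minimal workaround sketch (my own, not from the thread): cast the numpy integers to plain Python ints before passing them to nn.Linear.
import numpy as np
import torch.nn as nn

arr = np.arange(1, 10)
# newer versions expect plain Python ints for the layer dimensions,
# so cast the numpy.int64 values explicitly
layer = nn.Linear(int(arr[-1]), int(arr[5]))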
|
st116071
|
What version of PyTorch were you using before? This does not work on v0.1.12 either.
|
st116072
|
Oh okay, I had not updated PyTorch in a long time. It must have been 0.1.11 or 0.1.10.
|
st116073
|
Hi,
for epoch in range(80):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.cuda())
        labels = Variable(labels.cuda())
        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = resnet(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
Above is my code; how can I record each layer's gradients?
|
st116074
|
do you want intermediate gradients? or weight gradients?
By record, do you want to print them? or save them?
There are a few threads already answering these questions.
|
st116075
|
@Chen-Wei_Xie
search on the forums, there are many threads that answer this question.
|
st116076
|
I am also looking for an answer to the same question, and I did not find an answer on the forum. Could someone post a minimal working example of how this is done?
|
st116077
|
optimizer.zero_grad()
y = net(x)
loss = criterion(y, target)
loss.backward()
grad_of_params = {}
for name, parameter in net.named_parameters():
    grad_of_params[name] = parameter.grad
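One caveat worth noting (my addition, not part of the original reply): parameter.grad is a buffer that is reused across iterations, so clone it if you want to keep a snapshot rather than a live reference:
grad_of_params = {}
for name, parameter in net.named_parameters():
    # .clone() takes a snapshot; without it you only store a reference
    # to the gradient buffer, which changes on the next backward()
    grad_of_params[name] = parameter.grad.clone()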
|
st116078
|
I know that you can access the gradients of a layer by using layer.grad. When I include this in a print statement, however, I get a continuous stream of None. How can I get the actual matrix of gradients?
|
st116079
|
Solved by fmassa in post #2
|
st116080
|
That is one of the cases where it’s better not to use nn.Sequential, but to inherit from nn.Module yourself and perform the operations that you want.
For example
class MyModel(nn.Module):
    def forward(self, input):
        return input ** 2 + 1
model = MyModel()
But if you want an equivalent to a Lambda layer, you can write it very easily in pytorch
class LambdaLayer(nn.Module):
    def __init__(self, lambd):
        super(LambdaLayer, self).__init__()
        self.lambd = lambd
    def forward(self, x):
        return self.lambd(x)
And now you can use it as you would in keras
model.add(Lambda(lambda x: x ** 2))
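For completeness, a small usage sketch on the PyTorch side (my example, assuming the LambdaLayer class defined above):
model = nn.Sequential(
    nn.Linear(10, 10),
    LambdaLayer(lambda x: x ** 2),
)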
|
st116081
|
I installed pytorch 0.2.0 in python 3.6 (Ubuntu 16.04). It works from the command line and in ipython in a terminal, but when I import torch in a jupyter notebook, the jupyter kernel crashes. What causes this?
*** Error in `/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/bin/python’: free(): invalid pointer: 0x00007f6bb3a96b80 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f6c674b17e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f6c674ba37a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f6c674be53c]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libshm.so(_ZNSt6locale5_Impl16_M_install_facetEPKNS_2idEPKNS_5facetE+0x142)[0x7f6bb3830802]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libshm.so(_ZNSt6locale5_ImplC2Em+0x1e3)[0x7f6bb3832953]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libshm.so(_ZNSt6locale18_S_initialize_onceEv+0x15)[0x7f6bb38338c5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xea99)[0x7f6c67f22a99]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libshm.so(_ZNSt6locale13_S_initializeEv+0x21)[0x7f6bb3833911]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libshm.so(_ZNSt6localeC1Ev+0x13)[0x7f6bb3833953]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libshm.so(_ZNSt8ios_base4InitC1Ev+0xb4)[0x7f6bb38051b4]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcusparse-94011b8d.so.8.0.61(+0x2f85b4)[0x7f6b83ef55b4]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcusparse-94011b8d.so.8.0.61(+0x2f8703)[0x7f6b83ef5703]
/home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcusparse-94011b8d.so.8.0.61(+0x393cc6)[0x7f6b83f90cc6]
======= Memory map: ========
00400000-00664000 r-xp 00000000 08:02 30676433 /home/dhlee/.pyenv/versions/3.6.2/bin/python3.6
00864000-00865000 r--p 00264000 08:02 30676433 /home/dhlee/.pyenv/versions/3.6.2/bin/python3.6
00865000-008c9000 rw-p 00265000 08:02 30676433 /home/dhlee/.pyenv/versions/3.6.2/bin/python3.6
008c9000-008fa000 rw-p 00000000 00:00 0
028d8000-03348000 rw-p 00000000 00:00 0 [heap]
7f6b7c000000-7f6b7c021000 rw-p 00000000 00:00 0
7f6b7c021000-7f6b80000000 ---p 00000000 00:00 0
7f6b83bfd000-7f6b864ee000 r-xp 00000000 08:02 30936102 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcusparse-94011b8d.so.8.0.61
7f6b864ee000-7f6b866ee000 ---p 028f1000 08:02 30936102 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcusparse-94011b8d.so.8.0.61
7f6b866ee000-7f6b86707000 rw-p 028f1000 08:02 30936102 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcusparse-94011b8d.so.8.0.61
7f6b86707000-7f6b86718000 rw-p 00000000 00:00 0
7f6b86718000-7f6b8671d000 rw-p 0290b000 08:02 30936102 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcusparse-94011b8d.so.8.0.61
7f6b8671d000-7f6b88bb8000 r-xp 00000000 08:02 30936106 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcurand-3d68c345.so.8.0.61
7f6b88bb8000-7f6b88db8000 ---p 0249b000 08:02 30936106 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcurand-3d68c345.so.8.0.61
7f6b88db8000-7f6b8a189000 rw-p 0249b000 08:02 30936106 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcurand-3d68c345.so.8.0.61
7f6b8a189000-7f6b8a693000 rw-p 00000000 00:00 0
7f6b8a693000-7f6b8a694000 rw-p 0386d000 08:02 30936106 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcurand-3d68c345.so.8.0.61
7f6b8a694000-7f6b8d4ac000 r-xp 00000000 08:02 30936089 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcublas-e78c880d.so.8.0.88
7f6b8d4ac000-7f6b8d6ac000 ---p 02e18000 08:02 30936089 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcublas-e78c880d.so.8.0.88
7f6b8d6ac000-7f6b8d6ca000 rw-p 02e18000 08:02 30936089 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcublas-e78c880d.so.8.0.88
7f6b8d6ca000-7f6b8d6d9000 rw-p 00000000 00:00 0
7f6b8d6d9000-7f6b8d6dc000 rw-p 02e36000 08:02 30936089 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libcublas-e78c880d.so.8.0.88
7f6b8d6dc000-7f6b8d6f1000 r-xp 00000000 08:02 30936095 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libgomp-ae56ecdc.so.1.0.0
7f6b8d6f1000-7f6b8d8f0000 ---p 00015000 08:02 30936095 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libgomp-ae56ecdc.so.1.0.0
7f6b8d8f0000-7f6b8d8f3000 rw-p 00014000 08:02 30936095 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libgomp-ae56ecdc.so.1.0.0
7f6b8d8f3000-7f6b903cb000 r-xp 00000000 08:02 30936087 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libnccl.so.1
7f6b903cb000-7f6b905cb000 ---p 02ad8000 08:02 30936087 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libnccl.so.1
7f6b905cb000-7f6b905cc000 rw-p 02ad8000 08:02 30936087 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libnccl.so.1
7f6b905cc000-7f6b905ce000 rw-p 02ae4000 08:02 30936087 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libnccl.so.1
7f6b905ce000-7f6b94437000 r-xp 00000000 08:02 30936093 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libTHCUNN.so.1
7f6b94437000-7f6b94637000 ---p 03e69000 08:02 30936093 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libTHCUNN.so.1
7f6b94637000-7f6b94646000 rw-p 03e69000 08:02 30936093 /home/dhlee/.pyenv/versions/3.6.2/envs/py3torch/lib/python3.6/site-packages/torch/lib/libTHCUNN.so.1
|
st116082
|
I have the same problem and I could not install from source. Any other options? I have reinstalled 0.1.12 for now.
|
st116083
|
There is a workaround described here: https://github.com/pytorch/pytorch/issues/2314. Installing v0.2.0 with Anaconda also works.
|
st116084
|
I am using a Jupyter notebook in IBM Data Scientist Workbench and am facing the same issue. Please help.
|
st116085
|
Hm, ok. As far as I saw, most workarounds require sudo/admin rights? I don’t have them where I want to install it. Conda works without root, so that could be an option?
|
st116086
|
Hi all,
I would like to have a convolution layer with fixed weights. More importantly, the weights should be randomly reinitialised in each iteration and then normalized so that they sum to 1.
Is this possible without writing C++/CUDA code myself?
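A minimal sketch of one way to do this in plain PyTorch (my own suggestion, not an answer from the thread): keep the conv layer out of the optimizer and overwrite weight.data before each forward pass.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)

def reinit_conv(conv):
    # draw new random weights and normalize them so the whole kernel sums to 1
    w = torch.rand(conv.weight.size())
    conv.weight.data.copy_(w / w.sum())

# inside the training loop, before each forward pass:
reinit_conv(conv)
# ...and simply don't pass conv.parameters() to the optimizer,
# so the layer stays fixed between reinitialisations.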
|
st116087
|
I’m using HingeEmbeddingLoss() but got this error.
Please help me. I can’t find anything related to this error on the forum.
File "traning_horlicks.py", line 194, in <module>
num_epochs=25)
File "traning_horlicks.py", line 92, in train_model
loss = criterion(outputs, labels)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/loss.py", line 228, in forward
self.size_average)(input, target)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/loss.py", line 105, in forward
buffer[torch.eq(target, -1.)] = 0
TypeError: torch.eq received an invalid combination of arguments - got (torch.cuda.LongTensor, float), but expected one of:
* (torch.cuda.LongTensor tensor, int value)
didn't match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
* (torch.cuda.LongTensor tensor, torch.cuda.LongTensor other)
didn't match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
* (torch.cuda.LongTensor tensor, int value)
didn't match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
* (torch.cuda.LongTensor tensor, torch.cuda.LongTensor other)
didn't match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
Thanks
|
st116088
|
The error message says what’s wrong:
You provide two arguments with the types “(torch.cuda.LongTensor, float)” to the function, but it wants e.g. “(torch.cuda.LongTensor tensor, int value)” or one of the other combinations.
So the problem is the type of your second argument. Change it appropriately.
|
st116089
|
I’m feeding the output of the neural network and the labels to the EmbeddingLoss(), both of them are torch.cuda.LongTensor.
If you look at the traceback, the error is at line 105 of the file loss.py:
buffer[torch.eq(target, -1.)] = 0
torch.eq received an invalid combination of arguments - got (torch.cuda.LongTensor, float), but expected one of:
(torch.cuda.LongTensor tensor, int value)
The type of my target is torch.cuda.LongTensor, which is what I’m feeding to the function, but the problem is in the next argument of the function.
I don’t think this is an error on my side. I’m not feeding any wrong input; the problem is in the code of the loss.py file. Please correct me if I’m wrong.
Thanks
|
st116090
|
I found the answer on the forum: changing the target variable to a FloatTensor solves this issue.
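For reference, a minimal sketch of that fix (my phrasing of the forum answer):
# HingeEmbeddingLoss compares the target against -1., so the target
# must be a (cuda.)FloatTensor rather than a LongTensor
loss = criterion(outputs, labels.float())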
|
st116091
|
I am running the time sequence prediction example with some actual time-series data instead of sine waves and the loss function gets stuck.
The data in the example is (100,1000), my new data is (40,1000), and I don’t see anywhere in the code where this could be the issue.
class Sequence(nn.Module):
def __init__(self):
super(Sequence, self).__init__()
self.lstm1 = nn.LSTMCell(1,51)
self.lstm2 = nn.LSTMCell(51,1)
def forward(self, input, future=0):
outputs = []
h_t = Variable(torch.zeros(input.size(0), 51).double(), requires_grad=False)
c_t = Variable(torch.zeros(input.size(0), 51).double(), requires_grad=False)
h_t2 = Variable(torch.zeros(input.size(0), 1).double(), requires_grad=False)
c_t2 = Variable(torch.zeros(input.size(0), 1).double(), requires_grad=False)
for i, input_t in enumerate(input.chunk(input.size(1), dim=1)): # Split tensor into tuples of size input.size(0)
h_t, c_t = self.lstm1(input_t, (h_t, c_t)) # Two layer LSTM (2 periods lookbehind)
h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2)) # Second period (t-2): input is lstm cell of t-1
outputs += [c_t2]
for i in range(future): # if we should predict the future
h_t, c_t = self.lstm1(c_t2, (h_t, c_t))
h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
outputs += [c_t2]
outputs = torch.stack(outputs, 1).squeeze(2)
return outputs
if __name__ == '__main__':
# Set seed to 0
np.random.seed(0)
torch.manual_seed(0)
# Load data and generate training set
data = torch.load('traindata_EX1.pt')
input = Variable(torch.from_numpy(data[3:,:-1]), requires_grad=False)
target = Variable(torch.from_numpy(data[3:,1:]), requires_grad=False)
# Build the model
seq = Sequence()
seq.double() # Casts all parameters and buffers to double datatype.
criterion = nn.MSELoss() # Set loss function to Mean Squared Error
# use LBFGS as optimizer since we can load the whole data to train
optimizer = optim.LBFGS(seq.parameters())
# Begin to train
for i in range(15):
print('STEP:', i)
def closure():
optimizer.zero_grad() # Reset gradients each pass.
out = seq(input) # Calculate the predicted value with the given parameters W*
loss = criterion(out, target) # Calculate error.
print('Loss:', loss.data.numpy()[0])
loss.backward() # Backpropagate
return loss
optimizer.step(closure) # Move parameters to fastest changing gradient (?)
# Predict
future = 1000
pred = seq(input[:3], future = future)
y = pred.data.numpy()
# Draw the result
plt.figure() #figsize=(30,10)
plt.title('Predict future values for time sequences\n(Dashlines are predicted values)', fontsize=30)
plt.xlabel('x', fontsize=20)
plt.ylabel('y', fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
def draw(yi, color):
plt.plot(np.arange(input.size(1)), yi[:input.size(1)], color, linewidth = 2.0)
plt.plot(np.arange(input.size(1), input.size(1) + future), yi[input.size(1):], color + ':', linewidth = 2.0)
draw(y[0], 'r')
draw(y[1], 'g')
draw(y[2], 'b')
plt.savefig('plots/predict{}_V1.pdf'.format(i))
plt.close()
The file traindata_EX1.pt is here.
Any clues as to why my loss is getting stuck?
|
st116092
|
I wonder why this code uses c_t2 as the prediction. I’m a beginner, but the hidden state (in this case h_t2) is usually taken as the prediction, isn’t it? After changing c_t2 to h_t2, training on the original dataset looks bad.
|
st116093
|
I spent a lot of time reinstalling PyTorch since this example gives wrong results, both with and without GPU (I just picked this one by chance to check that my installation was OK). It would be nice to fix it, or at least remove it from what is expected to work.
|
st116094
|
When I’m using the pretrained ResNet models provided by PyTorch, for example resnet50, I find that there are quite a few NaN values in the running_mean and running_var buffers. In this case, I cannot use resnet50.eval() since it will output all-NaN outputs.
Is this a problem? How can I fix this?
UPDATE:
This seems to be more of an IDE problem. I used PyCharm. Here is the code I used to detect whether there is NaN in the BN layers or not.
def checkBNNaN(model):
    for id, s_module in enumerate(model.modules()):
        if isinstance(s_module, nn.BatchNorm2d):
            if (np.isnan(s_module.running_mean.numpy())).any():
                print "BN # {:d} running_MEAN has NaN".format(id)
            if (np.isnan(s_module.running_var.numpy())).any():
                print "BN # {:d} running_Var has NaN".format(id)
The result is totally different if you check a model’s BN running statistics in PyCharm on a Linux machine vs. in a terminal.
UPDATE2:
Updating PyCharm to a new version solved this issue…
Sorry for the spam
|
st116095
|
I was trying to add trainable variables directly to ModuleList (but that didn’t work)
self.W = Variable(w_init, requires_grad=True)
self.mod_list = torch.nn.ModuleList([self.W])
so then I tried to add the parameters to Module list and that didn’t work either.
self.W = torch.nn.Parameter( w_init )
self.mod_list = torch.nn.ModuleList([self.W])
does not work (and note that w_init should NOT be a Variable; plain tensors seem to work). I’m not sure why it shouldn’t, but the following does work:
self.W = torch.nn.Parameter( w_init )
without adding it to the ModuleList. Why? It feels like I am missing something conceptual about pytorch if it doesn’t work for this. Can anyone clarify this for me?
|
st116096
|
Can someone help me understand what determines the contents of model.parameters()? I’m guessing it’s because self.w and self.b aren’t modules. Here’s the code:
from finch.viz import scatter_plot
import numpy as np
import torch
import torch.autograd
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch import nn
tensor = torch.FloatTensor
COUNT = 10
################################################################################
def scalar(x):
return torch.FloatTensor([x])
################################################################################
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.w = Variable(scalar(0.1), requires_grad=True)
self.b = Variable(scalar(0), requires_grad=True)
def forward(self, x):
x = self.w * x + self.b
def loss(self, prediction, label):
return (prediction - label)**2
################################################################################
data = np.random.standard_normal((COUNT, 1)) + 5
labels = (data * 3) + 5 + np.random.standard_normal(COUNT)
model = Net()
optimizer = optim.SGD(model.parameters())
for datum, label in zip(data, labels):
datum, label = Variable(scalar(datum)), Variable(scalar(label))
optimizer.zero_grad()
prediction = model(datum)
loss = model.loss(prediction, label)
loss.backward()
optimizer.step()
print('loss', loss)
model.w and model.b are part of the forward pass, but model.parameters() is empty. How can I register those so that model.parameters() isn’t empty?
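A minimal sketch of the usual fix (my own example, anticipating the replies below): wrap the tensors in nn.Parameter so the module registers them.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # nn.Parameter registers the tensor with the module,
        # so it shows up in model.parameters()
        self.w = nn.Parameter(scalar(0.1))
        self.b = nn.Parameter(scalar(0))

    def forward(self, x):
        return self.w * x + self.b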
|
st116097
|
spro:
nn.Parameter
note that doing:
self.W = torch.nn.Parameter( w_init )
self.mod_list = torch.nn.ModuleList([self.W])
does not work (and note w_init should NOT be a Variable, seems tensors work). Not sure why it shouldn’t.
|
st116098
|
What is the easiest way to print learnt parameters beta and gamma from batch normalization? When I print vars(net.state_dict()), all I see is running_mean, running_var, bias and weight.
|
st116099
|
Hello,
I want to apply the function nn.parallel.data_parallel to a GRU; however, when I try to run the following code, there is an error:
import torch
import torch.nn as nn
from torch.autograd import Variable
a = nn.GRU(100, 20, 1, batch_first=True)
input_variable = Variable(torch.rand(15, 1, 100).cuda())
hidden_state = Variable(torch.rand(1, 15, 20).cuda())
a.cuda()
output, _ = nn.parallel.data_parallel(a, (input_variable, hidden_state), [0, 1, 2])
Traceback (most recent call last):
File "test2.py", line 10, in <module>
output, _ = nn.parallel.data_parallel(a, (input_variable, hidden_state), [0, 1, 2])
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 105, in data_parallel
outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
raise output
RuntimeError: Expected hidden size (1, 5L, 20), got (1L, 15L, 20L)
|
st116100
|
Hi guys,
I have a question as the name suggests. Previously I used torch for training a small network with 2 LSTM layer each with 16 memory cells, and the time needed to go through all of my training data once is about 1 to 2 hours on GPU.
Now I switched to Pytorch. And training the same network on the same training data on the same GPU for one epoch only takes about 7 mins. So I was wondering what kinds of changes have you made in Pytorch that make it so much faster than Torch7 in this particular case with LSTM?
I have tested my model trained in Pytorch and it works sensibly. So I think I probably implemented my code correctly, so there is no silly mistake in my code.
Cheers,
Shuokai
|
st116101
|
I’d say that it might be because pytorch uses cudnn LSTM and also hand-fused kernels for LSTM, but I’d not expect it to be so much faster. That’s good news then!
|
st116102
|
Hi @fmassa,
Yeah, I have realized that previously I was not using cuDNN from NVIDIA with my Torch model. But now PyTorch uses cuDNN by default on GPU, right?
Cheers,
Shuokai
|
st116103
|
TypeError: CudaBCECriterion_updateOutput received an invalid combination
of arguments - got (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor,
torch.cuda.FloatTensor, bool, !Variable!), but expected (int state,
torch.cuda.FloatTensor input, torch.cuda.FloatTensor target,
torch.cuda.FloatTensor output, bool sizeAverage,
[torch.cuda.FloatTensor weights or None])
does this mean that we cannot backprop through weights, or am I doing something wrong?
|
st116104
|
Can you post your code? That error is saying the weights are being passed to the C/Cuda level as a Variable instead of a tensor.
|
st116105
|
Right. I want these weights to be a function of my data and current state.
weights = ae_class_dist[:, class_i]
weight_cat = torch.cat([Variable(torch.ones(features_a_i.size(0))).cuda(),
weights], 0)
cross_ent = F.binary_cross_entropy(F.sigmoid(output.view(-1)),
full_y.view(-1), weight=weight_cat.data)
total_dist += cross_ent
and then optimize total_dist. According to docs, it seems possible
weight (Variable, optional): a manual rescaling weight
if provided it's repeated to match input tensor shape
Thanks!
|
st116106
|
Ben_Usman:
Right. I want these weights to be a function of my data and current state.
Try passing every parameter to binary_cross_entropy as a Variable; I don’t know what the types of output or full_y are, but weight_cat.data is definitely a tensor.
|
st116107
|
Sure, sorry. It does run with .data and gives the error above without it. output and full_y are Variables. My question was basically: is it supposed to work, or is passing a Variable as weights just not implemented? Thanks.
|
st116108
|
yes it works. Here’s something close-ish to what you are trying:
F.binary_cross_entropy(Variable(torch.rand(3,4), requires_grad=True), Variable(torch.randn(3,4), requires_grad=True), weight=Variable(torch.randn(4,4), requires_grad=True)[:,3])
Can you come up with a minimal example that demonstrates your issue?
|
st116109
|
I was trying to make a custom nn module and I was having issues registering variables. I have been kindly pointed to nn.ModuleList() and torch.nn.ParameterList. However, I think I don’t understand in the first place why I need to “register” parameters. What’s the point of all this?
|
st116110
|
In pytorch, we have Variables, which are the building block of autograd, and we have a utility class nn.Parameter, which is used to indicate to nn.Module that that specific variable should be present when .parameters() is called.
For example, you could have something like
class Net(nn.Module):
    def forward(self, input):
        self.x = input * 2
        return self.x
and when you call .parameters(), you don’t want self.x to be returned as a parameter. For more discussion on this matter, have a look at https://github.com/pytorch/pytorch/issues/143
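A small illustration of the difference (my example, not from the original post):
import torch
import torch.nn as nn
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.w = nn.Parameter(torch.randn(3))  # registered, shows up in .parameters()
        self.v = Variable(torch.randn(3))      # plain Variable, not registered

net = Net()
print([name for name, _ in net.named_parameters()])  # ['w']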
|
st116111
|
His point is that self.x is a normal attribute, but not a parameter in the computation graph.
|
st116112
|
Ah, I see. So there are special classes like nn.Parameter that register variables so that self.parameters() returns the params (I guess it’s just a way to organize things; besides that there doesn’t seem to be any additional special feature about using this), and so that arbitrary attributes are not registered automatically/by accident.
|
st116113
|
btw what I did notice is that registering None does not mean that .parameters() returns None. So I guess there must be something in the internals of pytorch that makes sure None’s are not returned.
|
st116114
|
Hi,
I am profiling my training code to detect the performance bottleneck. I found that the variable.cuda() operation takes much more time than doing the actual gradient descent(74.1% vs. 13.6%).
Is there any specific reason for this?
Thanks
|
st116115
|
I am getting a “RuntimeError: Given input size: (512, 1, 1). Calculated output size: (324, 1, -510). Output size is too small.” for an input of torch.Size([324, 512, 1, 1]) after doing nn.Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False). It is working perfectly fine on the previous version of pytorch.
My model is:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.relu = nn.ReLU(True)
self.activ = Active()
self.drop = nn.Dropout2d(0.2)
self.conv1 = nn.Conv2d(1, 64, kernel_size = 15, stride = 3, padding = 0, bias = True)
self.bn1 = nn.BatchNorm2d(64)
self.maxpool1 = nn.MaxPool2d(kernel_size = 3, stride = 2)
self.conv2 = nn.Conv2d(64, 128, kernel_size = 5, stride = 1, padding = 0, bias = False)
self.bn2 = nn.BatchNorm2d(64)
self.maxpool2 = nn.MaxPool2d(kernel_size = 3, stride = 2)
self.conv3 = nn.Conv2d(128, 256, kernel_size = 3, stride = 1, padding= 1, bias = False)
self.bn3 = nn.BatchNorm2d(128)
# self.conv4 = nn.Conv2d(96, 192, 1, 1)
self.conv4 = nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1, bias = False)
self.bn4 = nn.BatchNorm2d(256)
self.conv5 = nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1, bias = False)
self.bn5 = nn.BatchNorm2d(256)
self.maxpool3 = nn.MaxPool2d(kernel_size = 3, stride = 2)
self.conv6 = nn.Conv2d(256, 512, kernel_size = 7, stride = 1, padding = 0, bias = False)
self.bn6 = nn.BatchNorm2d(512)
self.conv7 = nn.Conv2d(512, 512, kernel_size = 1, stride = 1, padding = 0, bias = False)
self.bn7 = nn.BatchNorm2d(512)
self.conv8 = nn.Conv2d(512, 250, kernel_size = 1, stride = 1, padding = 0, bias = True)
def forward(self, x):
print(x.size())
x = self.relu(self.bn1(self.conv1(x)))
x = self.maxpool1(x)
print(x.size())
x = self.relu((self.conv2(self.activ((self.bn2(x))))))
x = self.maxpool2(x)
print(x.size())
x = self.relu((self.conv3(self.activ(self.bn3(x)))))
x = self.relu((self.conv4(self.activ(self.bn4(x)))))
print(x.size())
x = self.relu((self.conv5(self.activ(self.bn5(x)))))
x = self.maxpool3(x)
x = self.relu(self.bn6(self.conv6(x)))
x = self.drop(x)
print(x.size())
x = self.conv7(x)
print(x.size())
x = self.relu(self.bn7(x))
x = self.drop(x)
print(x.size())
x = self.conv8(x)
print(x.size())
x = x.view(-1, 250)
print(x.size())
return F.log_softmax(x)
and this is the error that shows:
torch.Size([324, 1, 225, 225])
torch.Size([324, 64, 35, 35])
torch.Size([324, 128, 15, 15])
torch.Size([324, 256, 15, 15])
torch.Size([324, 512, 1, 1])
Traceback (most recent call last):
File “main.py”, line 92, in <module>
main()
File “main.py”, line 73, in main
trainer.train(train_loader, epoch, opt)
File “/home/rohit/Documents/cvitwork/WACV17/code/train.py”, line 77, in train
outputs = self.model(inputs)
File “/home/rohit/.local/lib/python3.5/site-packages/torch/nn/modules/module.py”, line 224, in __call__
result = self.forward(*input, **kwargs)
File “/home/rohit/Documents/cvitwork/WACV17/code/models/binnet.py”, line 62, in forward
x = self.conv7(x)
File “/home/rohit/.local/lib/python3.5/site-packages/torch/nn/modules/module.py”, line 224, in __call__
result = self.forward(*input, **kwargs)
File “/home/rohit/.local/lib/python3.5/site-packages/torch/nn/modules/conv.py”, line 254, in forward
self.padding, self.dilation, self.groups)
File “/home/rohit/.local/lib/python3.5/site-packages/torch/nn/functional.py”, line 52, in conv2d
return f(input, weight, bias)
RuntimeError: Given input size: (512, 1, 1). Calculated output size: (324, 1, -510). Output size is too small.
If I can’t fix this, how do I downgrade to my previous version which was 0.1.12.post2 using pip?
|
st116116
|
I couldn’t reproduce this. I just installed the latest version from master (which should be virtually equivalent to v0.2.0 for convolution). I did have to fix an indentation problem with your function though (you need to indent forward so that it belongs to Net).
|
st116117
|
I’m still unable to fix this for some reason. I have pytorch v0.1.12 on python2 and it works there, but in v0.2 on python3 it doesn’t. How do I downgrade to my previous version, which was 0.1.12.post2, using pip? (Yeah, the indent was a mistake when copying :P)
|
st116118
|
You can download the previous conda tar from https://anaconda.org/soumith/repo and install it by passing the downloaded file to conda.
Also, I’d look into broadcasting; it might be one of the things that could be affecting your results (but not in your model definition, as I could run it, maybe somewhere else in your code).
|
st116119
|
You can also find old pip package URLs by going through the history of this file: https://github.com/pytorch/pytorch.github.io/blob/master/_data/wizard.yml
I’d check the package in your current site-packages before you toast it and see if there is any mixture of versions/old libs hanging around that are causing a problem.
|
st116120
|
I wanted to use the ignore_index option in the ClassNLL2d criterion, which was added in a recent commit on July 14, so I compiled from source. I followed the instructions given on the PyTorch GitHub page.
I tried doing this with the latest version as well as the snapshot at the commit I linked to, but both give me the following gcc error when trying to build torch._C after running the python setup.py install command:
...
...
Install the project...
-- Install configuration: "Release"
-- Installing: /home/rishabh/pytorch/torch/lib/tmp_install/lib/libTHD.so.1
-- Installing: /home/rishabh/pytorch/torch/lib/tmp_install/lib/libTHD.so
-- Set runtime path of "/home/rishabh/pytorch/torch/lib/tmp_install/lib/libTHD.so.1" to ""
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/THD.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/base/ChannelType.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/base/Cuda.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/base/DataChannel.h
...
...
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/master/THDTensor.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDStorage.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensor.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorCopy.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorLapack.h
-- Installing: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorMath.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorRandom.h
-- Up-to-date: /home/rishabh/pytorch/torch/lib/tmp_install/include/THD/master_worker/worker/Worker.h
running build
running build_py
-- Building version 0.1.12+d6bc264
copying torch/version.py -> build/lib.linux-x86_64-3.6/torch
copying torch/autograd/variable.py -> build/lib.linux-x86_64-3.6/torch/autograd
copying torch/autograd/__init__.py -> build/lib.linux-x86_64-3.6/torch/autograd
copying torch/autograd/gradcheck.py -> build/lib.linux-x86_64-3.6/torch/autograd
copying torch/nn/functional.py -> build/lib.linux-x86_64-3.6/torch/nn
...
...
copying torch/lib/include/THC/generic/THCTensorTopK.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMasked.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorRandom.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathScan.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorCopy.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathPairwise.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic
running build_ext
-- Building with NumPy bindings
-- Detected cuDNN at /usr/local/cuda/lib64, /usr/local/cuda/include
-- Detected CUDA at /usr/local/cuda
-- Building NCCL library
-- Building with distributed package
building 'torch._C' extension
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/rishabh/pytorch -I/home/rishabh/pytorch/torch/csrc -I/home/rishabh/pytorch/torch/lib/tmp_install/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/TH -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THPP -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THNN -I/home/rishabh/pytorch/torch/lib/tmp_install/include/ATen -I/home/rishabh/anaconda3/lib/python3.6/site-packages/numpy/core/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THD -I/usr/local/cuda/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/local/cuda/include -I/home/rishabh/anaconda3/include/python3.6m -c torch/csrc/PtrWrapper.cpp -o build/temp.linux-x86_64-3.6/torch/csrc/PtrWrapper.o -D_THP_CORE -std=c++11 -Wno-write-strings -fno-strict-aliasing -DWITH_NUMPY -DWITH_DISTRIBUTED -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda/lib64 -DWITH_NCCL -DWITH_CUDNN
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/rishabh/pytorch -I/home/rishabh/pytorch/torch/csrc -I/home/rishabh/pytorch/torch/lib/tmp_install/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/TH -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THPP -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THNN -I/home/rishabh/pytorch/torch/lib/tmp_install/include/ATen -I/home/rishabh/anaconda3/lib/python3.6/site-packages/numpy/core/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THD -I/usr/local/cuda/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/local/cuda/include -I/home/rishabh/anaconda3/include/python3.6m -c torch/csrc/Exceptions.cpp -o build/temp.linux-x86_64-3.6/torch/csrc/Exceptions.o -D_THP_CORE -std=c++11 -Wno-write-strings -fno-strict-aliasing -DWITH_NUMPY -DWITH_DISTRIBUTED -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda/lib64 -DWITH_NCCL -DWITH_CUDNN
...
...
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/rishabh/pytorch -I/home/rishabh/pytorch/torch/csrc -I/home/rishabh/pytorch/torch/lib/tmp_install/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/TH -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THPP -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THNN -I/home/rishabh/pytorch/torch/lib/tmp_install/include/ATen -I/home/rishabh/anaconda3/lib/python3.6/site-packages/numpy/core/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THD -I/usr/local/cuda/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/local/cuda/include -I/home/rishabh/anaconda3/include/python3.6m -c torch/csrc/cudnn/BatchNorm.cpp -o build/temp.linux-x86_64-3.6/torch/csrc/cudnn/BatchNorm.o -D_THP_CORE -std=c++11 -Wno-write-strings -fno-strict-aliasing -DWITH_NUMPY -DWITH_DISTRIBUTED -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda/lib64 -DWITH_NCCL -DWITH_CUDNN
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/rishabh/pytorch -I/home/rishabh/pytorch/torch/csrc -I/home/rishabh/pytorch/torch/lib/tmp_install/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/TH -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THPP -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THNN -I/home/rishabh/pytorch/torch/lib/tmp_install/include/ATen -I/home/rishabh/anaconda3/lib/python3.6/site-packages/numpy/core/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THD -I/usr/local/cuda/include -I/home/rishabh/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/local/cuda/include -I/home/rishabh/anaconda3/include/python3.6m -c torch/csrc/cudnn/Conv.cpp -o build/temp.linux-x86_64-3.6/torch/csrc/cudnn/Conv.o -D_THP_CORE -std=c++11 -Wno-write-strings -fno-strict-aliasing -DWITH_NUMPY -DWITH_DISTRIBUTED -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda/lib64 -DWITH_NCCL -DWITH_CUDNN
torch/csrc/cudnn/Conv.cpp: In static member function ‘static cudnnConvolutionFwdAlgoPerf_t torch::cudnn::{anonymous}::algorithm_search<cudnnConvolutionFwdAlgo_t>::findAlgorithm(THCState*, cudnnHandle_t, const torch::cudnn::Convolution&, void*, void*, void*)’:
torch/csrc/cudnn/Conv.cpp:160:10: error: ‘CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED’ was not declared in this scope
CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED,
^
torch/csrc/cudnn/Conv.cpp: In static member function ‘static cudnnConvolutionBwdDataAlgoPerf_t torch::cudnn::{anonymous}::algorithm_search<cudnnConvolutionBwdDataAlgo_t>::findAlgorithm(THCState*, cudnnHandle_t, const torch::cudnn::Convolution&, void*, void*, void*)’:
torch/csrc/cudnn/Conv.cpp:201:10: error: ‘CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD_NONFUSED’ was not declared in this scope
CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD_NONFUSED
^
torch/csrc/cudnn/Conv.cpp: In static member function ‘static cudnnConvolutionBwdFilterAlgoPerf_t torch::cudnn::{anonymous}::algorithm_search<cudnnConvolutionBwdFilterAlgo_t>::findAlgorithm(THCState*, cudnnHandle_t, const torch::cudnn::Convolution&, void*, void*, void*)’:
torch/csrc/cudnn/Conv.cpp:245:10: error: ‘CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD_NONFUSED’ was not declared in this scope
CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD_NONFUSED,
^
error: command 'gcc' failed with exit status 1
BTW, I had already installed PyTorch using Anaconda on my system (CUDA 8.0, Python 3.6), using the command on the main page, at the location ~/anaconda3.
Any idea why this might be happening?
|
st116121
|
Try the following:
pip uninstall torch
python setup.py clean
python setup.py build
|
st116122
|
I get the error, even after clean… at least when I try
NO_DISTRIBUTED=1 python3.5 setup.py clean
NO_DISTRIBUTED=1 python3.5 setup.py install --user
|
st116123
|
Is there a way to use nn.Conv2d without specifying the number of input channels?
|
st116124
|
You could try something like “Inferring shape via flatten operator”, but it’s a workaround. PyTorch defines the graph dynamically, so it requires you to specify the number of input channels beforehand.
|
st116125
|
I want to compute multiple losses (as shown in this question), but doing it the same way creates an error (pytorch v0.2):
Traceback (most recent call last):
File "tmp.py", line 159, in <module>
loss = criterion1 + criterion2
TypeError: unsupported operand type(s) for +: 'MSELoss' and 'L1Loss'
My code (selected snippets) is the following:
criterion1 = nn.MSELoss(size_average=False).cuda()
criterion2 = nn.L1Loss(size_average=False).cuda()
loss = criterion1 + criterion2
train(epochs)
and
def train(epochs):
epoch = 1
while epoch <= epochs:
for batch_idx, (data, _ ) in enumerate(train_loader):
data = Variable(data.type(torch.FloatTensor).cuda())
optimizer.zero_grad()
output, y, z = model(data)
loss1 = criterion1(output, data)
loss2 = criterion2(z, y)
loss.backward()
optimizer.step()
with my Net:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(100, 100)
self.fc1.weight.data = torch.from_numpy(A).type(torch.FloatTensor)
[...]
def forward(self, x):
y = self.fc1(x)
x = self.rl1(self.fc2(y))
[...]
x_est = self.fc5(x)
z_est = self.fc1(x_est)
return x_est, y, z_est
|
st116126
|
You should not add the criterion modules together, but the losses instead.
So you should do
criterion1 = nn.MSELoss(size_average=False).cuda()
criterion2 = nn.L1Loss(size_average=False).cuda()
output = model(input)
loss = criterion1(output, target) + criterion2(output, target)
That being said, you can create a function that simplifies your life for that
class MyCriterion(nn.Module):
    def __init__(self, size_average=False):
        super(MyCriterion, self).__init__()
        self.criterion1 = nn.MSELoss(size_average=size_average)
        self.criterion2 = nn.L1Loss(size_average=size_average)
    def forward(self, input, target):
        return self.criterion1(input, target) + self.criterion2(input, target)
criterion = MyCriterion().cuda()
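and then use it like any other loss module, e.g. (my usage sketch):
output = model(input)
loss = criterion(output, target)
loss.backward()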
|
st116127
|
I have a question: is there any method to select a number in a tensor and return the index, such as:
t = [[1, 2],
[2, 3],
[5, 2]]
select(t, dim=0, 2) returns [1, 0, 1]
and select(t, dim=1, 2) returns [1, 0]
|
st116128
|
You can use a combination of .eq() (or ==) and nonzero.
t = torch.Tensor(
[[1, 2],
[2, 3],
[5, 2]]
)
print((t == 2).nonzero())
returns
0 1
1 0
2 1
[torch.LongTensor of size 3x2]
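If you want the indices along one dimension only (as in the select(t, dim, value) examples above), one possible sketch (my own, assuming each row or column contains the value) is to take the argmax of the equality mask:
mask = (t == 2).float()
print(mask.max(dim=1)[1])  # column index of a match in each row -> 1, 0, 1
print(mask.max(dim=0)[1])  # row index of a match in each column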
|
st116129
|
If you want to define a custom layer that uses other layers inside, for example
def custom_layer():
convlayer1 = self.conv1(...)
convlayer2 = self.conv2(...)
activation = F.relu(convlayer1 + convlayer2)
return activation
and you want to make a very deep network with a lot of these custom layers, how would you go about not having to define
self.conv1 = nn.Conv2d…
self.conv2 = nn.Conv2d…
and so on, in the __init__ method of your model class? Also, two custom layers must NOT share weights.
EDIT: Ok, the above idea is wrong and I’ve done some reading on the pytorch documentation. I have an adaptation of the examples from the docs:
import torch
from torch.autograd import Variable
import torch.nn.functional as F
class model_block(torch.nn.Module):
def __init__(self, D_in, H, D_out):
super(model_block, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
activation = self.linear1(x)
activation = F.relu(activation)
activation = self.linear2(x)
activation = F.relu(activation)
return activation
class Net(torch.nn.Module):
def __init__(self, D_in1, H1, D_out1, D_in2, H2, D_out2):
super(Net, self).__init__()
self.block1 = model_block(D_in1, H1, D_out1)
self.block2 = model_block(D_in2, H2, D_out2)
def forward(self, x):
pred = self.block1(x)
y_pred = self.block2(pred)
return y_pred
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in1, H1, D_out1 = 64, 1000, 1000, 10
D_in2, H2, D_out2 = 10, 10, 10
# Create random Tensors to hold inputs and outputs, and wrap them in Variables
x = Variable(torch.randn(N, D_in1))
y = Variable(torch.randn(N, D_out2), requires_grad=False)
# Construct our model by instantiating the class defined above
model = Net(D_in1, H1, D_out1, D_in2, H2, D_out2)
# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x)
# Compute and print loss
loss = criterion(y_pred, y)
print(t, loss.data[0])
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
This code runs. I just have to know, if I define my blocks like that, can I be absolutely certain that two blocks do not share weights within the network?
|
st116130
|
Yes, each block has its own set of weights.
You can see that by calling model.named_parameters(); you will see that they are prefixed with block1. and block2..
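For example, a quick check (my sketch):
for name, param in model.named_parameters():
    print(name, param.size())
# prints entries prefixed with block1. and block2., each with its own tensors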
|
st116131
|
Hi, I am trying to implement a sequence-to-sequence model, and while implementing it I am facing the following error:
RuntimeError Traceback (most recent call last)
<ipython-input-12-cd1866fff827> in <module>()
----> 1 train(data1[0:10],data2[0:10],128,1,128,128,10000)
<ipython-input-6-2bf5208cf775> in train(data1, data2, embedding_size, n_layers, input_size, hidden_size, num_epochs)
33 enc.zero_grad()
34 dec.zero_grad()
---> 35 l.backward()
36 optimizer.step()
37
/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
147
148 def register_hook(self, hook):
/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/thnn/auto.pyc in backward(self, grad_output)
43
44 def backward(self, grad_output):
---> 45 input, target = self.saved_tensors
46 grad_input = grad_output.new().resize_as_(input).zero_()
47 getattr(self._backend, update_grad_input.name)(self._backend.library_state, input, target,
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
Code:
import torch
import torch.optim as optim
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Encoder(nn.Module):
def __init__(self,vocab_size,embedding_size,n_layers,hidden_size):
super(Encoder,self).__init__()
self.embedding = nn.Embedding(vocab_size,embedding_size)
self.lstm = nn.LSTM(embedding_size,hidden_size,n_layers)
self.n_layers = n_layers
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.embedding_size = embedding_size
def init_hidden_cell(self):
hidden = (Variable(torch.randn(1,1,self.hidden_size)),Variable(torch.randn(1,1,self.hidden_size)))
return hidden
def forward(self,x):
vect = []
for i in xrange(len(x)):
vect.append(self.embedding(x[i].max(1)[1]))
hidden = self.init_hidden_cell()
output,hidden = self.lstm(torch.cat(vect),hidden)
return hidden
class Decoder(nn.Module):
def __init__(self,vocab_size,hidden_size,input_size,n_layers):
super(Decoder,self).__init__()
self.lstm = nn.LSTM(input_size,hidden_size,n_layers)
self.input_size = input_size
self.hidden_size = hidden_size
self.n_layers = n_layers
self.fc1 = nn.Linear(hidden_size,vocab_size)
def forward(self,hidden):
output,hidden = self.lstm(Variable(torch.zeros(1,1,self.input_size)),hidden)
return F.softmax(self.fc1(hidden[0].view(-1,self.hidden_size))),hidden
def make_corpus(data):
corpa = {"#":0}
for i in data:
for j in i.split(" "):
if j not in corpa.keys():
corpa[j] = len(corpa)
return corpa
def make_vect(word,corpa):
temp = torch.FloatTensor(1,len(corpa)).zero_()
temp[0][corpa[word]] = 1.0
return temp
def train(data1,data2,embedding_size,n_layers,input_size,hidden_size,num_epochs):
corpa_lang1 = make_corpus(data1)
corpa_lang2 = make_corpus(data2)
#print corpa_lang1
enc = Encoder(len(corpa_lang1),embedding_size,n_layers,hidden_size)
dec = Decoder(len(corpa_lang2),hidden_size,input_size,n_layers)
l = 0
loss = nn.CrossEntropyLoss()
params = list(enc.parameters()) + list(dec.parameters())
optimizer = optim.SGD(params,lr= 0.01)
for i in xrange(num_epochs):
for j in xrange(len(data1)):
print data1[j].split(" ")
ip_vec = [Variable(make_vect(k,corpa_lang1),requires_grad= True) for k in data1[j].split(" ")]
ip_vec = ip_vec + [Variable(make_vect("#",corpa_lang1),requires_grad = True)]
op1,op2 = dec(enc(ip_vec))
for m in xrange(len(data2[j].split(" "))+1):
if m == len(data2[j].split(" ")):
op_vec = Variable(torch.FloatTensor([corpa_lang2["#"]]))
op_vec.data = torch.Tensor.long(op_vec.data)
op1,op2 = dec(op2)
l = l + loss(op1,op_vec)
else:
op_vec = Variable(torch.FloatTensor([corpa_lang2[data2[j].split(" ")[m]]]))
op_vec.data = torch.Tensor.long(op_vec.data)
if m == 0:
l=l+loss(op1,op_vec)
else:
op1,op2 = dec(op2)
l = l + loss(op1,op_vec)
enc.zero_grad()
dec.zero_grad()
l.backward()
optimizer.step()
return enc,dec
lines = open('data/eng-fra.txt').read().strip()
data1 = []
data2 = []
for i in lines.split("\n"):
#print i.split("\t")
if len(i.split("\t")) == 2:
data1.append(i.split("\t")[0])
data2.append(i.split("\t")[1])
for i in xrange(len(data2)):
data2[i] = unicode(data2[i],encoding = 'utf-8')
train(data1[0:10],data2[0:10],128,1,128,128,10000)
Please can someone help me debug this problem? Thank you.
|
st116132
|
I think it’s that when you accumulate the loss, you should unpack it, e.g. loss.data[0].
Otherwise the graph will never be freed.
See this discussion: CUDA memory continuously increases when net(images) called in every iteration
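To make the suggested pattern concrete, a minimal sketch (my own, with hypothetical names): call backward() on the per-example loss while it still owns its graph, and accumulate only the unpacked float for reporting.
running_loss = 0.0
for x, y in training_pairs:        # hypothetical data iterable
    optimizer.zero_grad()
    output = model(x)
    l = loss_fn(output, y)         # Variable that owns a graph
    l.backward()
    optimizer.step()
    running_loss += l.data[0]      # unpack to a Python float; the graph can be freed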
|
st116133
|
Hi, thanks for your advice. It did remove that error, but in turn this came up:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-12-cd1866fff827> in <module>()
----> 1 train(data1[0:10],data2[0:10],128,1,128,128,10000)
<ipython-input-6-66e52529e52f> in train(data1, data2, embedding_size, n_layers, input_size, hidden_size, num_epochs)
33 dec.zero_grad()
34 print l
---> 35 l.backward()#retain_variables = True)
36 optimizer.step()
37
/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
147
148 def register_hook(self, hook):
RuntimeError: there are no graph nodes that require computing gradients
So, I have added the requires_grad attribute to every variable I defined. Do we need to detach the old graph after every backward pass, or is the old graph preserved? If we need to detach the old graph, how do we do it?
|
st116134
|
Continuing the discussion from How can I know which part of h_n of bidirectional RNN is for backward process?:
I really want to make sure which is right
[
layer0_forward
layer0_backward
layer1_forward
layer1_backward
layer2_forward
layer2_backward
…
]
or
[
layer0_forward
layer1_forward
layer2_forward
layer0_backward
layer1_backward
layer2_backward
…
]
If the first is right, how can I get it?
|
st116135
|
Does anyone know how the RNN output is composed?
For example:
rnn = nn.LSTM(100, 100, num_layers=3, bidirectional=True)
output = rnn(input)
What is the composition of the output?
|
st116136
|
At the step where I begin to train the model, it gives me the errors below.
My graphics card is a GTX 970. I’ve tried installing CUDA 8.0 and the latest NVIDIA driver; the error was still the same.
When I run
python test.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan --phase test --no_dropoutet_256 --which_dire
I get the error
test.py, line 10, in <module>: opt = TestOptions().parse()
…
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at torch/csrc/cuda/Module.cpp:87
My system is Windows 10, and I run the test in bash on Ubuntu.
The software already installed was
dominate==2.3.1
numpy==1.13.1
olefile==0.44
Pillow==4.2.1
PyYAML==3.12
six==1.10.0
torch==0.2.0.post1
torchvision==0.1.9
So, any help please? Thanks.
|
st116137
|
map_fn allows you to perform an operation in parallel and collect the results.
My use case is I’d like to be able to run several mini supervised learning problems in parallel. In each thread, I would take several gradient steps on the same base model, and return the outputs. Then I do some computation using the outputs to update the base model, and repeat. My data is large enough that I’d like copying the data to the GPU to also be parallelized along with the forward and backward operations.
I’ve looked at DataParallel but this seems to operate at the module level - I don’t see how to have different copies of the model taking different update steps? Elsewhere, I’ve seen that Python multi-processing doesn’t always work well with CUDA?
Thanks for any advice you have!
|
st116138
|
I’m not sure I understood it properly, but one solution would be to adapt the current implementation of nn.DataParallel so that you can better control when you gather the parameters.
For example, have a look at those lines. I believe you can adapt it somewhat easily so that you only replicate the model and gather the results from time to time.
|
st116139
|
I see how I can use that to compute the forward passes for the different inputs. But can I also do backward passes?
|
st116140
|
I have followed the steps on the PyTorch GitHub page for macOS Sierra, and I got an error on the last step when using MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install. The error looks like this:
error “The CMAKE_C_COMPILER is set to a C++ compiler”
Please help me. Thanks !
|
st116141
|
I trained 2 networks A and B for different tasks. Now, I want to combine these networks into a network C for joint training. Can I fine-tune the network C from 2 weight files? Can someone give me some advice? Thank you.
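A minimal sketch of one common way to do this (my own suggestion; NetA, NetB and the file names are hypothetical): make C hold A and B as submodules and load each weight file into the corresponding submodule before joint fine-tuning.
class NetC(nn.Module):
    def __init__(self):
        super(NetC, self).__init__()
        self.net_a = NetA()
        self.net_b = NetB()

    def forward(self, x):
        # combine the two sub-networks however the joint task requires
        return self.net_b(self.net_a(x))

c = NetC()
c.net_a.load_state_dict(torch.load('net_a_weights.pth'))
c.net_b.load_state_dict(torch.load('net_b_weights.pth'))
# now fine-tune c as a whole with a single optimizer over c.parameters()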
|
st116142
|
Hi,
I’m currently working with a version of Improved WGAN. My architecture is exactly the one described in the DCGAN repo (minus the sigmoid activation in the discriminator). To calculate the gradient penalty, I’m using the following lines of code (adapted from caogang’s github):
def calc_gradient_penalty(netD, real_data, fake_data, batch_size=50, gpu=0):
    alpha = torch.rand(batch_size, 1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda(gpu)
    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda(gpu)
    interpolates = autograd.Variable(interpolates, requires_grad=True)
    disc_interpolates = netD(interpolates)
    gradients = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                              grad_outputs=torch.ones(disc_interpolates.size()).cuda(gpu),
                              create_graph=True, retain_graph=True, only_inputs=True)[0]
    return ((gradients.norm(2, dim=1) - 1) ** 2).mean()
The code does not return any errors, however, the GPU memory goes up until no more is available, then the program crashes. As soon as I remove the BatchNorm Layers in the discriminator, the problem is fixed.
Side note: removing batch norm fixed another problem I had, which was that the critic values returned were very large (on the order of 1e5, sometimes giving NaNs). That being said, I’m not sure if this is due to the same bug.
Thanks,
Lucas
|
st116143
|
Yea this issue was discovered a few days ago.
The fix hasn’t been merged yet, but I pulled the commits and it worked for me!
github.com/pytorch/pytorch: “Fix BatchNorm double backwards memory leak (v.0.2.0)” by gchanan, Aug 7 2017 (3 commits, 4 files changed, 51 additions, 49 deletions).
|
st116144
|
A few beginner-level questions to help move from CPU to GPU. I’ve searched previous responses here but couldn’t find specifics.
I have my code up and running on my local GPU (only one device). For any other beginners running across this post: you need to move your Variables (target.cuda()), network (decoder.cuda()) and criterion (criterion.cuda()) to CUDA, and CUDA obviously needs to be available on your system: physical GPU, drivers and the nvidia+cuda packages.
I want to spin a small GPU cluster and run my RNN there, but I have a few questions:
Are RNNs benefited from GPU’s?
Will code that runs properly in my local GPU run out-of-the-box in a GPU cluster? If not what do I need to be thinking about?
Do GPUs help if I’m using a batch of size 1? Or are batches “good”?
Do I have to manually allocate / transfer or otherwise keep track of which tensor and other objects go to which device? Or does CUDA/PyTorch figure this out automatically?
Do I have to gather anything at the end of the computation? (I’m coming from the Spark world where its a thing sometimes).
For small, simpler models (like the one I’m running) CPU and GPU times will be very similar. If I take this model to a GPU cluster, will I see any improvement? Is the efficiency gain proportional only to model complexity? Or will the simple model run faster the more nodes in my cluster?
Many questions! Feel free to answer one only.
|
st116145
|
Hi,
About your question:
Yes, RNNs can benefit from optimized GPU implementations, and PyTorch wraps cudnn, which gives even further speedups
Yes, it will run out of the box, but only on one GPU. If you want to parallelize over multiple GPUs, check http://pytorch.org/docs/master/nn.html#torch.nn.DataParallel 63 for a simple way to distribute computations batch-wise over multiple GPUs (see the sketch after this list)
GPUs shine compared to CPUs for larger batch sizes.
If you use nn.DataParallel, everything is handled for you automatically. But you might want to have different ways of using multiple GPUs (for example parts of one model in GPU1, and other parts in GPU2), in which case you need to ship the different parts to the different GPUs yourself (via result.cuda(gpu_idx)).
nn.DataParallel already gathers the information from multiple GPUs for you 20
For small models, you won’t see any benefits from using GPUs over CPUs, and it won’t improve if you use multiple GPUs. You will need to increase the model size to start to see improvements, because there is some communication overhead to transmit data from different GPUs. Also, there are a number of tricks that are used for improving multi-GPU usage, see https://arxiv.org/abs/1404.5997 38 for example.
Hope this helps!
|
st116146
|
I am trying to train an autoencoder-decoder type of network where I have a few conv layers, then a flatten/reshape layer to a single vector, and then want to reconstruct the image back.
I had used lasagne previously and it had a layer called as the Inverse Layer http://lasagne.readthedocs.io/en/latest/modules/layers/special.html#lasagne.layers.InverseLayer 70
which is useful to the decoder network.
I was wondering if there is a similar thing like the Inverse Layer in pytorch?
|
st116147
|
Hi,
there isn’t one in particular, but the layers the lasagne docs name are all there:
Linear is Linear again with input/output dims swapped (it’s transposed),
Convolutions have transposed layers, e.g. ConvTranspose2d 172,
for pooling layers, there are unpooling layers, e.g. MaxUnpool2d 292.
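For example, here is a rough sketch of a conv encoder mirrored by ConvTranspose2d layers; the channel counts and image size are made up, not taken from your setup:
import torch
import torch.nn as nn
from torch.autograd import Variable

# Illustrative encoder/decoder pair with invented sizes.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7
    nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
)

x = Variable(torch.randn(8, 1, 28, 28))
reconstruction = decoder(encoder(x))
print(reconstruction.size())  # (8, 1, 28, 28)
The unpooling layers work the same way if your encoder uses max pooling (MaxUnpool2d takes the indices returned by MaxPool2d).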
Have good fun with your project!
Best regards
Thomas
|
st116148
|
But does this handle the non-linearity? Say I have my conv layers as follows:
l1 = F.sigmoid(self.conv1(x))
l2 = F.sigmoid(self.conv2(l1))
...
fc = ... # have set of fc layers using nn.linear and then reconstuct them back by interchanging the input and output dimensions.
reconstruct_2 = F.sigmoid(self.deconv2(fc))
reconstruct_1 = F.sigmoid(self.deconv1(reconstruct_2))
Is the part of reconstruct_2 and reconstruct_1 correct?
|
st116149
|
Hello,
it does not handle the nonlinearity.
From the description of lasagne's InverseLayer, it uses the derivative, so it effectively provides the backpropagation step of the layer it is based on. You would need to do this yourself (using d/dx sigmoid(x) = sigmoid(x)*(1-sigmoid(x)), so reconstruct_2 = self.deconv2(fc*(1-fc)) or so) or use the torch.autograd.grad function.
My understanding is that for the backward, you would want the nonlinearity-induced term before the convolution.
This is, however, not necessarily something that reconstructs anything.
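As a rough, hedged sketch of those two routes, with toy shapes that have nothing to do with your actual layers:
import torch
import torch.nn.functional as F
from torch.autograd import Variable, grad

x = Variable(torch.randn(4, 10), requires_grad=True)
linear = torch.nn.Linear(10, 5)
fc = F.sigmoid(linear(x))

# Option 1: multiply by the sigmoid derivative by hand before the "inverse" layer.
deriv_term = fc * (1 - fc)

# Option 2: let autograd compute the backward pass through the whole block.
g = grad(outputs=fc, inputs=x, grad_outputs=torch.ones(fc.size()),
         retain_graph=True)[0]
print(g.size())  # same shape as x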
Best regards
Thomas
|
st116150
|
When I run my training code, GPU memory usage increases every iteration:
for iSentence, sentence in enumerate(shuffledData):
    if iSentence % 100 == 0 and iSentence != 0:
        print "check"
    conll_sentence = [entry for entry in sentence if isinstance(entry, utils.ConllEntry)]
    conll_sentence = conll_sentence[1:] + [conll_sentence[0]]
    self.model.getWordEmbeddings(conll_sentence, True)
    del conll_sentence
    gc.collect()
In getWordEmbeddings function:
def getWordEmbeddings(self, sentence, train):
    for root in sentence:
        c = float(self.wordsCount.get(root.norm, 0))
        dropFlag = not train or (random.random() < (c / (0.25 + c)))
        root.wordvec = self.wlookup(scalar(int(self.vocab.get(root.norm, 0))) if dropFlag else scalar(0)).cuda()
        root.posvec = self.plookup(scalar(int(self.pos[root.pos]))) if self.pdims > 0 else None
        root.evec = None
        root.ivec = cat([root.wordvec, root.posvec, root.evec])

    forward = RNNState(self.surfaceBuilders[0])
    backward = RNNState(self.surfaceBuilders[1])
    for froot, rroot in zip(sentence, reversed(sentence)):
        forward = forward.next(froot.ivec)
        backward = backward.next(rroot.ivec)
        froot.fvec = forward()
        rroot.bvec = backward()

    for root in sentence:
        root.vec = cat([root.fvec, root.bvec])

    bforward = RNNState(self.bsurfaceBuilders[0])
    bbackward = RNNState(self.bsurfaceBuilders[1])
    for froot, rroot in zip(sentence, reversed(sentence)):
        bforward = bforward.next(froot.vec)
        bbackward = bbackward.next(rroot.vec)
        froot.bfvec = bforward()
        rroot.bbvec = bbackward()

    for root in sentence:
        root.vec = cat([root.bfvec, root.bbvec])
I think the GPU memory leak is caused by conll_sentence in every iteration, because if I put
yield conll_sentence
after the self.model.getWordEmbeddings call, the GPU memory stays steady. I don't know whether it is caused by the getWordEmbeddings function or by the code in the for loop.
I have tried to solve this problem by deleting objects and using gc, but none of that worked. I really hope someone can help me solve the problem.
|
st116151
|
I was creating a custom model, and when I printed the object out I got an empty model:
(Pdb) mdl_sgd
NN (
)
and .parameters() is empty:
list(mdl_sgd.parameters()) = []
my class looks as follows:
class NN(torch.nn.Module):
    def __init__(self, D_layers, act, w_inits, b_inits, bias=True):
        super(type(self), self).__init__()
        # activation func
        self.act = act
        # create linear layers
        self.linear_layers = [None]
        for d in range(1, len(D_layers)):
            linear_layer = torch.nn.Linear(D_layers[d-1], D_layers[d], bias=bias)
            self.linear_layers.append(linear_layer)
Is there a reason this does not work?
|
st116152
|
Try something like this:
class NN(nn.Module):
    def __init__(self):
        super(NN, self).__init__()
        linear_layers = []
        for i in range(10):
            linear_layers.append(nn.Linear(5, 5))
        self.net = nn.Sequential(*linear_layers)
|
st116153
|
Also, another easier way to look at your weights would be to do this (in a REPL of some sort):
model = NN()
model.state_dict()
|
st116154
|
I guess that you want to build some dynamic network. I think you should read some PyTorch code such as ResNet or DenseNet.
|
st116155
|
my issue is that I can’t even loop through them to update them with an update procedure! The .parameters() is empty.
|
st116156
|
Brando_Miranda:
self.linear_layers = [None]
Replace this with self.linear_layers = nn.ModuleList().
Using nn.ModuleList registers the modules’ parameters in your class.
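For example, here is a minimal sketch of your class with the list registered; I dropped the activation and init arguments for brevity, so treat it as an illustration rather than a drop-in replacement:
import torch
import torch.nn as nn

class NN(nn.Module):
    def __init__(self, D_layers, bias=True):
        super(NN, self).__init__()
        # nn.ModuleList registers each layer, so its parameters show up in .parameters()
        self.linear_layers = nn.ModuleList()
        for d in range(1, len(D_layers)):
            self.linear_layers.append(nn.Linear(D_layers[d - 1], D_layers[d], bias=bias))

    def forward(self, x):
        for layer in self.linear_layers:
            x = layer(x)
        return x

model = NN([3, 4, 2])
print(len(list(model.parameters())))  # 4 (a weight and a bias per layer), no longer empty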
|
st116157
|
Now that I think about it, I don't think I even know why we need to register things. I made a question to address it:
Why do we need to register parameters in pytorch when using nn modules?
I was trying to make a custom nn module and I was having issues registering variables. I have been kindly pointed to nn.ModuleList() and torch.nn.ParameterList. However, I don't think I understand in the first place why I need to "register" parameters. What's the point of all this?
|
st116158
|
I was troubleshooting my code when I came across a somewhat weird phenomenon. I'm using a smaller AlexNet model with batchnorm and dropout removed at present (I wanted to make sure it wasn't these two causing the problems). I switch the model to eval mode, do not take any gradient steps, and calculate the MSE loss for a randomly initialized model on a single image (there is no point to this; I just wanted to make sure the output was constant). In principle the loss should stay constant, but it seems to be alternating between two values. Is there any explanation for what might be going on? I doubt this is the reason for the problems in my experiments, but it would be nice to know why this is happening.
[0][1/1] Loss: [0.518107] Time batch: [0.000004]
[1][1/1] Loss: [0.517437] Time batch: [0.000005]
[2][1/1] Loss: [0.518107] Time batch: [0.000003]
[3][1/1] Loss: [0.517437] Time batch: [0.000002]
[4][1/1] Loss: [0.518107] Time batch: [0.000003]
[5][1/1] Loss: [0.517437] Time batch: [0.000002]
[6][1/1] Loss: [0.518107] Time batch: [0.000004]
[7][1/1] Loss: [0.517437] Time batch: [0.000004]
[8][1/1] Loss: [0.518107] Time batch: [0.000004]
[9][1/1] Loss: [0.517437] Time batch: [0.000002]
|
st116159
|
OK, I figured it out. It's not a problem with anything in the framework. I had a random transformation in my data transform; one output corresponds to the flipped image and the other to the original.
Can someone please delete this. Thank you.
|
st116160
|
https://arxiv.org/abs/1706.06873 26
An interesting paper was presented at ICML 2017: Memory-Efficient Convolution (MEC) suggests a new way to compute convolutions with much less memory and much faster performance.
I think this is worth implementing in PyTorch.
|
st116161
|
Amusingly enough, Maxime Oquab and I used MEC-style convolutions in 2014 to implement our CVPR 2015 paper on weak supervision (see http://www.di.ens.fr/willow/research/weakcnn). We started with a Lua prototype of the lowering code (including dealing with padding, etc.) and we ported it to GPU. This is what allowed us to process large images instead of being limited to 224x224 patches. The prototype is still at https://github.com/leonbottou/torch7-custom/blob/a5c15708cc1a5e46f966dd27d87101989b8cab65/extra/nxn/Prototype-Of-Convolution.lua 31 and the GPU version is at https://github.com/leonbottou/torch7-custom/blob/master/extra/cuda/pkg/cunxn/SpatialConvolution.cu 21 .
Empirically this code was most efficient with convolutions involving a large number of planes. But a couple months later, nvidia released CUDNN, and we did not think our code was any faster. In fact the comparison with CUDNN is missing from the MEC paper because it is not open source. This is unfortunate.
|
st116162
|
On the other hand, they use cublasSgemmBatched and claim that this helps a lot. We talked about it but I do not remember if we tried. On the other hand, we tried to call cublasSgemm on multiple streams, but that did not help on our combination of CUDA and GPU. Things may have improved…
|
st116163
|
In the pytorch word language model 3 example, the RNN model class explicitly defines a function 'init_hidden' that creates zero tensors. This function is then called in the main script before starting each epoch of training.
However, in the SNPI 4 example this init_hidden function is not defined in the model, nor are the hidden states zeroed before each epoch of training.
Can I confirm this is because the latter, newer example relies on automatic initialisation of the hidden states (to zeros)?
|
st116164
|
That is correct for the language model example, which was written October '16 while initial hidden states were added January '17 7. The SNPI example however is using manually zeroed hidden states 12.
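A small sketch of the two equivalent styles, with arbitrary sizes (not taken from either example):
import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.GRU(input_size=10, hidden_size=20, num_layers=2)
x = Variable(torch.randn(5, 3, 10))   # (seq_len, batch, input_size)

# Manual zeroing, as in the older example:
h0 = Variable(torch.zeros(2, 3, 20))  # (num_layers, batch, hidden_size)
out, hn = rnn(x, h0)

# Relying on the default: omitting the hidden state gives the same zero initialisation.
out2, hn2 = rnn(x)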
|
st116165
|
Ah yes, thank you.
Am I right in thinking that the way the hidden states are manually zeroed in the newer tutorial also serves the purpose of the repackage_hidden() function in the main method of the word language model 4 tutorial?
|
st116166
|
Python is a dynamic language, and some type-checking tools have been developed for convenience, for example:
def func(x:List):
self.prop=x # type: List
In this way, the type of a variable can be checked by the IDE automatically.
Thus I am wondering: are there any tools for tensor-shape checking when building the computation graph? When we build a graph, we tend to change the shape, expand the shape, squeeze some dimensions… which troubles me a lot.
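For context, what I have in mind is something that would replace hand-written runtime checks like the sketch below; the function and shapes are made up, just to show the kind of boilerplate I mean:
import torch
from torch.autograd import Variable

def attention_scores(q, k):
    # Fail early with a readable message instead of a cryptic error deeper in the graph.
    assert q.dim() == 3 and k.dim() == 3, "expected (batch, len, dim) inputs"
    assert q.size(0) == k.size(0) and q.size(2) == k.size(2), \
        "batch/feature dims must match, got {} vs {}".format(q.size(), k.size())
    return torch.bmm(q, k.transpose(1, 2))

scores = attention_scores(Variable(torch.randn(2, 7, 16)), Variable(torch.randn(2, 9, 16)))
print(scores.size())  # (2, 7, 9)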
|
st116167
|
In the tutorial, a GRU is defined with one layer, and then a for loop in forward applies it to implement the hidden layers: https://github.com/pytorch/tutorials/blob/master/intermediate_source/seq2seq_translation_tutorial.py#L350-L351 8
Another way is to define a GRU with 3 layers directly and use it.
What's the difference between these two ways?