st117068
you can save the state_dict() of a network, and load it back with load_state_dict. It’s been built for this purpose.
st117069
Hi, when I use nn.BCELoss() for the criterion, I get this assertion error:

anaconda3/lib/python3.6/site-packages/torch/nn/_functions/thnn/loss.py in forward(self, input, target)
     20
     21     def forward(self, input, target):
---> 22         assert input.nelement() == target.nelement()
     23         self._resize_weight(target)
     24         result = super(BCELoss, self).forward(input, target)

But when I change the loss to nn.CrossEntropyLoss(), it runs fine. What is the problem here?
st117070
BCELoss expects input and target to have the same number of elements. CrossEntropyLoss requires targets to be nBatch, input to be nBatch x nClasses. Read the documentation.
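As a concrete illustration of the shape difference (a minimal sketch of my own, not from the original thread):

import torch
import torch.nn as nn
from torch.autograd import Variable

# BCELoss: input and target must have the same shape, values in [0, 1]
bce = nn.BCELoss()
probs = Variable(torch.rand(4, 1))            # e.g. sigmoid outputs, nBatch x 1
target = Variable(torch.rand(4, 1).round())   # 0/1 labels with the same shape
loss_bce = bce(probs, target)

# CrossEntropyLoss: input is nBatch x nClasses (raw scores), target is nBatch class indices
ce = nn.CrossEntropyLoss()
scores = Variable(torch.randn(4, 10))                   # nBatch x nClasses
labels = Variable(torch.LongTensor(4).random_(0, 10))   # nBatch
loss_ce = ce(scores, labels)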
st117071
I cannot find any examples for HingeEmbeddingLoss. I'm confused about the usage of this criterion: what input should I give it? As the doc says, HingeEmbeddingLoss measures the loss given an input x, which is a 2D mini-batch tensor, and a label y, a 1D tensor containing values (1 or -1). 1) Are there any examples for the input x? 2) Can I set the distance calculation method in the constructor, e.g. L2 pairwise distance?
st117072
1) The input can be the output of a non-linearity in the range of -1 to 1, for example tanh. 2) No.
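For illustration only, a minimal usage sketch of my own, assuming x is a distance-like score per sample (the distance itself is computed outside the criterion, which is why the answer to 2) is "No"; exact shape handling may vary across versions):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

criterion = nn.HingeEmbeddingLoss(margin=1.0)

a = Variable(torch.randn(8, 16))
b = Variable(torch.randn(8, 16))
x = F.pairwise_distance(a, b)                             # compute the distance yourself (here L2)
y = Variable(torch.Tensor(8, 1).random_(0, 2) * 2 - 1)    # labels of 1 or -1
loss = criterion(x, y)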
st117073
Hey all, I'm running a couple of models on a multi-GPU system. When I attempt to use torch.save() from a GPU other than device 0 while running another model on device 0, I get the following error; however the saving functionality works perfectly for all models running on GPU 0. I've looked into the documentation on serialization semantics and I seem to be following the recommended practices, and the default pickle settings also seem to be okay for this use case as well. Does anyone have any insight into this problem? Link to source for torch.save(): http://pytorch.org/docs/_modules/torch/serialization.html#save

File "/home/adamvest/models.py", line 156, in save_model
    torch.save(self.model.state_dict(), "%s/weights.pth" % self.args.out_folder)
File "/home/adamvest/lib/python/torch/serialization.py", line 120, in save
    return _save(obj, f, pickle_module, pickle_protocol)
File "/home/adamvest/lib/python/torch/serialization.py", line 192, in _save
    serialized_storages[key]._write_file(f)
RuntimeError: cuda runtime error (46) : all CUDA-capable devices are busy or unavailable at /b/wheel/pytorch-src/torch/csrc/generic/serialization.cpp:38
st117074
Have you tried to switch the current device with:

with torch.cuda.device(1):
    torch.save(...)
st117075
Yes, I was able to work around my issues using this or moving the model to cpu before saving. Still not sure of the root cause though.
st117076
Does PyTorch have an API function to directly implement a local convolutional operator? If not, how should I do that? I found this function in Keras (the locally connected layer). In that layer, the weights are not shared within the same feature map. How about in PyTorch?
st117077
See this pull request on github.com/pytorch/pytorch: "Add Conv2dLocal module" (pytorch:master ← 1zb:conv-local, opened May 18, 2017 by 1zb, +81 -3).
st117078
Conventionally, a convolutional layer has shared weights, but a locally connected layer does not. How can this be implemented in PyTorch? Please give me some advice.
st117079
just for completeness of the thread, context: https://github.com/pytorch/pytorch/pull/1583
st117080
I was looking into a way to dynamically change the loss function of a net to only evaluate loss on my specified output nodes, and ignore the rest of the outputs of a network. Initially, I was planning on writing a custom loss function for calculating loss only on outputs that I specify; however, it seems that the loss functions are all implemented in C to improve speed, etc. I can still go down this route, but it would involve recompiling PyTorch on every machine I want to train on with this customized loss function, and the whole thing seems to be a lot of work. Would a better approach be to use some kind of tensor manipulation to achieve this same effect before putting my output and labels into a standard loss function? If so, how could I go about this?
st117081
You can .chunk() or .split() out the portion of the tensor you’re interested in calculating loss on. Post some code and we can dive deeper into suggested examples. Recompiling should not be necessary for your suggested use case.
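In the same spirit as .split()/.chunk(), here is one possible sketch using index_select to pick arbitrary output columns before a standard loss (my own illustration; names are made up):

import torch
from torch.autograd import Variable

criterion = torch.nn.MSELoss()

output = Variable(torch.randn(8, 10))             # full network output, nBatch x nOutputs
target = Variable(torch.randn(8, 10))
keep_idx = Variable(torch.LongTensor([0, 3, 7]))  # only evaluate loss on these output nodes

# select the columns of interest from both output and target, then use a standard loss
loss = criterion(output.index_select(1, keep_idx),
                 target.index_select(1, keep_idx))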
st117082
Hi, pretrained VGG13~19 models with batch normalization are now provided: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py Is the preprocessing for them the same as for the models without batch normalization? (transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])) I would like to confirm, since the paper below mentions that normalizing the input image was omitted, which makes sense because it could be substituted by the batch norm layers: https://arxiv.org/pdf/1612.01452.pdf How were the models provided in the first link trained? Thanks.
st117083
I have a small question regarding DataParallel model’s transferring of CPU tensors to the appropriate GPUs. While manually copying tensors from CPU to GPU, if the CPU tensors are memory pinned (by calling pin_memory()), one can pass async=True in the .cuda() call to enable faster asynchronous memory copy as explained in the docs 117. But I am wondering how I can enforce DataParallel model to use asynchronous memory copy? Does DataParallel model know anything about the pinned memory nature of the input tensors?
st117084
So I basically have a image torch.cuda.ByteTensor, and I have another indices torch.cuda.LongTensor, of size 2x3. It is given like so: indices = torch.ByteTensor([[0,3,6 ], [0, 5, 9]]).cuda() So here is what I want to do: I want to basically “loop” through each column of indices, and index into image. If the value is 1, I want to keep the index of indices. If the value is 0, I want to discard it. Concretely: Let us suppose that image[indices[0,0], indices[1,0]] = 0, image[indices[0,1], indices[1,1]] = 1, and image[indices[0,2], indices[1,2]] = 1. Then in this case, I want to have the result to be [1, 2], since only columns 1 and 2 of indices are “valid”. How do I do that? Thanks!
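Not part of the original thread, but one way to express this as a sketch, assuming image is a 2-D 0/1 tensor and indices is a 2 x N torch.cuda.LongTensor (index tensors must be LongTensor, and this relies on advanced indexing being available in your PyTorch version):

vals = image[indices[0], indices[1]]   # gathers image[indices[0, j], indices[1, j]] for each column j
keep = vals.nonzero().view(-1)         # column positions of indices where the image value is 1, e.g. [1, 2]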
st117085
I tried to assign my multi gpus with : CUDA_VISIBLE_DIVICES=2 python train.py , but why pytorch is always using the first gpu?
st117086
torch.cuda.set_device(gpu_id) # gpu_id is an integer corresponding to your GPU ID
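For completeness (my own note, not from the thread): the environment-variable route also works, but only if the name is spelled exactly CUDA_VISIBLE_DEVICES and it is set before any CUDA initialization, e.g.:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"   # must happen before the first CUDA call
import torch

# alternatively, select the device explicitly from Python:
torch.cuda.set_device(0)   # with CUDA_VISIBLE_DEVICES=2, device 0 here maps to physical GPU 2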
st117087
Hi, I am creating my own data loader. However, I found that making a slight change results in a huge difference in the speed of data loading. I really do not understand the reason. My data loader looks like this:

def load_file(filename):  # function to load one image saved in a dict
    with open(filename, 'rb') as fpik:
        data_dict = cPickle.load(fpik)
    return data_dict["data"], data_dict["label"]

class MyDataset(torch.utils.data.Dataset):  # My data set class
    def __init__(self):
        print 'loading data list...'
        self.data_files = glob.glob('train_dnn' + '/*' + '/*.pik')

    def __getitem__(self, idx):
        return load_file(self.data_files[idx])

    def __len__(self):
        return len(self.data_files)

def get_loader():  # data loader
    dset_train = MyDataset()
    loader_train = DataLoader(dset_train, batch_size=256, shuffle=False, num_workers=8, pin_memory=False)
    return loader_train

The images are saved in the subfolders of the directory 'train_dnn', and the subfolders are numbered 0, 1, ..., 200, with about 60,000 images in each subfolder, so it is a very large database. If I just create a data loader by calling get_loader() as above, the speed of loading data batches is quite fast. But if I add shuffle(self.data_files) after self.data_files = glob.glob('train_dnn' + '/*' + '/*.pik'), then the speed of loading batches (not counting shuffling and glob) becomes very slow. In both cases, I set shuffle=False in the data loader itself. I used 8 workers in both cases, and the model is a feed-forward DNN trained on the batches. A GeForce 980 card was used to train the model. Anyone have any ideas?
st117088
Why do you set shuffle=False in the dataloader itself? The idea is that you can make the idx that is used to find the data_file random, so you don’t have to preshuffle them first in MyDataset. Alternatively, you can pass your own Sampler instance to the Dataloader to control which samples you will get for each batch (another angle you can use to shuffle).
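As an illustration of the two options (a sketch of my own, reusing the MyDataset class from above):

from torch.utils.data import DataLoader
from torch.utils.data.sampler import RandomSampler

dset_train = MyDataset()

# option 1: let the DataLoader shuffle by index
loader = DataLoader(dset_train, batch_size=256, shuffle=True, num_workers=8)

# option 2: pass your own sampler to control which samples each batch gets
loader = DataLoader(dset_train, batch_size=256,
                    sampler=RandomSampler(dset_train), num_workers=8)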
st117089
Thank you for your response. I did try setting shuffle to True, so I do not need to shuffle the list myself. But setting shuffle to True caused a huge speed degradation in data loading compared with setting it to False. The strange thing is that even if I set shuffle to False and shuffle the data myself, the speed is still low. The fastest way is to set shuffle to False and also not shuffle the data myself... I am really confused.
st117090
If shuffling slows down your data loading, it’s probably the random access to your hard disk that is slow. Move your data to an SSD.
st117091
Hi, I'm trying to write a custom loss function. However, there seems to be something I'm not getting right... For now, this is just a dummy example.

import torch
import torch.nn as nn
from torch.autograd import Variable

class BCELossReg(torch.nn._functions.thnn.BCELoss):
    def __init__(self, ratio, size_averaged=True):
        super(BCELossReg, self).__init__(size_averaged)
        self.ratio = ratio

    def forward(self, input, target, n):
        result = super(BCELossReg, self).forward(input, target)
        result = self.ratio * result + (1-ratio) * result * n
        return result

# init
model = nn.Sequential(
    nn.Linear(5, 3),
    nn.ReLU(),
    nn.Linear(3, 1),
    nn.Sigmoid()
)
r = Variable(torch.Tensor([0.9]), requires_grad=False)
n = Variable(torch.Tensor([2]), requires_grad=False)
loss_f = BCELossReg(r)
optim = torch.optim.Adam(model.parameters(), lr=0.001)
optim.zero_grad()

# model forward pass
data = Variable(torch.Tensor([1, 2, 3, 4, 5]).view(1, -1))
prediction = model(data)
target = Variable(torch.Tensor([1]))

# loss and backward
loss = loss_f(prediction, target, n)
loss.backward()
optim.step()

When I'm executing this I'm getting the following error:

Traceback (most recent call last):
  File "BCELossReg.py", line 38, in <module>
    loss = loss_f(prediction, target, n)
  File "BCELossReg.py", line 14, in forward
    result = self.ratio * result + (1-ratio) * result * n
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/autograd/variable.py", line 761, in __mul__
    return self.mul(other)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/autograd/variable.py", line 305, in mul
    assert not torch.is_tensor(other)
AssertionError
st117092
Somewhere you're multiplying a Variable by a tensor when you need to multiply a Variable by a Variable. It's not clear from your snippet where that is. This looks like a bug: result = self.ratio * result + (1-ratio) * result * n. Instead of (1-ratio) you should have (1-self.ratio).
st117093
Is there a svelte way to obtain a vectorized view of the model parameters? Something that wouldn’t require a copy? E.g. is there a way to concatenate the results of .view(-1) of all the parameters but for it to remain a view so no copy is done? Thank you in advance.
st117094
you cannot do this for all model parameters, just because each parameter is on a separate storage. Concatenating them together will give you all of them together in a different storage (memory location). This kind of flattening of model parameters used to be a thing in (Lua)Torch, but we got rid of it. What exactly is your use-case, maybe we can help find you a solution? For something like LBFGS that needs a consistent single view, we have these helper functions, but they do things out of place (get a flattened view, then compute stuff, then push gradients back):

https://github.com/pytorch/pytorch/blob/master/torch/optim/lbfgs.py#L57-L76

def _gather_flat_grad(self):
    views = []
    for p in self._params:
        if p.grad is None:
            view = p.data.new(p.data.numel()).zero_()
        elif p.grad.data.is_sparse:
            view = p.grad.data.to_dense().view(-1)
        else:
            view = p.grad.data.view(-1)
        views.append(view)
    return torch.cat(views, 0)

def _add_grad(self, step_size, update):
    offset = 0
    for p in self._params:
        numel = p.numel()
        # view as to avoid deprecated pointwise semantics
        p.data.add_(step_size, update[offset:offset + numel].view_as(p.data))
        offset += numel
    assert offset == self._numel()
st117095
Thank you for the code snippet. That is the solution that I settled upon but, yes, I wanted something like what you described as being present only in Lua torch. My use case is that I added an optimizer that wants all the parameters in vector form (ported from Matlab). Just wanted to avoid the copies. What I have works for now. Maybe, between projects, I’ll hit you guys with a pull request with an attempt to re-add this functionality.
st117096
I found that the weights of the vgg/resnet models are provided as "*.pth" files, and I have always used other weights as npy. I wonder what the difference between them is, and can I use npy or h5 files as the weights?
st117097
file.npy is a saved numpy array, you can load it with np.load('file.npy') while file.pth is a saved torch tensor, that you can load with torch.load('file.pth')
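A small sketch of how the two interact (my own example; the file names are made up):

import numpy as np
import torch

arr = np.load('weights.npy')       # numpy array saved with np.save
tensor = torch.from_numpy(arr)     # convert it to a torch tensor (shares memory with the array)

state = torch.load('model.pth')    # e.g. a saved state_dict of torch tensors
torch.save(state, 'model_copy.pth')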
st117098
My network looks like this:

class MultiDigitsNet(torch.nn.Module):
    def __init__(self):
        super(MultiDigitsNet, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=3, out_channels=48, kernel_size=5, stride=1, padding=2)
        self.max_pool1 = torch.nn.MaxPool2d(kernel_size=2, stride=2)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.max_pool1(self.relu(self.conv1(x)))

When training the network, I encountered an error (see the attached screenshot of the traceback). I have no idea about the meaning of this error. Why is this error happening and how do I deal with it?
st117099
What is the type of the input you are giving to your network? Is it correctly a Double, Float, or Half tensor wrapped in a Variable?
st117100
Thank you! I forgot to convert the type of input to Float(Double). The conversion solves my problem.
st117101
In my experience, when you run into a cudnn error, the best approach is to go into pdb (not strictly necessary) and convert the input, weight, etc. to CPU tensors, then re-execute the command. You'll then see the error message from TH, which is more friendly.
st117102
Hello, I am pretty new to PyTorch and I am trying to implement the trainable layer proposed in this paper: https://arxiv.org/pdf/1607.05666.pdf Here is my code:

import torch.nn as nn
import torch
import numpy as np
from torch.autograd import Variable

class PCEN(nn.Module):
    def __init__(self, in_features, bias=False):
        super(PCEN, self).__init__()
        self.alpha = nn.Parameter(torch.Tensor(in_features,))
        self.delta = nn.Parameter(torch.Tensor(in_features,))
        self.r = nn.Parameter(torch.Tensor(in_features,))
        self.eps = torch.Tensor([0.00001])

    def forward(self, x, smoother):
        alpha = self.alpha.expand_as(x)
        delta = self.delta.expand_as(x)
        r = self.r.expand_as(x)
        pcen = (x/(self.eps + smoother)**alpha + delta)**r - delta**r
        return pcen

# 40 dimensional filterbank energies
pcen = PCEN(40)

# dummy data
feats = np.random.standard_normal(size=(10,40)).astype('float32')
smoother = np.random.standard_normal(size=(10,40)).astype('float32')
feats = Variable(torch.from_numpy(feats))
smoother = Variable(torch.from_numpy(smoother))
pcen_feats = pcen(feats, smoother)

Q. The eps parameter is to avoid division by zero; I don't need to use expand_as with it?
Q. The forward pass seems to be working, but I was wondering if there are any obvious errors? Do I need to use register buffers for the parameters?
Q. In the paper they say that to ensure parameter positivity, they do gradient updates on the log values of the parameters and then take exponentials. How can I go about doing this?

I have a couple more questions, but I will save them for now. Thanks, Gautam
st117103
Hello @Gautam_Bhattacharya, that seems like a great project!

Q. the eps parameter is to avoid division by zero and I don't need to use expand_as with it?
Yes, that usually is just the regularisation. I'd even leave it as a python float.

Q. The forward pass seems to be working, I was wondering if there are any obvious errors? Do I need to use register buffers for the parameters?
I think something is up with the indentation, but that is likely only the quoting, I have not checked in great detail.

Q. In the paper they say that to ensure parameter positivity, they do gradient updates on the log values of the parameters and then take exponentials. How can I go about doing this?
You could use self.log_alpha, log_delta, log_r as the parameters (but ideally init to something close to 0 instead of 1, too) and then do alpha = self.log_alpha.exp().expand_as(x).

I hope this helps. Best regards, Thomas
st117104
Thanks for the reply @tom. Yeah, the indent for the forward function got messed up while I was pasting the code.

"You could use self.log_alpha, log_delta, log_r as the parameters (but ideally init to something close to 0 instead of 1, too) and then do alpha = self.log_alpha.exp().expand_as(x)."

I am confused as to what you mean exactly. Let's say I init them properly; in the paper they initialize with a normal distribution with mean 1 and std 0.1.

Q. When exactly would I take the log? I thought I could do something like the following for a simple version of SGD, though it would be nice to use pytorch's optimizers:

for p in pcen.parameters():
    p_log = torch.log(p)
    p_log.data.add_(-learning_rate, p.grad.data)  # or p_log.grad.data?
    # and then somehow copy the exponentiated log parameters back to p

Q. Or is all this not necessary based on the approach you proposed?

Thanks, Gautam
st117105
Hi, apologies for being less clear. I'd do something like the following (the probability of the log going wrong is not that large, given that the mean is 10 standard deviations from 0):

class PCEN(nn.Module):
    def __init__(self, in_features, bias=False):
        super(PCEN, self).__init__()
        self.log_alpha = nn.Parameter((torch.randn(in_features)*0.1+1.0).log_())
        self.log_delta = nn.Parameter((torch.randn(in_features)*0.1+1.0).log_())
        self.log_r = nn.Parameter((torch.randn(in_features)*0.1+1.0).log_())
        self.eps = 0.00001

    def forward(self, x, smoother):
        alpha = self.log_alpha.exp().expand_as(x)
        delta = self.log_delta.exp().expand_as(x)
        r = self.log_r.exp().expand_as(x)
        pcen = (x/(self.eps + smoother)**alpha + delta)**r - delta**r
        return pcen

# 40 dimensional filterbank energies
pcen = PCEN(4)

# dummy data, energy is non-negative
feats = torch.randn(10,4).exp_()
smoother = torch.randn(10,4).exp_()
feats = Variable(feats)
smoother = Variable(smoother)
pcen_feats = pcen(feats, smoother)

This way, the backprop will just compute correct adjustments to the log parameters. My understanding is that they did it similarly. I also took the liberty to generate the dummy data in pytorch directly and to make it positive with exp_. The fractional powers don't really mix well with negative numbers (that is why your code got NaNs) and we all prefer positive energy.

Best regards, Thomas
st117106
Hey Tom, Thank you! I think this has to be the right way to do it. Yup, positive energy all the way. Only way to #feelthelearn Gautam
st117107
@tom Hi Thomas, I hope you still remember this post (not to mention see this one ) I have been experimenting with this model, and so far it does ok, but still degrades on my baseline. I still have a few optimization tricks to try. You said: This way, the backprop will just compute correct adjustments to the log parameters. My understanding is that they did it similarly. Does this mean that when I do my loss.backward() an inplace log will be taken for the associated parameters, before computing their gradients? I am just trying to check any possible loose end, though since I do get sensible results, I do think its more of an optimization issue. Thanks, Gautam
st117108
I’m trying to train my model in pytorch. With the same settings, Tesla k40c took 0.46 sec for an iteration whereas Nvidia Titan X took 0.18 sec. Both of them are 12 gb GPUs. Is this because of performance gap between the gpus?
st117109
Yes, the Titan X (if it’s the Pascal version) is much faster than the K40. That performance difference doesn’t surprise me at all, and you can get even bigger gaps between Kepler and Pascal (I’ve seen up to 6x) if you’re using FP16 or your task is very heavy on memory bandwidth.
st117110
Agreed with the comments of James Bradbury. If you are building/buying your own system, look at the Titan. If you are using a cloud provider, AWS uses the K40, although you can use multiple GPUs depending on the P2 instance size you are using. It is my understanding that AWS will be supporting the NVIDIA V100 this fall. The performance should be at least as good as the Titan X, if not better (https://devblogs.nvidia.com/parallelforall/inside-volta/). Disclosure - I work for AWS. Nick
st117111
Yeah, depending on your application Volta might even be as much faster than Titan X as Titan X is over K40/K80.
st117112
Hi, wouldn't it be better if nn.CrossEntropyLoss specified in its name that it also performs a softmax on the input? (as they do in TF) I hadn't read the description of this loss function and I was using it wrong, since I was applying a softmax to the output of my network right before CrossEntropyLoss. Also, my network looks like a typical FFNN (see below). What is the recommended way in PyTorch to handle this different network structure for training and inference? (inference should include softmax, training shouldn't). Thanks!

class Net(nn.Module):
    def __init__(self, num_inputs, num_u_hl1, num_u_hl2, num_outputs, dropout_rate):
        super(Net, self).__init__()
        self.cl0 = nn.Linear(num_inputs, num_u_hl1)
        self.cl1 = nn.Linear(num_u_hl1, num_u_hl2)
        self.cl2 = nn.Linear(num_u_hl2, num_outputs)
        self.d1 = nn.Dropout(dropout_rate)
        self.d2 = nn.Dropout(dropout_rate)

    def forward(self, x):
        x = self.cl0(x)
        x = self.d1(x)
        x = F.sigmoid(self.cl1(x))
        x = self.d2(x)
        x = F.softmax(self.cl2(x))
        return x
st117113
That is the intention for CrossEntropyLoss: apply softmax in training but not in inference (assuming you don't need a probabilistic representation). And you can use the training property to handle the network for training and inference:

if self.training:
    # code for training
else:
    # code for inference
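Applied to the Net posted above, that could look like this (a sketch of my own; the loss side stays nn.CrossEntropyLoss on the raw scores):

def forward(self, x):
    x = self.cl0(x)
    x = self.d1(x)
    x = F.sigmoid(self.cl1(x))
    x = self.d2(x)
    x = self.cl2(x)          # raw scores go straight into CrossEntropyLoss during training
    if not self.training:
        x = F.softmax(x)     # only produce probabilities at inference time
    return x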
st117114
Ok, thanks for the info! I still think that the CrossEntropyLoss should have a name that specifies that the softmax is included in that function.
st117115
Yes, I think the TensorFlow name is more clear. The name “CrossEntropyLoss” was inherited from Lua Torch.
st117116
Hi!!! Basically, I want to use a pre-trained model for classifying an image on the GPU. Could someone please help? I did the following (my model is alexnet and the image variable is img_var):

alexnet.cuda()
alexnet(img_var)

and it shows the following error:

File "try.py", line 168, in <module>
    alexnet(img_var)
RuntimeError: expected CPU tensor (got CUDA tensor)

Please help, I want to infer the image on the GPU.
st117117
You also need to move your tensor to the GPU:

alexnet.cuda()
alexnet(img_var.cuda())
st117118
I have done that already. And is it possible that my GPU infer time is very large than the cpu time
st117119
Hello. I am trying out stuff with pytorch. Basically i want to know how can I divide my pretrained model into 2 parts and then use them to classify an image. Like for example, I want to break an alexnet into two smaller models and then I send an image to the first sub model and then the output from that to the second sub model !!! Please help and thanks
st117120
I’m confused about what you mean by “splitting a model”, do you mean you just want to see an intermediate activation in AlexNet?
st117121
You can do that by modifying the AlexNet model example: https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py
st117122
I have written two small models, the first one containing the conv layers and the second containing the fully connected layers with help but while transferring the weights there is an error : KeyError : ‘unexpected key “0.weight” in state_dict’
st117123
class AlexNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 6 * 6)
        x = self.classifier(x)
        return x

class AlexNet_conv(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet_conv, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 6 * 6)
        return x

class AlexNet_classifier(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet_classifier, self).__init__()
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.classifier(x)
        return x

model_1 = AlexNet_conv()
model_2 = AlexNet_classifier()
pre_trained = alexnet(pretrained=True)
model_1.load_state_dict(pre_trained.features.state_dict())
model_2.load_state_dict(pre_trained.classifier.state_dict())
st117124
I would guess that the names of your modules are different from the names of the pre-trained AlexNet’s modules. There might be an easy way around that by changing your names, or you could assign the weights manually.
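For illustration, one way that should make the key names line up is to load the pre-trained sub-state-dicts into the matching submodules rather than into the whole model (a sketch based on the code above):

# pre_trained.features.state_dict() has keys like "0.weight", which match the keys of
# model_1.features (not of model_1 itself, whose keys look like "features.0.weight")
model_1.features.load_state_dict(pre_trained.features.state_dict())
model_2.classifier.load_state_dict(pre_trained.classifier.state_dict())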
st117125
Hi everyone, I'm using torch.nn.Embedding with sparse=True. However, I find that when I use optimizer.step(), the optimizer updates all rows in the Embedding layer, instead of updating only the used rows. That is, if the Embedding layer is 10,000 x 100 and I only use the 10th row, it will update the whole 10,000 x 100 matrix, instead of only the 10th row, whose size is only 1 x 100. I don't know if my understanding is correct, so I want to ask what the behavior is when the sparse param of the Embedding layer is set to True. Thank you all!

==================SOLUTION===================
Hi guys, I already solved this question with the help from @fmassa. The problem is caused by the use of momentum in SGD, which results in dense updates. The weight_decay param will also cause this problem. If I use the simplest version of SGD, the updating speed is much faster.
st117126
I didn’t set any value to weight_decay. According to the doc, I think in this case the value of weight_decay should be zero.
st117127
Yes, I do. However, It does relate to the optimizer. I use the momentum, which will result in the dense updates. Thanks for your suggestion!
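To make the conclusion concrete (my own sketch, not from the thread; model is assumed to contain the sparse nn.Embedding):

import torch.optim as optim

# momentum and weight_decay turn the sparse embedding gradients into dense updates
slow = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

# plain SGD keeps the updates sparse, so only the used embedding rows are touched
fast = optim.SGD(model.parameters(), lr=0.1)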
st117128
Hi, I am currently using Keras, which has a simplified interface for synchronous data parallelism. I would like to know if PyTorch has a similar function; since PyTorch advocates for speed, it had better support multi-GPU on a single node. Thanks, Shawn
st117129
Hi Shawn, yes we support multi-GPU on a single machine. Check out our examples:
https://github.com/pytorch/examples/tree/master/imagenet
https://github.com/pytorch/examples/tree/master/dcgan
Also check out the corresponding documentation: http://pytorch.org/docs/nn.html#multi-gpu-layers
st117130
Oh that's super nice, I'll have to give it a try later. So, basically I just need to wrap a torch.nn.DataParallel around my model and it's good to go!? Neat. PS: maybe it's worth mentioning the multi-GPU support in the Readme or so (e.g., as a https://github.com/pytorch/pytorch#a-gpu-ready-tensor-library subsection). For instance, as Adam Paszke wrote on https://github.com/apaszke/pytorch-dist: Multi-GPU ready. PyTorch is fully powered to efficiently use Multiple GPUs for accelerated deep learning. We integrate efficient multi-gpu collectives such as NVIDIA NCCL to make sure that you get the maximal Multi-GPU performance.
st117131
@rasbt thanks, we’ll add that in the next week or two – we plan to thoroughly benchmark ourselves and add that.
st117132
Hmm, have you figured out how to use DataParallel? I cant for the life of me get it to work! :-/
st117133
@Kalamaya have a look at the examples when in doubt. The imagenet example and the dcgan example both have example uses: https://github.com/pytorch/examples (a set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.)
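For reference, a minimal sketch of the pattern used in those examples (my own condensation; MyModel, loader, criterion, and optimizer are assumed names):

import torch
import torch.nn as nn

model = nn.DataParallel(MyModel()).cuda()   # replicates the module across all visible GPUs

for x, y in loader:
    x, y = x.cuda(), y.cuda()
    output = model(x)        # the batch is split along dim 0 and scattered to the GPUs
    loss = criterion(output, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()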
st117134
When using two GPUs, the speed is:

Train Epoch: 0 [900/18745 (0.048)]; Acc: 0.929; time cost: 0.722

When using one GPU, the speed is:

Train Epoch: 0 [15800/18745 (0.843)]; Acc: 0.905; time cost: 0.461

The code is written as follows:

model = models.resnet18(pretrained=True)
model = torch.nn.DataParallel(model).cuda()
x = x.cuda(async=True)    # there is no difference no matter whether we include async=True or not
yt = yt.cuda(async=True)  #
output = model(x)

When using two GPUs, the output is recorded as follows:

+------------------------------------------------------+
| NVIDIA-SMI 352.79     Driver Version: 352.79         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M40          Off   | 0000:06:00.0     Off |                    0 |
|   0%  56C    P0    74W / 250W |  2440MiB / 11519MiB  |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M40          Off   | 0000:87:00.0     Off |                    0 |
|   0%  37C    P0    87W / 250W |  1854MiB / 11519MiB  |     97%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     16788    C   python                                      1874MiB   |
|    0     56331    C   python                                       298MiB   |
|    0     58531    C   python                                       207MiB   |
|    1     16788    C   python                                      1797MiB   |
+-----------------------------------------------------------------------------+

When using one GPU, the output is recorded as follows:

+------------------------------------------------------+
| NVIDIA-SMI 352.79     Driver Version: 352.79         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M40          Off   | 0000:06:00.0     Off |                    0 |
|   0%  71C    P0   233W / 250W |  3878MiB / 11519MiB  |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M40          Off   | 0000:87:00.0     Off |                    0 |
|   0%  26C    P8    18W / 250W |    55MiB / 11519MiB  |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     33037    C   python                                      3312MiB   |
|    0     56331    C   python                                       298MiB   |
|    0     58531    C   python                                       207MiB   |
+-----------------------------------------------------------------------------+

How can we improve the efficiency using two GPUs?
st117135
I don't know what code you are using to benchmark that, but the numbers seem quite off. Multi-GPU on 2 GPUs should be pretty much the same as with Lua Torch right now (which is fast).
st117136
Parts of the code were brought from the ImageNet training example in PyTorch. The speed of PyTorch is indeed similar to that of Torch. What I wonder is why two GPUs run slower than one GPU?
st117137
If you have very small batches or a model that can't even fully utilize a single GPU, using many GPUs will only add communication overhead, without benefits.
st117138
If I define some math operations around Tensors and Variables which are not in torch.nn, can they be performed on multi-GPU (I think this is supported in Tensorflow)? @smth
st117139
I wanted to load the caffe model for Network in Network from the model zoo (https://www.dropbox.com/s/blrajqirr1p31v0/cifar10_nin.caffemodel?dl=1). I tried to follow the method given in the PyTorch docs: state_dict = torch.utils.model_zoo.load_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth') but I'm getting: AttributeError: module 'torch.utils' has no attribute 'model_zoo' Am I doing something wrong?
st117140
No, I did import it. It still doesn't work; do you suggest re-installing PyTorch or something? (screenshot of the attempt attached)
st117141
Using nn.CrossEntropyLoss() creates an object of CrossEntropyLoss, so you can only pass the weight parameter once, when you initialize it. If you want to adjust the weight dynamically, you can use the torch.nn.functional.cross_entropy() method while training.
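For illustration (a sketch of my own; logits and targets are assumed names), the functional form takes the class weights per call:

import torch
import torch.nn.functional as F

# weights can be recomputed every batch, e.g. from the label distribution in that batch
class_weights = torch.Tensor([1.0, 2.0, 0.5])
loss = F.cross_entropy(logits, targets, weight=class_weights)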
st117142
Hi all, I am not going to start a new issue and would like to discuss it here. I think that the default value of size_average=True is somewhat of a trap. I am working on NLP and am not very familiar with other areas. It took me two days to debug this. At least in NLP, I could not see any reasonable motivation to set this value to True by default.
st117143
Not always. Suppose in NMT task. The output with shape (100, 64, 30000), where 100 is output sequence length, 64 is the batch size and 30000 is the vocab size. We do a reshape to have output (100 * 64, 30000) and get the loss. However, we divide the loss by 64, not by 100*64. That is the problem I have met.
st117144
You have a constant sequence length. In NMT, you usually have variable-length sentences, and you want to normalize by the sentence length, right? Either way, I'm not an expert here; size_average has been the default in Torch, and continues to be in PyTorch. Maybe I can add a note in the basic tutorial about this. Thanks for the feedback.
st117145
It is an example and the sentences are of variable length. But it seems we don’t normalize the loss by sentence length, for example in opennmt’s implementation https://github.com/OpenNMT/OpenNMT-py/blob/master/train.py#L169 39
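A sketch of the normalization being discussed (my own illustration; output, target, vocab_size, and batch_size are assumed names, with output reshaped to (seq_len * batch, vocab)):

import torch.nn.functional as F

# sum the per-token losses, then normalize by the batch size rather than by the number of tokens
loss = F.cross_entropy(output.view(-1, vocab_size), target.view(-1), size_average=False)
loss = loss / batch_size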
st117146
How do I implement a convolution function without the weight-sharing property (a locally connected layer)? Or should I build my own function to do that?
st117147
The source code of the Conv2dLocal module is:

class Conv2dLocal(Module):
    def __init__(self, in_height, in_width, in_channels, out_channels,
                 kernel_size, stride=1, padding=0, dilation=1):
        super(Conv2dLocal, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = _pair(kernel_size)
        self.stride = _pair(stride)
        self.padding = _pair(padding)
        self.dilation = _pair(dilation)
        self.in_height = in_height
        self.in_width = in_width
        self.out_height = math.floor(
            (in_height + 2 * self.padding[0] - self.dilation[0] * (self.kernel_size[0] - 1) - 1) / self.stride[0] + 1
        )
        self.out_width = math.floor(
            (in_width + 2 * self.padding[1] - self.dilation[1] * (self.kernel_size[1] - 1) - 1) / self.stride[1] + 1
        )
        self.weight = Parameter(torch.Tensor(
            self.out_height, self.out_width, out_channels, in_channels, *self.kernel_size))
        self.bias = Parameter(torch.Tensor(
            out_channels, self.out_height, self.out_width))
        self.reset_parameters()
        # print(self.out_height, self.out_width, self.bias.size())

    def reset_parameters(self):
        n = self.in_channels
        for k in self.kernel_size:
            n *= k
        stdv = 1. / math.sqrt(n)
        self.weight.data.uniform_(-stdv, stdv)
        self.bias.data.uniform_(-stdv, stdv)

    def __repr__(self):
        s = ('{name}({in_channels}, {out_channels}, kernel_size={kernel_size}'
             ', stride={stride}')
        if self.padding != (0,) * len(self.padding):
            s += ', padding={padding}'
        if self.dilation != (1,) * len(self.dilation):
            s += ', dilation={dilation}'
        if self.bias is None:
            s += ', bias=False'
        s += ')'
        return s.format(name=self.__class__.__name__, **self.__dict__)

    def forward(self, input):
        func = self._backend.SpatialConvolutionLocal(
            self.kernel_size[1], self.kernel_size[0],
            self.stride[1], self.stride[0],
            self.padding[1], self.padding[0],
            self.in_width, self.in_height,
            self.out_width, self.out_height)
        return func(input, self.weight, self.bias)

The way I use it:

self.conv5_local = nn.Conv2dLocal(in_channels=256, out_channels=256, in_height=8, in_width=8,
                                  kernel_size=3, stride=1, padding=0)

The bug is:

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (float, float, int, int, int, int), but expected one of:
 * no arguments
 * (int ...)
      didn't match because some of the arguments have invalid types: (!float!, !float!, !int!, !int!, !int!, !int!)
 * (torch.FloatTensor viewed_tensor)
 * (torch.Size size)
 * (torch.FloatStorage data)
 * (Sequence data)

How do I debug this problem?
st117148
I was trying to install PyTorch from source for development purposes. What do these warnings mean? How do I suppress these warning messages during compilation?

CMake Warning:
  Manually-specified variables were not used by the project:

    NO_CUDA
    THCS_LIBRARIES
    THCUNN_SO_VERSION
    THC_LIBRARIES
    THD_SO_VERSION
    THNN_SO_VERSION
    THPP_LIBRARIES
    THS_LIBRARIES
    TH_SO_VERSION
st117149
It seems that nn.Linear cannot process 3D (or higher-dimensional) tensors, nor let you specify which dimension it should operate on.
st117150
I'm unable to use nn.init as stated in the documentation. I previously used nninit, which was merged into the official version in 0.1.10.

Error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'init'

I've ensured my version is updated; conda list | grep pytorch outputs:
pytorch    0.1.12    py27_2cu80 [cuda80]    soumith
st117151
Was originally using:

import torch.nn as nn
nn.init()

But this works:

from torch.nn import init
init()
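For context, a typical call into that module looks like this (a sketch of my own; one calls functions from the init module rather than calling init() itself):

import torch.nn as nn
from torch.nn import init

layer = nn.Linear(10, 5)
init.uniform(layer.weight, -0.1, 0.1)   # older releases; newer ones use the in-place init.uniform_
init.constant(layer.bias, 0.0)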
st117152
They weren’t importing init into the nn module. This has been fixed in a recent PR and should be in the next release. You can have it now if you compile from source on master. Unfortunately, the documentation is always reflective of master, not the latest stable release, so it can have a lot of things that don’t work in the version you may have if you’re just pip or conda installing PyTorch. I wish they would let you select the version of the docs you want to view. If that’s already possible, I haven’t found out how, but then again, I’ve made it a habit to just install PyTorch from source every time and keep it updated. That seems the best course of action with a project that is rapidly improving.
st117153
we’ve now made versioned documentation. You can click on the arrow on the docs to go to a different version than master.
st117154
Hi, I trained my GAN model using both Tesla M40 and Titan X. I felt that Titan X is faster than M40. Is this normal? Thanks.
st117155
M40 has ecc checking enabled, which means that it’s a few percent slower (<10%). You can disable ecc on your M40.
st117156
Hi, I would like to change the outputs of a module forward_hook, given some torch tensors. For example, I would like to change the outputs of each relu layer in the module, given select_maps; each output of a relu layer is multiplied by the associated select_maps. I wrote a rough version of my idea as follows:

def hook_forward_net(model, select_maps, layers):
    select_output_maps = []

    def fun(module, inputs, outputs):
        print(module)
        print('\t'+'inputs size: ', inputs[0].size())
        print('\t'+'outputs size: ', outputs[0].size())
        outputs = torch.mul(outputs[0].data.cpu(), select_maps)

    keys0 = list(model._modules.keys())
    for key0 in keys0:
        value0 = model._modules.get(key0)
        if type(value0) == torch.nn.modules.container.Sequential:
            if type(value0._modules.get('2')) == torch.nn.modules.activation.ReLU:
                relu = value0._modules.get('2')
                hook = relu.register_forward_hook(fun)

Does anyone know how to do that in PyTorch?
st117157
How can I do the convolution backward pass manually, without a forward pass, if I have an input tensor, a grad_output, and a weight tensor? I found that conv2d uses ConvNd = torch._C._functions.ConvNd for the forward pass. Also, I found a ConvBackward function here, but I don't know how to use it.
st117158
Maybe there is something I missed, but why do you want to do backward propagation without forwarding first? The computational graph is built in the forward pass, without which it is not possible to do backward propagation.
st117159
I want to use a custom forward function with the standard convolutional backward function. Maybe I should use the standard function when I need backpropagation. Thank you.
st117160
Hi, there is indeed a register_forward_hook function available. It is called every time after the forward function, but it is not intended to change the result of the forward computation, because modifying the forward result in the hook will not change the computational graph, which can result in wrong gradient calculation in the backward phase. I guess this function is intended for logging or something similar. If you would like to implement a custom forward function, it might be better to subclass the Conv layer and provide a custom forward implementation.
st117161
I don’t think there’s a way to do that at the moment because ConvBackward function is not constructible from Python, and we didn’t think of making that possible. It might be supported after the current autograd refactor.
st117162
Thank you for your reply. Now I use a register_backward_hook to process the grad_input of a Conv layer without bias. It raises TypeError: expected Variable, but hook returned 'NoneType' when the hook function returns grad_data, grad_weight, None to replace the original grad_input. It seems that I cannot return a None, but what should I return if I don't have a bias?
st117163
Hi, I had a similar problem, but for me it was a customized backward step, when calling the standard convolution in the forward pass. I built a workaround by calling the convolution from torch.nn._functions, as you can see here: Call backward on function inside a backpropagation step Maybe this could help you? Maybe anyone in here has some ideas regarding my problem?
st117164
Hi, What is the difference between nn.functional.conv_transpose2d and nn.ConvTranspose2d? Is nn.ConvTranspose2d used for training and nn.functional.conv_transpose2d just used for visualization?
st117165
There is no difference. One is the module interface (nn.ConvTranspose2d), while the other is the functional interface. Also, note that nn.ConvTranspose2d uses functional.conv_transpose2d internally, and the module interface is simply a convenience interface that creates the weights and biases for you.
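To make the equivalence concrete (a sketch of my own):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(1, 16, 8, 8))

# module interface: creates and owns the weight/bias for you
deconv = nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1)
out_module = deconv(x)

# functional interface: you supply the weight/bias yourself
out_functional = F.conv_transpose2d(x, deconv.weight, deconv.bias, stride=2, padding=1)
# out_module and out_functional are identical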
st117166
Hi, this is probably just me missing some critical information. I am getting some NaNs (loss -> inf) in my loss function, so I decided to investigate where these weights come from. In the process I tried to print out the sum of all the different parameters in my model (alongside their dims):

def sum_params(model):
    s = []
    for p in model.parameters():
        dims = p.size()
        n = p.cpu().data.numpy()
        s.append((dims, np.sum(n)))
    return s

Now for the embedding layer I got the tuple:

(torch.Size([20000, 300]), -38459.16)

However, I have previously initialized the embedding as (I tried both methods, not sure what is "correct" in PyTorch):

init.uniform(self.embedding.weight, -0.1, 0.1)
self.embedding.weight.data.uniform_(-0.1, 0.1)

So the sum of weights should be "close" to 0, as its expected value is 0. By explicitly calling sum() on self.embedding.weight I got the value -48.4358, which seems more legit. Order of calls: init(), sum() -> correct size, model.parameters() -> wrong size. When I then start training, the sum() immediately matches the sum over model.parameters(); this is also observed in the first forward pass BEFORE any backprop is done. So I am just wondering what I am missing. Why are my weights overwritten when I start doing forward passes? The same happens when I remove all backprop/training. Again, this is probably just me missing some info, but I have a hard time wrapping my head around this, and it is annoying as I would like to migrate from TensorFlow to this excellent library.
st117167
OK, got it working. It seems I messed up a git merge and re-inserted the code that loads the pre-trained embeddings after initializing the model. For some reason, after struggling with this for some days, writing a post here made me find the bug. Now the difference is within float accuracy.