st118568
Create an nn.Parameter weight of your desired shape in the constructor of your model, and then in forward, just use torch.mm to multiply the input by the weight to get the output.
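For concreteness, here is a minimal sketch of that pattern (the shapes and the init scale are placeholders):

import torch
import torch.nn as nn

class LinearNoBias(nn.Module):
    def __init__(self, in_features, out_features):
        super(LinearNoBias, self).__init__()
        # registered as a Parameter so the optimizer sees it
        self.weight = nn.Parameter(torch.randn(in_features, out_features) * 0.01)

    def forward(self, x):
        # x: (batch, in_features) -> (batch, out_features)
        return torch.mm(x, self.weight)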
st118569
Just the same code, same data. The only difference is the version of PyTorch. When I saved the image, the colors became much deeper (see the attached before/after screenshots).
st118570
Pass the option normalize=True to save_image, like this: https://github.com/pytorch/examples/blob/master/dcgan/main.py#L254-L256
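A minimal sketch of that call (here fake stands in for whatever image batch you are saving):

import torchvision.utils as vutils

# normalize=True rescales the tensor into the [0, 1] range before writing
vutils.save_image(fake.data, 'samples.png', normalize=True)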
st118571
Hi all, I’m trying to generate a meshgrid directly in PyTorch. The result I’m looking for, given two arrays of [0,1], is this:

[[0,0], [0,1], [1,0], [1,1]]

You can do this with np.meshgrid:

grid = np.meshgrid(range(2), range(2), indexing='ij')
grid = np.stack(grid, axis=-1)
grid = grid.reshape(-1, 2)

and with itertools.product:

grid = list(itertools.product(range(2), range(2)))

But does anyone know how you’d do that directly in PyTorch? Thanks!

EDIT: Hm, this seems to work:

x = torch.Tensor([0,1])
torch.stack([x.repeat(2), x.repeat(2,1).t().contiguous().view(-1)], 1)

x = torch.Tensor([0,1,2])
torch.stack([x.repeat(3), x.repeat(3,1).t().contiguous().view(-1)], 1)

Full function for different sizes:

def generate_grid(h, w):
    x = torch.range(0, h-1)
    y = torch.range(0, w-1)
    grid = torch.stack([x.repeat(w), y.repeat(h,1).t().contiguous().view(-1)], 1)
    return grid

grid = generate_grid(2, 3)
# 0 0
# 1 0
# 0 1
# 1 1
# 0 2
# 1 2
# [torch.FloatTensor of size 6x2]
st118572
About torch.mode: what is its function? Is it used to get the modal (most common) value in the passed array, like scipy.stats.mode? Is there any reference? Thanks
st118573
Thanks, I could not understand the explanation in the documentation. Are there other related materials?
st118574
en.wikipedia.org — Mode (statistics): The mode of a set of data values is the value that appears most often. If X is a discrete random variable, the mode is the value x (i.e., X = x) at which the probability mass function takes its maximum value. In other words, it is the value that is most likely to be sampled. Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the ...
st118575
When I do not use batch norm, training is 2-3 times slower than when I use it. I wrote code to print a log every 10 iterations; in the training phase, the log prints much faster when batch norm is enabled. Is this normal? Thanks.
st118576
I want to use hard attention instead of soft attention in the translation example.
st118577
I just updated to the newest version of PyTorch. I want to use the init module, but got the following error:

In [1]: torch.__version__
Out[1]: '0.1.11_5'

In [2]: torch.nn.init
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-6-a03bb0a8e5dc> in <module>()
----> 1 torch.nn.init

AttributeError: module 'torch.nn' has no attribute 'init'
st118578
@Shawn1993 that is how standard Python packages behave. Without importing a submodule, it won't be available.
st118579
I can do something like import torch.nn.functional as F, but I can't do the same operation with the init package.
st118580
Yes, you are right! It just can't be autocompleted in IPython. It seems a silly question now.
st118581
No, it is not a silly question. Look at this:

import torch.nn as nn
import torch.nn.init

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2883, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in
    import torch.nn.init
  File "/home/adel/Desktop/pycharm-community-2016.3.3/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
ImportError: No module named init
st118582
Hi, I want to know if I can use a PyTorch model directly in the (Lua) Torch framework, or if there is a tool to convert the model to Torch. Thanks.
st118583
There are no tools to directly convert a PyTorch model to Lua Torch. However, you can always save the parameters of your PyTorch model in an HDF5 file, which you can then load from Lua Torch.
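For example, a rough sketch of the HDF5 export side (assuming a model object named model and the h5py package):

import h5py

# write each parameter tensor as a dataset keyed by its state_dict name
with h5py.File('model_params.h5', 'w') as f:
    for name, param in model.state_dict().items():
        f.create_dataset(name, data=param.cpu().numpy())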
st118584
Actually I am a very beginner in torch. I made the model in Keras and it went well, but in PyTorch it is not converging. I just want to know whether this is an error or something I don't know about torch. The input is the question and the output is the answer.

class Classifier(nn.Module):
    def __init__(self, num_labels=503, vocab_size=880):
        super(Classifier, self).__init__()
        self.embed = nn.Embedding(vocab_size, 128)
        self.linear1 = nn.Linear(128, 128)
        self.linear2 = nn.Linear(128, num_labels)

    def forward(self, bow_vec):
        layer1 = self.embed(bow_vec)
        layer2 = layer1.sum(1).squeeze(1)
        layer3 = F.relu(self.linear1(layer2))
        layer3 = F.relu(self.linear1(layer3))
        layer4 = self.linear2(layer3)
        return layer4

x_loaders = torch.utils.data.DataLoader(train_x, batch_size=512, num_workers=4)
y_loaders = torch.utils.data.DataLoader(new_y, batch_size=512, num_workers=4)

losses = []
loss_function = nn.CrossEntropyLoss()
model = Classifier()
optimizer = optim.Adam(model.parameters(), lr=0.01)

for epoch in range(50):
    total_loss = torch.Tensor([0])
    for x, y in zip(x_loaders, y_loaders):
        inputs, labels = Variable(x), Variable(y, requires_grad=False)
        model.zero_grad()
        log_probs = model(inputs)
        loss = loss_function(log_probs, labels)
        # do the backward pass and update the gradient
        loss.backward()
        optimizer.step()
        total_loss += loss.data
    losses.append(total_loss[0])
print(losses)

BTW, new_y is a vector of the target indices. Sorry for the long code, but this is my first time. Thanks
st118585
Are you sure x and y are a one-to-one match? Is the learning rate the same as in Keras? In layer3 = F.relu(self.linear1(layer3)), are you sure you want to apply self.linear1 twice? You can also add a print in forward to make sure the numbers do not overflow:

def forward(self, bow_vec):
    layer1 = self.embed(bow_vec)
    layer2 = layer1.sum(1).squeeze(1)
    print(layer2.data)
    # .....
    layer3 = F.relu(self.linear1(layer2))
    layer3 = F.relu(self.linear1(layer3))
    layer4 = self.linear2(layer3)
    return layer4
st118586
First, thank you so much! But what is the meaning of "x and y are a one-to-one match"?
st118587
If x is the data and y is the label, usually we would put them in one dataset to make sure that y is the label of x, especially when you want to use shuffling. But I guess it's OK here. Also, try initializing the linear layers with methods from torch.nn.init.
st118588
I made sure my code is exactly the same as the Keras code. The difference now is that the Keras model just gives better results after the same number of epochs, e.g. after 15 epochs the accuracy is 27.5% for the Keras code and 22.5% for the PyTorch code. It would be really great if you have any advice for me.
st118589
Did you take this advice from chenyuntc? "Also try init the linear layers with methods from torch.nn.init" There is probably still a difference in your model.
st118590
It’s hard to say; there are so many details to look at carefully, such as the optimization method: do you use weight_decay (default 0 in PyTorch)? Are the betas the same in Adam? Besides, do you use the same validation dataset and the same batch size (with the same number of epochs, too large a batch size usually converges slower)? The initialization of the embedding layer also seems too small to me; is it the same as in Keras? Generally, it won't matter much if the convergence rates are close, since PyTorch is much faster than Keras.
st118591
I am loading a pretrained model of vgg16:

vgg16 = torchvision.models.vgg16(pretrained=True)

I am getting the following error:

---------------------------------------------------------------------------
ReadError                                 Traceback (most recent call last)
<ipython-input-21-48bdd58aa112> in <module>()
----> 1 vgg16=torchvision.models.vgg16(pretrained=True)

/home/sarthak/anaconda2/lib/python2.7/site-packages/torchvision/models/vgg.pyc in vgg16(pretrained, **kwargs)
    122     model = VGG(make_layers(cfg['D']), **kwargs)
    123     if pretrained:
--> 124         model.load_state_dict(model_zoo.load_url(model_urls['vgg16']))
    125     return model
    126

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/utils/model_zoo.pyc in load_url(url, model_dir)
     55         hash_prefix = HASH_REGEX.search(filename).group(1)
     56         _download_url_to_file(url, cached_file, hash_prefix)
---> 57     return torch.load(cached_file)
     58
     59

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in load(f, map_location, pickle_module)
    246         f = open(f, 'rb')
    247     try:
--> 248         return _load(f, map_location, pickle_module)
    249     finally:
    250         if new_fd:

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in _load(f, map_location, pickle_module)
    312             return deserialized_objects[int(saved_id)]
    313
--> 314     with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \
    315             mkdtemp() as tmpdir:
    316

/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in open(cls, name, mode, fileobj, bufsize, **kwargs)
   1691             else:
   1692                 raise CompressionError("unknown compression type %r" % comptype)
-> 1693             return func(name, filemode, fileobj, **kwargs)
   1694
   1695         elif "|" in mode:

/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in taropen(cls, name, mode, fileobj, **kwargs)
   1721         if mode not in ("r", "a", "w"):
   1722             raise ValueError("mode must be 'r', 'a' or 'w'")
-> 1723         return cls(name, mode, fileobj, **kwargs)
   1724
   1725     @classmethod

/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel)
   1585         if self.mode == "r":
   1586             self.firstmember = None
-> 1587             self.firstmember = self.next()
   1588
   1589         if self.mode == "a":

/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in next(self)
   2368                 continue
   2369             elif self.offset == 0:
-> 2370                 raise ReadError(str(e))
   2371         except EmptyHeaderError:
   2372             if self.offset == 0:

ReadError: invalid header
st118592
Either you are not on the latest PyTorch (0.1.11), or the download didn't finish and the file is corrupted. Clear $HOME/.torch.
st118593
I deleted the file and then downloaded it again, but I still get the same error. I get the same error with vgg19, but not with resnet50 or alexnet. My PyTorch version is 0.1.7. How do I get the latest version? I tried conda update pytorch but it says it's up to date.
st118594
I want to double the size of the input, so I am using max_unpool2d.
Input size: 1x16x16; desired size: 1x32x32. I write this:

nnFunctions.max_unpool2d(self.resnet.layer4(x), kernel_size=(2,2), stride=(2,2))

But I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in ()
----> 1 net=train(train_loader,net,1,410)

<ipython-input-26-0904d8b22f1a> in train(train_loader, net, epochs, total_samples)
     15
     16         # forward + backward + optimize
---> 17         outputs = net(inputs)
     18         loss = criterion(outputs, labels)
     19         loss.backward()

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    208
    209     def __call__(self, *input, **kwargs):
--> 210         result = self.forward(*input, **kwargs)
    211         for hook in self._forward_hooks.values():
    212             hook_result = hook(self, input, result)

<ipython-input-20-808a1f82a64b> in forward(self, x)
     17         x=self.resnet.layer2(x)
     18         x=self.resnet.layer3(x)
---> 19         x=nnFunctions.max_unpool2d(self.resnet.layer4(x),kernel_size=(2,2),stride=(2,2))
     20         x=self.custom_net(x)
     21         return x

TypeError: max_unpool2d() takes at least 3 arguments (3 given)
st118595
What should be passed in place of the indices argument? I have a pretrained ResNet and a custom model that I created.
st118596
You should set the property return_indices=True on your pooling layers. See the documentation example for one usage: http://pytorch.org/docs/nn.html#maxunpool2d
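A small self-contained sketch of the pool/unpool pairing:

import torch
import torch.nn as nn
from torch.autograd import Variable

pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

x = Variable(torch.randn(1, 16, 16, 16))
out, indices = pool(x)           # indices records where each max came from
restored = unpool(out, indices)  # back to 1 x 16 x 16 x 16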
st118597
ResNet is pretrained, so how can I obtain the indices from the pretrained pooling layers?
st118598
Is there a design choice behind why cubes (3D tensors) can't be directly handled by a transform layer? They are, after all, already Tensor objects, and the semantics seem unambiguous to me. Is it a "no time to do it just yet" issue, or is there some deeper reason that I'm missing? The question I'm asking is answered in this thread, if anyone needs the "answer": "How to pass a 3D tensor to Linear layer? I have a 3D tensor (5x9x12). I want to cast it to a (5x9x1) tensor through the linear layer. But I found that nn.Linear requires that the input be a matrix instead of a 3D tensor. How can I achieve my task?"
st118599
It's sort of ambiguous what you want to do with a cube. Would one do batch matrix multiply (like torch.bmm) or broadcasted matrix multiply… Either way, it hasn't been thought through and implemented.
st118600
Interesting, I hadn’t thought of the ambiguity. I guess I assumed that batch matrix multiply was the natural extension of 2d matrix multiplication, but I can see your point. Thanks for the reply. And I understand this isn’t an issue since the operation can already be achieved.
st118601
I am trying to generate an image from a category/label in the ImageNet dataset. How can I use any of the torchvision models in a way that takes the label as input and generates the image as output (i.e. ImageNet label to ImageNet image)? How can a GAN model help me in this scenario? Thanks a lot for the help and time.
st118602
that’s a very open question. Do you have something more specific you want to ask?
st118603
Hi, I am interested in how unused computational graphs are automatically garbage collected in pytorch. I used to use dynet (another dynamic NN library), where I can use renew_cg() to remove all previously created nodes every time I started creating a new graph for the current training example. However in pytorch, everything seems to be handled automatically, regardless whether I call __call__ of an nn.Module or directly calling the function that implements the computation. Is there any source code/documentation that I can refer to? Thanks!
st118604
The graphs are freed automatically as soon as the output Variable holding onto the graph goes out of scope. Python implements refcounting, so the freeing is immediate.
st118605
For example:

x = Variable(...)

# Example 1
try:
    y = x ** 2
    z = y * 3
except:
    pass
# graph is freed here

# Example 2
try:
    y = x ** 2
    z = y * 3
    z.backward(...)
    # graph is freed here
except:
    pass

# Example 3
try:
    y = x ** 2
    z = y * 3
    z.backward(..., retain_variables=True)
except:
    pass
# graph is freed here
st118606
The fundamental difference is that DyNet’s graphs are global objects held by the singleton autograd engine. In PyTorch (and Chainer) the graphs are attached to the variables that are involved in them and (as Soumith demonstrated) go out of scope when those variables do.
st118607
Hi. When I loaded MSCOCO detection data with the command below:

det = dset.CocoDetection(root='./train2014',
                         annFile='./annotations/instances_train2014.json',
                         transform=trans.Compose([trans.Scale([448,448]),
                                                  trans.ToTensor(),
                                                  trans.Normalize((.5,.5,.5),(.5,.5,.5))]))
trainLoader = torch.utils.data.DataLoader(det, batch_size=16, num_workers=2)
trainItr = iter(trainLoader)
images, labels = trainItr.next()

I think everything is fine except the labels value. I received an empty labels variable when using trainItr.next(). Here is the printed value of the labels variable (each key repeated once per image in the batch of 16):

[[('image_id', 'image_id', ... 16 times), ('iscrowd', ... 16 times), ('category_id', ... 16 times), ('segmentation', ... 16 times), ('area', ... 16 times), ('id', ... 16 times), ('bbox', ... 16 times)]]

How can I solve this?
st118608
Normally the COCO dataset uses the official loaders provided by the COCO dataset, so if there is a problem with it, it might be that your data is not exactly in the format provided by the dataset. Also, to make debugging easier, you don't need to call next on the dataloader; you can just use the dataset and index it:

images, labels = det[0]  # idx of your img, here 0
st118609
Thanks for your response. This command, I mean images, labels = det[0], works for me, but I would like to work with torch.utils.data.DataLoader. So anyway, I just want to report this problem to the PyTorch developers.
st118610
You need to write your own collate_fn in this case, that specifies how the list of targets will be joined together, as the default one doesn’t handle the case you need.
st118611
For reference, here is how the default_collate is implemented in the PyTorch dataloader. You will need to adapt it to handle your specific case.
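A minimal sketch of such a custom collate_fn for this case (the function name is made up; det is the CocoDetection dataset from above):

import torch

def coco_collate(batch):
    # stack the image tensors, but keep the per-image annotation dicts as a list
    images = torch.stack([item[0] for item in batch], 0)
    targets = [item[1] for item in batch]
    return images, targets

loader = torch.utils.data.DataLoader(det, batch_size=16, num_workers=2,
                                     collate_fn=coco_collate)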
st118612
The following code is the G network update part of the dcgan example (examples/dcgan folder):

############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.data.fill_(real_label)  # fake labels are real for generator cost
output = netD(fake)
errG = criterion(output, label)
errG.backward()
D_G_z2 = output.data.mean()
optimizerG.step()

optimizerG is defined before as:

optimizerG = optim.Adam(netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))

Here, errG comes from output, and output comes from netD. How are errG and netG connected? I mean, which lines of code make errG be backpropagated through netG, even though there is no explicit link?
st118613
github.com pytorch/examples/blob/master/dcgan/main.py#L230

input.resize_as_(real_cpu).copy_(real_cpu)
label.resize_(batch_size).fill_(real_label)
inputv = Variable(input)
labelv = Variable(label)
output = netD(inputv)
errD_real = criterion(output, labelv)
errD_real.backward()
D_x = output.data.mean()

# train with fake
noise.resize_(batch_size, nz, 1, 1).normal_(0, 1)
noisev = Variable(noise)
fake = netG(noisev)
labelv = Variable(label.fill_(fake_label))
output = netD(fake.detach())
errD_fake = criterion(output, labelv)
errD_fake.backward()
D_G_z1 = output.data.mean()
errD = errD_real + errD_fake
optimizerD.step()

fake = netG(noise)
st118614
I need to reshape a tensor of size [12, 1, 28, 28]: flatten the last two dimensions and remove the second, so that the shape becomes [12, 784] (28*28 = 784). Is there a method similar to reshape() in numpy?
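For reference, view is the usual PyTorch analogue of numpy's reshape:

x = torch.randn(12, 1, 28, 28)
y = x.view(12, 784)           # or x.view(x.size(0), -1) to infer the flat size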
st118615
Looking at the examples, it seems there are two ways to initialize a network. The first is to use nn.Sequential, to which one passes, in order, the layerwise operations one wants the network to have. The other is to define a class inheriting from Module that contains an __init__ and forward (and optionally backward) method, where in __init__ one explicitly defines the layerwise operations the network is composed of, and in forward the calculations necessary to go from input to output. As I understand it, the second method is useful when one has a more complicated structure, something recursive for example. But then I don't understand why, save for the one example in the tutorial examples, I never see nn.Sequential being used anywhere, even in something as simple as the MNIST example https://github.com/pytorch/examples/blob/master/mnist/main.py. Could there be more advantages to using the second method (instead of just passing modules to nn.Sequential)?

On to weight initialization. I'm not sure what the proper way to do this is. It seems there are again two options. The first is to loop over the modules and then, depending on the instance, perform an operation or not (looking at the docs, it seems Linear has fields weight and bias). Or one could use the parameters generator function of the network, although I'm not sure how one would then differentiate between parameters one wants to change and ones one doesn't. While playing around I noticed, due to using the ELU activation function, that the alpha parameter also seems to be included in the parameters of the model. Does this mean it is also a parameter that will be optimized? How can I disable that (the equivalent of requires_grad=False on a tensor)?

Also, have I understood correctly that the type of my features and targets defines where the operations will be run? How can I pick a specific GPU if I have multiple?
st118616
To answer your three questions:

1. We chose to make the examples best practices. We don't suggest users use Sequential except for basic convenience; Sequential becomes inflexible very quickly.
2. You can use the recently added function http://pytorch.org/docs/nn.html#torch.nn.Module.named_parameters to filter out just the ELU parameters and not send them to the optimizer.
3. You can use the environment variable CUDA_VISIBLE_DEVICES="device_id" to control which GPU to use. For example:

CUDA_VISIBLE_DEVICES=2 python main.py  # uses the third GPU (id 2)
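A sketch of the filtering idea from point 2 (the name test is an assumption; check what named_parameters() actually reports for your model):

import torch.optim as optim

# keep every parameter whose name does not contain 'alpha'
params = [p for name, p in model.named_parameters() if 'alpha' not in name]
optimizer = optim.Adam(params, lr=1e-3)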
st118617
I know this link, but when I feed it the output of an RNN or LSTM it gives a complicated graph. Is there a way to avoid that?
st118618
No way yet, but better visualization will be possible in the future after the autograd refactor is merged
st118619
AttributeError                            Traceback (most recent call last)
in ()
     28 dataset = IF(root=data_root, transform=torchvision.transforms.ToTensor())
     29 loader = data_utils.DataLoader(dataset, batch_size=5,shuffle=True)
---> 30 train_dataset, test_dataset = train_test_split(dataset, .2)
     31 trainloader = data_utils.DataLoader(train_dataset, batch_size=20, shuffle=True)
     32 testloader = data_utils.DataLoader(test_dataset, batch_size=20, shuffle=True)

in train_test_split(dataset, test_size)
     15     train_dataset = copy.deepcopy(dataset)
     16     test_dataset = copy.deepcopy(dataset)
---> 17     total_n = train_dataset.len()
     18     rand_perm = permutation(total_n)
     19     cutoff = int(test_size * total_n)

AttributeError: 'ImageFolder' object has no attribute 'len'

I am trying to implement the code below for learning PyTorch (loading files to use in a classifier), where the input is jpg images:

import torch
import torchvision
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import torch.nn.parallel
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
import torch.nn.functional as F
import copy
from numpy.random import permutation

def train_test_split(dataset, test_size):
    train_dataset = copy.deepcopy(dataset)
    test_dataset = copy.deepcopy(dataset)
    total_n = train_dataset.len()
    rand_perm = permutation(total_n)
    cutoff = int(test_size * total_n)
    test_dataset.imgs = [dataset.imgs[rand_perm[i]] for i in range(0, cutoff)]
    train_dataset.imgs = [dataset.imgs[rand_perm[i]] for i in range(cutoff, total_n)]
    return train_dataset, test_dataset

# create dataset/loader for train and test
from torchvision.datasets import ImageFolder as IF
import torchvision
import torch.utils.data as data_utils

data_root = './Genuine/'
dataset = IF(root=data_root, transform=torchvision.transforms.ToTensor())
loader = data_utils.DataLoader(dataset, batch_size=5, shuffle=True)
train_dataset, test_dataset = train_test_split(dataset, .2)
trainloader = data_utils.DataLoader(train_dataset, batch_size=20, shuffle=True)
testloader = data_utils.DataLoader(test_dataset, batch_size=20, shuffle=True)
classes = dataset.classes
print(classes)
st118620
can you format your code using triple quotes, like this ``` code ``` From what I see, maybe ImageFolder didn’t find any images
st118621
Hi, I would like to know: what type of file does the command torch.save(object, f) save? A tar file or a binary file? It seems that it should save the network in tar format, but my installed version of PyTorch turns it into binary format. Could you please help me?
st118622
Hi, is there a plan to implement more matrix math functions, as in TF (https://www.tensorflow.org/api_guides/python/math_ops#Matrix_Math_Functions, https://github.com/HIPS/autograd/blob/master/autograd/numpy/linalg.py), such as solving lower triangular systems, Cholesky decomposition, etc.? Thanks.
st118623
Yes, we are constantly adding new operations (and more pull requests are welcome too). Recently a community member added triangular factorization and solve. See this for a full list: http://pytorch.org/docs/torch.html#blas-and-lapack-operations
st118624
Is the weighted all pairs loss implemented? The closest I’ve seen were MultiMarginLoss and MarginRankingLoss (which does not seem to support weights).
st118625
I think it's not implemented, but you can create one with just torch.* autograd operations.
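For instance, a minimal sketch of a weighted all-pairs hinge loss built only from autograd ops (this exact formulation is an assumption, not a standard module):

import torch
from torch.autograd import Variable

def weighted_all_pairs_hinge(scores, pair_weights, margin=1.0):
    # scores: Variable of shape (n,); pair_weights: Tensor of shape (n, n)
    n = scores.size(0)
    s_i = scores.unsqueeze(1).expand(n, n)  # row i holds scores[i]
    s_j = scores.unsqueeze(0).expand(n, n)  # column j holds scores[j]
    # penalize pair (i, j) when scores[i] does not beat scores[j] by the margin
    hinge = (margin - (s_i - s_j)).clamp(min=0)
    return (Variable(pair_weights) * hinge).sum()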
st118626
I encountered inconsistent torch.max() behaviour when running it on CPU and GPU, which can be reproduced by:

import torch
x = torch.FloatTensor(2, 10, 10)
x[0, :, :] = 1
x[1, :, :] = 2
x[:, 3:7, 3:7] = 0

value, idx = torch.max(x, 0)
print(idx)

(0 ,.,.) =
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  0  0  0  0  1  1  1
  1  1  1  0  0  0  0  1  1  1
  1  1  1  0  0  0  0  1  1  1
  1  1  1  0  0  0  0  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
[torch.LongTensor of size 1x10x10]

and

value, idx = torch.max(x.cuda(), 0)
print(idx)

(0 ,.,.) =
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
  1  1  1  1  1  1  1  1  1  1
[torch.cuda.LongTensor of size 1x10x10 (GPU 0)]

I supposed both the CPU and GPU output should be consistent?
st118627
This is an ambiguous case. Both results are correct: the CPU and GPU return correct results but might not be consistent with each other when breaking ties. Similar to max, you will see this behavior when breaking ties in min, sort, topk, etc. The reason it is hard to make CPU and GPU consistent is that enforcing consistency would mean taking a huge hit in GPU performance.
st118628
Hello, I receive NaN values for the cost function from the first epoch. Could you please tell me what is going wrong? I define the network as below.

class MyNet(nn.Module):
    def __init__(self, extractor):
        super(MyNet, self).__init__()
        self.features = nn.Sequential(
            # select features
            *list(extractor.children())[:-2]
        )
        self.maxpool1 = nn.MaxPool2d(2, 2)
        self.conv1 = nn.Conv2d(512, 1024, 3, padding=1)
        self.batchNorm1 = nn.BatchNorm2d(1024)
        self.conv2 = nn.Conv2d(1024, 512, 1)
        self.batchNorm2 = nn.BatchNorm2d(512)
        self.conv3 = nn.Conv2d(512, 1024, 3, padding=1)
        self.batchNorm3 = nn.BatchNorm2d(1024)
        self.conv4 = nn.Conv2d(1024, 512, 1)
        self.batchNorm4 = nn.BatchNorm2d(512)
        self.conv5 = nn.Conv2d(512, 1024, 3, padding=1)
        self.batchNorm5 = nn.BatchNorm2d(1024)
        self.final = nn.Conv2d(1024, 30, 1)

    def forward(self, input):
        output = self.features(input)
        output = self.maxpool1(output)
        output = f.leaky_relu(self.batchNorm1(self.conv1(output)), 0.1)
        output = f.leaky_relu(self.batchNorm2(self.conv2(output)), 0.1)
        output = f.leaky_relu(self.batchNorm3(self.conv3(output)), 0.1)
        output = f.leaky_relu(self.batchNorm4(self.conv4(output)), 0.1)
        output = f.leaky_relu(self.batchNorm5(self.conv5(output)), 0.1)
        output = f.leaky_relu(f.dropout(output, p=0.5))
        output = self.final(output)
        return output

and here is the initialization:

resnet18 = torchvision.models.resnet18(pretrained=True)
net = MyNet(resnet18)
for param in net.features.parameters():
    param.requires_grad = False

conv1Params = list(net.conv1.parameters())
conv2Params = list(net.conv2.parameters())
conv3Params = list(net.conv3.parameters())
conv4Params = list(net.conv4.parameters())
conv5Params = list(net.conv5.parameters())
convFinalParams = list(net.final.parameters())

conv1Params[0].data.normal_(0.0, 0.0002)
conv2Params[0].data.normal_(0.0, 0.0002)
conv3Params[0].data.normal_(0.0, 0.0002)
conv4Params[0].data.normal_(0.0, 0.0002)
conv5Params[0].data.normal_(0.0, 0.0002)
convFinalParams[0].data.normal_(0.0, 0.0002)

Here is the Adam optimizer initialization:

input = V(torch.randn(1, nc, imageSize[0], imageSize[1]))
parameters = (p for p in list(net.parameters())[-12:])
learning_rate = 1e-4
optimizer = optim.Adam(params=parameters, lr=learning_rate)

Could you tell me where the problem is?

Edit: I made the changes below to my forward function:

def forward(self, input):
    output = self.features(input)
    print("........... %f" % (output.data.mean()))
    output = self.maxpool1(output)
    print("........... %f" % (output.data.mean()))
    output = f.leaky_relu(self.batchNorm1(self.conv1(output)), 0.1)
    print("........... %f" % (output.data.mean()))
    output = f.leaky_relu(self.batchNorm2(self.conv2(output)), 0.1)
    output = f.leaky_relu(self.batchNorm3(self.conv3(output)), 0.1)
    print("........... %f" % (output.data.mean()))
    output = f.leaky_relu(self.batchNorm4(self.conv4(output)), 0.1)
    print("........... %f" % (output.data.mean()))
    output = f.leaky_relu(self.batchNorm5(self.conv5(output)), 0.1)
    print("........... %f" % (output.data.mean()))
    output = f.dropout(output, p=0.5)
    print("........... %f" % (output.data.mean()))
    output = self.final(output)
    # output = f.sigmoid(output)
    return output

And here are the outputs (I have to say that I did backprop per one image):

(1,1) -> Current Batch Loss: nan
........... 0.893032
........... 1.491872
........... 0.180793
........... nan
........... nan
........... nan
........... nan

The remaining iterations (1,2) through (1,19) show the same pattern: the first three printed means stay around 0.89, 1.5 and 0.18, and everything after that is nan, with the batch loss nan every time.
st118629
I think you should print output.abs().max() rather than output.mean(). Also, I think 0.89 is a really large number for an average. Maybe try adding a BatchNorm layer after features.
st118630
Is it possible to create a higher-order tensor with no elements, e.g., torch.LongTensor(1, 0) ? This would be convenient, but the current behavior seems to be returning a tensor with dimensions equal to the prefix until the first zero, e.g., torch.LongTensor(1, 0, 3, 4) yields a tensor of size 1. Are you considering changing this behavior? Is there any good reason for it? Thanks.
st118631
There is no way of constructing tensors with placeholder dimensions. What is the use-case exactly? (I can't think of one.) We will not be changing this behavior.
st118632
In the use-case that I came across, I would start with a tensor of size (1, 0) and increment the second dimension by one at each step of the algorithm. The first dimension also increases, but not as regularly. This use-case came up in beam search. It is not a big deal; I just added a test based on the number of elements, which can be obtained through numel.
st118633
Is there any simple and common gradient-checking method when extending an autograd function?
st118634
from torch.autograd import gradcheck (source: https://github.com/pytorch/pytorch/blob/master/torch/autograd/gradcheck.py). Check out the tests for examples of how to use it.
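A minimal usage sketch (torch.sigmoid here is just a stand-in for your own Function):

import torch
from torch.autograd import Variable, gradcheck

# double precision keeps the numerical vs. analytical comparison tight
inputs = (Variable(torch.randn(3, 4).double(), requires_grad=True),)
print(gradcheck(torch.sigmoid, inputs, eps=1e-6, atol=1e-4))  # True if they match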
st118635
Thanks a lot! I could fix the backward of my function. This should appear at http://pytorch.org/docs/notes/extending.html; it is a very important tool.
st118636
It’s been added only recently and we forgot about that. Can you send a PR please?
st118637
Hello, I wrote a subclass for solving triangular systems, then I tried to use gradcheck, but it reports False. Could you help review this code? Thanks.

class SolveTrianguler(Function):
    # solves A * x = b
    def __init__(self, trans=0, lower=True):
        super(SolveTrianguler, self).__init__()
        # trans=1: transpose the matrix, solving A.T * x = b
        self.trans = trans
        # lower=False: use the data contained in the upper triangle; the default is lower
        self.lower = lower
        # self.needs_input_grad = (True, False)

    def forward(self, matrix, rhs):
        x = torch.from_numpy(
            solve_triangular(matrix.numpy(), rhs.numpy(),
                             trans=self.trans, lower=self.lower))
        self.save_for_backward(matrix, x)
        return x

    def backward(self, grad_output):
        # grad_matrix = grad_rhs = None
        matrix, x = self.saved_tensors
        # formula from Giles 2008, section 2.3.1
        return -matrix.inverse().t().mm(grad_output).mm(torch.t(x)), \
               matrix.inverse().t().mm(grad_output)
st118638
Hi, just wondering if there is a typical number of epochs one should train for. I am training a few CNNs (ResNet18, ResNet50, InceptionV4, etc.) for image classification and was not sure what the usual number of epochs is. 50? 100? Does it perhaps depend on the training set size? Thanks
st118639
It depends on learning rate, net architecture, optimization strategy… But usually you should focus on the loss. When you’ve tried your best but still can’t make the loss decrease, it may be enough.
st118640
Is there any rule about when different variables share the same storage? How can I simply tell whether the output and input of an operation are saved in one memory storage?
st118641
if you use inplace operations on the input Variable, then the output will share the same storage. in-place operations are postfixed with an _ symbol. For example: x.add_(y) (inplace) and x.add(y) (out-of-place)
st118642
Hey, I wrote a model to generate sequential images. My model looks like this: netV's input is noise and a hidden state; its output is an image, the next hidden state and the next noise. I use a loop to make netV generate many images, which looks like this:

images = []
for i in range(8):
    image, noise_next, hidden_next = netV(noise, hidden)
    noise = noise_next
    hidden = hidden_next
    images.append(image)

The forward pass is OK. However, I'm not sure whether the backward works. Can the grad backpropagate normally? I don't know how the grad flows in the backward pass.
st118643
I don’t know how you do the backward. Could you give an example? AFAIK, hidden.backward() would be ok, just like RNN.
st118644
The backward is: netD's input is many images, and its output is 0 or 1.

images = []
for i in range(8):
    image, noise_next, hidden_next = netV(noise, hidden)
    noise = noise_next
    hidden = hidden_next
    images.append(image)

real_label is a batch x 1 tensor filled with 1. The backward looks like this:

output = netD(images)
criterion = nn.BCELoss()
error = criterion(output, real_label)
error.backward()

In my test, when the number of images is less than 5, the grad can backpropagate. However, when the number is more than 5, the grad disappears. BTW, I'm not sure this is right.
st118645
If your code goes like this, it will work:

images = torch.cat(images)
output = netD(images)

Also, you can have a look at the DCGAN example:

github.com pytorch/examples/blob/master/dcgan/main.py#L228-L237

D_x = output.data.mean()

# train with fake
noise.resize_(batch_size, nz, 1, 1).normal_(0, 1)
noisev = Variable(noise)
fake = netG(noisev)
labelv = Variable(label.fill_(fake_label))
output = netD(fake.detach())
errD_fake = criterion(output, labelv)
errD_fake.backward()
st118646
That's different. I am trying to generate sequential images; netD's input is more like a video.
st118647
I want to add up (or take the mean of) the embedded vectors after this: self.embeddings = nn.Embedding(vocab_size, embedding_dim). Or, alternatively, apply a linear transformation to every embedded vector (with shared weights, of course).
st118648
For the first question:

embs = self.embeddings(input)
mean_embs = embs.mean(1).squeeze(1)

For the second question:

embs = self.embeddings(input)
trans_embs = self.linear(embs.view(-1, embedding_dim)).view(embs.size(0), embs.size(1), -1)
st118649
I have image data of size 410x1x657x1625, where 410 is the number of images. I have masks of the same dimensions, where a pixel value is 255 if it is part of text, else 0. I train my network with the SmoothL1Loss loss function without sizeAverage, adding up the loss and then dividing by the total number of pixels, i.e. 410x1x657x1625; the per-pixel loss comes out to approximately 35. But when I plot the predicted values or the predicted mask for the training data, I get 0 for every pixel. I can't understand the problem.
st118650
Hi, it’s best to post your code as well - it makes it easier to help spot any problems.
st118651
Here's my network:

class convNet(nn.Module):
    # constructor
    def __init__(self):
        super(convNet, self).__init__()
        # defining layers in convnet
        # input size = 1*657*1625
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(16, 64, kernel_size=3, stride=1, padding=1)
        # self.bn1 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
        self.pconv1 = nn.Conv2d(128, 128, kernel_size=(3,3), stride=1, padding=(1,1))
        # self.bn2 = nn.BatchNorm2d(64)
        self.pconv2 = nn.Conv2d(128, 128, kernel_size=(3,7), stride=1, padding=(1,3))
        self.pconv3 = nn.Conv2d(128, 128, kernel_size=(7,3), stride=1, padding=(3,1))
        self.conv4 = nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(64, 1, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = nnFunctions.relu(self.conv1(x))
        x = nnFunctions.relu(self.conv2(x))
        x = nnFunctions.relu(self.conv3(x))
        # parallel conv
        x = nnFunctions.relu(self.pconv1(x) + self.pconv2(x) + self.pconv3(x))
        x = nnFunctions.relu(self.conv4(x))
        x = nnFunctions.relu(self.conv5(x))
        return x

Initialization:

net = convNet()
net.cuda()

Loss function:

def L1Loss(predicted, target):
    loss = Variable.abs(predicted - target).sum()
    return loss

Learning rate:

learning_rate = 1e-10

Train function:

def train(train_loader, net, epochs, total_samples):
    global learning_rate
    prev_loss = 0
    for epoch in range(int(epochs)):  # loop over the dataset multiple times
        optimizer = optim.Adagrad(net.parameters(), lr=learning_rate,
                                  lr_decay=0.25, weight_decay=1e-4)
        running_loss = 0.0
        for i, data in enumerate(train_loader):
            inputs, labels = data
            # wrap them in Variable
            inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward + backward + optimize
            outputs = net(inputs)
            loss = L1Loss(outputs, labels)
            loss.backward()
            optimizer.step()
            # print statistics
            running_loss += loss.data[0]
            cur_loss = loss.data[0]
            print('Batch ' + str(i) + ':' + str(cur_loss))
        running_loss = running_loss / 26790000.0
        print('\t Iteration ' + str(epoch) + ':' + str(running_loss))
        # if prev_loss < running_loss:
        #     learning_rate /= 10
        prev_loss = running_loss
    print('Finished Training')
    return net

Testing:

images, labels = dataiter.next()
net.cuda()
predicted = net(Variable(images).cuda())

dataiter is an iterator on the train loader. Printing predicted.cpu() gives 0 for all values.
st118652
I have a pre-trained NN model which was trained on GPU, and now I want to demonstrate some results but I need to do that using the CPU (because of resource limitations). I tried to load the model states using the CPU but am getting an UNKNOWN error. Everything works perfectly if I use the GPU. Not sure if this information is important: I used data parallelism while training on the GPU.
st118653
Let's say your model's name is net. To train on GPU you must have written net.cuda(). After training, transfer the model to CPU with net.cpu(), save it using torch.save, and load it using torch.load.
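Putting that together (MyModel is a hypothetical constructor for your architecture):

net.cpu()                                      # move all parameters to host memory
torch.save(net.state_dict(), 'model_cpu.pth')

# later, possibly on a CPU-only machine:
net = MyModel()
net.load_state_dict(torch.load('model_cpu.pth'))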
st118654
The problem is, I have already trained and saved the model. Is there any way I can load the states of a model already trained on GPU?
st118655
I have tried this:

state_dict = torch.load(f, map_location=lambda storage, loc: storage)

Still getting the error. Function to load model states on CPU:

def load_model_states_without_dataparallel(model, filename):
    """Load a previously saved model states."""
    filepath = os.path.join(args.save_path, filename)
    with open(filepath, 'rb') as f:
        state_dict = torch.load(f, map_location=lambda storage, loc: storage)
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        name = k[7:]  # remove `module.`
        new_state_dict[name] = v
    model.load_state_dict(new_state_dict)

I am getting the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
RuntimeError: cuda runtime error (30) : unknown error at /py/conda-bld/pytorch_1490983232023/work/torch/lib/THC/THCGeneral.c:66

During handling of the above exception, another exception occurred:

SystemError                               Traceback (most recent call last)
<ipython-input-12-facf4f3e448a> in <module>()
----> 1 helper.load_model_states_without_dataparallel(model, 'model_loss_3.097534_epoch_4_model.pt')
      2 model.eval()
      3 print('Model, embedding index and dictionary loaded.')

/net/if5/wua4nw/wasi/academic/research_with_prof_wang/projects/seq2seq_cover_query_generation/source_code/helper.py in load_model_states_without_dataparallel(model, filename)
     71     filepath = os.path.join(args.save_path, filename)
     72     with open(filepath, 'rb') as f:
---> 73         state_dict = torch.load(f)
     74     new_state_dict = OrderedDict()
     75     for k, v in state_dict.items():

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/serialization.py in load(f, map_location, pickle_module)
    227         f = open(f, 'rb')
    228     try:
--> 229         return _load(f, map_location, pickle_module)
    230     finally:
    231         if new_fd:

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/serialization.py in _load(f, map_location, pickle_module)
    375     unpickler = pickle_module.Unpickler(f)
    376     unpickler.persistent_load = persistent_load
--> 377     result = unpickler.load()
    378
    379     deserialized_storage_keys = pickle_module.load(f)

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/serialization.py in persistent_load(saved_id)
    346             if root_key not in deserialized_objects:
    347                 deserialized_objects[root_key] = restore_location(
--> 348                     data_type(size), location)
    349             storage = deserialized_objects[root_key]
    350             if view_metadata is not None:

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/serialization.py in default_restore_location(storage, location)
     83 def default_restore_location(storage, location):
     84     for _, _, fn in _package_registry:
---> 85         result = fn(storage, location)
     86         if result is not None:
     87             return result

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/serialization.py in _cuda_deserialize(obj, location)
     65     if location.startswith('cuda'):
     66         device_id = max(int(location[5:]), 0)
---> 67         return obj.cuda(device_id)
     68
     69

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/_utils.py in _cuda(self, device, async)
     55     if device is None:
     56         device = -1
---> 57     with torch.cuda.device(device):
     58         if self.is_sparse:
     59             new_type = getattr(torch.cuda.sparse, self.__class__.__name__)

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/cuda/__init__.py in __enter__(self)
    127         if self.idx is -1:
    128             return
--> 129         _lazy_init()
    130         self.prev_idx = torch._C._cuda_getDevice()
    131         if self.prev_idx != self.idx:

/if5/wua4nw/anaconda3/lib/python3.5/site-packages/torch/cuda/__init__.py in _lazy_init()
     88                 "Cannot re-initialize CUDA in forked subprocess. " + msg)
     89     _check_driver()
---> 90     assert torch._C._cuda_init()
     91     assert torch._C._cuda_sparse_init()
     92     _cudart = _load_cudart()

SystemError: <built-in function _cuda_init> returned a result with an error set
st118656
Hi. I have a question. I trained a deep neural network on GPU, and then finally moved it to CPU mode. After saving the CPU-mode network with

torch.save(net.state_dict(), 'final.pth')

moving it to another system, loading it with

net.load_state_dict(torch.load('./final.pth'))

and running it, I encountered the error below. Could you please help me figure out the problem with my code?

Traceback (most recent call last):
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 185, in nti
    n = int(s.strip() or "0", 8)
ValueError: invalid literal for int() with base 8: 'ons\nOrde'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 2287, in next
    tarinfo = self.tarinfo.fromtarfile(self)
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 1086, in fromtarfile
    obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 1028, in frombuf
    chksum = nti(buf[148:156])
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 187, in nti
    raise InvalidHeaderError("invalid header")
tarfile.InvalidHeaderError: invalid header

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "TestCode.py", line 357, in
    net.load_state_dict(torch.load('./final.pth'))
  File "/home/mohammad/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 248, in load
    return _load(f, map_location, pickle_module)
  File "/home/mohammad/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 314, in _load
    with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 1582, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 1612, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 1475, in __init__
    self.firstmember = self.next()
  File "/home/mohammad/anaconda3/lib/python3.6/tarfile.py", line 2299, in next
    raise ReadError(str(e))
tarfile.ReadError: invalid header

I have to say that my server's PyTorch version is 0.1.11 and my own system's is 0.1.8. Is this the source of the error? Thanks

Edit 1: I tried the code above on our server. It is actually fine, without error. These errors started appearing when I reinstalled PyTorch.

Edit 2: The saved format before the reinstallation of PyTorch on our server was .tar, but after reinstallation it changed to binary. Moreover, I should say that all of the conversions are done on the server!
st118657
@apaszke, @fmassa, @albanD, @smth Could you please help me settle this issue?
st118658
@mderakhshani did you figure out why it said “cannot read file data” when upgrading pytorch ?
st118659
@smth, actually upgrading was not applicable for me! I uninstalled Anaconda first and then reinstalled it.
st118660
I know how to make a scheduler and decrease the learning rate after a few steps; I want something like ReduceLROnPlateau() in Keras.
st118661
Hi @Bassel, I'd recommend looking at the following thread for the current state ("not included") and the various ways to do it with current torch (be sure to read the later posts to match current PyTorch versions): "Adaptive learning rate - How do I change the learning rate of an optimizer during the training phase?" Best regards, Thomas
st118662
Hi, I spent a day debugging this, and thought I'd share my findings about batch normalization seemingly overfitting. Here is my setup: I have 24*7 = 168 models, one for each hour of a week, with a few hundred thousand samples to train each. Out of 168 models, 166 trained with consistently high accuracy, one was mediocre, and one was overfitting badly (training loss at 1e-5, test loss 0.5). The architecture used batch normalization in fully connected layers. I tried different learning rates, different batch sizes (128, 1024, 2048, 4096, 8192, 16384), and shuffling the data, all in vain. When I removed a few samples from the data set, the overfitting disappeared, but it did not really depend on which samples I removed. Then it turned out that my training data set has 196609 = 16384*12 + 1 samples. With PyTorch's dataloader (http://pytorch.org/docs/_modules/torch/utils/data/dataloader.html) and any batch size of 2^n for n <= 15 (up to 32768), the last batch is exactly 1 element. The way running averages are computed then gives BatchNorm1d a variance estimate that is basically unusable. In the training data set with mediocre performance there were 180227 = 16384*11 + 3 samples. The solution was to split the training and testing data sets carefully so that all batches in the training data set have the same specified size. But something more robust is needed so that BatchNorm is less fragile here: either ensure that all batches fed into BatchNorm have the same size and issue an error/warning otherwise, or compute the running average while taking the batch size into consideration. I'd be happy to propose a patch, but would like to hear opinions first, or perhaps this was already covered earlier. David
st118663
Hi @dtolpin, thank you for sharing this interesting problem and the detailed analysis. To me, your first option (making all batches the same size) sounds like the one that is more reasonable in practice. Quite likely, you could just pick random samples to duplicate for this and be done with it. I must admit that I am quite unsure whether I interpret PyTorch's momentum parameter correctly, but if it means something like alpha in

running_mean_estimate = alpha * running_mean_estimate + (1 - alpha) * minibatch_mean

I would expect something more like 0.9 rather than PyTorch's default of 0.1. So changing the momentum might help, too, in particular if your analysis for option 2 (use minibatch size in the running average computation) is correct. If you wanted to go down option 2, the other (and I would almost expect it to be the more significant) shortcoming of batch normalization as described in Ioffe and Szegedy's original article as Algorithm 1 is that during training, the mean and std are taken from the current minibatch. For very small minibatches, I would expect that to be disadvantageous, and using a regularization like

regularized_mean_estimate = (actual_batchsize * minibatch_mean + (target_batchsize - actual_batchsize) * running_mean_estimate) / target_batchsize
regularized_variance_estimate = ((actual_batchsize - 1) * minibatch_variance + (target_batchsize - actual_batchsize) * running_variance_estimate) / (target_batchsize - 1)

to work much better. (You could have a fancy Bayesian thing to average them, too, and find out why and how my weights above are rubbish, but it might be a starting point.) As I said above, in practice I would probably go with amending the data to fill up the last minibatch. On the other hand, it might be fun to see which of your suggested running mean/std estimate updates, the blanket momentum adjustment, and regularization in the training batch normalisation works best. Best regards, Thomas
st118664
I am new to PyTorch and I have written a custom nn layer. I have two weight parameters which I declared in the __init__ function as follows:

self.weight_forward = nn.Parameter(torch.Tensor(self.length, self.config.emsize))
self.weight_backward = nn.Parameter(torch.Tensor(self.length, self.config.emsize))

Everything is working fine. I just want to know, when I call loss.backward(), whether these weight parameters get updated with the other network parameters. Please note, this custom layer is part of my full model and working as per my expectation. I just want to make sure that when PyTorch does the backpropagation, it considers these weight parameters in the computational graph.
st118665
Your parameters will be part of the computational graph if you used them to compute the [loss] variable you start backpropagation from. There are some special cases when backpropagation is not performed in some sub-graph; you can read more in the docs. But parameters do not get updated during backpropagation. Only their gradients are changed (the new gradients are added to the existing values) when calling backward(). Parameters are usually updated afterwards by an optimizer, based on the gradients computed during backpropagation.
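A sketch of how the pieces fit in a typical step (model, criterion, inputs and targets are placeholders):

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

optimizer.zero_grad()                      # clear gradients from the last step
loss = criterion(model(inputs), targets)
loss.backward()                            # accumulate new gradients into .grad
optimizer.step()                           # parameter values change only here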
st118666
I have two tensors of shape 16 x 1 x 300 and 16 x 9 x 300 (16 is the batch size). I want to use torch.bmm, but I need to convert 16 x 1 x 300 to 16 x 300 x 1 to get a final result of 16 x 9 x 1. I can do that with X.squeeze(1).unsqueeze(2), but I am wondering whether this is the right way to do it. Can anyone suggest anything better? One more question: if I want to swap two dimensions of a tensor, how can I do that? For example, I want to convert a 16 x 9 x 300 tensor to 16 x 300 x 9. It's really just transposing a matrix, but here I have a 3D tensor. I guess I can do that using torch.transpose(). Is that correct?
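For what it's worth, transpose handles both steps in one call; a small sketch:

a = torch.randn(16, 1, 300)
b = torch.randn(16, 9, 300)

# (16, 9, 300) bmm (16, 300, 1) -> (16, 9, 1)
out = torch.bmm(b, a.transpose(1, 2))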
st118667
We can use torch.load to load a pretrained model (e.g., an xx.pth file). But how can we initialize a network from a pretrained network within the same run? For example, I have two networks with the same structure, net1 and net2. During training, I train net1 first, and then I want to load net1's weights into net2 to initialize it. Can I somehow do that without saving net1's weights to a file and then loading them back into net2? Thank you.
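For reference, one way that avoids the filesystem entirely is to pass one model's state_dict straight to the other:

# copies all weights and buffers from net1 into net2 in memory
net2.load_state_dict(net1.state_dict())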