st45068
I believe this post may have a point about proposed DropConnect implementations being wrong when they rely on the ‘dropout’ method. Can anyone confirm?
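For context, a minimal sketch of the distinction being discussed (a hypothetical module, not any of the proposed implementations): DropConnect samples a fresh mask over the *weights*, whereas dropout masks activations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropConnectLinear(nn.Module):
    """Hypothetical sketch: drop individual weights (DropConnect),
    not activations (dropout)."""
    def __init__(self, in_features, out_features, p=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.p = p

    def forward(self, x):
        if self.training:
            # Sample a Bernoulli mask over the weight matrix itself.
            mask = (torch.rand_like(self.linear.weight) > self.p).float()
            weight = self.linear.weight * mask / (1 - self.p)  # inverted scaling
            return F.linear(x, weight, self.linear.bias)
        return self.linear(x)
```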
st45069
Exporting the operator max_unpool2d to ONNX opset version 11 is not supported. Please let me know if this issue has been fixed. Is there any workaround available for this issue?
st45070
I am trying to parallelize CUDA initialization on multiple GPUs. I am trying to make my PyTorch model’s initialization faster, and I have noticed that this initialization takes about 3 s per GPU on a P6000. A simple piece of code like the following

```python
pool = mp.get_context('spawn').Pool(torch.cuda.device_count())
pool.map(torch.Tensor([0]).cuda, range(torch.cuda.device_count()))
```

does not seem to show any performance improvement compared to

```python
for gpu in range(torch.cuda.device_count()):
    torch.Tensor([0]).cuda(device=gpu)
```

It looks like these operations cannot be parallelized. Am I doing something wrong? Is there any way to make this faster?
st45071
Hi all, I am trying to calculate the SVD of some complex matrices. It works fine on the CPU, but on the GPU some errors happen. Here is the snippet of my code and its corresponding output:

```python
print("M_final is: ", M_final)
print(M_final.dtype)
print(M_final.device)
SVD_result = torch.svd(M_final)
print(SVD_result)
```

Output:

```
M_final is:  tensor([[ 0.2688+1.2463e-05j, -0.2223+1.0665e-01j],
        [-0.1943-1.3049e-01j,  0.0545-3.9980e-01j],
        [-0.5253-5.2017e-01j,  0.1036+3.6822e-01j],
        [-0.2718-4.8315e-01j, -0.2331-5.5959e-01j],
        [ 0.0798+1.1285e-01j, -0.2115-4.6674e-01j]], device='cuda:0')
torch.complex64
cuda:0
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-129-4a7caa855681> in <module>
     30 print(M_final.dtype)
     31 print(M_final.device)
---> 32 SVD_result = torch.svd(M_final)
     33 print(SVD_result)

RuntimeError: "svd_cuda" not implemented for 'ComplexFloat'
```

Does this mean that torch.svd does not support complex tensors on the GPU? I checked some posts and issues, and it seemed that this functionality is actually supported on the GPU: https://github.com/pytorch/pytorch/pull/42738 Can anyone give some suggestions on this? Thanks in advance!
st45072
Hello Yuchen!

Yuchen_Mu: Hi all, I am trying to calculate the SVD of some complex matrices. It worked fine on the CPU, but when it comes to the GPU, some errors happened, … Can anyone give some suggestions on this?

If you can tolerate the risks of living on the “bleeding edge,” it appears that the nightly build (version 1.8.0) now has complex svd on the gpu:

```python
>>> import torch
>>> torch.__version__
'1.8.0.dev20201203'
>>> torch.svd (torch.randn ([2, 5], dtype = torch.cfloat, device = 'cuda:0'))
torch.return_types.svd(
U=tensor([[-0.9239+0.0000e+00j, -0.3826+1.1088e-08j],
        [ 0.0543+3.7870e-01j, -0.1311-9.1457e-01j]], device='cuda:0'),
S=tensor([2.3032, 1.6723], device='cuda:0'),
V=tensor([[ 0.3954-0.1435j, -0.2532+0.1638j],
        [ 0.0011+0.3421j,  0.1400-0.3463j],
        [ 0.0546-0.4828j, -0.3884-0.4571j],
        [ 0.1831+0.3384j, -0.4973-0.1605j],
        [-0.2001-0.5309j,  0.0373-0.3678j]], device='cuda:0'))
```

Best.
K. Frank
st45073
Hi Frank! Nice to see you here again and thank you very much! Let me try it and see what happens. Regards, Yuchen
st45074
Hi Frank, I tried the nightly version, and yes, this version supports the (forward) computation of the SVD for complex matrices. But when I implemented a customized layer which includes this complex-matrix SVD computation and put this layer inside my model, the following error occurred during training: RuntimeError: svd does not support automatic differentiation for outputs with complex dtype. It seems the 1.8 version still does not support autograd for the SVD when complex matrices are used. Any suggestions, or should I wait for the PyTorch development team to improve this functionality? Regards, Yuchen
st45075
Hello Yuchen! Yuchen_Mu: RuntimeError: svd does not support automatic differentiation for outputs with complex dtype. It seems like 1.8 verson still does not support the aotugrad for SVD when complex matrix is used, any suggestions I don’t know of any quick fix for this. or should I wait Pytorch develop team to improve this functionality? Yes, I think waiting is probably the best option. Complex tensors are currently a work in progress in pytorch. As a further note, I’ve been thinking about this some, and I don’t fully understand how autograd ought to work with complex tensor functions. If I can get my thoughts sorted out on this I will likely post something, but right now I am very confused by the whole thing. Best. K. Frank
st45076
I have searched many related posts and already know it is a problem with memory, but the weird thing is that when I use (almost) the same Dataset in the tensorflow version, the error goes away. By tensorflow version, I mean I only use the DataLoader to load data, while the code used to define and train the model is all written in tensorflow. Here is the definition of the dataset:

```python
class UsptoDataset(Dataset):
    def __init__(self, main_file, tree_file):
        f = open(project + main_file, "r")
        f_tree = open(project + tree_file, "r")
        self.main_data = [x.strip() for x in f.readlines()]
        paths = [x.strip() for x in f_tree.readlines()]
        tmp = []
        self.tree_data = []
        # we split the tree_file by <BR> because every block (arbitrary lines)
        # separated by <BR> in tree_file corresponds to a sample.
        for p in paths:
            if p.strip() == '<BR>':
                self.tree_data.append(deepcopy(tmp))
                tmp.clear()
            else:
                tmp.append(p)
        if len(tmp) > 0:
            self.tree_data.append(deepcopy(tmp))
        # every four lines in main_file correspond to a sample.
        assert len(self.main_data) // 4 == len(self.tree_data)
        f.close()
        f_tree.close()

    def __len__(self):
        return len(self.tree_data)

    def __getitem__(self, item):
        i = item * 4
        line = self.main_data[i]
        vec = [int(x) for x in line.split()][:rules_len]
        syn_tree_indices = np.array(vec + [0] * (rules_len - len(vec)))
        syn_rule_nl_left, syn_rule_nl_right, _ = line2rule_nl(line)
        i += 1  # read next line in main file
        syn_parent_matrix, _ = line2mask(self.main_data[i], rules_len)
        i += 1  # read next line in main file
        line = self.main_data[i]
        vec = [int(x) for x in line.split()][:rules_len - 1]
        rea_tree_indices = np.array([classnum] + vec + [0] * (rules_len - len(vec) - 1))
        rea_rule_nl_left, rea_rule_nl_right, class_mask = line2rule_nl(line)
        query_paths = read_tree_path(self.tree_data[item])
        vec = np.array(vec + [0] * (rules_len - len(vec)))
        labels = np.array(vec)
        i += 1  # read next line in main file
        parent_matrix, path_lens = line2mask(self.main_data[i], rules_len)
        return {'syn_tree_indices': syn_tree_indices, 'syn_rule_nl_left': syn_rule_nl_left,
                'syn_rule_nl_right': syn_rule_nl_right, 'rea_tree_indices': rea_tree_indices,
                'rea_rule_nl_left': rea_rule_nl_left, 'rea_rule_nl_right': rea_rule_nl_right,
                'class_mask': class_mask, 'query_paths': query_paths, 'labels': labels,
                'parent_matrix': parent_matrix, 'syn_parent_matrix': syn_parent_matrix,
                'path_lens': path_lens}

    @staticmethod
    def collate_fn(batch):
        syn_tree_indices = np.stack([_['syn_tree_indices'] for _ in batch], axis=0)
        syn_rule_nl_left = np.stack([_['syn_rule_nl_left'] for _ in batch], axis=0)
        syn_rule_nl_right = np.stack([_['syn_rule_nl_right'] for _ in batch], axis=0)
        rea_tree_indices = np.stack([_['rea_tree_indices'] for _ in batch], axis=0)
        rea_rule_nl_left = np.stack([_['rea_rule_nl_left'] for _ in batch], axis=0)
        rea_rule_nl_right = np.stack([_['rea_rule_nl_right'] for _ in batch], axis=0)
        class_mask = np.stack([_['class_mask'] for _ in batch], axis=0)
        query_paths = np.stack([_['query_paths'] for _ in batch], axis=0)
        labels = np.stack([_['labels'] for _ in batch], axis=0)
        parent_matrix = np.stack([_['parent_matrix'] for _ in batch], axis=0)
        syn_parent_matrix = np.stack([_['syn_parent_matrix'] for _ in batch], axis=0)
        path_lens = np.stack([_['path_lens'] for _ in batch], axis=0)
        return_dict = {'syn_tree_indices': syn_tree_indices, 'syn_rule_nl_left': syn_rule_nl_left,
                       'syn_rule_nl_right': syn_rule_nl_right, 'rea_tree_indices': rea_tree_indices,
                       'rea_rule_nl_left': rea_rule_nl_left, 'rea_rule_nl_right': rea_rule_nl_right,
                       'class_mask': class_mask, 'query_paths': query_paths, 'labels': labels,
                       'parent_matrix': parent_matrix, 'syn_parent_matrix': syn_parent_matrix,
                       'path_lens': path_lens}
        return return_dict

    @staticmethod
    def torch_collate_fn(batch):
        syn_tree_indices = torch.tensor(np.stack([_['syn_tree_indices'] for _ in batch], axis=0), dtype=torch.long)
        syn_rule_nl_left = torch.tensor(np.stack([_['syn_rule_nl_left'] for _ in batch], axis=0), dtype=torch.long)
        syn_rule_nl_right = torch.tensor(np.stack([_['syn_rule_nl_right'] for _ in batch], axis=0), dtype=torch.long)
        rea_tree_indices = torch.tensor(np.stack([_['rea_tree_indices'] for _ in batch], axis=0), dtype=torch.long)
        rea_rule_nl_left = torch.tensor(np.stack([_['rea_rule_nl_left'] for _ in batch], axis=0), dtype=torch.long)
        rea_rule_nl_right = torch.tensor(np.stack([_['rea_rule_nl_right'] for _ in batch], axis=0), dtype=torch.long)
        class_mask = torch.tensor(np.stack([_['class_mask'] for _ in batch], axis=0), dtype=torch.float32)
        query_paths = torch.tensor(np.stack([_['query_paths'] for _ in batch], axis=0), dtype=torch.long)
        labels = torch.tensor(np.stack([_['labels'] for _ in batch], axis=0), dtype=torch.long)
        parent_matrix = torch.tensor(np.stack([_['parent_matrix'] for _ in batch], axis=0), dtype=torch.float)
        syn_parent_matrix = torch.tensor(np.stack([_['syn_parent_matrix'] for _ in batch], axis=0), dtype=torch.float)
        path_lens = torch.tensor(np.stack([_['path_lens'] for _ in batch], axis=0), dtype=torch.long)
        return_dict = {'syn_tree_indices': syn_tree_indices, 'syn_rule_nl_left': syn_rule_nl_left,
                       'syn_rule_nl_right': syn_rule_nl_right, 'rea_tree_indices': rea_tree_indices,
                       'rea_rule_nl_left': rea_rule_nl_left, 'rea_rule_nl_right': rea_rule_nl_right,
                       'class_mask': class_mask, 'query_paths': query_paths, 'labels': labels,
                       'parent_matrix': parent_matrix, 'syn_parent_matrix': syn_parent_matrix,
                       'path_lens': path_lens}
        return return_dict
```

I define two collate_fns and use them in different code: collate_fn in tensorflow and torch_collate_fn in pytorch. As mentioned before, the former works fine but the latter throws the error in the title. The two collate_fns are essentially the same; the only difference is that tensorflow needs ndarrays and pytorch needs tensors. Here is how I use the dataset to define a dataloader:

```python
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True,
                          num_workers=4, collate_fn=UsptoDataset.torch_collate_fn)
```

Any idea why this happens, and how to modify my code to make it work in pytorch? Thanks in advance! BTW, to tackle the error RuntimeError: received 0 items of ancdata, I use the following code:

```python
torch.multiprocessing.set_sharing_strategy('file_system')
```
st45077
Solved by pyxiea in post #2: I found the reason. I used cycle to wrap the dataloader, which leads to a memory leak.
st45078
I found the reason: I used cycle to wrap the dataloader, which leads to a memory leak.
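For anyone hitting the same problem, a sketch of a loop that avoids itertools.cycle by creating a fresh DataLoader iterator each epoch (train_step is a hypothetical stand-in for the body of the training loop):

```python
from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True,
                          num_workers=4, collate_fn=UsptoDataset.torch_collate_fn)

for epoch in range(num_epochs):
    # Each `for` loop creates (and cleanly tears down) a fresh iterator,
    # so worker results are not retained indefinitely as with cycle().
    for batch in train_loader:
        train_step(batch)
```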
st45079
A similar problem on Stack Overflow is here, but no answer is useful. As this tutorial shows, the outputs from multiple GPUs should be concatenated along dimension 0, but I don’t know why it does not work in my code.

```python
model = T2T(......)  # T2T is a subclass of nn.Module
if torch.cuda.device_count() > 1:
    print("Using", torch.cuda.device_count(), "GPUs!")
    model = torch.nn.DataParallel(model)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
train_loader = DataLoader(......)
......
epoch_steps = len(train_loader)
STEPS = train_epochs * epoch_steps
train_loader = cycle(train_loader)
......
for step in range(STEPS):
    batch_data = next(train_loader)
    optimizer.zero_grad()
    labels = batch_data['labels'].to(device)
    ......
    probs = model(......)
    # calculate loss using labels and probs
```

When calculating the loss, I got an error: the batch sizes of labels and probs are not the same. The shape of labels is [64, …] and the shape of probs is [32, …]. I am using 2 GPUs, so I guess the outputs from the GPUs are not concatenated. Any idea how to fix this? Thanks in advance!
st45080
Solved by ptrblck in post #7 This tensor won’t be split, as dim0 has only a single sample and thus only a single GPU will execute this tensor. If you want to make sure each GPU gets a chunk of the input tensor, make sure dim0 has at least a size of num_gpus.
st45081
What is the shape of the input? The nn.DataParallel model would split the input in dim0 and concatenate it afterwards. If the input has a batch size of 32, the output would have the same shape.
st45082
```python
probs = model(inputrulelist, syn_inputrulelist, tree_path_vec, rule_mask,
              syn_rule_mask, inputrulelistnode, syn_inputrulelistnode,
              inputrulelistson, syn_inputrulelistson, sequence_mask,
              treemask, syn_treemask, path_lens)
```

Sorry for the late reply. I have already fixed the problem, but the actual reason remains unknown. The model has many parameters, as above, and the first dimension of each of them is batch_size, currently 64. One exception is the sequence_mask tensor: it has shape [1, n, n], and the elements on and below the main diagonal are all zeros (it is used for sequence masking in multi-head attention). Its batch size is set to 1 because I had noticed that a DataParallel model splits the input in dim0, and I wanted all devices to get the same sequence_mask. After I changed the first dimension of sequence_mask to batch_size (64 for now), the problem was fixed. Specifically, I use tensor.repeat(batch_size, 1, 1) instead of tensor.unsqueeze(0) to generate the batch_size dimension. After that, the first dimension of the output of the DataParallel model (probs in the code) becomes 64 instead of 32. I don’t know the exact reason due to my unfamiliarity with DataParallel; it would be very helpful if you could tell me. Thanks a lot.
st45083
Your description is right. nn.DataParallel expects all input tensors in the shape [batch_size, *] and will split them in dim0. If you’ve applied broadcasting in the past, repeating the tensor sounds reasonable.
st45084
But I still don’t know why I only get the output from one GPU (instead of the concatenation of the outputs from all GPUs) if I pass a tensor with shape [1, *].
st45085
This tensor won’t be split, as dim0 has only a single sample and thus only a single GPU will execute this tensor. If you want to make sure each GPU gets a chunk of the input tensor, make sure dim0 has at least a size of num_gpus.
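A small illustration of the difference (assuming 2 GPUs and the shapes from this thread):

```python
import torch

batch_size, n = 64, 10
seq_mask = torch.tril(torch.ones(n, n))                # [n, n]

bad = seq_mask.unsqueeze(0)                            # [1, n, n]: dim0 == 1,
                                                       # only one replica receives it
good = seq_mask.unsqueeze(0).repeat(batch_size, 1, 1)  # [64, n, n]: split 32/32
```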
st45086
Hi all, I am faced with the following situation. I am using one model to solve multiple classification tasks, where each classification task itself is multi-class, and the number of possible classes varies across tasks. To give an example: the model outputs a vector with 22 elements, and I would like to apply a softmax over:

- the first 5 elements
- the following 5 elements
- the following 8 elements
- the last 4 elements

This is because the model is simultaneously solving 4 classification tasks, where the first 2 tasks have 5 candidate classes each, the third task has 8 candidate classes, and the final task has 4 candidate classes. I would also like to define an appropriate cross-entropy loss that follows this same structure. My questions are:

- How can I use torch.nn.Softmax to achieve this?
- How can I define the custom cross-entropy loss mentioned above?

Many thanks!
st45087
Hello Ege!

ekarais: I am using one model to solve multiple classification tasks, … How can I use torch.nn.Softmax to achieve this?

First, for numerical-stability reasons, you shouldn’t use Softmax. As I outline below, you should use CrossEntropyLoss, which has, in effect, Softmax built into it.

How can I define the custom cross-entropy loss mentioned above?

You don’t need to write a custom cross-entropy loss. Just use pytorch’s built-in CrossEntropyLoss four times over, once for each of your classification tasks. Your model outputs a batch of prediction vectors of shape [nBatch, 22]. Your targets could be packaged in a number of ways. The most straightforward is probably to have four sets of targets, one for each classification task. Let’s call the four tasks A, B, C, and D. Your targets should be batches of integer class labels. So, for example, targetA should have shape [nBatch] and consist of class labels that run from 0 to 4, because task A has five classes. targetB should be the same. targetC should also have shape [nBatch], but consist of class labels that run from 0 to 7 because task C has eight classes. Then:

```python
loss_fn = torch.nn.CrossEntropyLoss()  # only need to do this once
lossA = loss_fn (prediction[:, 0:5], targetA)
lossB = loss_fn (prediction[:, 5:10], targetB)
lossC = loss_fn (prediction[:, 10:18], targetC)
lossD = loss_fn (prediction[:, 18:22], targetD)
loss = lossA + lossB + lossC + lossD
```

That is, you use indexing to snip out of your vector of 22 predicted class labels the set of predictions relevant to each task. So in this example, labels 10 through 17 (as indexed by 10:18) are the eight predicted class labels relevant to task C.

Best.
K. Frank
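A side note on the original Softmax question: at evaluation time (not for training) you can still obtain per-task probabilities by applying softmax slice by slice; a sketch reusing the prediction tensor above:

```python
import torch

with torch.no_grad():
    probsA = torch.softmax(prediction[:, 0:5], dim=1)
    probsB = torch.softmax(prediction[:, 5:10], dim=1)
    probsC = torch.softmax(prediction[:, 10:18], dim=1)
    probsD = torch.softmax(prediction[:, 18:22], dim=1)
```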
st45088
Hello K. Frank, Thank you for your swift reply. I applied the changes you recommended, and I’m now faced with the following issue: in the second iteration of the training loop, the following runtime error is produced:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

When I specify retain_graph=True when calling backward the first time, and set it to False in the subsequent iterations, I then receive the following error message:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [200, 336]]...

I have determined that this tensor corresponds to the last layer of my fully connected network. Below, you can find the relevant code snippet from the training loop:

```python
# update the gradients to zero
optimizer.zero_grad()

# forward pass
logits = model(x)

# compute loss
for i in range(18):
    loss_array[i] = criterion(logits[:, i*11:(i+1)*11],
                              torch.argmax(labels[:, i*11:(i+1)*11], dim=1))
for i in range(9):
    loss_array[18+i] = criterion(logits[:, 198+i*8:198+(i+1)*8],
                                 torch.argmax(labels[:, 198+i*8:198+(i+1)*8], dim=1))
for i in range(6):
    loss_array[27+i] = criterion(logits[:, 270+i*11:270+(i+1)*11],
                                 torch.argmax(labels[:, 270+i*11:270+(i+1)*11], dim=1))
loss = torch.sum(loss_array)

# backward pass
loss.backward(retain_graph=first)
first = False
train_loss += loss.item()

# update the weights
optimizer.step()
```

The code is a bit more complicated than the example we discussed earlier; I had to use an array to hold all the losses, as I have 33 classification tasks instead of 4. I suspect that I am not using optimizer.step() in the correct way, as that is the only operation that updates the network layers. Would you have an insight into the problem with my code? Many thanks! Ege
st45089
Hello Ege!

ekarais: In the second iteration of the training loop, the following runtime error is produced: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

```python
logits = model(x)
# compute loss
for i in range(18):
    loss_array[i] = criterion(logits[:, i*11:(i+1)*11], torch.argmax(labels[:, i*11:(i+1)*11], dim=1))
for i in range(9):
    loss_array[18+i] = criterion(logits[:, 198+i*8:198+(i+1)*8], torch.argmax(labels[:, 198+i*8:198+(i+1)*8], dim=1))
for i in range(6):
    loss_array[27+i] = criterion(logits[:, 270+i*11:270+(i+1)*11], torch.argmax(labels[:, 270+i*11:270+(i+1)*11], dim=1))
loss = torch.sum(loss_array)
# update the weights
optimizer.step()
```

I’m not entirely sure what is going on here. You don’t say what loss_array is, but since you call torch.sum (loss_array), I will assume that loss_array is some kind of pytorch tensor. If so, indexing into loss_array multiple times could be your problem. (You also don’t say what criterion is. Let me assume that it is criterion = torch.nn.CrossEntropyLoss(), and therefore that criterion (...) returns a pytorch tensor of shape [1], that is, a single number packaged as a tensor.)

Try something like:

```python
# compute loss
loss = 0.0  # python scalar
for i in range(18):
    # loss will become a pytorch tensor
    loss = loss + criterion(logits[:, i*11:(i+1)*11], torch.argmax(labels[:, i*11:(i+1)*11], dim=1))
# etc. for the other two loops, ...
loss.backward()
```

ekarais: When I specify retain_graph=True when calling backward the first time, and set it to False in the subsequent iterations, I then receive the following error message: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [200, 336]]...

```python
# backward pass
loss.backward(retain_graph=first)
first = False
train_loss += loss.item()
```

If my theory is right, this use of retain_graph = True is incorrect, and, rather than fixing the real issue, is just hiding it. So just call loss.backward(), as outlined above, without specifying retain_graph.

Good luck.
K. Frank
st45090
Hi Frank, thank you for this answer. Would it make sense to weight these loss components, so that lossA would be weighted by 5/22, etc., even if all tasks (A…D) have the same priority? Best, Peter
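For what it’s worth, such a weighting would be a one-line change to the sum above; whether it helps is an empirical question:

```python
# weight each task's loss by its share of the 22 output elements
loss = (5/22) * lossA + (5/22) * lossB + (8/22) * lossC + (4/22) * lossD
```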
st45091
Hello. I am experiencing strange behaviour in the forward() method of an nn.Module. I have a class that inherits from nn.Module and initializes some attributes in its __init__() method. The forward() in this class does not recognize an attribute that is itself an nn.Module (or a registered buffer). The context of this error is trying to run an implementation of DSQ (paper: https://arxiv.org/pdf/1908.05033.pdf, code: https://github.com/ricky40403/DSQ). Either I get “torch.nn.modules.module.ModuleAttributeError: ‘DSQConv’ object has no attribute ‘running_lw’”, or in QuantConv I get “‘QuantMeasure’ object has no attribute ‘quant’”.
st45092
Yes, of course. Thanks for the reply. Using the code from a PyTorch implementation of DSQ (paper: https://arxiv.org/pdf/1908.05033.pdf, code: https://github.com/ricky40403/DSQ), the error comes from this class (trimmed some additional functions):

```python
class DSQConv(nn.Conv2d):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 momentum=0.1, num_bit=8, QInput=True, bSetQ=True):
        super(DSQConv, self).__init__(in_channels, out_channels, kernel_size,
                                      stride, padding, dilation, groups, bias)
        self.num_bit = num_bit
        self.quan_input = QInput
        self.bit_range = 2**self.num_bit - 1
        self.is_quan = bSetQ
        self.momentum = momentum
        if self.is_quan:
            # using int32 max/min as init and backprogation to optimization
            # Weight
            self.uW = nn.Parameter(data=torch.tensor(2**31 - 1).float())
            self.lW = nn.Parameter(data=torch.tensor((-1) * (2**32)).float())
            self.register_buffer('running_uw', torch.tensor([self.uW.data]))  # init with uw
            self.register_buffer('running_lw', torch.tensor([self.lW.data]))  # init with lw
            self.alphaW = nn.Parameter(data=torch.tensor(0.2).float())
            # Bias
            if self.bias is not None:
                self.uB = nn.Parameter(data=torch.tensor(2**31 - 1).float())
                self.lB = nn.Parameter(data=torch.tensor((-1) * (2**32)).float())
                self.register_buffer('running_uB', torch.tensor([self.uB.data]))  # init with ub
                self.register_buffer('running_lB', torch.tensor([self.lB.data]))  # init with lb
                self.alphaB = nn.Parameter(data=torch.tensor(0.2).float())
            # Activation input
            if self.quan_input:
                self.uA = nn.Parameter(data=torch.tensor(2**31 - 1).float())
                self.lA = nn.Parameter(data=torch.tensor((-1) * (2**32)).float())
                self.register_buffer('running_uA', torch.tensor([self.uA.data]))  # init with uA
                self.register_buffer('running_lA', torch.tensor([self.lA.data]))  # init with lA
                self.alphaA = nn.Parameter(data=torch.tensor(0.2).float())

    def forward(self, x):
        if self.is_quan:
            # Weight Part
            # moving average
            if self.training:
                cur_running_lw = self.running_lw.mul(1 - self.momentum).add((self.momentum) * self.lW)
                cur_running_uw = self.running_uw.mul(1 - self.momentum).add((self.momentum) * self.uW)
            else:
                cur_running_lw = self.running_lw
                cur_running_uw = self.running_uw

            Qweight = self.clipping(self.weight, cur_running_uw, cur_running_lw)
            cur_max = torch.max(Qweight)
            cur_min = torch.min(Qweight)
            delta = (cur_max - cur_min) / (self.bit_range)
            interval = (Qweight - cur_min) // delta
            mi = (interval + 0.5) * delta + cur_min
            Qweight = self.phi_function(Qweight, mi, self.alphaW, delta)
            Qweight = self.sgn(Qweight)
            Qweight = self.dequantize(Qweight, cur_min, delta, interval)

            Qbias = self.bias
            # Bias
            if self.bias is not None:
                # self.running_lB.mul_(1-self.momentum).add_((self.momentum) * self.lB)
                # self.running_uB.mul_(1-self.momentum).add_((self.momentum) * self.uB)
                if self.training:
                    cur_running_lB = self.running_lB.mul(1 - self.momentum).add((self.momentum) * self.lB)
                    cur_running_uB = self.running_uB.mul(1 - self.momentum).add((self.momentum) * self.uB)
                else:
                    cur_running_lB = self.running_lB
                    cur_running_uB = self.running_uB

                Qbias = self.clipping(self.bias, cur_running_uB, cur_running_lB)
                cur_max = torch.max(Qbias)
                cur_min = torch.min(Qbias)
                delta = (cur_max - cur_min) / (self.bit_range)
                interval = (Qbias - cur_min) // delta
                mi = (interval + 0.5) * delta + cur_min
                Qbias = self.phi_function(Qbias, mi, self.alphaB, delta)
                Qbias = self.sgn(Qbias)
                Qbias = self.dequantize(Qbias, cur_min, delta, interval)

            # Input (Activation)
            Qactivation = x
            if self.quan_input:
                if self.training:
                    cur_running_lA = self.running_lA.mul(1 - self.momentum).add((self.momentum) * self.lA)
                    cur_running_uA = self.running_uA.mul(1 - self.momentum).add((self.momentum) * self.uA)
                else:
                    cur_running_lA = self.running_lA
                    cur_running_uA = self.running_uA

                Qactivation = self.clipping(x, cur_running_uA, cur_running_lA)
                cur_max = torch.max(Qactivation)
                cur_min = torch.min(Qactivation)
                delta = (cur_max - cur_min) / (self.bit_range)
                interval = (Qactivation - cur_min) // delta
                mi = (interval + 0.5) * delta + cur_min
                Qactivation = self.phi_function(Qactivation, mi, self.alphaA, delta)
                Qactivation = self.sgn(Qactivation)
                Qactivation = self.dequantize(Qactivation, cur_min, delta, interval)

            output = F.conv2d(Qactivation, Qweight, Qbias, self.stride,
                              self.padding, self.dilation, self.groups)
        else:
            output = F.conv2d(x, self.weight, self.bias, self.stride,
                              self.padding, self.dilation, self.groups)
        return output
```

I get the following error message:

```
Traceback (most recent call last):
  File "train.py", line 579, in <module>
    main()
  File "train.py", line 137, in main
    main_worker(args.gpu, ngpus_per_node, args)
  File "train.py", line 320, in main_worker
    train(train_loader, model, criterion, optimizer, epoch, args)
  File "train.py", line 407, in train
    output = model(images)
  File "/home/afonso/anaconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/afonso/anaconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/afonso/anaconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/afonso/anaconda3/envs/nlp/lib/python3.7/site-packages/torchvision/models/resnet.py", line 220, in forward
    return self._forward_impl(x)
  File "/home/afonso/anaconda3/envs/nlp/lib/python3.7/site-packages/torchvision/models/resnet.py", line 203, in _forward_impl
    x = self.conv1(x)
  File "/home/afonso/anaconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/afonso/Projects/quantization/DSQ/DSQConv.py", line 88, in forward
    cur_running_lw = self.running_lw.mul(1-self.momentum).add((self.momentum) * self.lW)
  File "/home/afonso/anaconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'DSQConv' object has no attribute 'running_lw'
```

Additionally, from my implementation of post-training uniform quantization for 3D models, adapted with some classes from that same DSQ project, for the following code:

```python
class QuantConv3d(nn.Conv3d):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 num_bits=8, num_bits_weight=None, momentum=0.1):
        super(QuantConv3d, self).__init__(in_channels, out_channels, kernel_size,
                                          stride, padding, dilation, groups, bias)
        self.num_bits = num_bits
        self.num_bits_weight = num_bits_weight or num_bits
        self.quantize_input = QuantMeasure(num_bits=num_bits, momentum=momentum)

    def forward(self, input):
        input = self.quantize_input(input)
        qweight = quantize(self.weight, num_bits=self.num_bits_weight,
                           min_value=float(self.weight.min()),
                           max_value=float(self.weight.max()))
        if self.bias is not None:
            qbias = quantize(self.bias, num_bits=self.num_bits_weight)
        else:
            qbias = None
        output = F.conv3d(input, qweight, qbias, self.stride,
                          self.padding, self.dilation, self.groups)
        return output
```

I get the following error message:

```
Traceback (most recent call last):
  File "main.py", line 155, in <module>
    main()
  File "main.py", line 97, in main
    opt.device)
  File "/home/ctm/afonso/easyride/acceleration/src/core/trainer.py", line 41, in train_epoch
    outputs = model(inputs)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ctm/afonso/easyride/acceleration/src/models/mobilenetv2.py", line 126, in forward
    x = self.features(x)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ctm/afonso/easyride/acceleration/src/quantization/uni_quant_3d.py", line 139, in forward
    input = self.quantize_input(input)
  File "/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'QuantConv3d' object has no attribute 'quantize_input'
```

Both implementations use the PyTransformer (https://github.com/ricky40403/PyTransformer/blob/master/transformers/torchTransformer.py) trans_layers method to permute standard conv layers to quantized ones. The code goes as follows:

```python
def trans_layers(self, model, update=True):
    """! This function transform layer by layers in register dictionaries

    @param model: input model to transfer

    @param update: default is True, whether to update the parameter from the
    original layer or not. Note that it will update matched parameters only.

    @return transformed model
    """
    # print("trans layer")
    if len(self._register_dict) == 0:
        print("No layer to swap")
        print("Please use register( {origin_layer}, {target_layer} ) to register layer")
        return model
    else:
        for module_name in model._modules:
            # has children
            if len(model._modules[module_name]._modules) > 0:
                self.trans_layers(model._modules[module_name])
            else:
                if type(getattr(model, module_name)) in self._register_dict:
                    # use inspect.signature to know args and kwargs of __init__
                    _sig = inspect.signature(type(getattr(model, module_name)))
                    _kwargs = {}
                    for key in _sig.parameters:
                        if _sig.parameters[key].default == inspect.Parameter.empty:  # args
                            # assign args
                            # default values should be handled more properly,
                            # unknown data type might be an issue
                            if 'kernel' in key:
                                # _sig.parameters[key].replace(default=inspect.Parameter.empty, annotation=3)
                                value = 3
                            elif 'channel' in key:
                                # _sig.parameters[key].replace(default=inspect.Parameter.empty, annotation=32)
                                value = 32
                            else:
                                # _sig.parameters[key].replace(default=inspect.Parameter.empty, annotation=None)
                                value = None
                            _kwargs[key] = value

                    _attr_dict = getattr(model, module_name).__dict__
                    _layer_new = self._register_dict[type(getattr(model, module_name))](**_kwargs)  # only give positional args
                    _layer_new.__dict__.update(_attr_dict)
                    setattr(model, module_name, _layer_new)
    return model
```

Thank you for your help in advance.
st45093
Hello! I’m struggling with making custom datasets like MNIST. The dimension of the MNIST dataset is like torch.Size([6000, 2, 32, 32]), where the 2 means that it includes the label as well as the image. I have images with the label in the image file name, but I don’t know how to include the file-name label in the same tensor; my tensor’s dimension is [6000, 1, 32, 32], which means the tensor has only the image data, not the labels. How can I make the label part of the image tensor?
st45094
Hi guys, I’m trying to use UNet to perform training on breast images. In particular, I have 3 tensors:

- an input tensor of shape ([32, 1, 64, 64])
- labels, a tensor of shape ([32])
- a maps tensor of shape ([32, 1, 64, 64])

The code I used is the following…

```python
class UNet(nn.Module):
    def contracting_block(self, in_channels, out_channels, kernel_size=3):
        block = torch.nn.Sequential(
            torch.nn.Conv2d(kernel_size=kernel_size, in_channels=in_channels, out_channels=out_channels),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(out_channels),
            torch.nn.Conv2d(kernel_size=kernel_size, in_channels=out_channels, out_channels=out_channels),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(out_channels),
        )
        return block

    def expansive_block(self, in_channels, mid_channel, out_channels, kernel_size=3):
        block = torch.nn.Sequential(
            torch.nn.Conv2d(kernel_size=kernel_size, in_channels=in_channels, out_channels=mid_channel),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(mid_channel),
            torch.nn.Conv2d(kernel_size=kernel_size, in_channels=mid_channel, out_channels=mid_channel),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(mid_channel),
            torch.nn.ConvTranspose2d(in_channels=mid_channel, out_channels=out_channels,
                                     kernel_size=3, stride=2, padding=1, output_padding=1)
        )
        return block

    def final_block(self, in_channels, mid_channel, out_channels, kernel_size=3):
        block = torch.nn.Sequential(
            torch.nn.Conv2d(kernel_size=kernel_size, in_channels=in_channels, out_channels=mid_channel),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(mid_channel),
            torch.nn.Conv2d(kernel_size=kernel_size, in_channels=mid_channel, out_channels=mid_channel),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(mid_channel),
            torch.nn.Conv2d(kernel_size=kernel_size, in_channels=mid_channel, out_channels=out_channels, padding=1),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(out_channels),
        )
        return block

    def __init__(self, in_channel=1, out_channel=2):
        super(UNet, self).__init__()
        # Encode
        self.conv_encode1 = self.contracting_block(in_channels=in_channel, out_channels=64)
        self.conv_maxpool1 = torch.nn.MaxPool2d(kernel_size=2)
        self.conv_encode2 = self.contracting_block(64, 128)
        self.conv_maxpool2 = torch.nn.MaxPool2d(kernel_size=2)
        self.conv_encode3 = self.contracting_block(128, 256)
        self.conv_maxpool3 = torch.nn.MaxPool2d(kernel_size=1)
        # Bottleneck
        self.bottleneck = torch.nn.Sequential(
            torch.nn.Conv2d(kernel_size=2, in_channels=256, out_channels=512),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(512),
            torch.nn.Conv2d(kernel_size=2, in_channels=512, out_channels=512),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(512),
            torch.nn.ConvTranspose2d(in_channels=512, out_channels=256,
                                     kernel_size=2, stride=2, padding=1, output_padding=1)
        )
        # Decode
        self.conv_decode3 = self.expansive_block(512, 256, 128)
        self.conv_decode2 = self.expansive_block(256, 128, 64)
        self.final_layer = self.final_block(128, 64, out_channel)

    def crop_and_concat(self, upsampled, bypass, crop=False):
        if crop:
            c = (bypass.size()[2] - upsampled.size()[2]) // 2
            bypass = F.pad(bypass, (-c, -c, -c, -c))
        return torch.cat((upsampled, bypass), 1)

    def forward(self, x):
        # Encode
        encode_block1 = self.conv_encode1(x)
        encode_pool1 = self.conv_maxpool1(encode_block1)
        encode_block2 = self.conv_encode2(encode_pool1)
        encode_pool2 = self.conv_maxpool2(encode_block2)
        encode_block3 = self.conv_encode3(encode_pool2)
        encode_pool3 = self.conv_maxpool3(encode_block3)
        # Bottleneck
        bottleneck1 = self.bottleneck(encode_pool3)
        # Decode
        decode_block3 = self.crop_and_concat(bottleneck1, encode_block3, crop=True)
        cat_layer2 = self.conv_decode3(decode_block3)
        decode_block2 = self.crop_and_concat(cat_layer2, encode_block2, crop=True)
        cat_layer1 = self.conv_decode2(decode_block2)
        decode_block1 = self.crop_and_concat(cat_layer1, encode_block1, crop=True)
        final_layer = self.final_layer(decode_block1)
        return final_layer
```

And here I wrote the part used to call the net:

```python
for i, data in enumerate(dataloader, 0):
    total_time_data_load += time.time() - t0_data_load
    # get the inputs
    t0_other = time.time()
    inputs, labels, maps = data
    print("...Inputs has shape:", inputs.shape)
    print("...Labels shape:", labels.shape)
    print("...Maps shape:", maps.shape)
    # send to GPU
    inputs, labels, maps = (inputs.to(DEVICE, non_blocking=True),
                            labels.to(DEVICE, non_blocking=True),
                            maps.to(DEVICE, non_blocking=True))
    # update data statistics
    if ARGS.data_stats:
        inputs_sum += inputs.sum().detach().cpu()
        inputs_sum_sq += inputs.pow(2).sum().detach().cpu()
        inputs_min = min(inputs_min, inputs.min().detach().cpu())
        inputs_max = max(inputs_max, inputs.max().detach().cpu())
    # zero the parameter gradients
    optimizer.zero_grad()
    total_time_other += time.time() - t0_other
    # forward
    t0_forward = time.time()
    outputs = net(inputs)
    total_time_forward += time.time() - t0_forward
    # backward
    t0_backward = time.time()
    print("The selected loss is:", criterion)
    print("new outputs is:", outputs.shape)
    loss = criterion(outputs, labels, maps)
```

The question in my case is: can I pass 3 elements to the criterion? If I try to use the criterion with 3 elements, I catch the error:

TypeError: forward() takes 3 positional arguments but 4 were given

The loss function used is the CrossEntropyLoss found on the PyTorch site. The batch_size is 32, width = 64, height = 64. Can anyone help me fix this and start the training? I’ve been stuck on this problem for 2 weeks, and I’m new to this area. Thanks a lot.
st45095
I added new mods… In particular:

```python
class UNet(nn.Module):
    # contracting_block, expansive_block and final_block are unchanged
    # from the previous post.

    def __init__(self, in_channel=1, out_channel=2):
        super(UNet, self).__init__()
        # Encode
        self.conv_encode1 = self.contracting_block(in_channels=in_channel, out_channels=64)
        self.conv_maxpool1 = torch.nn.MaxPool2d(kernel_size=2)
        self.conv_encode2 = self.contracting_block(64, 128)
        self.conv_maxpool2 = torch.nn.MaxPool2d(kernel_size=2)
        self.conv_encode3 = self.contracting_block(128, 256)
        self.conv_maxpool3 = torch.nn.MaxPool2d(kernel_size=1, ceil_mode=True)
        # Bottleneck
        self.bottleneck = torch.nn.Sequential(
            torch.nn.Conv2d(kernel_size=3, in_channels=256, out_channels=512),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(512),
            torch.nn.Conv2d(kernel_size=3, in_channels=512, out_channels=512),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(512),
            torch.nn.ConvTranspose2d(in_channels=512, out_channels=256,
                                     kernel_size=3, stride=2, padding=1, output_padding=1)
        )
        # Decode
        self.conv_decode3 = self.expansive_block(512, 256, 128)
        self.conv_decode2 = self.expansive_block(256, 128, 64)
        self.final_layer = self.final_block(128, 64, out_channel)

    def crop_and_concat(self, upsampled, bypass, crop=False):
        if crop:
            # print(bypass.shape)
            c = (bypass.size()[2] - upsampled.size()[2]) // 2
            bypass = F.pad(bypass, (-c, -c, -c, -c))
        print("CROP", upsampled.shape, bypass.shape)
        return torch.cat((upsampled, bypass), 1)

    def forward(self, x):
        # Encode
        encode_block1 = self.conv_encode1(x)
        print("econde block1", encode_block1.shape)
        encode_pool1 = self.conv_maxpool1(encode_block1)
        print("econde pool1", encode_pool1.shape)
        encode_block2 = self.conv_encode2(encode_pool1)
        print("econde block2", encode_block2.shape)
        encode_pool2 = self.conv_maxpool2(encode_block2)
        print("econde pool2", encode_pool2.shape)
        encode_block3 = self.conv_encode3(encode_pool2)
        print("econde block3", encode_block3.shape)
        encode_pool3 = self.conv_maxpool3(encode_block3)
        print("econde pool3", encode_pool3.shape)
        # Bottleneck
        bottleneck1 = self.bottleneck(encode_pool3)
        print("Bottleneck1", bottleneck1.shape)
        # Decode
        print('Decode Block 3')
        print(bottleneck1.shape, encode_block3.shape)
        decode_block3 = self.crop_and_concat(bottleneck1, encode_block3, crop=True)
        print("Decoded block3", decode_block3.shape)
        print('Decode Block 2')
        cat_layer2 = self.conv_decode3(decode_block3)
        print(cat_layer2.shape, encode_block2.shape)
        decode_block2 = self.crop_and_concat(cat_layer2, encode_block2, crop=True)
        cat_layer1 = self.conv_decode2(decode_block2)
        print(cat_layer1.shape, encode_block1.shape)
        print('Final Layer')
        print(cat_layer1.shape, encode_block1.shape)
        decode_block1 = self.crop_and_concat(cat_layer1, encode_block1, crop=True)
        print(decode_block1.shape)
        final_layer = self.final_layer(decode_block1)
        print(final_layer.shape)
        return final_layer
```

But when I run the code, I received these shapes with the following error:

```
...Inputs has shape: torch.Size([32, 1, 64, 64])
...Labels shape: torch.Size([32])
...Maps shape: torch.Size([32, 1, 64, 64])
width = 64  height = 64  n_chans = 1  Batch_size = 32
econde block1 torch.Size([32, 64, 60, 60])
econde pool1 torch.Size([32, 64, 30, 30])
econde block2 torch.Size([32, 128, 26, 26])
econde pool2 torch.Size([32, 128, 13, 13])
econde block3 torch.Size([32, 256, 9, 9])
econde pool3 torch.Size([32, 256, 9, 9])
Bottleneck1 torch.Size([32, 256, 10, 10])
Decode Block 3
torch.Size([32, 256, 10, 10]) torch.Size([32, 256, 9, 9])
CROP torch.Size([32, 256, 10, 10]) torch.Size([32, 256, 11, 11])
Traceback (most recent call last):
  File "/Users/.../work.py", line 627, in <module>
    main()
  File "/Users/.../work.py", line 624, in main
    crossvalid()
  File "/Users/.../work.py", line 583, in crossvalid
    train(cross_valid_folder, i)
  File "/Users/.../work.py", line 324, in train
    outputs = net(inputs)
  File "/Users/.../lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/.../mynets.py", line 571, in forward
    decode_block3 = self.crop_and_concat(bottleneck1, encode_block3, crop=True)
  File "/Users/.../mynets.py", line 548, in crop_and_concat
    return torch.cat((upsampled, bypass), 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Got 10 and 11 in dimension 2 (The offending index is 1)
```

The error seems to come from the dimensions of the tensors printed in the crop_and_concat function… Can anyone help me? Thanks a lot.
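As a side note, one common way to avoid this kind of off-by-one mismatch (10 vs. 11 here) is to pad the upsampled tensor to the skip connection's exact spatial size before concatenating, instead of cropping by a possibly uneven amount. A sketch, not the fix used in this thread:

```python
import torch
import torch.nn.functional as F

def pad_and_concat(upsampled, bypass):
    # Pad (or crop, if the difference is negative) `upsampled` on the
    # right/bottom so its H and W match `bypass` exactly.
    dh = bypass.size(2) - upsampled.size(2)
    dw = bypass.size(3) - upsampled.size(3)
    upsampled = F.pad(upsampled, (0, dw, 0, dh))
    return torch.cat((upsampled, bypass), dim=1)
```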
st45096
Hi, I have a model that looks like this:

```python
def init_weights(layer):
    if isinstance(layer, nn.Linear):
        init.xavier_uniform_(layer.weight.data)
        layer.bias.data.fill_(0)

class LSTMMancalaModel(nn.Module):
    def __init__(self, n_inputs, n_outputs, hidden_size=512, neuron_size=512):
        super().__init__()

        def create_block(n_in, n_out, activation=True):
            block = [nn.Linear(n_in, n_out)]
            if activation:
                block.append(nn.ReLU())
            return nn.Sequential(*block)

        # self.linear_block = []
        self.reduce_block = []
        self.actor_block = []
        self.critic_block = []

        # block 1: linear
        self.linear1 = nn.Linear(n_inputs, neuron_size)
        self.dropout = nn.Dropout(p=0.1)
        self.linear2 = nn.Linear(neuron_size, hidden_size)
        # self.linear_block.append(create_block(n_inputs, neuron_size))
        # self.linear_block.append(nn.Dropout(p=0.1))
        # self.linear_block.append(create_block(neuron_size, hidden_size))

        # block 3: LSTM
        self.lstm = nn.LSTMCell(input_size=hidden_size, hidden_size=hidden_size)

        # block 4: reduce size
        self.reduce_block.append(create_block(hidden_size, hidden_size // 4))

        # block 5: output
        self.actor_block.append(create_block(hidden_size // 4, n_outputs, activation=False))
        self.critic_block.append(create_block(hidden_size // 4, 1, activation=False))

        # self.linear_block = nn.Sequential(*self.linear_block)
        self.reduce_block = nn.Sequential(*self.reduce_block)
        self.actor_block = nn.Sequential(*self.actor_block)
        self.critic_block = nn.Sequential(*self.critic_block)

        self.apply(init_weights)

    def forward(self, x, h):
        x1 = self.linear1(x)
        if torch.any(torch.isnan(x1)):
            print(f'x1 before linear: {x}')
            print(f'x1 after linear: {x1}')
            print(f'x1 weight: {self.linear1.weight.data}')
        x1 = F.relu(x1)
        x2 = self.linear2(x1)
        if torch.any(torch.isnan(x2)):
            print(f'x2 before linear: {x1}')
            print(f'x2 after linear: {x2}')
            print(f'x2 weight {self.linear2.weight.data}')
        x2 = F.relu(x2)
        hx, cx = self.lstm(x2, h)
        x = self.reduce_block(hx)
        actor = critics = x
        actor = self.actor_block(actor)
        critics = self.critic_block(critics)
        return actor, critics, (hx, cx)
```

Notice that there are some print statements which execute when a nan value is encountered.
When I train my model, I get:

```
x1 before linear: tensor([[8., 8., 8., 8., 8., 8., 8., 0., 8., 8., 8., 8., 8., 0.]])
x1 after linear: tensor([[ 0.0566, -3.9470, nan, -2.7168, nan, nan, -2.9603, -4.9630, -2.5412, nan, -2.8165, nan, -0.5789, nan, nan, nan, -0.9703, nan, nan, 1.1186, nan, nan, 0.6268, nan, nan, nan, -6.8352, nan, -0.2077, nan, -1.7982, nan, -2.7823, -6.1533, -6.4347, 0.1245, -0.8074, nan, -5.3137, nan, -2.0226, -2.8472, -1.4723, nan, nan, nan, nan, -4.4897, -0.8788, nan, nan, nan, 1.8545, 0.7467, nan, 1.2779, nan, -4.5292, -2.7516, nan, -4.9784, -3.6310, -2.1911, nan, nan, -3.5215, -5.4934, nan, 0.0476, 1.0664, -2.3185, -2.4567, nan, nan, 1.0471, -4.5475, nan, nan, -3.9216, nan, nan, nan, nan, nan, nan, nan, -2.0197, -0.7250, -1.0801, nan, nan, nan, -0.1893, -2.2739, nan, nan, -3.5715, nan, nan, 0.4107, -1.2012, nan, -1.1502, 1.7738, nan, nan, nan, -4.7655, -0.7162, nan, -1.3802, 1.4844, -0.4502, -1.0727, nan, -5.0542, nan, nan, 0.7092, nan, nan, nan, -3.3112, nan, -0.6340, 0.5102, nan, nan]], grad_fn=<AddmmBackward>)
x1 weight: tensor([[ 0.0765,  0.1785, -0.0746,  ..., -0.1142,  0.1257, -0.1845],
        [-0.0146, -0.0688, -0.1043,  ..., -0.0738, -0.0838,  0.0141],
        [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
        ...,
        [ 0.1422,  0.0444,  0.0275,  ..., -0.0582, -0.0641, -0.0594],
        [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
        [    nan,     nan,     nan,  ...,     nan,     nan,     nan]])
x2 before linear: tensor([[0.0566, 0.0000, nan, 0.0000, nan, nan, 0.0000, 0.0000, 0.0000, nan, 0.0000, nan, 0.0000, nan, nan, nan, 0.0000, nan, nan, 1.1186, nan, nan, 0.6268, nan, nan, nan, 0.0000, nan, 0.0000, nan, 0.0000, nan, 0.0000, 0.0000, 0.0000, 0.1245, 0.0000, nan, 0.0000, nan, 0.0000, 0.0000, 0.0000, nan, nan, nan, nan, 0.0000, 0.0000, nan, nan, nan, 1.8545, 0.7467, nan, 1.2779, nan, 0.0000, 0.0000, nan, 0.0000, 0.0000, 0.0000, nan, nan, 0.0000, 0.0000, nan, 0.0476, 1.0664, 0.0000, 0.0000, nan, nan, 1.0471, 0.0000, nan, nan, 0.0000, nan, nan, nan, nan, nan, nan, nan, 0.0000, 0.0000, 0.0000, nan, nan, nan, 0.0000, 0.0000, nan, nan, 0.0000, nan, nan, 0.4107, 0.0000, nan, 0.0000, 1.7738, nan, nan, nan, 0.0000, 0.0000, nan, 0.0000, 1.4844, 0.0000, 0.0000, nan, 0.0000, nan, nan, 0.7092, nan, nan, nan, 0.0000, nan, 0.0000, 0.5102, nan, nan]], grad_fn=<ReluBackward0>)
x2 after linear: tensor([[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
        nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
        nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
        nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
        nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
        nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
        nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
        nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]], grad_fn=<AddmmBackward>)
x2 weight tensor([[    nan,     nan,     nan,  ...,     nan,     nan,     nan],
        [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
        [ 0.0310,  0.0641, -0.1520,  ..., -0.1233,  0.1180, -0.0995],
        ...,
        [ 0.1068,  0.0417, -0.0876,  ..., -0.0248,  0.0748,  0.0775],
        [ 0.0961, -0.1246, -0.0960,  ..., -0.0572, -0.0186,  0.0976],
        [-0.0094,  0.1377, -0.1003,  ...,  0.0692,  0.0609,  0.0548]])
```

This happens very often, and you can probably get this error within a few tries. Not sure what has gone wrong; any help is appreciated…
st45097
OK, there was something wrong with my loss function: I used tensor.std() on a single-element tensor with the default unbiased=True, and it returns a nan value. I changed it to unbiased=False when encountering a single-element tensor, and it gives 0, which is the expected result and solved my problem.
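The behavior is easy to reproduce: with unbiased=True (the default), the variance is divided by n - 1, which is zero for a single element:

```python
import torch

x = torch.tensor([3.0])
print(x.std())                # tensor(nan): divides by n - 1 = 0
print(x.std(unbiased=False))  # tensor(0.):  divides by n = 1
```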
st45098
Hello, is there any guide for adapting a CNN to regression? I have images and CSV labels. There are demos in Keras; can I do the same in PyTorch? The following is the adaptation code in Keras; how should I do the same work in PyTorch?

```python
from keras.applications.xception import Xception
from keras.models import Model

model = Xception(weights='imagenet', include_top=True, input_shape=(299, 299, 3))
x = model.get_layer(index=len(model.layers) - 2).output
print(x)
x = Dense(1)(x)
model = Model(inputs=model.input, outputs=x)
model.summary()
opt = RMSprop(lr=0.0001)
model.compile(loss='mean_squared_error', optimizer=opt, metrics=['mae'])
```
st45099
You could replace the last linear layer (often called model.classifier) with a new nn.Linear layer with a single output neuron and use e.g. nn.MSELoss as the criterion.
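A rough sketch of that in PyTorch, using torchvision's ResNet-50 as a stand-in for Xception (which torchvision does not provide); note that the name of the last layer depends on the architecture (ResNet uses model.fc, other models use model.classifier):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 1)  # single regression output

criterion = nn.MSELoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)

# one training step (images and targets are assumed to come from a DataLoader):
# output = model(images)                          # [batch_size, 1]
# loss = criterion(output, targets.float().unsqueeze(1))
```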
st45100
```python
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(392*384, 1024),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = x.view(x.size(0), 672)
        output = self.model(x)
        return output

# generator
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(1000, 224),
            nn.ReLU(),
            nn.Linear(224, 448),
            nn.ReLU(),
            nn.Linear(448, 672),
            nn.Tanh())

    def forward(self, x):
        x = x.view(x.size(0), 224)
        output = self.model(x)
        return output

loss_function = nn.BCELoss()
optimizer_discriminator = torch.optim.Adam(discriminator.parameters())
optimizer_generator = torch.optim.Adam(generator.parameters())
discriminator = Discriminator()
generator = Generator()

# Training the model
batch_size = 1000
num_epochs = 10
for epoch in range(num_epochs):
    for n, (real_samples, Labels) in enumerate(train_set):
        real_samples = real_samples
        real_sample_labels = torch.ones((batch_size, 1))
        latent_heat_samples = torch.randn((batch_size, 224))
        generated_samples = generator(latent_heat_samples)
        generated_sample_label = torch.zeros((batch_size, 1))
        all_samples = torch.cat((real_samples, generated_samples))
        all_sample_labels = torch.cat((real_sample_labels, generated_sample_label))

        # training discriminator
        optimizer_discriminator.zero_grad()
        discriminator_samples = discriminator(all_samples)
        loss_discriminator = loss_function(discriminator_samples, all_sample_labels)
        loss_discriminator.backward()
        optimizer_discriminator.step()

        # training generator
        optimizer_generator.zero_grad()
        generator_samples = generator(latent_heat_samples)
        generator_discriminator_sample = discriminator(generator_samples)
        loss_generator = loss_function(generator_discriminator_samples, real_samples)
        loss_generator.backward()
        optimizer_generator.step()

        # printing losses at each epoch
        if n == batch_size - 1:
            print(f"Epoch:{epoch}, Loss D:{loss_discriminator}")
            print(f"Epoch:{epoch}, Loss G:{loss_generator}")
```

I get the following error:

```
RuntimeError                              Traceback (most recent call last)
in
     10     latent_heat_samples=torch.randn((batch_size,224))
     11
---> 12     generated_samples=generator(latent_heat_samples)
     13     generated_sample_label=torch.zeros((batch_size,1))
     14

~\anaconda3\envs\gan\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

in forward(self, x)
     13     def forward(self,x):
     14         x=x.view(x.size(0),224)
---> 15         output=self.model(x)
     16         return output
     17

~\anaconda3\envs\gan\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

~\anaconda3\envs\gan\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
     98     def forward(self, input):
     99         for module in self:
--> 100             input = module(input)
    101         return input
    102

~\anaconda3\envs\gan\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

~\anaconda3\envs\gan\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
     85
     86     def forward(self, input):
---> 87         return F.linear(input, self.weight, self.bias)
     88
     89     def extra_repr(self):

~\anaconda3\envs\gan\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
   1368     if input.dim() == 2 and bias is not None:
   1369         # fused op is marginally faster
-> 1370         ret = torch.addmm(bias, input, weight.t())
   1371     else:
   1372         output = input.matmul(weight.t())

RuntimeError: size mismatch, m1: [1000 x 224], m2: [1000 x 224] at C:\Users\builder\AppData\Local\Temp\pip-req-build-e5c8dddg\aten\src\TH/generic/THTensorMath.cpp:136
```

Please suggest how to solve this error; I am new to PyTorch. Thanks in advance.
st45101
The view operation is wrong:

```python
x = x.view(x.size(0), 224)
```

since you are reshaping the output to have 224 features, while the linear layer is defined as:

```python
self.model = nn.Sequential(
    nn.Linear(1000, 224),
    ...
```

You would have to pass an input with 1000 features or change the in_features of the linear layer. PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier.
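For concreteness, a sketch of the two consistent fixes (sizes taken from the snippet above):

```python
import torch
import torch.nn as nn

# Option 1: keep the 224-dim latent vector and change in_features to match it.
generator = nn.Sequential(
    nn.Linear(224, 448), nn.ReLU(),
    nn.Linear(448, 672), nn.Tanh(),
)
z = torch.randn(16, 224)
out = generator(z)             # [16, 672]

# Option 2: keep nn.Linear(1000, 224) as the first layer and sample
# 1000-dimensional noise instead:
# z = torch.randn(16, 1000)
```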
st45102
Hello everyone, I am currently doing a project where I replaced batch normalization with group normalization so that I can train with batch size 1. However, the model seems to fail only on a specific piece of data during training, which did not happen with batch norm. For example, my validation IoU goes 0.9, 0.91, and then suddenly 0.07, and the model does not seem to improve on this data during training. On the other hand, the model did not fail like this during batch-norm training. I know there could be many reasons, but I think this is due to changing batch norm to group norm, or possibly to using batch size 1. Is there a difference in group norm which could have caused the problem? Also, could there be a solution to this? Thank you!
st45103
The issue might arise from the changed norm layers, but I haven't seen a similar issue before. You could try to isolate the problematic sample and check the output as well as the internal stats of the norm layers to see if these layers are responsible for the drop in accuracy.
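As a rough illustration of that suggestion, one could record the per-forward output statistics of every GroupNorm layer with forward hooks (a sketch; model and problematic_sample are assumed to exist, and GroupNorm keeps no running buffers, so the per-forward outputs are what you can inspect):

import torch.nn as nn

stats = {}

def make_hook(name):
    def hook(module, inp, out):
        # record mean/std of this layer's output for the current forward pass
        stats[name] = (out.mean().item(), out.std().item())
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.GroupNorm):
        module.register_forward_hook(make_hook(name))

# run the problematic sample through the model, then inspect `stats`
_ = model(problematic_sample)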
st45104
If I’m not mistaken, Keras with TF as the backend (unsure if there are more supported backends anymore) uses numpy arrays as the input, so you could simply use tensor = torch.from_numpy(array).
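For example, a minimal sketch (note that from_numpy shares memory with the source array, so call .clone() if you need an independent copy):

import numpy as np
import torch

array = np.random.rand(4, 3).astype(np.float32)
tensor = torch.from_numpy(array)  # zero-copy view of the numpy data
back = tensor.numpy()             # back to numpy, also zero-copy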
st45105
Normally we can use a single text to do classification, but I have an idea that needs multiple texts. For example, I want to classify by the news, but I need to put all of a day's news together to do the classification. So I want to run an RNN across the news items within one day, after applying an RNN (LSTM or transformer) to each news item (just the headlines). Is that possible?
st45106
Hey everybody, I'm trying to set up a controllable GAN architecture, but I don't want to use a class as the conditional input; instead I want two floating-point variables (it's kind of a two-angle-dependent image deformation). I've tried it with a simple DCGAN with conditional inputs, with moderate to good results. To enhance this I want to try more complex architectures like BigGAN. Unfortunately this structure is designed for classes. Now I could remap my dataset from floating point to classes, but I would end up with far more than 1000 classes. Do you think it is worth trying? Or do you have any idea how to modify the BigGAN architecture? I've also searched for research papers on conditioning without classes, but I did not find any. Do you know some? Best greetings, Filos92
st45107
Hi, I was working with the Conv1d layer and noticed a weird inference speed degradation comparing two ways of propagating the input through this layer. Let's say we have:

conv_1 = nn.Conv1d(in_channels=1, out_channels=20, kernel_size=(1, 300))
conv_1.weight.data.fill_(0.01)
conv_1.bias.data.fill_(0.01)

conv_2 = nn.Conv1d(in_channels=300, out_channels=20, kernel_size=1)
conv_2.weight.data.fill_(0.01)
conv_2.bias.data.fill_(0.01)

x1 = torch.FloatTensor(np.ones((10, 1, 100000, 300)))
out1 = conv_1(x1).squeeze(3)

x2 = torch.FloatTensor(np.ones((10, 300, 100000)))
out2 = conv_2(x2)

torch.allclose(out1, out2, atol=1e-6)
>>> True

Then I measured the inference speed for conv_1 and conv_2 and got the following results:

[timing screenshots omitted: conv_1 runs roughly twice as fast as conv_2]

Can someone please explain this almost 2x performance degradation, and is this issue reproducible?

Config:
PyTorch==1.6.0 via pip
Operating System: Ubuntu 18.04.5 LTS
Kernel: Linux 4.15.0-123-generic
CPU: product: Intel® Core™ i5-7200U CPU @ 2.50GHz
st45108
Your input tensors are permuted differently (the 300-element vectors are either contiguous or scattered), so different strategies may be used to obtain the result: in the first case the mkldnn library does the inner loop, and in the second case AVX may be unusable.
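A quick way to see the layout difference is to look at the strides (a small sketch mirroring the shapes above):

import torch

x1 = torch.ones(10, 1, 100000, 300)  # each 300-element vector is contiguous
x2 = torch.ones(10, 300, 100000)     # channel values are 100000 elements apart

print(x1.stride())  # (30000000, 30000000, 300, 1): the 300-dim is dense
print(x2.stride())  # (30000000, 100000, 1): the channel dim has stride 100000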
st45109
I still don't understand; this is weird behavior. The second way is how Conv1d is "supposed" to be used, processing multichannel 1-d inputs; that's how the documentation proposes to use Conv1d. Until recently I didn't even know that Conv1d can handle 4-d inputs. So why is the "correct" way two times slower, or is it not the "correct" way and I'm missing something?
st45110
You shouldn't see such a difference on CUDA. conv_2 is faster for me (1.8.0a0 with OMP/MKL threading disabled). You may also see a different picture if you change 100000 -> 100. I've just seen a related PR: https://github.com/pytorch/pytorch/pull/48885. In general, performance and the best approach may vary a lot depending on shapes.
st45111
With the update to torch 1.7 I now get the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [24, 129, 1536]]

The error is created by this line:

return self._in_proj(query).chunk(3, dim=-1)

and can be (temporarily) fixed with:

return self._in_proj(query).unsafe_chunk(3, dim=-1)

My understanding is that unsafe_chunk will be removed in the future. Is there a "correct" fix for this? This isn't code I wrote, and I'm unsure of the proper way to get the same behavior as the original. Is this an appropriate replacement, or is there something more elegant?

proj = self._in_proj(query)
sz = proj.size()[2] // 3
return proj[:,:,:sz], proj[:,:,sz:2*sz], proj[:,:,2*sz:]
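For comparison, another possible replacement (a sketch; whether it resolves the autograd error depends on where the later in-place op happens) is to make the chunks independent copies, so that downstream in-place operations no longer touch views of the projection:

proj = self._in_proj(query)
# clone() copies each chunk into fresh storage (gradients still flow),
# so in-place ops on q/k/v no longer modify views of proj
q, k, v = [t.clone() for t in proj.chunk(3, dim=-1)]
return q, k, v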
st45112
Hi, I am trying to run the following network:

class CustomVGG16(torch.nn.Module):
    def __init__(self):
        super(CustomVGG16, self).__init__()
        self.vgg = torchvision.models.vgg16_bn(pretrained = True)
        self.vgg.classifier[-1] = torch.nn.Linear(4096, 25)
        self.softmax = torch.nn.Softmax()

    def forward(self, x):
        x = self.vgg(x)
        x = self.softmax(x)
        return x

with the following parameters:

criterion = torch.nn.BCEWithLogitsLoss()  # log loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005)

and my training loop is as follows:

def train(model, criterion, optimizer, train_loader, epoch=1, val_loader=None):
    accuracy = []
    val_accuracy = []
    train_loss = []
    model = model.to(device)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 10, gamma=0.1)
    for ep in range(epoch):
        running_loss = 0.0
        correct = 0
        total = 0
        start_time = time.time()
        for i, data in enumerate(train_loader, 0):
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(torch.argmax(outputs, axis=1).float(), labels.float())
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            correct += (outputs.argmax(1) == labels).float().sum()
            total += len(labels)
        accuracy_local = (correct / total) * 100
        accuracy_local = accuracy_local.data.cpu().numpy()
        train_loss.append(running_loss)
        accuracy.append(accuracy_local)
        val_acc, val_loss = valid(model, criterion, optimizer, val_loader)
        val_accuracy.append(val_acc)
        scheduler.step()
        print('EPOCH: {:} Accuracy: {:.2f}% Val_Accuracy: {:.2f}% Train_Loss: {:.2f} Validation_Loss: {:.2f} Time: {:.2f} seconds'.format(
            ep, accuracy_local, val_acc, running_loss, val_loss, time.time() - start_time))
    return accuracy, val_accuracy, train_loss, val_loss

and my validation loop:

def valid(model, criterion, optimizer, val_loader):
    running_loss = 0.0
    correct = 0
    total = 0
    for i, data in enumerate(val_loader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        loss = criterion(torch.argmax(outputs, axis=1).float(), labels.float())
        running_loss += loss.item()
        correct += (outputs.argmax(1) == labels).float().sum()
        total += len(labels)
    return ((correct / total) * 100).data.cpu().numpy(), running_loss

For BCEWithLogitsLoss, one-hot encoded outputs were not working and gave a shape error, so I changed the target to a 1D array, but I still get this error. I searched through previous answers on this error, but nothing helped. I also tried loss.requires_grad = True, but it didn't work either. What could be the problem here? Thanks.
st45113
Solved by ptrblck in post #4 Remove the softmax and the torch.argmax. Also, if your target is one-hot encoded, I assume you are dealing with a multi-class classification, so replace nn.BCEWithLogitsLoss with nn.CrossEntropyLoss.
st45114
You are detaching the computation graph by calling torch.argmax on the model output, as this operation is not differentiable:

torch.argmax(outputs, axis=1).float()

nn.BCEWithLogitsLoss expects logits as the model output and can be used for a multi-label classification (zero, one, or more classes can be active for each sample). This would also mean that you should remove the softmax operation in your model.
st45115
So what could be the solution? other than removing softmax, as I have to include it in my architecture
st45116
Remove the softmax and the torch.argmax. Also, if your target is one-hot encoded, I assume you are dealing with a multi-class classification, so replace nn.BCEWithLogitsLoss with nn.CrossEntropyLoss.
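Roughly, the changed parts of the training loop would then look like this (a sketch based on the code above, assuming the labels are one-hot encoded so the class indices can be recovered with argmax):

criterion = torch.nn.CrossEntropyLoss()

# inside the training loop:
outputs = model(inputs)             # raw logits; no softmax inside the model
targets = labels.argmax(dim=1)      # one-hot targets -> class indices (no grad needed here)
loss = criterion(outputs, targets)  # CrossEntropyLoss applies log-softmax internally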
st45117
Is it possible to install PyTorch with GPU support on macOS 10.13 (MacBook Pro Mid 2014, with NVIDIA GeForce GT 750M 2048 MB)? I installed CUDA 10.2 and Xcode 10.1, and followed the instructions to build from source. However, during the compilation, I get the following error:

.............
/Users/brentdehauwere/pytorch/aten/src/ATen/native/cuda/BatchLinearAlgebraLib.cu(80): error: more than one constructor applies to convert from "long" to "c10::Scalar":
    function "c10::Scalar::Scalar(uint8_t)"
    function "c10::Scalar::Scalar(int8_t)"
    function "c10::Scalar::Scalar(int16_t)"
    function "c10::Scalar::Scalar(int)"
    function "c10::Scalar::Scalar(int64_t)"
    function "c10::Scalar::Scalar(float)"
    function "c10::Scalar::Scalar(double)"
    detected during instantiation of "void at::native::apply_batched_inverse_lib<scalar_t>(at::Tensor &, at::Tensor &, at::Tensor &) [with scalar_t=c10::complex<float>]" (115): here

24 errors detected in the compilation of "/var/folders/tf/ytkxrs5n31qdrvz9_2prqkm00000gn/T//tmpxft_000059da_00000000-12_BatchLinearAlgebraLib.compute_75.cpp1.ii".
CMake Error at torch_cuda_generated_BatchLinearAlgebraLib.cu.o.Release.cmake:281 (message):
  Error generating file /Users/brentdehauwere/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_BatchLinearAlgebraLib.cu.o

And the compilation ends with this:

ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "setup.py", line 773, in <module>
    build_deps()
  File "setup.py", line 315, in build_deps
    build_caffe2(version=version,
  File "/Users/brentdehauwere/pytorch/tools/build_pytorch_libs.py", line 58, in build_caffe2
    cmake.build(my_env)
  File "/Users/brentdehauwere/pytorch/tools/setup_helpers/cmake.py", line 346, in build
    self.run(build_args, my_env)
  File "/Users/brentdehauwere/pytorch/tools/setup_helpers/cmake.py", line 141, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/Users/brentdehauwere/opt/anaconda3/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1.
st45118
Dear experts, is it possible to use Adam for one set of the model's parameters and SGD for another set, for instance? Thanks
st45119
Yes, you just need to set several optimizers passing the corresponding parameters and everything else is the same.
st45120
Well, so imagine I define

optim1 = SGD(set1 of model parameters)
optim2 = Adam(set2 of model parameters)

what about the schedulers, can I define two of them, like

sched1 = StepLR(optim1)
sched2 = ReduceLROnPlateau(optim2)

Thanks
st45121
Yes. Each optimizer has an attribute called param_groups: each group of parameters you pass to an optimizer will be taken as a group, and schedulers work over these groups.
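Putting the two answers together, a minimal sketch (the submodule names, num_epochs, and val_loss are hypothetical placeholders):

import torch

optim1 = torch.optim.SGD(model.encoder.parameters(), lr=0.1)
optim2 = torch.optim.Adam(model.decoder.parameters(), lr=1e-3)

sched1 = torch.optim.lr_scheduler.StepLR(optim1, step_size=10, gamma=0.1)
sched2 = torch.optim.lr_scheduler.ReduceLROnPlateau(optim2)

for epoch in range(num_epochs):
    optim1.zero_grad()
    optim2.zero_grad()
    # ... forward pass, compute loss, loss.backward() ...
    optim1.step()
    optim2.step()
    sched1.step()
    sched2.step(val_loss)  # ReduceLROnPlateau needs a metric to monitor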
st45122
I am new to training autoencoders and am trying to work with an unlabeled medical histopathology whole slide image (WSI) dataset, where I want to visualize possible clusters. The slide image contains a normal tissue region, background, and an abnormality region that I am interested in. I extract patches of fixed size (256x256) from the WSI (2400x1600) and train a convolutional autoencoder. After training, I extract the hidden representation (of dimension 32) and perform t-SNE on these feature vectors. But I am not getting any distinct clusters. A weird thing that I noticed while training is that my binary cross-entropy loss converges within just a few epochs, but the reconstructions are really bad; I don't understand why that is happening. To verify my training, I also tested my model on MNIST, and there it forms good clusters. I would appreciate any thoughts on this, plus any other suggestions on how good this approach is. Below are the training loss, validation loss, and reconstructed images.
st45123
Is there an efficient way to reshape a sparse tensor? I’m using pytorch 1.7 and am trying to use a sparse tensor where I’ve been using a dense tensor (which is extremely sparse). The code fails when it hits reshape as there is no implementation for a sparse tensor. Are there any plans to implement this?
st45124
Hello, I have two models that are supposed to be copies of each other, but perform differently. Look at this:

>>> repr(model1) == repr(model2)  # have the same structure
True
>>> for idx, (p1, p2) in enumerate(zip(model1.named_parameters(), model2.named_parameters())):
...     if not p1[0] == p2[0]:
...         print('different parameter order for idx {}'.format(idx))
...     if not torch.equal(p1[1].data, p2[1].data):
...         print('idx {} not equal'.format(idx))
# nothing is printed, which means they are the same
>>> evaluateModel(model1, test_loader, fastEvaluation=False)
0.8836
>>> evaluateModel(model2, test_loader, fastEvaluation=False)
0.8735

What could be the problem? They are instances of the same class, created with the same parameters. They have the same structure, and the only thing I do is change the weights of model1 in a certain way and then invert these changes. Point being, the two models have the same weights, so I expect them to perform identically. Why does this not happen? What could be the problem? What other fields should I check to make sure that the models are the same?

P.S. Note that evaluateModel automatically calls eval() on the input model, so this can't be a train vs eval mode difference.
st45125
I found the problem. It turns out that if you have batch normalization layers, you need to keep track of the running mean and the running variance. These values don’t show up in the model parameter list (they are not parameters) but are important at test time. They are present in the model state dict, though, which is how I found out they were different for the two models.
st45126
Yeah, you should use model.eval() when you evaluate the model's performance. Eval mode makes a difference when there are dropout or batch normalization layers.
st45127
Hi, I think you have misunderstood. I was already using model.eval() for both models; the issue was that, while the models' parameters were indeed the same, the running mean and running var values of their batch normalization layers were not. So when you have batch normalization layers, to determine whether two models are the same you can't just check the parameters of the model; you also have to check the running mean and running var of the batch normalization layers. Or, in general, just check the state_dict, which contains everything.
st45128
I also recently faced the same problem. As suggested by @antspy, the trick is to compare their state_dict(), and I found that all the running_mean and running_var params differ, hence causing the mismatch. Just in case anybody needs the code, here it is:

def compare_models(model_1, model_2):
    models_differ = 0
    for key_item_1, key_item_2 in zip(model_1.state_dict().items(), model_2.state_dict().items()):
        if torch.equal(key_item_1[1], key_item_2[1]):
            pass
        else:
            models_differ += 1
            if key_item_1[0] == key_item_2[0]:
                print('Mismatch found at', key_item_1[0])
            else:
                raise Exception
    if models_differ == 0:
        print('Models match perfectly! :)')
st45129
Just a little addition to also check whether the parameters are on the same device:

def compare_models(model_1, model_2):
    models_differ = 0
    for key_item_1, key_item_2 in zip(model_1.state_dict().items(), model_2.state_dict().items()):
        if key_item_1[1].device == key_item_2[1].device and torch.equal(key_item_1[1], key_item_2[1]):
            pass
        else:
            models_differ += 1
            if key_item_1[0] == key_item_2[0]:
                _device = f'device {key_item_1[1].device}, {key_item_2[1].device}' if key_item_1[1].device != key_item_2[1].device else ''
                print(f'Mismatch {_device} found at', key_item_1[0])
            else:
                raise Exception
    if models_differ == 0:
        print('Models match perfectly! :)')
st45130
I want to implement a simple form of multi-task learning. Let us say there are two tasks, A and B. I want to create a dataloader such that the batches alternate between these tasks, i.e. one batch should only contain samples from a single task.

The first approach that I am trying is to create a dataloader for each task in the usual way and then combine them using a MultitaskDataLoader. A POC implementation is as follows:

import random
import torch

class MultitaskDataLoader(torch.utils.data.DataLoader):
    def __init__(self, task_names, datasets):
        self.task_names = task_names
        self.lengths = [len(d) for d in datasets]
        self.iterators = [iter(d) for d in datasets]
        indices = [[i] * v for i, v in enumerate(self.lengths)]
        self.task_indices = sum(indices, [])

    def _reset(self):
        random.shuffle(self.task_indices)
        self.current_index = 0

    def __iter__(self):
        self._reset()
        return self

    def __len__(self):
        return sum(self.lengths)

    def __next__(self):
        if self.current_index < len(self.task_indices):
            task_index = self.task_indices[self.current_index]
            task_name = self.task_names[task_index]
            batch = next(self.iterators[task_index])
            new_batch = (batch, task_name)
            self.current_index += 1
            return new_batch
        else:
            raise StopIteration

task_names = ["A", "B"]
d1 = ['task-A'] * 5
d2 = ['task-B'] * 10
dl = MultitaskDataLoader(task_names, [d1, d2])

This works as expected, but it stops after every epoch: I have to create a new dataloader object when every epoch starts. We do not have to do that for the standard torch.utils.data.DataLoader, so why do I have to do it for this one? What should I change to make it work exactly like the standard one?
st45131
I actually found my mistake. I should be initializing the iterators in the __iter__ function, because that is what a for loop calls at the start of every pass. I will leave this question up to help others who might have a similar problem.
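For reference, a sketch of the fix (it assumes the datasets are also stored in __init__, e.g. as self.datasets):

def __iter__(self):
    # recreate the per-task iterators at the start of every epoch,
    # since a for loop calls __iter__ on each new pass over the loader
    self.iterators = [iter(d) for d in self.datasets]
    self._reset()
    return self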
st45132
Here is the situation: a customized DataLoader is used to load the train/val/test data. The model can be launched on a single GPU, but not on multiple GPUs.

class EncoderDecoder(torch.nn.Module):
    def forward(self, feats, masks, ...):
        clip_masks = self.clip_feature(masks, feats)
        ....

    def clip_feature(self, masks, feats):
        '''
        This function clips input features to pad them to the same dim.
        '''
        max_len = masks.data.long().sum(1).max()
        print('max_len:%d' % max_len)
        masks = masks[:, :max_len].contiguous()
        ....
        return masks
    ......

def train(opt):
    model = EncoderDecoder(opt)
    # setting-1
    cuda_model = model.cuda().train()
    # setting-2
    # cuda_model = torch.nn.DataParallel(model.cuda())
    cuda_model.train()
    torch.cuda.synchronize()
    ...

If I launch the model on a single GPU, marked as "setting-1", it works but lasts days. The tensors returned in clip_feature are as expected. The debug info is as follows:

masks.shape (150, 61)
EncoderDecoder clip_feature masks.shape in (150, 61)
masks.device:cuda:0
max_len:61
masks.shape clip_att (150, 61)
max_len:61
masks.size (150, 61)
att_mask.device cuda:0

If instead of running on a single GPU I use DataParallel, indicated as "setting-2", the results change somehow:

EncoderDecoder clip_feature masks.shape in (38, 61)
masks.device:cuda:0
masks.shape (38, 61)
EncoderDecoder clip_feature masks.shape in (38, 61)
masks.device:cuda:1
masks.shape (38, 61)
RelationTransformer clip_feature att_masks.shape in (38, 61)
masks.device:cuda:2
max_len:50
max_len:50

It later raises a runtime error for a multiplication I intend to have:

RuntimeError: The size of tensor a (61) must match the size of tensor b (60) at non-singleton dimension 3

I have no idea how this happens. The batched input is dispatched to different devices, but the results are totally different from the ones returned by a single GPU. I do not think it depends on the parallel dispatching across GPUs; maybe I missed some configuration for my model. The running environment is as follows (I tested it with different torch versions):

torch 0.4.1 / 1.4.0+cu100
torchvision 0.2.1 / 0.5.0+cu100
4 x Tesla V100-SXM2
Driver Version: 410.104
CUDA Version: 10.0

Hoping for any inputs to help me out. Thanks.
st45133
Which line of code is throwing this error? Could you add the device information to the max_len print, as I’m not sure where the 50 is coming from, since the masks are cropped to 61.
st45134
ptrblck, thanks for your inputs. Here again is the debug info before and after calling the clip_feature function when running on multiple GPUs:

EncoderDecoder clip_feature masks.shape in (38, 61)
masks.device:cuda:0
EncoderDecoder clip_feature masks.shape in (38, 61)
masks.device:cuda:1
EncoderDecoder clip_feature masks.shape in (38, 61)
masks.device:cuda:2
max_len:50
max_len.device: cuda:0
EncoderDecoder clip_feature masks.shape in (36, 61)
max_len:54
max_len.device: cuda:1
max_len:61
max_len.device: cuda:2
masks.device:cuda:3
max_len:51
max_len.device: cuda:3

It's actually a big jump to show where the error comes from, since there are lots of operations before the relation_geo_attention function is called. Since the whole snippet works on a single GPU, I don't think it's necessary to paste all the code here (hopefully I'm right :D). The trigger is that the dimension after clip_feature is totally different from the expectation (the one produced on a single GPU), so all the following operations in the transformer (the relation geometric attention function) are wrong. The error comes from the lines marked with ^^^^s:

def relation_geo_attention(query, key, value, box_embd_matrix, mask=None):
    N = value.size()[:2]
    dim_k = key.size(-1)
    dim_g = box_embd_matrix.size()[-1]

    w_q = query
    w_k = key.transpose(-2, -1)
    w_v = value
    w_g = box_embd_matrix

    # attention weights
    scaled_dot = torch.matmul(w_q, w_k)
    w_a = scaled_dot / np.sqrt(dim_k)
    if mask is not None:
        w_a = w_a.masked_fill(mask == 0, -1e9)
        ^^^^^^^^  #! RuntimeError occurs from here

    # calculating the relation between geometric and appearance features
    w_mn = torch.log(torch.clamp(w_g, min=1e-6)) + w_a
    ^^^^^^^^  #! RuntimeError possibly occurs here as well

I do not think it's a scatter problem across devices:

RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
....
....
w_a = w_a.masked_fill(mask == 0, -1e9)
RuntimeError: The size of tensor a (61) must match the size of tensor b (50) at non-singleton dimension 3

What really confuses me is why it works on a single device but raises errors when run on n GPUs. Thanks, Anakin
st45135
If the code runs on a single GPU, all the data is in one big batch, and the whole batch is fed to the 8-head attention model to calculate the relation between geometry and appearance separately, so all the dimensions are perfectly aligned. If the code is launched on multiple GPUs, each replica gets its own chunk of the batch, which of course results in different clipped dimensions for the different inputs. That's the problem, as I understand it, if my understanding is correct. Still looking for a solution.
st45136
Could you add an assert statement and check that all tensors in your relation_geo_attention function are on the same device? nn.DataParallel will split the data tensors in dim0 and send each chunk to the corresponding device, i.e. if the first chunk has a batch size of 51, all other tensors passed to the forward method will have the same batch size. Also, make sure to use the forward method, as nn.DataParallel uses this method to split the data. If you are using a custom function as model.my_fun(data), you would have to take care of the splitting yourself.
st45137
Thanks for your reply again, ptrblck. I think you probably got confused by my description. So far, the data is correctly split into the chunks [38 x 61], [38 x 61], [38 x 61], [36 x 61]. The input length is what triggers the error. Concretely, I have a relation model that calculates the relation between geometry and appearance. The relation models are organized in a ModuleList in cascade fashion: the attentions are fed to the first module in the ModuleList, and its output is then fed to the following modules. If the data stays on a single device, all the relation models see the same, aligned dimension and it works. If the data is chunked and scattered across 4 devices, the features have different lengths even on a single device, e.g. device 0, as the debug info below suggests.

if mask is not None:
    print('mask.size', mask.size())
    print('mask.device:\t%s' % mask.device)
    print('w_a.size', w_a.size())
    print('w_a.device:\t%s' % w_a.device)
    assert query.device == key.device, 'query and key are not on the same device'
    assert value.device == key.device, 'value and key are not on the same device'
    assert query.device == box_relation_embds_matrix.device, 'query and box are not on the same device'
    assert query.device == mask.device, 'query and mask are not on the same device'
    assert w_a.device == mask.device, 'w_a and mask are not on the same device'
    w_a = w_a.masked_fill(mask == 0, -1e9)

All the assertions hold true. Here are the corresponding debug outputs (just for simplicity and convenience, I used two devices):

### clip_feature
EncoderDecoder clip_feature masks.shape in torch.Size([75, 61])
masks.device:cuda:0
EncoderDecoder clip_feature masks.shape in torch.Size([75, 61])
masks.device:cuda:1
max_len:54
max_len.device: cuda:0
max_len:61
max_len.device: cuda:1

### got padded somewhere later
padded.size torch.Size([75, 61, 512])
padded.size torch.Size([75, 54, 512])

### the first relation model info
mask.size torch.Size([75, 1, 1, 61])
mask.device: cuda:1
w_a.size torch.Size([75, 8, 61, 61])
w_a.device: cuda:1

### the second relation model info
mask.size torch.Size([75, 1, 1, 61])  ^^^^^^^^^^^^^^
mask.device: cuda:0
w_a.size torch.Size([75, 8, 54, 54])  ^^^^^^^^^^^^^^
w_a.device: cuda:0

As my last reply shows, the problem comes from device 0:

RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):

The error occurs at this line:

w_a = w_a.masked_fill(mask == 0, -1e9)

For comparison purposes, I post the log of a single device:

# clip_feature
EncoderDecoder clip_feature masks.shape in torch.Size([150, 61])
masks.device:cuda:0
max_len:61
max_len.device: cuda:0
padded.size torch.Size([150, 61, 512])

### the first relation model info
mask.size torch.Size([150, 1, 1, 61])
mask.device: cuda:0
w_a.size torch.Size([150, 8, 61, 61])
w_a.device: cuda:0

### the same for the other relation models

It seems like a coding problem rather than a scattering problem, as far as I can tell. I'll keep digging. Any inputs will be appreciated.
st45138
Yeah, I see that the batch dimension seems to be alright in your first output ([38, 61] ...). However, why is the mask size [75, 61]? If the batch chunks have a batch size of 38 or 36, I'm not sure why your masks suddenly have a non-matching shape.
st45139
As I stated there ("just for simplicity and convenience, I used two devices"), I tested it on only two devices. That's why it shows [75, 61]; with four devices the batch would be split into four chunks. Thanks for your reply, ptrblck.
st45140
Hello, I am having a similar issue. My code runs fine on single GPUs and on multiple GPUs on a server. Recently I made my own multi-GPU setup, but I get this issue. Have you found the solution, @Anakin?
st45141
@Anakin @abhidipbhattacharyya I have met a similar issue: when implementing multi-head attention, I need to use torch.masked_fill(), and the code runs fine on a single GPU but gets a dimension mismatch error on multiple GPUs. In my situation, I found that the first dimension of the mask I pass to the forward method is not batch_size, but DataParallel splits tensors on dimension 0 by default, causing the mismatch problem. So what we need to do is add a batch_size dimension to the mask, using mask = mask.unsqueeze(0) before passing the mask to the model; after the split, all devices will get the same mask because the batch_size is 1 here. You also need to remember that your mask already has a batch_size dimension, so don't add this dimension again in the self-attention logic. To sum up, ensure all tensors you pass to the DataParallel module have a batch_size dimension.
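In code, the idea described above looks roughly like this (a sketch; parallel_model and x are hypothetical names):

# before calling the DataParallel-wrapped model:
mask = mask.unsqueeze(0)            # add a leading dim that DataParallel treats as the batch dim
out = parallel_model(x, mask=mask)  # DataParallel splits its inputs along dim 0

# inside the attention logic, remember that the mask
# already carries that extra leading dimension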
st45142
Is it possible to use a linear layer (with the same input and output size) in-place? I don’t care about the gradients (torch.no_grad() is enabled). I want to use as little memory as possible because I’m querying the network many thousands of times per batch item (working with 3D point clouds).
st45143
I created a method to do this by looking at the torch.nn.functional.linear function:

def linear_inplace(layer, v):
    return torch.addmm(layer.bias, v, layer.weight.t(), out=v)

However, for some reason my model's layers are not moved to the GPU if I use this method instead of the standard __call__ method of nn.Linear. I get the following error:

RuntimeError: Tensor for argument #3 'mat2' is on CPU, but expected it to be on GPU (while checking arguments for addmm)

I know my cuda device is properly selected because it works with layer(v), but not with my linear_inplace(layer, v). Can someone help me understand what's going on here?
st45144
Hi everyone, I am doing a project on cat breed classification. As I am doing data augmentation, the number of images is doubled, but the labels do not match the augmented images. Here is my code:

[screenshot of the dataset/augmentation code omitted]
st45145
The augmented images do not have correct labels:

[screenshot showing the mislabeled augmented images omitted]
st45146
The number of images won't change, since the transformations are applied on the fly in the __getitem__ method, so I would recommend rechecking whether the image folders contain unexpected files.
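To make this concrete, a minimal sketch of how a transform is applied per sample inside __getitem__ (the names are hypothetical):

from torch.utils.data import Dataset

class CatDataset(Dataset):
    def __init__(self, images, labels, transform=None):
        self.images, self.labels, self.transform = images, labels, transform

    def __len__(self):
        return len(self.images)        # unchanged by augmentation

    def __getitem__(self, idx):
        img = self.images[idx]
        if self.transform is not None:
            img = self.transform(img)  # augmentation happens here, on the fly
        return img, self.labels[idx]   # the label stays paired with its image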
st45147
I am converting y (shape [1,6]) and x (shape [1,64]) to float type respectively, but I get an error. Is the conversion method wrong?

Error message:

Traceback (most recent call last):
  File "C:/Users/name/Desktop/myo-python-1.0.4/bindsnet-master/bindsnet/nextrsnn.py", line 95, in <module>
    optimizer.zero_grad(); output = model(s)
  File "C:\Python36\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:/Users/name/Desktop/myo-python-1.0.4/bindsnet-master/bindsnet/nextrsnn.py", line 15, in forward
    return self.linear(x)
  File "C:\Python36\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Python36\lib\site-packages\torch\nn\modules\linear.py", line 91, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Python36\lib\site-packages\torch\nn\functional.py", line 1674, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'mat1' in call to _th_addmm

Process finished with exit code 1

Code:

import torch
import torch.nn as nn
import numpy as np
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection
from bindsnet.network.monitors import Monitor

# defining the model
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        return self.linear(x)

# building the network
time = 500
network = Network(dt=1.0)
Batch_size = 300

# x (shape=[1,64]), y (shape=[1,6])
inpt = Input(n=64, sum_input=True)
middle = LIFNodes(n=40, trace=True, sum_input=True)
center = LIFNodes(n=40, trace=True, sum_input=True)
final = LIFNodes(n=40, trace=True, sum_input=True)
out = LIFNodes(n=6, sum_input=True)

# connections between the layers
inpt_middle = Connection(source=inpt, target=middle, wmin=0, wmax=1e-1)
middle_center = Connection(source=middle, target=center, wmin=0, wmax=1e-1)
center_final = Connection(source=center, target=final, wmin=0, wmax=1e-1)
final_out = Connection(source=final, target=out, wmin=0, wmax=1e-1)

# connecting all layers to the network
network.add_layer(inpt, name='A')
network.add_layer(middle, name='B')
network.add_layer(center, name='C')
network.add_layer(final, name='D')
network.add_layer(out, name='E')

forward_connection = Connection(source=inpt, target=middle, w=0.05 + 0.1*torch.randn(inpt.n, middle.n))
network.add_connection(connection=forward_connection, source="A", target="B")
forward_connection = Connection(source=middle, target=center, w=0.05 + 0.1*torch.randn(middle.n, center.n))
network.add_connection(connection=forward_connection, source="B", target="C")
forward_connection = Connection(source=center, target=final, w=0.05 + 0.1*torch.randn(center.n, final.n))
network.add_connection(connection=forward_connection, source="C", target="D")
forward_connection = Connection(source=final, target=out, w=0.05 + 0.1*torch.randn(final.n, out.n))
network.add_connection(connection=forward_connection, source="D", target="E")
recurrent_connection = Connection(source=out, target=out, w=0.025*(torch.eye(out.n)-1),)
network.add_connection(connection=recurrent_connection, source="E", target="E")

# monitoring the layers
inpt_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500,)
middle_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500,)
center_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500,)
final_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500,)
out_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500,)

# connecting the monitors to the network
network.add_monitor(monitor=inpt_monitor, name="A")
network.add_monitor(monitor=middle_monitor, name="B")
network.add_monitor(monitor=center_monitor, name="C")
network.add_monitor(monitor=final_monitor, name="D")
network.add_monitor(monitor=out_monitor, name="E")

for l in network.layers:
    m = Monitor(network.layers[l], state_vars=['s'], time=time)
    network.add_monitor(m, name=l)

npzfile = np.load("C:/Users/name/Desktop/myo-python-1.0.4/myo-armband-nn-master/data/train_set.npz")
x = npzfile['x']
y = npzfile['y']

# converting from numpy to float tensors
x = torch.from_numpy(x).float()
y = torch.from_numpy(y).float()

training_pairs = []
for i, (x, y) in enumerate(zip(x.view(-1, 64), y)):
    inputs = {'A': x.repeat(time, 1), 'E_b': torch.ones(time, 1)}
    network.run(inputs=inputs, time=time)
    training_pairs.append([network.monitors['E'].get('s').sum(-1), y])
    network.reset_state_variables()
    if (i + 1) % 50 == 0: print('Train progress: (%d / 500)' % (i + 1))
    if (i + 1) == 500: print(); break

model = LogisticRegression(40, 6); criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# training on the spikes and y
for epoch in range(6):
    for i, (s, y) in enumerate(training_pairs):
        optimizer.zero_grad(); output = model(s)
        loss = criterion(output.softmax(0).view(1, -1), y.squeeze(0).long())
        loss.backward(); optimizer.step()

test_pairs = []
for i, (x, y) in enumerate(zip(x.view(-1, 64), y)):
    network.run(inputs=inputs, time=time)
    test_pairs.append([network.monitors['E'].get('s').sum(-1), y])
    network.reset_state_variables()
    if (i + 1) % 50 == 0: print('Test progress: (%d / 500)' % (i + 1))
    if (i + 1) == 500: print(); break

correct, total = 0, 0
for s, y in test_pairs:
    output = model(s); _, predicted = torch.max(output.data.unsqueeze(0), 1)
    total += 1; correct += int(predicted == y.long())

print('Accuracy of logistic regression on 500 test examples: %2f %%\n ' % (100 * correct / total))

torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
}, "C:/Users/name/Desktop/myo-python-1.0.4/bindsnet-master/bindsnet/pytorchsession")
st45148
Solved by ptrblck in post #4 I’m not sure where this error is coming from as your model definition isn’t posted. Check the complete stack trace and see, if you can spot the line of code which raises the error. Based on the posted error message e.g. a matrix multiplication might fail with the shape mismatch e.g. in a linear la…
st45149
Based on the error message it seems that either the input or the model parameters are LongTensors. You would have to make sure both are FloatTensors by calling s.float() or model.float() before executing the forward pass.
st45150
That was exactly it. Rewriting it as output = model(s.float()) resolved this error. However, I now get the following error. Where does the 500x1 in this error come from? I am very sorry to ask so many questions.

RuntimeError: size mismatch, m1: [500 x 1], m2: [40 x 6] at ..\aten\src\TH/generic/THTensorMath.cpp:41
st45151
I’m not sure where this error is coming from as your model definition isn’t posted. Check the complete stack trace and see, if you can spot the line of code which raises the error. Based on the posted error message e.g. a matrix multiplication might fail with the shape mismatch e.g. in a linear layer.
st45152
class text_CNN(nn.Module):
    def __init__(self):
        super(text_CNN, self).__init__()
        self.conv1 = nn.Conv1d(in_channels=1, out_channels=10, kernel_size=2)

    def forward(self, x):
        print(x.type())
        x = F.max_pool1d(F.relu(self.conv1(x)), 2)
        return x

model = text_CNN()
x = torch.randint(2, (16, 1, 22))
model(x)

Output:

torch.LongTensor
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-27-f0bfac495e81> in <module>()
     11 model = text_CNN()
     12 x = torch.randint(2, (16, 1, 22))
---> 13 model(x)

3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729             _global_forward_hooks.values(),

<ipython-input-27-f0bfac495e81> in forward(self, x)
      6     def forward(self, x):
      7         print(x.type())
----> 8         x = F.max_pool1d(F.relu(self.conv1(x)), 2)
      9         return x

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    257                             _single(0), self.dilation, self.groups)
    258         return F.conv1d(input, self.weight, self.bias, self.stride,
--> 259                         self.padding, self.dilation, self.groups)

RuntimeError: expected scalar type Long but found Float

However, when I change the type to a float:

x = torch.randint(2, (16, 1, 22), dtype=torch.float)

It works! Why is this?
st45153
Solved by ptrblck in post #4 The error message might be confusing as the data type mismatch points to one of the used tensors. Based on the error it seems that the first tensor (input in this case) is used as the expected type.
st45154
PyTorch expects the input tensor and model parameters to have the same dtype and thus raises the error. torch.randint returns a LongTensor, while the model parameters are initialized as FloatTensors, which is why you need to change the input to torch.float.
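For example, either of these would match the model's float32 parameters:

x = torch.randint(2, (16, 1, 22)).float()             # cast after creation
x = torch.randint(2, (16, 1, 22), dtype=torch.float)  # or create it as float directly
print(next(model.parameters()).dtype)                 # torch.float32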
st45155
I see, but why is the error message stating that it is expecting scalar type long, while it is actually expecting a type float?
st45156
The error message might be confusing as the data type mismatch points to one of the used tensors. Based on the error it seems that the first tensor (input in this case) is used as the expected type.
st45157
Hi, looking at the extension-cpp example, I am wondering why you allocate memory for the output tensors each time you call forward or backward. Sure, for the forward case you could provide an output tensor, but for the backward case this is not possible without ugly hacks. Does PyTorch internally take care that old output tensors get reused, or is this not a big issue for performance? My forward benchmark is 1.5% faster without reallocating a new tensor.
st45158
Solved by ptrblck in post #2 PyTorch uses a caching allocator and reused already allocated memory. I think the code tries to focus on readability of the code and as you said, these small perf. gains are sometimes not possible without bad hacks.
st45159
PyTorch uses a caching allocator and reuses already allocated memory. I think the code tries to focus on readability and, as you said, these small perf. gains are sometimes not possible without ugly hacks.
st45160
Running the imagenet example on a single-node, 4-GPU setup calls 3 NCCL AllReduce ops per mini-batch for gradient synchronization, with sizes 2052000, 28852224, and 15853824 bytes. I assumed that each op would follow the bucket_cap_mb limit, i.e. none of the AllReduces would be larger than, say, 25 MB (the default). Am I missing something?
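For reference, the cap in question is the one passed at DDP construction (a sketch; it assumes the process group is already initialized, and model/local_rank are placeholders):

from torch.nn.parallel import DistributedDataParallel as DDP

ddp_model = DDP(
    model,
    device_ids=[local_rank],
    bucket_cap_mb=25,  # the default bucket size limit being discussed
)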
st45161
[Keras model summary screenshot omitted; it shows dropout_4 with output shape (None, 20, 256) followed by an lstm layer with output shape (None, 256)]

May I ask how you go from (None, 20, 256) at the dropout_4 layer to (None, 256) at the lstm layer? I'm trying to rewrite this network in PyTorch but keep getting size mismatch errors. The input that I used for the Keras model has shape (128, 20, 108) and the output has shape (128, 108). Input[i,:,:] is a collection of 20 one-hot-encoded vectors indicating the positions of musical notes. Output[i,:] is also a one-hot-encoded vector that indicates the notes at the 21st position.
st45162
Hi! I am not an expert here, but I will try to answer. The picture you have attached is from a Keras model, right? Refer to this stackoverflow link. I don't use Keras, but from reading the docs it seems that using return_sequences=True will return the hidden state output for each input time step. For this, you may use an LSTMCell, maybe with a for loop, to collect all the timesteps (slow, I guess; somebody correct me), or you can just use the output provided by the LSTM (see the docs for nn.LSTM):

Outputs: output, (h_n, c_n)

output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.

h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len.

c_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t = seq_len.

So before lstm_5, it seems the Keras network uses all the hidden time steps. After that, it only passes the latest hidden state through the network. So you can just take the hidden state using nn.LSTM.
st45163
Also, typically people use LSTMCell in seq2seq models when they have to do some manipulation on the individual hidden states, cell states etc. Then, one feeds these to obtain the hidden,cell states for the next time step.
st45164
I have not used Keras. Is it possible that lstm_5 is doing something beyond a simple LSTM? As @SANKALP_SHUBHAM suggests, it may simply be taking the output at the last time step. Indeed, the Keras documentation says just that:

return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.

(From the Keras LSTM documentation.) If you wanted to replicate the above exactly in PyTorch, you could do either of the following with output, hidden = yourLSTM(...):

1) take output[19] (your last output), or
2) take hidden.

The last hidden state should correspond to the last output, as output simply stores the hidden states at each time step; see the sketch after this post.

I am not sure that this is what you should be doing, though: you would be throwing away the hidden states of all your previous timesteps, which in this case correspond to the previous 19 vectors. Instead, I would suggest taking the output of your LSTM, of shape [seq_len, batch_size, hidden_size], reshaping it with torch.transpose() so that the batch dimension comes first, i.e. [batch_size, seq_len, hidden_size], and passing the reshaped data to an nn.Linear() layer with 256 neurons plus your activation function of choice (those two together are the same as Keras' Dense).
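A sketch of the last-time-step variant in PyTorch, with dimensions following the Keras summary above (batch_first=True keeps the batch dimension first):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=108, hidden_size=256, batch_first=True)
fc = nn.Linear(256, 108)

x = torch.randn(128, 20, 108)  # (batch, seq_len, features)
output, (h_n, c_n) = lstm(x)   # output: (128, 20, 256), one vector per time step
last = output[:, -1, :]        # keep only the final time step -> (128, 256)
logits = fc(last)              # (128, 108)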
st45165
Hi, I'm trying to solve a multi-label problem. I have an input tensor of around 400 x 2000 values; the 2000 values are zeros and ones, but on average each vector has only 10 of the 2000 values set to one, the rest being zeros. A one should have more importance than the zeros. I standardize the values with a mean-square algorithm, so this is my first question: is this good in that case?

I also have output tensors of size 60 classes, which are not mutually exclusive. There are always 10 classes set to one and the others zero. This is my network:

network = torch.nn.Sequential(
    torch.nn.Linear(len(self.getVector()), 250),
    torch.nn.ReLU(),
    torch.nn.Linear(250, 150),
    torch.nn.ReLU(),
    torch.nn.Linear(150, 60),
)

loss_function = torch.nn.MultiLabelSoftMarginLoss()
optimizer = torch.optim.Adam(network.parameters(), lr=0.0007)

network.train()
for i in range(500):
    predicted_value = network(test_input_tensor)
    loss = loss_function(predicted_value, test_output_tensor)
    print(i, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

network.eval()
output = network(prognostic_input_tensor)

As I do not have much experience in machine learning, I want to know if you have some advice and whether this is a good approach for a multi-label problem with the features mentioned above. It seems to me that it predicts a lot of negative values, which I don't understand.
st45166
Hi Log!

logarith:

    I also have output tensors of size 60 classes, which are not mutually exclusive. There are always 10 classes set to one and the others zero.

    network = torch.nn.Sequential(
        ...
        torch.nn.Linear(150, 60),
    )
    loss_function = torch.nn.MultiLabelSoftMarginLoss()

    It seems to me that it predicts a lot of negative values, which I don't understand.

Because the output of your model is the output of your last Linear layer, you are predicting raw-score logits. A logit value less than zero corresponds to a predicted probability of less than one half. When interpreted as a hard yes-no prediction, a probability of less than one half for the "1" state would typically be taken as a "0"-state prediction (and greater than one half as a "1"-state prediction).

Many more of your output-tensor target values are 0s than are 1s, so if you weight each individual target value equally in your loss function, your model can train to do a good job on the loss simply by preferentially predicting 0s (that is, predicting negative logits), regardless of the input data.

The common approach to addressing this is to weight your less-frequent 1 target values more heavily in your loss function. Note that BCEWithLogitsLoss is essentially the same as MultiLabelSoftMarginLoss, but has a pos_weight argument that you can pass to its constructor.

You say that you have 60 classes, and that any given sample target has 10 classes in the 1 state and 50 in the 0 state. If all of your classes are about equally likely to be in the 1 state, you could use the same pos_weight for all of them. A reasonable value would be pos_weight = n_negative / n_positive. So:

loss_function = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([5.0]))

If the likelihoods of your different classes having target value 1 are not all broadly similar, then you would pass in a tensor of length 60 for pos_weight, that is, a different pos_weight value for each class.

Best.

K. Frank
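For instance, a sketch of the per-class variant (targets is assumed to be a float tensor of shape [n_samples, 60] containing 0s and 1s):

n_positive = targets.sum(dim=0)                    # per-class count of 1s
n_negative = targets.shape[0] - n_positive         # per-class count of 0s
pos_weight = n_negative / n_positive.clamp(min=1)  # avoid division by zero

loss_function = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)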
st45167
Thanks for the answer! But for the input tensors, do I have to normalize them before passing them to the network, or can I input tensors consisting of ones and zeros directly?