st45868
|
Solved by Kushaj in post #4
For GPU, forget about multiprocessing. It is a very tedious task.
For CPU you can use torch.multiprocessing.
|
st45869
|
class MyModel(nn.Module):
    def __init__(self, model1, model2, model3):
        super().__init__()
        # You could also pass a list of models
        # rather than separate arguments
        self.model1 = model1
        self.model2 = model2
        self.model3 = model3

    def forward(self, x):
        out1 = self.model1(x)
        out2 = self.model2(x)
        out3 = self.model3(x)
        return out1, out2, out3
This method involves moving all the models to the GPU, so GPU memory can become the bottleneck, but you will only have one set of inputs.
You can also run inference on the CPU, which may be faster if your GPU memory is small.
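A quick usage sketch of the wrapper above, with placeholder submodels (the real model1..model3 come from your own code):
import torch
import torch.nn as nn

# Placeholder submodels for illustration only
model1, model2, model3 = (nn.Linear(8, 2) for _ in range(3))
combined = MyModel(model1, model2, model3)
x = torch.randn(4, 8)
with torch.no_grad():
    out1, out2, out3 = combined(x)  # one input, three outputs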
|
st45870
|
Thanks for your suggestion. Do you know whether PyTorch runs some optimization in the background that would parallelize this?
Reading it, and not knowing PyTorch's background optimizations perfectly, I fear it would eventually run the models in sequence (like a for loop would) rather than in parallel.
The goal is to have time_parallel(model 1, …, model k) << time_unary(model 1) + … + time_unary(model k).
|
st45871
|
For GPU, forget about multiprocessing. It is a very tedious task.
For CPU you can use torch.multiprocessing.
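A minimal CPU-only sketch of that suggestion, assuming small picklable models; one process per model, with shared weights (this is an illustration, not the only pattern):
import torch
import torch.nn as nn
import torch.multiprocessing as mp

def run(model, x, results, idx):
    # Each process runs one model's inference on the same input
    with torch.no_grad():
        results[idx] = model(x)

if __name__ == '__main__':
    models = [nn.Linear(10, 10) for _ in range(3)]
    for m in models:
        m.share_memory()  # share weights across processes
    x = torch.randn(4, 10)
    with mp.Manager() as manager:
        results = manager.dict()
        procs = [mp.Process(target=run, args=(m, x, results, i))
                 for i, m in enumerate(models)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        outs = [results[i] for i in range(3)]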
|
st45872
|
Hello Kushaj,
the problem is that this solution runs the 3 models sequentially, not in parallel
|
st45873
|
Hi,
For my code, I need to access _backend.SpatialFullConvolution_updateOutput. In an earlier version of PyTorch it works fine, but in a newer version ('1.3.0.dev20190814') I get the following error:
_backend.SpatialFullConvolution_updateOutput(
File "/conda-envs/pytorch_tensorflow/lib/python3.6/site-packages/torch/_thnn/utils.py", line 27, in __getattr__
raise NotImplementedError
Is this behavior expected?
Thanks,
Tahereh
|
st45874
|
Hi,
I am trying to access backward path parameters e.g. weights, like what is used for here:
github.com
willwx/sign-symmetry/blob/master/functional/af_conv2d_function.py
"""
A conv2d autograd Function that supports different feedforward and feedback weights
Uses the same C functions used in the legacy SpatialConvolution module to implement actual computation
References:
- https://github.com/L0SG/feedback-alignment-pytorch/blob/master/lib/fa_linear.py
- https://pytorch.org/docs/master/notes/extending.html
- torch/legacy/nn/SpatialConvolution.py
"""
import torch.autograd as autograd
from torch._thnn import type2backend

class AsymmetricFeedbackConv2dFunc(autograd.Function):
    @staticmethod
    def forward(context, input, weight, weight_feedback, bias, stride, padding):
        _backend = type2backend[input.type()]
        input = input.contiguous()
        output = input.new()
[file truncated in the preview]
I was wondering if there is a substitute for this backend in the new version?
Thanks,
Tahereh
|
st45875
|
I’m not 100% sure what your code is doing, but it looks like you could probably use a backward hook to achieve the same effect. “SpatialConvolutionMM” is the same as “thnn_conv2d” I believe.
|
st45876
|
Thanks for your reply.
A backward hook only gives me the gradients computed at each layer. I want to write my own backward function that does not assume backpropagation (symmetric weights). That's why access to _backend.SpatialConvolutionMM_updateGradInput
is important for my code. I would like access to such a function in the new PyTorch, if possible.
|
st45877
|
We don't expose a function that does that to Python anymore. It's still available in C++ (though not guaranteed to continue to exist); it should be thnn_backward.
I’d probably just write your own version for this case, though, if this is the only updateGradInput variant you need.
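For reference, a hand-rolled sketch of such a variant in the current Python API, assuming torch.nn.grad is acceptable and that bias is a tensor; the feedback weight is used for the input gradient, as in the sign-symmetry code above. This is an illustration, not the library's own implementation:
import torch
import torch.nn.functional as F
from torch.autograd import Function

class AsymmetricConv2d(Function):
    @staticmethod
    def forward(ctx, input, weight, weight_feedback, bias, stride, padding):
        ctx.save_for_backward(input, weight, weight_feedback)
        ctx.stride, ctx.padding = stride, padding
        return F.conv2d(input, weight, bias, stride, padding)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, weight_feedback = ctx.saved_tensors
        # updateGradInput equivalent, but computed with the *feedback* weight
        grad_input = torch.nn.grad.conv2d_input(
            input.shape, weight_feedback, grad_output, ctx.stride, ctx.padding)
        # accGradParameters equivalent
        grad_weight = torch.nn.grad.conv2d_weight(
            input, weight.shape, grad_output, ctx.stride, ctx.padding)
        grad_bias = grad_output.sum(dim=(0, 2, 3))
        return grad_input, grad_weight, None, grad_bias, None, None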
|
st45878
|
Hi, sorry to bring this up, but may I ask how you access either torch._thnn.type2backend or SpatialConvolutionMM in PyTorch > 1.5 nowadays? I am facing the same issue here…
Thank you very much.
|
st45879
|
Hi! Unfortunately, I don’t have access to torch._thnn.type2backend in the newer pytorch. I had to find other ways.
|
st45880
|
In an RNN you can use the last hidden state as a vector representation of a sequence. I was wondering if there is any similar idea in the transformer architecture?
Background:
I am building a model that deals with protein data. At one stage I wish to include raw protein sequence information in the model. For this I need to encode a sequence into a fixed-length vector that contains information about the sequence at the global scope. This vector will then be concatenated to some other vector and used in downstream processing.
|
st45881
|
Solved by mortazavi in post #6
The suggestion here
which has many of the same issues as the task suggested here, implies that one may first have to reduce one's sequence lengths – perhaps through convolutions with a large stride – before feeding the resulting (shorter) sequences to the transformer layers.
One or more…
|
st45882
|
You can define your own way (e.g. average-pool all token vectors) or simply use the BERT default setup: the [CLS] token's output vector.
jalammar.github.io
A Visual Guide to Using BERT for the First Time
Translations: Chinese, Russian
Progress has been rapidly accelerating in machine learning models that process language over the last couple of years. This progress has left the research lab and started powering some of the leading digital...
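A small sketch of both options with a plain nn.TransformerEncoder (the dimensions are arbitrary assumptions):
import torch
import torch.nn as nn

d_model, batch = 128, 4
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8)
encoder = nn.TransformerEncoder(layer, num_layers=2)

tokens = torch.randn(20, batch, d_model)      # (seq_len, batch, d_model)

# Option 1: average-pool all token vectors
seq_vec_mean = encoder(tokens).mean(dim=0)    # (batch, d_model)

# Option 2: BERT-style -- prepend a learned [CLS] embedding and
# read off its output position as the sequence vector
cls = nn.Parameter(torch.randn(1, 1, d_model))
with_cls = torch.cat([cls.expand(-1, batch, -1), tokens], dim=0)
seq_vec_cls = encoder(with_cls)[0]            # (batch, d_model)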
|
st45883
|
OK, interesting. So if I understand correctly, the model will learn to generate this artificial vector ([CLS]) in such a way that it will be useful for solving the task at hand? In the linked post this would be text classification.
|
st45884
|
People always say "look at BERT", but what if one wants to build one's own sequence-to-vector encoder? Most tutorials on BERT are limited to uses within machine translation (which is a sequence-to-sequence task), and they spend an enormous amount of time on just setting up the dataset and the tokenizer and the mask and a single example configuration of BERT . . . As a guide this is entirely inadequate when it comes to studying sequence-to-vector encodings of the type ErikJ has written about . . . Even a simple sequence-to-sequence example that just talks about the use of the basic APIs and how to investigate various configurations would be useful . . . In other words, the tutorials need to get rid of all the data-preparation talk and just work on artificial tensor sequences . . . It is not hard to build artificial sequences for experimentation purposes and for showing the use of the transformer APIs . . . Not all examples need to be about text sequences . . .
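In that spirit, a bare-bones sketch of the transformer APIs on artificial tensor sequences, with no dataset or tokenizer involved (all sizes are arbitrary):
import torch
import torch.nn as nn

d_model = 64
model = nn.Transformer(d_model=d_model, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2)

src = torch.randn(10, 32, d_model)  # (source_len, batch, d_model)
tgt = torch.randn(7, 32, d_model)   # (target_len, batch, d_model)
out = model(src, tgt)               # (target_len, batch, d_model)

# Encoder-only variant for sequence-to-vector experiments:
enc = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model, nhead=4),
                            num_layers=2)
seq_vec = enc(src).mean(dim=0)      # (batch, d_model)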
|
st45885
|
The suggestion here
desh2608.github.io
The Challenges of using Transformers in ASR
Since mid 2018 and throughout 2019, one of the most important directions of research in speech recognition has been the use of self-attention networks and transformers, as evident from the numerous papers exploring the subject. In this post, I try to...
which has many of the same issues as the task suggested here, implies that one may first have to reduce one's sequence lengths – perhaps through convolutions with a large stride – before feeding the resulting (shorter) sequences to the transformer layers.
One or more convolution layers before the transformer layer (in problems such as the one mentioned where sequences are long and where we’re searching for full “sentence” encoding into a vector) can also improve the overall positional encoding of the full model.
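A sketch of that idea under assumed dimensions: a strided Conv1d shortens the sequence before it reaches the transformer layers.
import torch
import torch.nn as nn

class ConvTransformerEncoder(nn.Module):
    def __init__(self, in_channels=20, d_model=128, stride=4):
        super().__init__()
        # A large stride reduces the sequence length by roughly `stride`
        self.conv = nn.Conv1d(in_channels, d_model,
                              kernel_size=stride * 2, stride=stride)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):            # x: (batch, in_channels, seq_len)
        x = self.conv(x)             # (batch, d_model, ~seq_len // stride)
        x = x.permute(2, 0, 1)       # (short_len, batch, d_model)
        return self.encoder(x)

enc = ConvTransformerEncoder()
out = enc(torch.randn(2, 20, 1024))  # sequence shortened before attention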
|
st45886
|
I’ve got a use case of the following form:
import torch
tensor = torch.tensor([False, True, False, False, True])
indices = tensor.cummax(dim=0).indices
print(indices) # tensor([0, 1, 1, 1, 4])
in which I am relying on the fact that (at least based on my tests) cummax.indices always gives the last occurrence of a maximum. The documentation leaves unspecified what the indices are when there are multiple maxima – i.e. that last 4 could equally well be a 1.
Just in case this changes in future versions of PyTorch, I'm adding a test to my library that this behaviour remains consistent. How worried should I be about that possibility? Can it be added to the specification for cummax.indices that it keeps this behaviour?
|
st45887
|
Hello. I get incorrect results of Tensor.inverse() if some other tensor is transferred to GPU with non_blocking=True just before that:
import os
import torch

device = 'cuda:0'
# The 'Fixes' seem to eliminate the problem
# os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # Fix 1: Set this
the_matrix = [[2, 0], [0, 1]]
the_inverse = [[.5, 0], [.0, 1]]
the_inverse = torch.tensor(the_inverse, dtype=torch.float32, device=device)
for _ in range(10_000):
    batch_size = 2
    # batch_size = 4  # Fix 2: Set batch_size != 2
    matrix = torch.tensor([the_matrix] * batch_size, dtype=torch.float32)
    ballast = torch.ones([batch_size, 2**16], dtype=torch.float32)
    matrix = matrix.to(device)
    ballast = ballast.to(device, non_blocking=True)
    # torch.cuda.synchronize()  # Fix 3: call cuda.synchronize here
    # matrix = matrix.cpu().to(matrix.device)  # Fix 4: Move the matrix to cpu and back
    matrix_inv = matrix.inverse()
    # matrix_inv = torch.stack([m.inverse() for m in matrix])  # Fix 5: Do inversion one by one
    if (matrix_inv != the_inverse).any():
        print(f'CAUGHT BROKEN INVERSE\n'
              f'Matrix\n'
              f'------\n'
              f'{matrix}\n'
              f'Inverse\n'
              f'-------\n'
              f'{matrix_inv}\n'
              f'True inverse\n'
              f'------------\n'
              f'{the_inverse}')
        break
The problem seems to be gone if I uncomment one of the ‘fixes’.
I managed to reproduce this problem in the following environments
PyTorch version: 1.7.0
Is debug build: True
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.3 LTS (x86_64)
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 430.64
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.7.0
[pip3] torchaudio==0.7.0a0+ac17b64
[pip3] torchvision==0.8.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.1.243 h6bb024c_0
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch 1.7.0 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchaudio 0.7.0 py38 pytorch
[conda] torchvision 0.8.1 py38_cu101 pytorch
and
same, but
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 455.32.00
and occasionally in this Colab.
|
st45888
|
Solved by ptrblck in post #3
Could you update to the nightly version and rerun the code, as we’ve recently fixed a race condition.
CC @voyleg
|
st45889
|
@ptrblck, would you mind having a look at this issue?
I've faced the same behaviour.
|
st45890
|
Could you update to the nightly version and rerun the code, as we’ve recently fixed a race condition.
CC @voyleg
|
st45891
|
I try to run the example from the DDP tutorial:
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP

def example(rank, world_size):
    # create default process group
    dist.init_process_group("nccl", rank=rank, init_method=None, world_size=world_size)
    # create local model
    model = nn.Linear(10, 10).to(rank)
    # construct DDP model
    ddp_model = DDP(model, device_ids=[rank])
    # define loss function and optimizer
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
    # forward pass
    outputs = ddp_model(torch.randn(20, 10).to(rank))
    labels = torch.randn(20, 10).to(rank)
    # backward pass
    loss_fn(outputs, labels).backward()
    # update parameters
    optimizer.step()

def main():
    world_size = 2
    mp.spawn(example,
             args=(world_size,),
             nprocs=world_size,
             join=True)

if __name__ == '__main__':
    main()
I get an error
Exception: process 0 terminated with exit code 1
I am running this in a jupyter notebook inside a docker container.
When I run this as a script inside the container but outside Jupyter, it seems to work fine.
What would be the reason it is not working in jupyter?
In general, what is the method to use DDP in a notebook?
|
st45892
|
I want to make a 2D tensor whose diagonal and its neighbors are filled with ones and all other entries are zeros.
For example, a 5x5 matrix with neighbor 1:
[[1, 1, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 1, 1, 1, 0],
[0, 0, 1, 1, 1],
[0, 0, 0, 1, 1]]
Is there any PyTorch operation for making such a matrix?
|
st45893
|
n = 5
torch.diagflat(torch.ones(n-1), 1) + torch.diagflat(torch.ones(n-1), -1) + torch.eye(n)
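A possible generalization sketch for an arbitrary neighbor width k (the helper name is made up):
import torch

def banded_ones(n, k):
    # Sum the shifted diagonals from offset -k to +k
    return sum(torch.diagflat(torch.ones(n - abs(i)), i) for i in range(-k, k + 1))

print(banded_ones(5, 1))  # reproduces the 5x5 matrix from the question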
|
st45894
|
Hello,
I am trying to import C++ code in Python following this tutorial: https://pytorch.org/tutorials/advanced/cpp_extension.html
However, I'm running into an error that I don't understand:
ImportError: dynamic module does not define module export function (PyInit_test)
It is very simple to reproduce the error:
I use two files :
test.py
import os
from torch.utils.cpp_extension import load

module_path = os.path.dirname(__file__)
test = load(
    name='test',
    sources=[os.path.join(module_path, "test_cpp.cpp")],
    extra_cflags=['-O2'],
    verbose=False)
and test_cpp.cpp
#include <iostream>

int placeholder() {
    std::cout << "hello" << '\n';
    return 0;
}
Run "python test.py" to reproduce the error.
Do you have any idea where this error could come from ?
|
st45895
|
You need to configure Python bindings using the pybind11 macros shown here: https://pytorch.org/tutorials/advanced/cpp_extension.html#binding-to-python
Remember to also include the extension header: #include <torch/extension.h>
I would highly recommend reading the entire tutorial (perhaps even twice) to get an understanding of all the mechanisms in play here. Lots of moving parts, and definitely not trivial.
Good luck and have fun
|
st45896
|
Having read similar issues posted on the forum, the suggested approaches are not working and I am greatly in need of your suggestion(s). Using Google Colab, I am training a UNet model which takes an input of shape [1, 1, 512, 512] -> B, C, H, W.
def train_model(model, dataloaders, criterion, optimizer, num_epochs):
    since = time.time()
    container = {"train": {"loss": {"pred1": [], "pred2": []},
                           "score": {"pred1": [], "pred2": []}},
                 "val": {"loss": {"pred1": [], "pred2": []},
                         "score": {"pred1": [], "pred2": []}},
                 "learning_rate": []}
    best_model_wts = copy.deepcopy(model.state_dict())
    best_score = 0.0
    for epoch in range(num_epochs):
        start_time = time.time()
        epoch_loss = {"train": {"loss1": 0, "loss2": 0},
                      "val": {"loss1": 0, "loss2": 0}}
        epoch_score = {"train": {"score1": 0, "score2": 0},
                       "val": {"score1": 0, "score2": 0}}
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode
            running_loss1, running_loss2 = 0.0, 0.0
            running_score1, running_score2 = 0.0, 0.0
            # Iterate over data.
            for data in dataloaders[phase]:
                inputs, label_pred1, label_pred2 = data
                inputs = inputs.to(device)
                label_pred1 = label_pred1.to(device)
                label_pred2 = label_pred2.to(device)
                labels = torch.cat([label_pred1, label_pred2], dim=1)  # not really needed
                # zero the parameter gradients
                optimizer.zero_grad()
                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    # use pretrained weight of other architectures
                    outputs = model(inputs)
                    loss1 = criterion(outputs[:, [0, 1, 2], :, :], label_pred1)
                    loss2 = criterion(outputs[:, [3, 4, 5, 6, 7, 8], :, :], label_pred2)
                    dice_coefficient_pred1 = dice_no_threshold(outputs[:, [0, 1, 2], :, :].detach().cpu(), label_pred1).item()
                    dice_coefficient_pred2 = dice_no_threshold(outputs[:, [3, 4, 5, 6, 7, 8], :, :].detach().cpu(), label_pred2).item()
                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss1.backward(retain_graph=True)
                        loss2.backward()
                        optimizer.step()
                # statistics
                running_loss1 += loss1.item() * inputs.size(0)
                running_loss2 += loss2.item() * inputs.size(0)
                running_score1 += dice_coefficient_pred1 * inputs.size(0)
                running_score2 += dice_coefficient_pred2 * inputs.size(0)
            # if phase == 'train':
            #     scheduler.step()
            epoch_loss[phase]["loss1"] = running_loss1 / len(dataloaders[phase].dataset)
            epoch_loss[phase]["loss2"] = running_loss2 / len(dataloaders[phase].dataset)
            epoch_score[phase]["score1"] = running_score1 / len(dataloaders[phase].dataset)
            epoch_score[phase]["score2"] = running_score2 / len(dataloaders[phase].dataset)
            # print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss[phase], epoch_acc[phase]))
            # deep copy the model !modify this
            # if phase == 'val' and epoch_score["val"]["score1"] > best_score:
            #     best_score = epoch_score["val"]
            #     best_model_wts = copy.deepcopy(model.state_dict())
            # storing experiment result for visualization
            if phase == 'val':
                container["val"]["loss"]["pred1"].append(epoch_loss["val"]["loss1"])
                container["val"]["loss"]["pred2"].append(epoch_loss["val"]["loss2"])
                container["val"]["score"]["pred1"].append(epoch_score["val"]["score1"])
                container["val"]["score"]["pred2"].append(epoch_score["val"]["score2"])
            else:
                container["train"]["loss"]["pred1"].append(epoch_loss["train"]["loss1"])
                container["train"]["loss"]["pred2"].append(epoch_loss["train"]["loss2"])
                container["train"]["score"]["pred1"].append(epoch_score["train"]["score1"])
                container["train"]["score"]["pred2"].append(epoch_score["train"]["score2"])
            # container["learning_rate"].append([param_group['lr'] for param_group in optimizer.param_groups])
        training_time = str(datetime.timedelta(seconds=time.time() - start_time))[:7]
        print("Epoch: {}/{}".format(epoch + 1, num_epochs),
              "Training | loss1: {:.4f}".format(epoch_loss["train"]["loss1"]), "score1: {:.4f}".format(epoch_score["train"]["score1"]),
              "loss2: {:.4f}".format(epoch_loss["train"]["loss2"]), "score2: {:.4f}".format(epoch_score["train"]["score2"]),
              "Validation | loss1: {:.4f}".format(epoch_loss["val"]["loss1"]), "score1: {:.4f}".format(epoch_score["val"]["score1"]),
              "loss2: {:.4f}".format(epoch_loss["val"]["loss2"]), "score2: {:.4f}".format(epoch_score["val"]["score2"]),
              # "|Time: {}".format(training_time)
              )
        print()
    print("Training & Validation Workflow Completed")
    print("=" * 40)
    time_elapsed = str(datetime.timedelta(seconds=time.time() - since))[:7]
    print('Total estimated time {}'.format(time_elapsed))
    print('Best validation accuracy: {:4f}'.format(best_score))  # best acc, loss, epoch
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model, container

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # hyper-parameters
    parser.add_argument("--epoch", type=int, default=3, help="epoch_number")
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--batchsize', type=int, default=1, help='training batch size')
    parser.add_argument('--trainsize', type=int, default=512, help='set the size of training sample')
    parser.add_argument('--decay_rate', type=float, default=0.1, help='decay rate of learning rate')
    parser.add_argument('--decay_epoch', type=int, default=50, help='every n epochs decay learning rate')
    parser.add_argument('--num_workers', type=int, default=0, help='number of workers in dataloader. In windows, set num_workers=0')
    # training dataset
    parser.add_argument('--train_path', type=str,
                        default='./Dataset/TrainingSet/LungInfection-Train/Doctor-label')
    parser.add_argument('--train_save', type=str, default=None,
                        help='If you use custom save path, please edit `--is_semi=True` and `--is_pseudo=True`')
    # model_lung_infection parameters
    parser.add_argument('--net_channel', type=int, default=32,
                        help='internal channel numbers in the Inf-Net, default=32, try larger for better accuracy')
    parser.add_argument('--n_classes', type=int, default=9,
                        help='binary segmentation when n_classes=1')
    parser.add_argument('--backbone', type=str, default='Res2Net50',
                        help='change different backbone, choice: VGGNet16, ResNet50, Res2Net50')
    opt = parser.parse_args()
    # ---- device setting ----
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # ---- pretrained architecture ----
    # ---- build models ----
    model = UNet(input_channels=1, output_channels=opt.n_classes, outputs_activation="softmax")
    model = model.to(device)
    criterion = DiceLoss(activation="softmax2d")
    optimizer = torch.optim.Adam(model.parameters(), opt.lr, weight_decay=opt.decay_rate)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=2, cooldown=2)
    current_lr = [param_group['lr'] for param_group in optimizer.param_groups][0]
    dataloaders = get_loader(opt.batchsize, opt.trainsize, opt.num_workers)
    model, container = train_model(model, dataloaders, criterion, optimizer, opt.epoch)
|
st45897
|
Solved by ptrblck in post #12
Your scheduler is reducing the learning rate such that the training seems to get stalled.
You could remove the scheduler and check if your model is able to learn the train set. Once this is possible, you could then try to make sure it is also able to generalize well to the validation dataset.
|
st45898
|
If I understand the issue correctly, your training doesn’t stop at all?
If so, could you add print statements and make sure that the training loop is indeed executed and the code isn’t hanging at one point?
|
st45899
|
It does not complete the first epoch even after several hours of training. And at one point, the CUDA out of memory error message is displayed:
RuntimeError: CUDA out of memory. Tried to allocate 12.50 MiB (GPU 0; 10.92 GiB total capacity; 8.57 MiB already allocated; 9.28 GiB free; 4.68 MiB cached)
|
st45900
|
Could you check the allocated memory via print(torch.cuda.memory_allocated()) inside the training loop and see if the memory usage increases in each iteration?
|
st45901
|
I printed the allocated memory with a batch size of 4 at every 5th iteration. It constantly gives 445350912 at every logged step from 0 through 100.
|
st45902
|
This would correspond to approx. only 424MB inside the loop. Did you see any increase in the allocated memory before the OOM error was raised?
|
st45903
|
No, the allocated memory did not increase. Could the configuration of the dataloader (num_workers=0, pin_memory=True) or the way I load my files be responsible for this? The files are approximately 30,000 .npy files in total, and 7 files are needed at a time by the dataloader.
|
st45904
|
This shouldn’t be the case, as the DataLoader would load the batches into the CPU RAM, if no to('cuda') or cuda() operation is used inside the Dataset.
|
st45905
|
A colleague suggested not loading the data from Google Drive, which was my approach. I copied some of the files to the content drive of Colab (because the total file size is more than the allocated disk space), ran the code with a batch size of 8, and got results per epoch.
I appreciate your support.
Quick one: is there any cloud service you would suggest that offers high RAM, speed, and larger disk space that a student can use for his project?
|
st45906
|
Currently, my model is not training, as it gives the same result every epoch:
Epoch 4: reducing learning rate of group 0 to 2.0000e-02.
Epoch 9: reducing learning rate of group 0 to 4.0000e-03.
Epoch 14: reducing learning rate of group 0 to 8.0000e-04.
Epoch 19: reducing learning rate of group 0 to 1.6000e-04.
Epoch 24: reducing learning rate of group 0 to 3.2000e-05.
Epoch 29: reducing learning rate of group 0 to 6.4000e-06.
Epoch 34: reducing learning rate of group 0 to 1.2800e-06.
Epoch 39: reducing learning rate of group 0 to 2.5600e-07.
Epoch 44: reducing learning rate of group 0 to 5.1200e-08.
Epoch 49: reducing learning rate of group 0 to 1.0240e-08.
Epoch: 1/10 Training | loss1: 0.4706 score1: 0.8874 loss2: 0.8314 score2: 0.0593 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:19:14
Epoch: 2/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0577 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:18:03
Epoch: 3/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0577 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:53
Epoch: 4/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0577 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:52
Epoch: 5/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0577 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:51
Epoch: 6/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0576 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:30
Epoch: 7/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0576 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:23
Epoch: 8/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0576 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:23
Epoch: 9/10 Training | loss1: 0.4688 score1: 0.8931 loss2: 0.8315 score2: 0.0576 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:28
Epoch: 10/10 Training | loss1: 0.4688 score1: 0.8932 loss2: 0.8315 score2: 0.0576 Validation | loss1: 0.4761 score1: 0.8717 loss2: 0.8314 score2: 0.0586 |Time: 0:17:36
|
st45907
|
Your scheduler is reducing the learning rate such that the training seems to get stalled.
You could remove the scheduler and check if your model is able to learn the train set. Once this is possible, you could then try to make sure it is also able to generalize well to the validation dataset.
|
st45908
|
Hi,
I have a SharedAdam implementation for PyTorch 0.1.12 and I have ported it to PyTorch 1.6.
Can someone please help check whether the code migration is OK?
Pytorch 0.1.12 code
import math
import torch
import torch.optim as optim

# Implementing the Adam optimizer with shared states
class SharedAdam(optim.Adam):  # object that inherits from optim.Adam

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
        super(SharedAdam, self).__init__(params, lr, betas, eps, weight_decay)  # inheriting from the tools of optim.Adam
        for group in self.param_groups:  # self.param_groups contains all the attributes of the optimizer, including the parameters to optimize (the weights of the network) contained in self.param_groups['params']
            for p in group['params']:  # for each tensor p of weights to optimize
                state = self.state[p]  # at the beginning, self.state is an empty dictionary so state = {} and self.state = {p: {}} = {p: state}
                state['step'] = torch.zeros(1)  # counting the steps: state = {'step': tensor([0])}
                state['exp_avg'] = p.data.new().resize_as_(p.data).zero_()  # the update of the adam optimizer is based on an exponential moving average of the gradient (moment 1)
                state['exp_avg_sq'] = p.data.new().resize_as_(p.data).zero_()  # the update of the adam optimizer is also based on an exponential moving average of the square of the gradient (moment 2)

    # Sharing the memory
    def share_memory(self):
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'].share_memory_()  # tensor.share_memory_() acts a little bit like tensor.cuda()
                state['exp_avg'].share_memory_()
                state['exp_avg_sq'].share_memory_()

    # Performing a single optimization step of the Adam algorithm (see algorithm 1 in https://arxiv.org/pdf/1412.6980.pdf)
    def step(self):
        loss = None
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                state = self.state[p]
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                beta1, beta2 = group['betas']
                state['step'] += 1
                if group['weight_decay'] != 0:
                    grad = grad.add(group['weight_decay'], p.data)
                exp_avg.mul_(beta1).add_(1 - beta1, grad)
                exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
                denom = exp_avg_sq.sqrt().add_(group['eps'])
                bias_correction1 = 1 - beta1 ** state['step'][0]
                bias_correction2 = 1 - beta2 ** state['step'][0]
                step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
                p.data.addcdiv_(-step_size, exp_avg, denom)
        return loss
PyTorch 1.6 code:
class SharedAdam(optim.Adam):
    """Implements Adam algorithm with shared states."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=0):
        super(SharedAdam, self).__init__(params, lr, betas, eps, weight_decay)
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'] = torch.zeros(1)
                state['exp_avg'] = p.data.new().resize_as_(p.data).zero_()
                state['exp_avg_sq'] = p.data.new().resize_as_(p.data).zero_()

    def share_memory(self):
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'].share_memory_()
                state['exp_avg'].share_memory_()
                state['exp_avg_sq'].share_memory_()

    def step(self, closure=None):
        """Performs a single optimization step.
        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                state = self.state[p]
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                beta1, beta2 = group['betas']
                state['step'] += 1
                if group['weight_decay'] != 0:
                    grad = grad.add(other=p.data, alpha=group['weight_decay'])
                # Decay the first and second moment running average coefficient
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                denom = exp_avg_sq.sqrt().add_(group['eps'])
                bias_correction1 = 1 - beta1 ** state['step'][0]
                bias_correction2 = 1 - beta2 ** state['step'][0]
                step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
                p.data.addcdiv_(exp_avg, denom, value=-step_size)
        return loss
|
st45909
|
The function signatures of these methods changed, and you would have to adapt the passed values using the docs.
I'm not sure which line of code is causing the error, so feel free to add the stack trace in case you get stuck.
|
st45910
|
Hi there,
Novice and first post in this forum.
I have searched the forum and the documentation and even tried
inspect.getsourcelines(torch.tensor)
which only resulted in errors.
A search for in-place operations in connection with PyTorch results in anything but what I am looking for.
All the in-place operations refer to their non-in-place counterparts for documentation, which is fine, but I am missing a link to the source code.
In case I am suffering from an XY problem:
I ultimately want to write something like nn.ReLU() but as a step function, so y = 1. for x > 0. else 0.
y = 1. for x > 0. else -1. would also be fine.
The motivation behind this is to normalize the values to reassume the value range of the input values.
Cheers
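For reference, a minimal sketch of such a step activation as a module; note the true step function has zero gradient almost everywhere, so nothing upstream of it would train without a surrogate gradient:
import torch
import torch.nn as nn

class Step(nn.Module):
    def forward(self, x):
        # y = 1. for x > 0. else 0.
        return (x > 0).to(x.dtype)

class Sign(nn.Module):
    def forward(self, x):
        # y = 1. for x > 0. else -1.
        return torch.where(x > 0, torch.ones_like(x), -torch.ones_like(x))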
|
st45911
|
I upgraded my CUDA version to 10.2 and I am trying to use pip to install torch==1.6.0, following the official command:
CUDA 10.2
pip install torch==1.6.0 torchvision==0.7.0
It does not work; the error is:
pip install torch==1.6.0 torchvision==0.7.0
ERROR: Could not find a version that satisfies the requirement torch==1.6.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.6.0
Environment:
Windows 10.0
Python 3.6.2, 64-bit
Cuda compilation tools, release 10.2, V10.2.89
|
st45912
|
Right now I have installed a torch 1.6.0+cu101 build by using pip install torch==1.6.0 torchvision==0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
It worked anyway, although my CUDA is actually 10.2.
Really digging in here…
|
st45913
|
Hello, I’m getting this error whenever I try to run anything with Torch:
UserWarning:
GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
Does this simply mean that 3090s aren't supported yet, or is there something I can do to enable support? Currently using CUDA 10.2, PyTorch 1.6.
Thanks in advance
TeraChad
|
st45914
|
Solved by niata in post #2
you may want to try it with cuda 11:
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
|
st45915
|
you may want to try it with cuda 11:
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
|
st45916
|
Did you solve your problem? I am going to buy a 3090 and I am hesitating!
You don't get to use the full power of the RTX 30 series yet, but otherwise it works just fine with pytorch/cuda-11.0.
|
st45917
|
Hi,
It seems that when doing augmentation in PyTorch, fillcolor can only be one predetermined tuple. Keras allows filling with the nearest value, as described here:
keras.io
Keras documentation: Image data preprocessing
I'll copy the relevant part that shows what it does:
'nearest': aaaaaaaa|abcd|dddddddd
Is there any option to do the same in PyTorch?
Thanks
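A partial workaround sketch, assuming the goal is edge-replicating padding: F.pad with mode='replicate' repeats the nearest border values like Keras's 'nearest' (this covers padding, though, not the fill of an affine transform):
import torch
import torch.nn.functional as F

img = torch.arange(16.).reshape(1, 1, 4, 4)          # (N, C, H, W)
padded = F.pad(img, (2, 2, 2, 2), mode='replicate')  # aaaa|abcd|dddd per row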
|
st45918
|
The error index 1 is out of bounds for dimension 0 with size 1 occurs in the following code. There are 6 labels for the output y, 0-5, and y looks like [1, 0, 0, 0, 0, 0]. Please tell me what is going wrong.
import torch
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection
from bindsnet.network.monitors import Monitor
import numpy as np
time = 25
network = Network()
inpt = Input(n=64, shape=[1, 64], sum_input=True)  # n=64
middle = LIFNodes(n=40, trace=True, sum_input=True)
center = LIFNodes(n=40, trace=True, sum_input=True)
final = LIFNodes(n=40, trace=True, sum_input=True)
out = LIFNodes(n=6, sum_input=True)  # n=6, same as the number of labels (0-5)
inpt_middle = Connection(source=inpt, target=middle, wmin=0, wmax=1e-1)
middle_center = Connection(source=middle, target=center, wmin=0, wmax=1e-1)
center_final = Connection(source=center, target=final, wmin=0, wmax=1e-1)
final_out = Connection(source=final, target=out, wmin=0, wmax=1e-1)
network.add_layer(inpt, name='A')
network.add_layer(middle, name='B')
network.add_layer(center, name='C')
network.add_layer(final, name='D')
network.add_layer(out, name='E')
foward_connection = Connection(source=inpt, target=middle, w=0.05 + 0.1 * torch.randn(inpt.n, middle.n))
network.add_connection(connection=foward_connection, source="A", target="B")
foward_connection = Connection(source=middle, target=center, w=0.05 + 0.1 * torch.randn(middle.n, center.n))
network.add_connection(connection=foward_connection, source="B", target="C")
foward_connection = Connection(source=center, target=final, w=0.05 + 0.1 * torch.randn(center.n, final.n))
network.add_connection(connection=foward_connection, source="C", target="D")
foward_connection = Connection(source=final, target=out, w=0.05 + 0.1 * torch.randn(final.n, out.n))
network.add_connection(connection=foward_connection, source="D", target="E")
recurrent_connection = Connection(source=out, target=out, w=0.025 * (torch.eye(out.n) - 1))
network.add_connection(connection=recurrent_connection, source="E", target="E")
inpt_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500)
middle_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500)
center_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500)
final_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500)
out_monitor = Monitor(obj=inpt, state_vars=("s", "v"), time=500)
network.add_monitor(monitor=inpt_monitor, name="A")
network.add_monitor(monitor=middle_monitor, name="B")
network.add_monitor(monitor=center_monitor, name="C")
network.add_monitor(monitor=final_monitor, name="D")
network.add_monitor(monitor=out_monitor, name="E")
for l in network.layers:
    m = Monitor(network.layers[l], state_vars=['s'], time=time)
    network.add_monitor(m, name=l)
npzfile = np.load("C:/Users/name/Desktop/myo-python-1.0.4/myo-armband-nn-master/data/train_set.npz")
x = npzfile['x']
y = npzfile['y']
x = torch.from_numpy(x).clone()
y = torch.from_numpy(y).clone()
grads = {}
lr, lr_decay = 1e-2, 0.95
criterion = torch.nn.CrossEntropyLoss()
spike_ims, spike_axes, weight_im = None, None, None
for i, (x, y) in enumerate(zip(x.view(-1, 64), y)):
    inputs = {'A': x.repeat(time, 1), 'E_b': torch.ones(time, 1)}
    network.run(inputs=inputs, time=time)
    y = torch.tensor(y).long()
    spikes = {l: network.monitors[l].get('s') for l in network.layers}
    summed_inputs = {l: network.layers[l].summed for l in network.layers}
    output = spikes['E'].sum(-1).float().softmax(0).view(1, -1)
    predicted = output.argmax(1).item()
    grads['dl/df'] = summed_inputs['E'].softmax(0)
    grads['dl/df'][y] -= 1  # <- error here
    grads['dl/dw'] = torch.ger(summed_inputs['A'], grads['dl/df'])
    network.connections['A', 'B', 'C', 'D', 'E'].w -= lr * grads['dl/dw']
    if i > 0 and i % 300 == 0:
        lr = lr_decay
        network.reset_()
|
st45919
|
Could you post the error message with the complete stack trace?
It should point to the line of code, which raises the error, and you could then check the shape of the tensor inside this method to make sure it really has more than a single dimension.
|
st45920
|
The error statement is as follows. Does it only work when the index is 0? Could you tell me?
grads['dl/df'][y] -= 1
IndexError: index 1 is out of bounds for dimension 0 with size 1
print(y.size())
torch.Size([19573, 6])
|
st45921
|
In your code snippet it seems that grads['dl/df'] has the shape [1, *], so if you want to index it in dim0 you can only use an index of 0.
|
st45922
|
ptrblck:
grads['dl/df']
In other words, should I change the way grads['dl/df'] is written?
|
st45923
|
It depends what you are trying to achieve, i.e. what exactly should grads contain and what should y index? At the moment the shapes and indices do not match and thus the error is raised.
|
st45924
|
I wrote it to find the gradient with respect to the input y data (the label).
In this case, the gradient between the output obtained by the softmax function in the last layer (named E) and the input label is calculated.
|
st45925
|
I would recommend to check the shape of grads['dl/df'] and make sure to index it in the right dimensions with the right indices.
|
st45926
|
I confirmed the shape of grads['dl/df'].
The shape was [1, 6]. Since the shape of y was [6], I executed grads['dl/df'] = torch.squeeze(grads['dl/df'], dim=0).
Thanks to your advice, this index error has been fixed.
|
st45927
|
Hi guys, I am using libtorch and glog in the same C++ code, but some files conflict with each other. libtorch's logging_is_not_google_glog.h (#define LOG(n)) conflicts with glog's logging.h (#define LOG(severity) COMPACT_GOOGLE_LOG_ ## severity.stream()). The detailed problem is:
In file included from libtorch/include/c10/util/Logging.h:28:0,
from /libtorch/include/c10/core/TensorImpl.h:17,
from /libtorch/include/ATen/core/TensorBody.h:11,
from /libtorch/include/ATen/Tensor.h:11,
from //libtorch/include/ATen/Context.h:4,
from /libtorch/include/ATen/ATen.h:5,
from /libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /libtorch/include/torch/script.h:3,
/libtorch/include/c10/util/logging_is_not_google_glog.h:96:0: warning: “LOG” redefined
#define LOG(n)
^
In file included from /glog/linux/include/glog/logging.h:483:0: note: this is the location of the previous definition
#define LOG(severity) COMPACT_GOOGLE_LOG_ ## severity.stream()
^
In file included from /libtorch/include/c10/util/Logging.h:28:0,
from /libtorch/include/c10/core/TensorImpl.h:17,
from /libtorch/include/ATen/core/TensorBody.h:11,
from /libtorch/include/ATen/Tensor.h:11,
from /libtorch/include/ATen/Context.h:4,
from /libtorch/include/ATen/ATen.h:5,
from /libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /libtorch/include/torch/script.h:3,
/libtorch/include/c10/util/logging_is_not_google_glog.h:99:0: warning: “VLOG” redefined
#define VLOG(n) LOG((-n))
^
In file included from /glog/linux/include/glog/logging.h:1068:0: note: this is the location of the previous definition
#define VLOG(verboselevel) LOG_IF(INFO, VLOG_IS_ON(verboselevel))
Does anyone know how to deal with this? My environment is Ubuntu 16.04, PyTorch 1.3, libtorch (libtorch-cxx11-abi-shared-with-deps-1.3.1.zip), Qt 5.5. Thanks.
|
st45928
|
Hi,
I am trying to finetune a single-GPU trained model on multiple GPUs. First, I specify
CUDA_VISIBLE_DEVICES=0,1,2. Then I wrap the defined model with torch.nn.DataParallel() and use the RMSprop optimizer as follows:
model = torch.nn.DataParallel(model).cuda()
optimizer = torch.optim.RMSprop(model.parameters(), lr=opt.lr, alpha=0.99,
                                eps=1e-8, momentum=0, weight_decay=0)
This code works well if I train a model in multi-gpus from scratch. However, if I start from a checkpoint of a single-gpu trained model, when it runs to code
optimizer.step()
an error shows:
".../python2.7/site-packages/torch/optim/rmsprop.py", line 52, in step
    state = self.state[p]
KeyError: Parameter containing:
( 0 , 0 ,.,.) = 1.00000e-02 * 2.5088
( 0 , 1 ,.,.) = 1.00000e-02 * 1.6257
...
(127,126,.,.) = 1.00000e-02 * 2.5302
(127,127,.,.) = 1.00000e-02 * -4.7111
[torch.cuda.FloatTensor of size 128x128x1x1 (GPU 0)]
Does anyone know what’s the problem here? Thanks in advance!
|
st45929
|
What's your code to load the model? Try doing something like this:
model = MyNetwork()
model.load_state_dict(torch.load(path_to_file))
model = torch.nn.DataParallel(model).cuda()
optimizer = torch.optim.RMSprop(model.parameters(), lr=opt.lr, alpha=0.99, eps=1e-8, momentum=0, weight_decay=0)
i.e. load the model before constructing the DataParallel and create the optimizer after creating the data parallel.
|
st45930
|
Thanks for your reply! @colesbury
I tried what you said. It works. However, here comes another related problem: I also need to use the optimizer state saved in the previous single-GPU training rather than create a new optimizer:
optimizer.load_state_dict(checkpoint['optimizer'])
When I do this, the same error occurs. I guess something inside the saved optimizer is not consistent with the multi-GPU setting. Any suggestion to solve this? Thanks.
|
st45931
|
Try moving the model to GPU first, then create the optimizer and load its parameters from the checkpoint.
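Something like this sketch, where MyNetwork and the checkpoint keys are hypothetical placeholders:
import torch

checkpoint = torch.load('checkpoint.pth')           # hypothetical file/keys
model = MyNetwork()                                 # hypothetical model class
model.load_state_dict(checkpoint['model'])
model = torch.nn.DataParallel(model).cuda()         # move to GPU(s) first
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001, alpha=0.99,
                                eps=1e-8, momentum=0, weight_decay=0)
optimizer.load_state_dict(checkpoint['optimizer'])  # then restore its state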
|
st45932
|
I am trying to learn more about pack_padded_sequence and want to test it on this small dataset. I managed to merge two tensors of different sequence lengths, but when I try to pad the sequence it gives me an error. Does anybody know how to solve this? I am trying to follow an example given in the Stack Overflow comments, but with an actual dataset: https://stackoverflow.com/questions/51030782/why-do-we-pack-the-sequences-in-pytorch
RuntimeError: The expanded size of the tensor (8) must match the existing size (4) at non-singleton dimension 1. Target sizes: [93, 8, 1]. Tensor sizes: [93, 4, 1]
!wget https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.nn as nn
from torch.autograd import Variable
from sklearn.preprocessing import MinMaxScaler
training_set = pd.read_csv('airline-passengers.csv')
def sliding_windows(data, seq_length):
    x = []
    y = []
    for i in range(len(data) - seq_length - 1):
        _x = data[i:(i + seq_length)]
        _y = data[i + seq_length]
        x.append(_x)
        y.append(_y)
    return x, np.array(y)
sc = MinMaxScaler()
training_data = sc.fit_transform(training_set)
seq_length = 8
x, y = sliding_windows(training_data, seq_length)
train_size = int(len(y) * 0.67)
test_size = len(y) - train_size
trainX = Variable(torch.Tensor(np.array(x[0:train_size])))
trainY = Variable(torch.Tensor(np.array(y[0:train_size])))
seq_length = 4
x1, y1 = sliding_windows(training_data, seq_length)
train_size = int(len(y1) * 0.67)
test_size = len(y1) - train_size
trainX1 = Variable(torch.Tensor(np.array(x1[0:train_size])))
trainY1 = Variable(torch.Tensor(np.array(y1[0:train_size])))
seq_batch = [trainX, trainX1]
seq_lens = [8, 4]
added_seq_batch = torch.nn.utils.rnn.pad_sequence(seq_batch, batch_first=True)
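For what it's worth, a sketch of what pad_sequence expects: a list of tensors shaped (L_i, *) whose trailing dimensions match; padding happens along dim 0, which is why tensors that already carry different sequence layouts in dim 1 raise the size mismatch above:
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

a = torch.randn(8, 1)  # one sequence of length 8
b = torch.randn(4, 1)  # one sequence of length 4
batch = pad_sequence([a, b], batch_first=True)  # (2, 8, 1)
packed = pack_padded_sequence(batch, lengths=[8, 4], batch_first=True)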
|
st45933
|
Hi.
I'll start by saying that we have a quite atypical use case, so I'll present some background. We're using pytorch to develop a GPU accelerated trainer for neural networks used by the chess engine Stockfish. Our networks have a large fully connected input layer (with sparse features), but the network is shallow and the later layers are small (we require the unbatched inference speed to be >1M/s/core on a CPU). Because of this we're forced to do some things differently:
We are using sparse tensors as inputs. The density is about 0.1%. Moreover we consider the batch dimension as sparse, so sparse_dim=2, shape is (batch_size, 41024).
We’re processing hundreds of thousands of predictions/examples per second.
We cannot use the DataLoader because we need a fast data loader implemented in C++, and the concept of the DataLoader is fundamentally incompatible with the way we pass tensors and form batches.
Our batches are not divisible.
We cannot use DataParallel nor DistributedDataParallel for the above reasons (well, we got it kind of working with DataParallel after a lot of struggles, but it breaks the nvidia driver, hangs the machine, and requires a reboot).
Therefore we decided to implement multigpu support with the simplest way possible. This turned out to be fairly straightforward. We manually replicate the model across devices, run forward on a single batch on each of them, compute loss on each of them, run backward on each of them, accumulate gradients on the main device, and perform an optimization step. Works great. However, when porting the solution from a playground to our actual trainer we noticed a problem - performance didn’t scale for multiple gpus. After some digging we identified the issue stems from nothing else but the sparse tensors (it’s been causing issues from the start, I hope that this also answers the “what do you need sparse tensors for” that I see in every stagnated issue about them).
The problem is that for sparse tensors loss.backward() takes a huge amount of time, but our single-process multigpu training relies on asynchronicity of forward and backward calls (which works perfectly with dense inputs).
We created a self contained script presenting the issue (batch size may need adjustment to see the problem on different machines).
import torch
from torch import nn
import copy
import time

def test(batch_size, devices, sparse, warmup=False):
    # Seed the rng to have deterministic tests
    torch.manual_seed(12345)
    print('Devices: ' + str(devices), 'Sparse: ' + str(sparse), 'Warmup: ' + str(warmup))
    # For some reason MSE loss requires very low lr otherwise it blows up
    learning_rate = 0.001
    # Whatever arch
    model = nn.Sequential(
        nn.Linear(512, 512),
        nn.Linear(512, 512),
        nn.Linear(512, 1)
    ).to(device=devices[0])
    # Whatever loss
    loss_fn = nn.MSELoss()
    # Whatever optimizer
    optimizer = torch.optim.Adagrad(model.parameters(), lr=learning_rate)

    # 0. We have 1 model, N devices, N batches, N outcome tensors
    def step(model, batches, outcomes, devices):
        # 1. Replicate the model to all devices
        local_models = [model] + [copy.deepcopy(model).to(device=device) for device in devices[1:]]
        # 2. Make each model do forward on 1 batch -> N x forward
        torch.cuda.synchronize()
        t0 = time.clock()
        outs = [m(batch.to(device=device, non_blocking=True)) for batch, m, device in zip(batches, local_models, devices)]
        t1 = time.clock()
        torch.cuda.synchronize()
        t2 = time.clock()
        if not warmup:
            print('')
            print('forward {:6.3f} seconds'.format(t1 - t0))
            print('sync {:6.3f} seconds'.format(t2 - t1))
        # 3. Compute loss for each separate forward -> N losses
        losses = [loss_fn(out, outcome.to(device=device, non_blocking=True)) for outcome, out, device in zip(outcomes, outs, devices)]
        # 4. Remove gradients from all parameters. This has to be done before backwards.
        #    This should be better than zero_grad because it doesn't block and makes
        #    the first backward pass assign instead of add - less memory usage
        for m in local_models:
            for param in m.parameters():
                param.grad = None
        # 5. Do backward for each loss separately. This *should* not block
        torch.cuda.synchronize()
        t0 = time.clock()
        for loss in losses:
            loss.backward()
        t1 = time.clock()
        torch.cuda.synchronize()
        t2 = time.clock()
        if not warmup:
            print('backward {:6.3f} seconds'.format(t1 - t0))
            print('sync {:6.3f} seconds'.format(t2 - t1))
        # 6. Non blocking transfer of all gradients to the main device
        #    This shouldn't be that much data for our small net
        grads_by_model = [[param.grad.to(device=devices[0], non_blocking=True) for param in m.parameters()] for m in local_models[1:]]
        # 7. Accumulate gradients. We don't want to average them because we're not
        #    splitting the batch, we're taking multiple batches in one step.
        for grads in grads_by_model:
            for main_param, grad in zip(model.parameters(), grads):
                main_param.grad += grad
        # 8. Optimizer runs with the accumulated gradients on the main model only.
        optimizer.step()
        # Return loss for diagnostics
        return sum(loss.item() for loss in losses) / len(losses)

    # Random batches and outcomes. We don't care whether they are different for each iteration
    # so we do it once because it's faster.
    # Note that we're scaling the batch size by the number of devices so that
    # it's transparent to the user.
    batches = [(torch.rand(batch_size // len(devices), 512) * 100.0 - 99.0).clamp(0.0, 1.0).to(device=device, non_blocking=True) for device in devices]
    if sparse:
        batches = [b.to_sparse() for b in batches]
    outcomes = [torch.rand(batch_size // len(devices), 1).to(device=device, non_blocking=True) for device in devices]

    start_time = time.clock()
    losses = []
    # We do a fixed number of batch_size chunks, as the user expects
    for i in range(10):
        losses.append(step(model, batches, outcomes, devices))
    # Ensure everything completed before measuring time
    torch.cuda.synchronize()
    end_time = time.clock()
    if not warmup:
        print('{:6.3f} seconds'.format(end_time - start_time))
        print('Loss went from {} to {}'.format(losses[0], losses[-1]))

batch_size = 2**15
# warmup
test(batch_size, ['cuda:0'], sparse=False, warmup=True)
test(batch_size, ['cuda:0'], sparse=False)
# warmup
test(batch_size, ['cuda:0'], sparse=True, warmup=True)
test(batch_size, ['cuda:0'], sparse=True)
The behaviour we observe is that for dense inputs forward and backward execute asynchronously and take almost 0 time, all time is spent on sync, which is what we want because it allows scheduling multiple forward/backward in parallel.
But for sparse=True we observe that backward is taking much longer, most of the time longer than the subsequent sync. This completely defeats the gains from our multigpu setup. (For our real case we also observe similar high time usage for forward, though it’s not visible in this example).
(Results from my GTX750: https://pastebin.com/TfGiuWKT)
Our questions are:
Why does backward take so much time when the inputs were sparse tensors?
Is this a bug in pytorch?
How can we work around this? Preferably without spawning multiple processes.
|
st45934
|
Is it perhaps this https://github.com/pytorch/pytorch/blob/cfe3defd88b43ba710dd1093e382c5e8c279bd83/aten/src/ATen/native/sparse/cuda/SparseCUDATensorMath.cu#L147 that we're seeing take up time in forward in our original case? This would partially explain why we only see backward taking longer in the toy example (because to_sparse returns a coalesced tensor?). But it raises a question: if this is indeed the issue, then why is the tensor not coalesced during backward?
|
st45935
|
I set the gradient to zero (I tried both via a hook and via param.grad), and when I print the gradient it does say 0, but the values of the parameters still change minimally during training, in a strange way. I set the values to 1 and then during training all the weights go steadily down by 0.0001 per step, that is 1, 0.9999, 0.9998, etc. (below it is after a few iterations). What does such behavior mean, and how do I make sure the parameters don't change?
gradient:
tensor([[[0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.]]], device='cuda:0')
weights:
tensor([[[0.9967, 0.9967, 0.9967, 0.9967, 0.9967],
         [0.9967, 0.9967, 0.9967, 0.9967, 0.9967],
         [0.9967, 0.9967, 0.9967, 0.9967, 0.9967],
         [0.9967, 0.9967, 0.9967, 0.9967, 0.9967],
         [0.9967, 0.9967, 0.9967, 0.9967, 0.9967]]], device='cuda:0')
|
st45936
|
A gradient equal to zero doesn't stop optimizer inertia (momentum, weight decay, and so on).
Set param.grad = None to avoid that.
In short:
for SGD, for example, if the gradient is zero there can still be an update.
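A minimal sketch of that effect: with weight decay (or momentum history), SGD still updates a parameter whose gradient is exactly zero, while grad = None makes the optimizer skip it.
import torch

p = torch.nn.Parameter(torch.ones(3))
opt = torch.optim.SGD([p], lr=0.1, weight_decay=0.01)

p.grad = torch.zeros_like(p)
opt.step()
print(p)        # values moved below 1.0 despite a zero gradient

p.grad = None   # parameters with grad == None are ignored
opt.step()
print(p)        # unchanged this time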
|
st45937
|
Hello,
I'm trying to implement RNN-based text generation with sequences of different lengths, using padding and a masked cross-entropy loss. Here is a snippet of the critical code.
Each backward step of the loss takes 30 s, versus less than 2 s for everything above it.
Thanks in advance for any suggestions!
import csv
import numpy as np
import logging
import time
import string
from itertools import chain
import torch
import torch.nn as nn
import torch.optim
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torch.utils.tensorboard import SummaryWriter
from pathlib import Path
from textloader import *
# from generate import *
import logging

logging.basicConfig(level=logging.INFO)

def maskedCrossEntropy(output, target, padcar):
    mask = target != padcar
    loss = torch.nn.CrossEntropyLoss(reduction="none")(output.permute(0, 2, 1), target.long()) * mask
    return loss.sum() / mask.sum()

class RNN(nn.Module):
    def __init__(self, latent, dim, out):
        super().__init__()
        self.latent = latent
        self.dim = dim
        self.out = out
        self.hidden_state = torch.tensor(latent)
        self.lin_hs = nn.Linear(latent, latent)
        self.lin_ft = nn.Linear(dim, latent)
        self.lin_dec = nn.Linear(latent, out)

    def decode(self, hs):
        d = self.lin_dec(hs)
        return d

    def forward(self, batch, hs):
        l = []
        for i in range(batch.shape[0]):
            hs = self.one_step(batch[i, :], hs)
            l.append(hs)
        return torch.stack(l)

    def one_step(self, batch, hs):
        return torch.tanh(self.lin_hs(hs) + self.lin_ft(batch.clone()))

speech = ""
with open('data/full_speech.txt') as f:
    while True:
        c = f.read(1)
        speech += c
        if not c:
            break

LR = 10e-3
SEQ_LEN = 100
PRED_LEN = 10
LATENT_DIM = 50
BATCH_SIZE = 500
EPOCH_RANGE = 5

embedding = nn.Embedding(len(id2lettre), 50)
speech_dataset = TextDataset(speech)
speech_dataloader = DataLoader(speech_dataset, BATCH_SIZE, shuffle=True, drop_last=True, collate_fn=collate_fn)
model = RNN(LATENT_DIM, len(id2lettre), len(id2lettre))
optim = torch.optim.Adam(model.parameters(), lr=LR)
loss = torch.nn.CrossEntropyLoss()
hs = torch.zeros(BATCH_SIZE, LATENT_DIM)
for epoch in range(EPOCH_RANGE):
    print(epoch)
    i = 0
    for x in speech_dataloader:
        optim.zero_grad()
        hst = model(embedding(x.long()), hs)
        hst = model.decode(hst)
        l = maskedCrossEntropy(hst, x, PAD_IX)
        l.backward()
        optim.step()
|
st45938
|
I'm trying to define a static tensor in a PyTorch model. Initially I just created the variable as a tensor and then saved it in my model. However, that leads to problems when I use model.to(device), since the tensor does not get moved to CUDA devices when I do this.
I can make the tensor move by defining it as an nn.Parameter(), but I do not want to alter or train this parameter.
|
st45939
|
Solved by googlebot in post #2
module.register_buffer("x", tensor(…)), but for scalars a simple attribute (module.pi = 3.14) is sometimes better; both are auto-transferred to GPU.
|
st45940
|
module.register_buffer("x", tensor(…)), but for scalars a simple attribute (module.pi = 3.14) is sometimes better; both are auto-transferred to GPU.
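A minimal sketch of the buffer approach: the tensor follows .to(device) but is not returned by .parameters(), so the optimizer never sees or updates it.
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("x", torch.tensor([1.0, 2.0, 3.0]))
        self.pi = 3.14  # plain attribute works for scalars

    def forward(self, inp):
        return inp * self.x + self.pi

m = MyModule().to("cuda" if torch.cuda.is_available() else "cpu")
print(m.x.device)            # follows the module's device
print(list(m.parameters()))  # [] -- the buffer is not a parameter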
|
st45941
|
I’m trying to implement this paper Two-Stream FCNs to Balance Content and Style for Style Transfer, and here’s a picture of the architecture:
[figure: two-stream architecture from the paper]
The outputs of the content and style subnets are fed into the generator subnet.
The content subnet's loss depends on the generator subnet's output and the input content image, and the style subnet's loss depends on the generator subnet's output and the input style image.
But the generator's loss is a linear combination of the style and content losses.
The paper mentions training the networks simultaneously and updating each set of weights independently.
How would I go about training these networks simultaneously?
I've defined the optimizers and criterions of the 3 subnets. From my outputs, the networks only seem to be learning the style of the style image. Is this the wrong approach?
for i in range(epochs):
    for xs, _ in train_dataloader:
        s_optimizer.zero_grad()
        c_optimizer.zero_grad()
        g_optimizer.zero_grad()

        s_image, p1, p2, p3 = s_model(style_image)
        c_image = c_model(xs.to(device), gamma, p1, p2, p3)
        generated_images = g_model(c_image.clone(), s_image.clone(), gamma)

        # use pretrained vgg as a feature extractor
        generated_outputs = VGG(generated_images)

        style_loss = s_criterion(generated_outputs, style_image)
        content_loss = c_criterion(generated_images, xs.to(device))
        generator_loss = g_criterion(style_loss, content_loss)

        generator_loss.backward(retain_graph=True)
        style_loss.backward(retain_graph=True)
        content_loss.backward()

        g_optimizer.step()
        c_optimizer.step()
        s_optimizer.step()
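One possible issue (an assumption about the intent, not the paper's prescribed method): calling backward three times on overlapping graphs accumulates the style and content gradients twice, since generator_loss already contains both terms. A sketch of a single combined backward, after which each optimizer still updates its own network's weights independently:

total = g_criterion(style_loss, content_loss)
total.backward()
g_optimizer.step()
c_optimizer.step()
s_optimizer.step()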
|
st45942
|
Hi,
Recently, I was working on a time series prediction project, using the RNN and LSTM modules of PyTorch.
I have a problem. When I use an RNN, the prediction results are acceptable. But when I use an LSTM, I get very poor results. (PS: I use the same data structure and parameter structure for both the RNN and the LSTM.)
I tried changing the amount of data per training step, the number of hidden neurons, and the number of layers in the LSTM, but the predicted results still cannot fit the real data well.
Here are the prediction results I got:
The above graph shows the RNN prediction results. The green line represents the real data, and the red line the prediction. I am quite satisfied with this result.
Next:
The above graph shows the LSTM prediction results. The green line represents the real data, and the blue line the prediction. As you can see, the predicted result is almost a straight line.
When I zoom in on the prediction results, the trend looks like this.
I don't know why this is happening. Is there any solution that can help me solve this problem?
——
Here are the main code snippets for LSTM
# every time, use the data of three time points to predict the data of the next time point
# use_data shape: torch.Size([1790, 1, 3])
# back_data shape: torch.Size([1790, 1, 1])
BATCH_SIZE=1
LR = 0.0003
EPOCHS = 10
train_set=data.TensorDataset(use_data,back_data)
loader = data.DataLoader(dataset=train_set, batch_size=BATCH_SIZE, shuffle=False, num_workers=0)
class lstm(nn.Module):
    def __init__(self):
        super(lstm, self).__init__()
        self.lstm = nn.LSTM(3, 3)
        self.linear = nn.Linear(3, 1)

    def forward(self, x, h):
        y1, h = self.lstm(x, h)
        y3 = self.linear(y1)
        return y3, h
NET = lstm()
optimizer = torch.optim.Adam(NET.parameters(), lr=LR)
loss_func = nn.MSELoss()
h_state = torch.randn(1,1,3)
c_state=torch.randn(1,1,3)
hx=(h_state,c_state)
# printed model:
# lstm(
#   (lstm): LSTM(3, 3)
#   (linear): Linear(in_features=3, out_features=1, bias=True)
# )
total_loss = []
wc_loss_plt = []
NET.train()
for step in range(EPOCHS):
    wc_loss = []
    pre = []
    for i, (batch_x, batch_y) in enumerate(loader):
        out, hx = NET(batch_x, hx)
        hx1 = hx[0].detach()
        hx2 = hx[1].detach()
        hx = (hx1, hx2)
        loss = loss_func(out, batch_y)
        pre.append(out)  # prediction result
        wc_loss.append(loss.data)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    total_loss.append(sum(wc_loss))
|
st45943
|
Solved by hpf in post #3
I almost forgot about this question. I still don't know what caused this problem.
However, I normalized the original data and got a good result.
I tried to use the LSTM + Linear structure to train and predict the function y = x,
and only by normalizing x could I get good prediction results,
so I can't unde…
|
st45944
|
You might need to continue the training. Since the predictions seem to already take the shape of the target the scaling might still need to be adjusted.
How are the training and validation loss behaving? Are both still decreasing or are you seeing a plateau?
|
st45945
|
I almost forgot about this question. I still don't know what caused this problem.
However, I normalized the original data and got a good result.
I tried to use the LSTM + Linear structure to train and predict the function y = x, and only by normalizing x could I get good prediction results.
So I can't understand why normalization is useful. I thought it was just a method to shrink the data.
Maybe I didn't find the crux of the problem at all.
|
st45946
|
Normalization helps model training in general. E.g. one theoretical point of view is that whitening the data creates loss surfaces with "round" valleys, which accelerates convergence. I'm pretty sure Bishop explains it nicely in Pattern Recognition and Machine Learning.
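A minimal sketch of standardizing a series before training (train_series is a hypothetical 1D tensor standing in for your data; the statistics should come from the training split only):

import torch

train_series = torch.randn(1790)            # placeholder for your series
mean, std = train_series.mean(), train_series.std()
train_norm = (train_series - mean) / std    # zero mean, unit variance

# ... train the LSTM on train_norm instead of train_series ...

# undo the scaling on predictions for evaluation:
# pred = model_output * std + mean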
|
st45947
|
What does .data mean in PyTorch?
z_x is the output of a linear layer:
z_x_nu = z_x.data
So what is z_x.data here? Please explain.
|
st45948
|
stackoverflow.com
Is .data still useful in pytorch?
python, version, pytorch, tensor
asked by
Maybe
on 09:31AM - 08 Aug 18 UTC
Have a look.
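In short (a sketch of the difference, as described in the 0.4 migration notes):

import torch

x = torch.ones(2, requires_grad=True)
y = x * 2

d = y.data      # same storage as y, detached from the graph; in-place
                # changes through .data are NOT tracked by autograd and
                # can silently produce wrong gradients
z = y.detach()  # also shares storage and is detached, but in-place
                # changes are detected and raise an error on backward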
|
st45949
|
I have an input X that goes into an MLP which outputs a tensor xyz of shape torch.Size([4, 64, 1024]). I want to backprop to the input using only the first 3 columns of the last dimension. But when I slice xyz using xyz[:, :, 0:3], autograd doesn't backprop to the input X anymore. I also tried creating a linear layer and assigning its weights manually from a tensor so it only selects the columns I want, but PyTorch gives an error and it won't work. Any ideas on what I should do?
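For reference, basic slicing by itself normally keeps the autograd graph intact; a quick standalone check (shapes chosen to mirror the question):

import torch

X = torch.randn(4, 64, 32, requires_grad=True)
mlp = torch.nn.Linear(32, 1024)
xyz = mlp(X)                # [4, 64, 1024]
sliced = xyz[:, :, 0:3]
print(sliced.grad_fn)       # SliceBackward: the graph is intact
sliced.sum().backward()
print(X.grad.shape)         # torch.Size([4, 64, 32])

If the gradient does not reach X, the break is likely elsewhere (e.g. a detach(), .data, or torch.no_grad() between X and xyz).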
|
st45950
|
Hello Everyone,
I am very new to the ML field. I have used PyTorch to train and test my model.
I have plotted the curve as shown in the figure.
page (1358×567, 61 KB)
Can someone help me with a basic interpretation of the curve?
Thank You So Much.
|
st45951
|
The model curves tell us two things:
Accuracy - correct predictions
Loss - the quantity we reduce using some kind of optimization
Training accuracy tells you about the correct predictions on the training set.
Validation accuracy tells you about the predictions on the validation set.
Looking at your model, I assume it is trained well, as its validation and training accuracies are comparable, as are the loss curves.
|
st45952
|
Still, you can train it further, as your model might have some room for improvement. A model's representational capacity is limited by optimization, so it depends on whether your model has reached some local minimum or a good one. The best way to find out is to train further, but save your current model first.
|
st45953
|
I'm trying to change from plain DP to DDP. My code works, but it is very slow.
I narrowed the problem down to some weird behaviour: the dataset, which I pass as an argument to mp.spawn, is being pickled and unpickled every time I create a new iterator from the loader.
Why is this the case? Surely all arguments should be pickled and unpickled only once, when the processes are created.
Rough code:
def train(rank, dataset):
    setup_ddp(rank)
    model = create_model()
    loader = create_loader(dataset)
    train_iter = iter(loader)
    for i in range(NUM_ITERS):
        try:
            batch, targets = next(train_iter)
        except StopIteration:
            train_iter = iter(loader)
            # The dataset is being pickled and unpickled every time
            # next() is called on a new iterator
            batch, targets = next(train_iter)
        step(model, batch, targets)
    cleanup_ddp(rank)

dataset = create_dataset()
mp.spawn(train, args=(dataset,), nprocs=WORLD_SIZE)
|
st45954
|
I figured out that the problem was in the loader, not in DDP. I had four workers for each loader, and they were re-created every time a new iterator was created; with 8 processes, that meant 32 workers were spawned each time.
Reducing the number of workers solved the problem.
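For cases where many workers are genuinely needed, a sketch of an alternative: since PyTorch 1.7 the DataLoader accepts persistent_workers=True, which keeps the worker processes alive across iterators instead of respawning (and re-pickling the dataset for) them each time.

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=32, num_workers=4,
                    persistent_workers=True)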
|
st45955
|
I have a notebook here that has an image classifier skeleton:
github.com
VishakBharadwaj94/CNN_starter/blob/main/CNN_Boilerplate.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"_cell_guid": "b1076dfc-b9ad-4769-8c92-a6c4dae69d19",
"_uuid": "8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
},
"source": [
"# CNN outline for DL with CIFAR10\n",
"\n",
"\n",
"\n",
"\n",
"In this tutorial, we'll use the following techniques to achieve over 90% accuracy in less than 5 minutes:\n",
"\n",
"- Data normalization\n",
"- Data augmentation\n",
"- Residual connections\n",
"- Batch normalization\n",
This file has been truncated. show original
I tried to fuse layers and quantize the model as in the video, but got an error.
Deploying your ML Model with TorchServe
Are there any best practices for writing your model that avoid this problem?
Thanks!
|
st45956
|
Your notebook doesn’t show any error, so could you describe what issues you are seeing and what you’ve tried so far?
|
st45957
|
Hi Patrick, the issue arose when I used TorchServe. The API broke, but it worked when I tried a straight MNIST classifier with a clean architecture. What I mean is that the working model didn't call a small conv helper that bundles a (conv, bn, relu) combo.
|
st45958
|
How did you check that this module wasn't called? Did you get an error message, or do you think TorchServe just skipped this module somehow?
|
st45959
|
Got an error message @ptrblck.
Let me organize this into a full question with pics and error messages. It will help people. Please give me an hour.
|
st45960
|
LOL, ok, I got my question completely wrong. The TorchServe issue was completely different: I'd screwed up the ModelHandler by messing up the preprocess method.
The quantization issue is different, and I still haven't figured it out. Here's the model.
I'm not sure how I can quantize this, compared to this sweet simple tutorial.
github.com
VishakBharadwaj94/quant_wip/blob/master/quant.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "quant.ipynb",
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "code",
"metadata": {
"id": "Eay3SI8fa6P9"
},
This file has been truncated. show original
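In case it helps others with the same layout: torch.quantization.fuse_modules accepts dotted paths, so (conv, bn, relu) triples nested inside helper blocks can still be fused. A sketch, where the module names are hypothetical and must match your own model:

import torch.quantization as tq

model.eval()  # conv+bn fusion is only valid in eval mode
fused = tq.fuse_modules(
    model,
    [["layer1.conv", "layer1.bn", "layer1.relu"],   # hypothetical names
     ["layer2.conv", "layer2.bn", "layer2.relu"]],
)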
|
st45961
|
Let’s say we have a logistic regression model that takes num_features and outputs num_classes. Traditionally, num_classes is equal to one.
Now if I want to weight my loss function, I would make something like this:
F.binary_cross_entropy(probas, target, weight=torch.tensor([2]))
Where my weights were calculated by:
# n_samples / (n_classes * bincount)
((183473 + 47987) / (2 * np.array([183473, 47987])))
# -> [0.63074419, 2.41213088]  # weights for classes [0, 1]
My question is: since I can only give the weight in the loss function one value, which class is it for, 0 or 1?
target, in our data set, contains either zeros or ones, like a traditional binary regression problem.
|
st45962
|
Solved by ptrblck in post #2
nn.BCEWithLogitsLoss allows you to pass a weight as well as a pos_weight argument.
The former tensor should have the shape [batch_size] and will be applied to each sample, while the latter should have the shape [nb_classes] and will be applied to the positive examples as described in the docs.
Bas…
|
st45963
|
nn.BCEWithLogitsLoss allows you to pass a weight as well as a pos_weight argument.
The former tensor should have the shape [batch_size] and will be applied to each sample, while the latter should have the shape [nb_classes] and will be applied to the positive examples, as described in the docs.
Based on your code snippet I assume you would like to use pos_weight.
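A minimal sketch with the counts from your snippet:

import torch
import torch.nn as nn

logits = torch.randn(8)                    # raw outputs, shape [batch_size]
target = torch.randint(0, 2, (8,)).float()

# pos_weight > 1 upweights the positive (target == 1) samples
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([183473 / 47987]))
loss = criterion(logits, target)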
|
st45964
|
Hi,
This is my first post in this forum. I'm trying to implement a simple linear regression task, but it is slightly different from the common OLS problem.
wls (1420×404, 28.7 KB)
I need help implementing the first term efficiently. Is there any module that helps with computing weighted least squares?
My trial is:
output = W.matmul(X)  # k x D matrix
l = torch.diagonal((output - y).T.matmul(Sigma).matmul(output - y)).sum()
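For what it's worth, the same trace can be computed without materializing the D x D product, which should be more memory-efficient for large D (same names as above, diff being k x D and Sigma k x k):

diff = output - y
l = (diff * (Sigma @ diff)).sum()  # equals torch.diagonal(diff.T @ Sigma @ diff).sum()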
Thank you in advance,
|
st45965
|
Hi all, I have a model which contains a BiLSTM that helps generate full-context embeddings for a list of images.
The model works fine with AMP: training, inference … all good. Except now I'd like to deploy it using AWS Elastic Inference, which requires a JIT-traced model.
When I run the export I receive the following:
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/jit/_trace.py:966: TracerWarning: Output nr 2. of the traced function does not match the corresponding output of the Python function. Detailed error:
With rtol=1e-05 and atol=1e-05, found 1 element(s) (out of 1) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 0.009498357772827148 (1.0842735767364502 vs. 1.0937719345092773), which occurred at index 0.
_module_class,
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/jit/_trace.py:966: TracerWarning: Output nr 3. of the traced function does not match the corresponding output of the Python function. Detailed error:
With rtol=1e-05 and atol=1e-05, found 15 element(s) (out of 15) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 0.026749849319458008 (0.37732791900634766 vs. 0.35057806968688965), which occurred at index (4, 2).
_module_class,
Traceback (most recent call last):
File "/home/davide/.pyenv/versions/3.7.8/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/davide/.pyenv/versions/3.7.8/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/davide/Desktop/coefficient/native_oneshot/native_oneshot/scripts/export-to-traced-model.py", line 57, in <module>
model, (support_x, support_y_onehot, target_x, target_y)
File "/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/jit/_trace.py", line 742, in trace
_module_class,
File "/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/jit/_trace.py", line 966, in trace_module
_module_class,
File "/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/jit/_trace.py", line 519, in _check_trace
raise TracingCheckError(*diag_info)
torch.jit._trace.TracingCheckError: Tracing failed sanity checks!
ERROR: Tensor-valued Constant nodes differed in value across invocations. This often indicates that the tracer has encountered untraceable code.
Node:
%hx : Tensor = prim::Constant[value=<Tensor>](), scope: __module.lstm # /home/davide/Desktop/coefficient/native_oneshot/native_oneshot/matching_network/model/lstm.py:50:0
Source Location:
/home/davide/Desktop/coefficient/native_oneshot/native_oneshot/matching_network/model/lstm.py(50): repackage_hidden
/home/davide/Desktop/coefficient/native_oneshot/native_oneshot/matching_network/model/lstm.py(52): <genexpr>
/home/davide/Desktop/coefficient/native_oneshot/native_oneshot/matching_network/model/lstm.py(52): repackage_hidden
/home/davide/Desktop/coefficient/native_oneshot/native_oneshot/matching_network/model/lstm.py(56): forward
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/nn/modules/module.py(726): _slow_forward
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/nn/modules/module.py(742): _call_impl
/home/davide/Desktop/coefficient/native_oneshot/native_oneshot/matching_network/model/matching.py(70): forward
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/nn/modules/module.py(726): _slow_forward
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/nn/modules/module.py(742): _call_impl
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/jit/_trace.py(940): trace_module
/home/davide/.virtualenvs/native/lib/python3.7/site-packages/torch/jit/_trace.py(742): trace
/home/davide/Desktop/coefficient/native_oneshot/native_oneshot/scripts/export-to-traced-model.py(57): <module>
/home/davide/.pyenv/versions/3.7.8/lib/python3.7/runpy.py(85): _run_code
/home/davide/.pyenv/versions/3.7.8/lib/python3.7/runpy.py(193): _run_module_as_main
Comparison exception: With rtol=0.0001 and atol=1e-05, found 90 element(s) (out of 96) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 11.000000476837158 (-0.947547435760498 vs. -11.947547912597656), which occurred at index (0, 0, 8).
This says that there is some untraceable code, pointing at the repackage_hidden method of my LSTM. Here is my LSTM module:
from __future__ import annotations

import torch
import torch.nn as nn
from torch.autograd import Variable


class BidirectionalLSTM(nn.Module):
    def __init__(self, layer_size, vector_dim, device):
        """
        Initialize a multi-layer bidirectional LSTM
        :param layer_size: a list of each layer's size
        :param vector_dim: input feature dimension
        """
        super().__init__()
        self.batch_size = 1
        self.hidden_size = layer_size[0]
        self.vector_dim = vector_dim
        self.num_layer = len(layer_size)
        self.lstm = nn.LSTM(
            input_size=self.vector_dim,
            num_layers=self.num_layer,
            hidden_size=self.hidden_size,
            bidirectional=True,
        )
        self.hidden = (
            Variable(
                torch.zeros(
                    self.lstm.num_layers * 2,
                    self.batch_size,
                    self.lstm.hidden_size,
                ),
                requires_grad=False,
            ).to(device),
            Variable(
                torch.zeros(
                    self.lstm.num_layers * 2,
                    self.batch_size,
                    self.lstm.hidden_size,
                ),
                requires_grad=False,
            ).to(device),
        )

    def repackage_hidden(self, h):
        """Wraps hidden states in new Variables,
        to detach them from their history."""
        if type(h) == torch.Tensor:
            return Variable(h.data)
        else:
            return tuple(self.repackage_hidden(v) for v in h)

    def forward(self, inputs):
        inputs = inputs.float()
        self.hidden = self.repackage_hidden(self.hidden)
        output, self.hidden = self.lstm(inputs, self.hidden)
        return output
and my export code:
# note: chaining context managers with `and` only enters the second one;
# a comma enters both
with torch.jit.optimized_execution(True), torch.no_grad():
    traced_model = torch.jit.trace(
        model, (support_x, support_y_onehot, target_x, target_y)
    )
I assume that the if/else control flow in repackage_hidden might be part of the problem? How can I bypass or solve the issue?
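One way to sidestep it (a sketch that trades statefulness for traceability: the hidden state no longer persists across calls, which the tracer cannot represent anyway) is to rebuild a zero hidden state inside forward instead of storing and repackaging it on the module. Variable is also deprecated; plain tensors with .detach() do the same job.

def forward(self, inputs):
    inputs = inputs.float()
    # new_zeros keeps the device and dtype of `inputs`
    h0 = inputs.new_zeros(self.lstm.num_layers * 2, self.batch_size,
                          self.lstm.hidden_size)
    c0 = inputs.new_zeros(self.lstm.num_layers * 2, self.batch_size,
                          self.lstm.hidden_size)
    output, _ = self.lstm(inputs, (h0, c0))
    return output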
|
st45966
|
Hello,
I am trying to create a custom loss function in a differentiable manner. For example, I have two tensors:
a = [2, 8, 9, 10, 2] and b = [1, 2, 6, 7, 8], and my loss function should penalize a - b when a - b is positive.
If I write the loss function as follows, I think it is not differentiable, because of the "<" operation:
diff = a - b
diff[diff < 0] = 0
return diff
How can I create this loss function in a differentiable manner?
Thanks in advance.
|
st45967
|
Solved by ptrblck in post #2
It should work and pass a zero gradient to the masked elements:

a = torch.randn(2, 2, requires_grad=True)
b = torch.randn(2, 2)

diff = a - b
diff[diff < 0] = 0
print(diff)
> tensor([[0.6480, 0.0000],
          [0.0000, 0.0272]], grad_fn=<IndexPutBackward>)

diff.mean().backward()
print(a.grad)
> …
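An equivalent that avoids the in-place indexing altogether, in case that style is preferred:

import torch

a = torch.tensor([2., 8., 9., 10., 2.], requires_grad=True)
b = torch.tensor([1., 2., 6., 7., 8.])

loss = torch.clamp(a - b, min=0).mean()  # or torch.relu(a - b).mean()
loss.backward()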
|