Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
3,801 | 91,670 |
AssertionError: tensor's device must be `meta` when trying to export a fake-initialized module
|
triaged, module: fakeTensor
|
```
import torch
from torch._dynamo import export
from torch._subclasses import FakeTensorMode
class JankyLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.out_features = torch.nn.Parameter(torch.randn([512, 1677]))
        self.bias = torch.nn.Parameter(torch.randn([512]))

    def forward(self, input):
        return torch.nn.functional.linear(input, self.out_features, bias=self.bias)

# initialize a module in fake tensor mode
with FakeTensorMode():
    j = JankyLinear()
    j.cuda()

# try to export it
export(j, torch.randn([512, 1677]).cuda())
```
This raises:
```
AssertionError: tensor's device must be `meta`, got cuda instead
```
Which is strange, as all the tensors are clearly FakeTensors. It seems like a decomp generates some intermediate that trips an internal assert.
| 1 |
3,802 | 91,661 |
[FSDP][BE] Add check that compute device equals current device
|
oncall: distributed, triaged, module: fsdp
|
The FSDP implementation requires that the [current device](https://pytorch.org/docs/stable/generated/torch.cuda.current_device.html) be equal to `FullyShardedDataParallel.compute_device` and `FlatParamHandle.device`. Otherwise, on nonzero ranks, there will be an error in `FlatParamHandle.post_unshard()` at:
https://github.com/pytorch/pytorch/blob/57b7f33ba8bfca7918f78aef7dc8d1d63198510b/torch/distributed/fsdp/flat_param.py#L1007
This results from `padded_unsharded_flat_param[:unsharded_size.numel()]` returning a tensor on `cuda:0` instead of `cuda:n` for `n > 1`, despite `padded_unsharded_flat_param` correctly being on `cuda:n`.
https://github.com/pytorch/pytorch/blob/57b7f33ba8bfca7918f78aef7dc8d1d63198510b/torch/distributed/fsdp/flat_param.py#L982-L984
We should add a check at construction time that the current device is set correctly, to ensure that CUDA tensors are constructed on the expected device.
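A minimal sketch of the kind of construction-time guard being proposed (hypothetical function name and placement; the real FSDP code will differ):
```python
import torch

def _check_current_device(compute_device: torch.device) -> None:
    # Proposed guard: fail fast if the caller forgot torch.cuda.set_device(rank).
    if compute_device.type == "cuda":
        current = torch.cuda.current_device()
        if compute_device.index is not None and current != compute_device.index:
            raise RuntimeError(
                f"FSDP expects the current CUDA device ({current}) to match "
                f"the compute device ({compute_device}); call torch.cuda.set_device() first."
            )
```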
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,803 | 91,655 |
FakeTensors not moving between device properly on Module.cuda()
|
triaged, module: fakeTensor
|
```
import torch
from torch._subclasses import FakeTensorMode

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.empty([10]).fill_(1.0))

with FakeTensorMode():
    foo = MyModule()
    print(foo.param)
    foo.cuda()
    print(foo.param)
```
Both print statements show `param` on CPU.
Some cursory debugging shows that the issue is the setting of `data` here: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L809-L811. If I add:
```
torch.__future__.set_overwrite_module_params_on_conversion(True)
```
to get the new behavior, things work correctly.
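For reference, a sketch of the workaround applied to the repro above (it assumes the same `MyModule` definition and simply sets the flag first):
```python
import torch
from torch._subclasses import FakeTensorMode

# Opt into the new conversion behavior before any .cuda()/.to() calls.
torch.__future__.set_overwrite_module_params_on_conversion(True)

with FakeTensorMode():
    foo = MyModule()  # MyModule as defined in the snippet above
    foo.cuda()
    print(foo.param)  # with the flag set, the parameter now reports a CUDA device
```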
| 4 |
3,804 | 91,653 |
Stochastic Illegal Memory Access error mid-epoch on AWS p4d instances
|
module: cuda, triaged, module: cublas, matrix multiplication
|
### 🐛 Describe the bug
While training on a large cluster of AWS p4d instances with PyTorch, PyTorch Lightning, and DeepSpeed, we observe an IllegalMemoryAccess error that happens stochastically at different places and different steps mid-epoch.
The error is stochastic in the sense that, when training a 1000-step epoch on 120 GPUs, the error sometimes happens at step ~400, sometimes ~600, and sometimes ~700.
We did some memory tracking on our end and found no memory leak: peak memory usage remains the same from the start of the epoch (after some stabilization) through the end and into the next epochs. It is unclear whether a specific location in the code is faulty, since we have only been training on the large compute instances (and we have a limited budget for that). Upon trying the same version of the code on a comparatively small cluster of 18 GPUs, with the same specifications and datasets, we have not observed the issue so far.
The traceback pasted at the end is with `CUDA_LAUNCH_BLOCKING=1` and hence should reflect the real exception being thrown at the point of the error.
Selected List of our package versions:
```
cuda 11.6.2 0 nvidia
cuda-cccl 11.6.55 hf6102b2_0 nvidia
cuda-command-line-tools 11.6.2 0 nvidia
cuda-compiler 11.6.2 0 nvidia
cuda-cudart 11.6.55 he381448_0 nvidia
cuda-cudart-dev 11.6.55 h42ad0f4_0 nvidia
cuda-cuobjdump 11.6.124 h2eeebcb_0 nvidia
cuda-cupti 11.6.124 h86345e5_0 nvidia
cuda-cuxxfilt 11.6.124 hecbf4f6_0 nvidia
cuda-driver-dev 11.6.55 0 nvidia
cuda-gdb 11.8.86 0 nvidia
cuda-libraries 11.6.2 0 nvidia
cuda-libraries-dev 11.6.2 0 nvidia
cuda-memcheck 11.8.86 0 nvidia
cuda-nsight 11.8.86 0 nvidia
cuda-nsight-compute 11.8.0 0 nvidia
cuda-nvcc 11.6.124 hbba6d2d_0 nvidia
cuda-nvdisasm 11.8.86 0 nvidia
cuda-nvml-dev 11.6.55 haa9ef22_0 nvidia
cuda-nvprof 11.8.87 0 nvidia
cuda-nvprune 11.6.124 he22ec0a_0 nvidia
cuda-nvrtc 11.6.124 h020bade_0 nvidia
cuda-nvrtc-dev 11.6.124 h249d397_0 nvidia
cuda-nvtx 11.6.124 h0630a44_0 nvidia
cuda-nvvp 11.8.87 0 nvidia
cuda-runtime 11.6.2 0 nvidia
cuda-samples 11.6.101 h8efea70_0 nvidia
cuda-sanitizer-api 11.8.86 0 nvidia
cuda-toolkit 11.6.2 0 nvidia
cuda-tools 11.6.2 0 nvidia
cuda-visual-tools 11.6.2 0 nvidia
cudatoolkit 11.6.0 hecad31d_10 conda-forge
deepspeed 0.5.10 pypi_0 pypi
python 3.7.12 hf930737_100_cpython conda-forge
python_abi 3.7 2_cp37m conda-forge
pytorch 1.13.0 py3.7_cuda11.6_cudnn8.3.2_0 pytorch
pytorch-cuda 11.6 h867d48c_0 pytorch
pytorch-lightning 1.5.10 pypi_0 pypi
pytorch-mutex 1.0 cuda pytorch
```
Sample Code
All of our code is in the repository: https://github.com/aqlaboratory/openfold/
The snapshot of the code that I am using is uploaded here: https://github.com/aqlaboratory/openfold/tree/aws-finetuning-code
Traceback
```
Epoch 0: 46%|████▌ | 4/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:430: UserWarning: torch.distributed.distributed_c10d._get_global_rank is deprecated please use torch.distributed.distributed_c10d.get_global_rank instead
"torch.distributed.distributed_c10d._get_global_rank is deprecated "
Traceback (most recent call last):
File "/shared/openfold-release/train_openfold.py", line 585, in <module>
main(args)
File "/shared/openfold-release/train_openfold.py", line 369, in main
ckpt_path=ckpt_path,
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit
self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
self._dispatch()
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
self.training_type_plugin.start_training(self)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
self._results = trainer.run_stage()
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
return self._run_train()
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1319, in _run_train
self.fit_loop.run()
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/fit_loop.py", line 234, in advance
self.epoch_loop.run(data_fetcher)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 193, in advance
batch_output = self.batch_loop.run(batch, batch_idx)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 219, in advance
self.optimizer_idx,
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 266, in _run_optimization
self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 386, in _optimizer_step
using_lbfgs=is_lbfgs,
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1652, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/core/optimizer.py", line 164, in step
trainer.accelerator.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 339, in optimizer_step
self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/plugins/precision/deepspeed_precision.py", line 65, in optimizer_step
closure_result = closure()
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 160, in __call__
self._result = self.closure(*args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 155, in closure
self._backward_fn(step_output.closure_loss)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 327, in backward_fn
self.trainer.accelerator.backward(loss, optimizer, opt_idx)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 314, in backward
self.precision_plugin.backward(self.lightning_module, closure_loss, *args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/pytorch_lightning/plugins/precision/deepspeed_precision.py", line 48, in backward
deepspeed_engine.backward(closure_loss, *args, **kwargs)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1697, in backward
self.optimizer.backward(loss)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1910, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 53, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/_tensor.py", line 488, in backward
self, gradient, retain_graph, create_graph, inputs=inputs
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/autograd/__init__.py", line 199, in backward
allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward_fnard pass
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/autograd/function.py", line 267, in apply
return user_fn(self, *args)
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/deepspeed/runtime/activation_checkpointing/checkpointing.py", line 693, in backward
outputs = ctx.run_function(*detached_inputs)
File "/shared/openfold-release/openfold/utils/checkpointing.py", line 69, in exec_sliced
return exec(blocks[s:e], a)
File "/shared/openfold-release/openfold/utils/checkpointing.py", line 64, in exec
a = wrap(block(*a))
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/shared/openfold-release/openfold/model/template.py", line 229, in forward
inplace_safe=inplace_safe,
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/shared/openfold-release/openfold/model/triangular_attention.py", line 142, in forward
use_lma=use_lma
File "/home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/shared/openfold-release/openfold/model/primitives.py", line 469, in forward
o = _attention(q, k, v, biases)
File "/shared/openfold-release/openfold/model/primitives.py", line 237, in _attention
a = torch.matmul(a, value)
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
Exception raised from c10_cuda_check_implementation at /opt/conda/conda-bld/pytorch_1666643004612/work/c10/cuda/CUDAException.cpp:31 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4db1cce457 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f4db1c983ec in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(std::string const&, std::string const&, int, bool) + 0xb4 (0x7f4db1d68044 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x164bc (0x7f4db1d3f4bc in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #4: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x244 (0x7f4db1d42434 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x4edcc3 (0x7f4d88ab1cc3 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #6: c10::TensorImpl::~TensorImpl() + 0x1a0 (0x7f4db1cae9e0 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f4db1caeaf9 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #8: <unknown function> + 0x75e5a8 (0x7f4d88d225a8 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #9: THPVariable_subclass_dealloc(_object*) + 0x2f8 (0x7f4d88d229a8 in /home/ec2-user/miniconda3/envs/openfold_train_env/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0xe59d2 (0x5584083189d2 in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #11: <unknown function> + 0xe707f (0x55840831a07f in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #12: <unknown function> + 0x17fcc1 (0x5584083b2cc1 in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #13: <unknown function> + 0xe707f (0x55840831a07f in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #14: <unknown function> + 0xe7782 (0x55840831a782 in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #15: <unknown function> + 0xfc8c7 (0x55840832f8c7 in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2b (0x5584084542db in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #17: PyImport_Cleanup + 0x367 (0x558408466307 in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #18: Py_FinalizeEx + 0x67 (0x558408466407 in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #19: <unknown function> + 0x248b1b (0x55840847bb1b in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #20: _Py_UnixMain + 0x3c (0x55840847be9c in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
frame #21: __libc_start_main + 0xea (0x7f4dc323813a in /lib64/libc.so.6)
frame #22: <unknown function> + 0x1c761d (0x5584083fa61d in /home/ec2-user/miniconda3/envs/openfold_train_env/bin/python)
srun: error: train-p4d-st-train-p4d-15: task 118: Aborted
srun: error: Node failure on train-p4d-st-train-p4d-1
slurmstepd: error: *** JOB 44 ON train-p4d-st-train-p4d-1 CANCELLED AT 2022-11-18T13:21:41 DUE TO NODE FAILURE, SEE SLURMCTLD LOG FOR DETAILS ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Amazon Linux release 2 (Karoo) (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.10
Python version: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:21) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.149-133.644.amzn2.x86_64-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.13.0
[pip3] torchmetrics==0.10.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] mkl 2021.4.0 h8d4b97c_729 conda-forge
[conda] mkl-service 2.4.0 py37h402132d_0 conda-forge
[conda] mkl_fft 1.3.1 py37h3e078e5_1 conda-forge
[conda] mkl_random 1.2.2 py37h219a48f_0 conda-forge
[conda] numpy 1.21.2 pypi_0 pypi
[conda] pytorch 1.13.0 py3.7_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-lightning 1.5.10 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.10.2 pypi_0 pypi
cc @ngimel @csarofeen @ptrblck @xwang233
| 9 |
3,805 | 91,633 |
Segmentation fault when running torch.nn.functional.fractional_max_pool3d on torch 1.13.1
|
needs reproduction, module: crash, triaged
|
### 🐛 Describe the bug
With the following input combinations, it results in segfault:
```
import torch
import numpy as np
arg_1_tensor = torch.rand([2, 4, 5, 5, 5], dtype=torch.float64)
arg_1 = arg_1_tensor.clone()
arg_2_0 = 2
arg_2_1 = 2
arg_2_2 = 2
arg_2 = [arg_2_0,arg_2_1,arg_2_2,]
arg_3 = None
arg_4_0 = 0.5
arg_4_1 = 0.5
arg_4_2 = 0.5
arg_4 = [arg_4_0,arg_4_1,arg_4_2,]
arg_5 = False
arg_6_tensor = torch.tensor([], dtype=torch.float64)
arg_6 = arg_6_tensor.clone()
try:
res = torch.nn.functional.fractional_max_pool3d(arg_1,arg_2,arg_3,arg_4,arg_5,_random_samples=arg_6,)
except Exception as e:
print("Error:"+str(e))
```
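For comparison, a sketch of what a well-formed `_random_samples` would look like. I believe the expected shape is `(N, C, 3)` with values in `[0, 1)` (by analogy with the 2d variant); that is an assumption, not something stated in this report:
```python
import torch

x = torch.rand([2, 4, 5, 5, 5], dtype=torch.float64)
# Assumed shape: one (d, h, w) sample per (batch, channel) pair, same dtype as the input.
samples = torch.rand([2, 4, 3], dtype=torch.float64)
out = torch.nn.functional.fractional_max_pool3d(
    x, kernel_size=2, output_ratio=(0.5, 0.5, 0.5), _random_samples=samples
)
print(out.shape)
```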
### Versions
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 1.13.1 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.13.1 py39_cu116 pytorch
[conda] torchvision 0.14.1 py39_cu116 pytorch
| 4 |
3,806 | 91,630 |
Periodic ROCm distributed jobs are broken
|
module: rocm, triaged
|
### 🐛 Describe the bug
See https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=periodic%20%2F%20linux-focal-rocm
Last successful run was on Dec 16th: https://hud.pytorch.org/pytorch/pytorch/commit/bd94ee66ea361e97158d37d6ddd8dbbeb8b624ef
### Versions
CI
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport
| 1 |
3,807 | 91,624 |
trainer
|
triaged
|
### 🚀 The feature, motivation and pitch
A trainer, please, which obeys the rules in **the Zen of Python**.
**Bad cases: PyTorch Lightning, fairseq, transformers**
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
3,808 | 91,623 |
Investigate CUDA enabled build-time difference between MSVC and GCC+WSL
|
module: build, module: windows, triaged
|
From preliminary investigation results, it seems that the build time with CUDA enabled is three times longer with MSVC than with GCC on WSL on the same hardware. Within this issue, we should investigate the cause and suggest possible solutions.
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm
| 0 |
3,809 | 91,622 |
Cross-compiled libtorch Windows Arm64 binaries
|
module: windows, feature, module: ci, triaged, module: arm
|
Nightly CD pipelines should build Debug and Release libtorch binaries for the Windows ARM64 platform, cross-compiling on x64 runners.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
3,810 | 91,618 |
There is no developer documentation about getting started with MPS native debugging
|
module: docs, triaged, module: mps
|
### 📚 The doc issue
Per the title: getting started with writing native MPS code, or even tracking down bugs, assumes a large amount of pre-existing knowledge and requires a lot of trial and error, because there's no documentation.
### Suggest a potential alternative/fix
I have already [written an informal guide/devlog](https://medium.com/p/f094b7f4e8f0) about what I had to do to be able to break into an LLDB session inside ATen MPS native code. I'd be very happy to adapt it for inclusion here, but I'd like to get approval from whomever is responsible for docs on the PyTorch team before going ahead.
cc @svekars @carljparker @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
3,811 | 91,617 |
MPS: `torch.sub` erroneously returns 0 on outputs of `chunk` via `layer_norm`
|
triaged, module: mps
|
### 🐛 Describe the bug
When calling `chunk(2)` on the output of a complex unet model that includes at least `layer_norm()`, subtracting the tensors returned from the `chunk()` call from each other yields - incorrectly - a tensor full of zeroes. If the second tensor is `clone()`d or if both are sent to `cpu()` before subtracting, valid output is produced.
I do not yet have a concise repro case, but the symptom is the following:
```python
ab = stable_diffusion_unet_model.forward(...)
a, b = ab.chunk(2)
# a, b have device mps:0
test1 = a - b
# test1 is all 0s
test2 = a - b.clone()
# test2 has what looks like valid output
test3 = a.clone() - b
# test3 is all 0s
test4 = a.cpu() - b.cpu()
# test4 has what looks like valid output, matching test2
```
I have managed to trace the earliest site of corruption to a call to `layer_norm` inside `stable_diffusion_unet_model.forward(...)`. However, when trying to reproduce in a simple Python session using `torch.randn()` and passing that directly to `layer_norm`, I'm not able to reproduce the issue.
In order to debug this I have set myself up with a native MPS debugging session and added a `__builtin_debugtrap()` to the call to `torch.sub()`. However, I've reached the limit of publicly available documentation. My assumption is that something about how `chunk()` is implemented is messing up the call to `sub()` and/or giving it insufficient information to resolve whatever offsets it needs to actually get the tensor data.
Upon adding some printing code to the top of `binaryOpTensor()` in `BinaryOps.mm`, like this:
```
const auto sub_op_name = "sub_out_mps:";
if (op_name == sub_op_name) {
  std::cout << op_name << ": enter binaryOpTensor" << std::endl;
  stream_out_tensor(std::cout << op_name << ": self: ", self, 8) << std::endl;
  stream_out_tensor(std::cout << op_name << ": other: ", other, 8) << std::endl;
  std::cout << self.options() << std::endl;
  __builtin_debugtrap();
}
```
After rebuilding PyTorch in DEBUG mode, [which I have documented here](https://medium.com/@damian0815/how-to-debug-native-mps-operations-in-pytorch-f094b7f4e8f0), and running `lldb -- python scripts/my_script.py` with the Python code that triggers the bug, I get the following output:
```
sub_out_mps:: enter binaryOpTensor
sub_out_mps:: self: -27530939119479883300864.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00,
sub_out_mps:: other: -27530939119479883300864.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00,
TensorOptions(dtype=float, device=mps:0, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
Process 98436 stopped
(lldb)
```
Note that `stream_out_tensor` is adapted from `PRINT_TENSOR()` in `MPSCNNTests.mm`, which prints `t.data_ptr<float>()[i]` - I assume this is wrong, which is why I'm seeing `-27530939119479883300864.00`. Nevertheless, it's suspicious that both `self` and `other` show the same number, but without better documentation I'm just guessing.
Additionally, in the lldb session I can determine that `self` and `other` are both non-view tensors with contiguous memory and a storage offset of 0:
```
(lldb) expr other.storage_offset()
(int64_t) $4 = 0
(lldb) expr self.storage_offset()
(int64_t) $5 = 0
(lldb) expr self.is_view()
(bool) $6 = false
(lldb) expr other.is_view()
(bool) $7 = false
(lldb) expr other.is_contiguous(c10::MemoryFormat::Contiguous)
(bool) $8 = true
(lldb) expr self.is_contiguous(c10::MemoryFormat::Contiguous)
(bool) $9 = true
```
Given that they each result from a call to `chunk(2)`, I'm not sure how this can be the case, unless something upstream of `binaryOpTensor` is copying them to contiguous memory -- but then, I'd expect different values to be printed above:
```
sub_out_mps:: self: -27530939119479883300864.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00,
sub_out_mps:: other: -27530939119479883300864.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00,
```
rather than each showing `-27530939119479883300864.00`.
Posting here in the hope that @kulinseth or someone else with MPS knowledge will be able to point me in the right direction. Better yet would be an invite to y'all's Slack, but I do understand why you'd want to keep that more tightly controlled.
### Versions
```
Collecting environment information...
PyTorch version: 1.14.0a0+gitdc40b6d
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.0.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.21.4
Libc version: N/A
Python version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:25:13) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.14.0a0+gitdc40b6d
[pip3] torch-fidelity==0.3.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.10.3
[pip3] torchsde==0.2.5
[pip3] torchvision==0.13.1
[conda] Could not collect
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 10 |
3,812 | 91,615 |
sparse.mm(coo, dense) produces wrong results on T4/V100 GPUs
|
module: sparse, triaged
|
### 🐛 Describe the bug
`sparse.mm(coo, dense)` produces wrong results on T4 GPUs.
```python
In [1]: import torch
...:
...: a = torch.sparse_coo_tensor(
...: indices=torch.tensor([[1],
...: [2]]),
...: values=torch.tensor([0.1940]),
...: size=(2, 7))
...: b = torch.tensor([[ 0.1391, -0.1082, -0.7174, 0.7566, 0.3715, -1.0049, 0.0083, 0.3277, 0.2829, -0.8926],
...: [-0.1626, -0.8062, -0.1168, -1.6124, -0.1541, -0.0646, -0.5324, 0.0533, -0.0314, -0.7431],
...: [-1.1582, -0.0249, -0.7584, -0.4157, 0.6389, -0.2545, -1.2304, -1.5822, 0.6431, 0.9715],
...: [-1.3249, -1.0006, -0.4556, 0.4031, -0.7707, -1.1757, -0.1555, -0.6527, 0.2520, 0.4590],
...: [ 1.8932, 0.1633, -0.2634, -1.1079, 0.7673, -1.1230, 0.3047, 0.3896, -0.2520, -1.0066],
...: [ 0.2927, -1.1003, 1.9016, -1.5185, 1.6850, -1.3713, 0.2893, 0.3939, 0.6529, -0.5519],
...: [ 0.7724, 0.9300, 0.0463, 0.3486, 1.0300, 0.0132, 0.5492, 0.3500, -0.3119, -0.0407]])
# correct results on CPU
In [2]: torch.sparse.mm(a, b)
Out[2]:
tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000],
[-0.2247, -0.0048, -0.1471, -0.0806, 0.1239, -0.0494, -0.2387, -0.3069,
0.1248, 0.1885]])
In [3]: device = torch.device("cuda:0")
# incorrect results on GPU
In [4]: torch.sparse.mm(a.to(device), b.to(device))
Out[4]:
tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000],
[-1.0377, -4.0358, -0.7311, -8.1426, -0.6466, -0.3724, -2.9007, -0.0404,
-0.0322, -3.5270]], device='cuda:0')
```
The wrong results occur on T4/V100 + PyTorch 1.13.0/1.13.1. It disappears on T4/V100 + PyTorch 1.12.1 or A100 + PyTorch 1.13.0/1.13.1.
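As a sanity check (not a fix), the dense product on the same device can be compared against the sparse result; this is just a verification sketch reusing `a`, `b`, and `device` from the session above:
```python
# Dense reference computed on the GPU; it should match the CPU sparse result above.
dense_ref = torch.matmul(a.to_dense().to(device), b.to(device))
print(dense_ref)

# Comparing the sparse GPU result against the dense reference exposes the mismatch.
print(torch.allclose(torch.sparse.mm(a.to(device), b.to(device)), dense_ref))
```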
### Versions
```bash
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-163-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 12.0.0
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.60.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.13.0
[pip3] torchmetrics==0.9.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 habf752d_9 nvidia
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.9.2 pypi_0 pypi
```
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 3 |
3,813 | 91,608 |
SSL: CERTIFICATE_VERIFY_FAILED while trying to download pretrained model within a company that transforms SSL certificates for security purposes
|
triaged, module: hub
|
### 🐛 Describe the bug
I currently have difficulty using the code inside a company with enhanced security when accessing external networks.
```
from torch.hub import download_url_to_file
url = 'https://download.openmmlab.com/pretrain/third_party/resnet50_v1c-2cccc1ad.pth'
download_url_to_file(url)
```
The following error occurs when I execute the code, because the SSL certificate is intercepted and re-signed by internal security:
File "C:\Users\user\miniconda3\envs\torch112\lib\site-packages\torch\hub.py", line 727, in load_state_dict_from_url
download_url_to_file(url, cached_file, hash_prefix, progress=progress)
File "C:\Users\user\miniconda3\envs\torch112\lib\site-packages\torch\hub.py", line 593, in download_url_to_file
u = urlopen(req)
File "C:\Users\user\miniconda3\envs\torch112\lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\user\miniconda3\envs\torch112\lib\urllib\request.py", line 519, in open
response = self._open(req, data)
File "C:\Users\user\miniconda3\envs\torch112\lib\urllib\request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "C:\Users\user\miniconda3\envs\torch112\lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
File "C:\Users\user\miniconda3\envs\torch112\lib\urllib\request.py", line 1391, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "C:\Users\user\miniconda3\envs\torch112\lib\urllib\request.py", line 1351, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: CA certificate key too weak (_ssl.c:997)>
It seems there was a similar issue: #33288, and a workaround for the cause is described there:
```
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
```
Could I open a PR that applies this workaround?
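A less drastic alternative (not part of the original report) is to point Python's default HTTPS context at the company's own CA bundle instead of disabling verification entirely; the path below is a placeholder:
```python
import ssl

# Hypothetical path to the corporate root CA bundle -- adjust for your environment.
CORPORATE_CA_BUNDLE = "/path/to/corporate-ca.pem"

# urllib (used by torch.hub) builds its HTTPS context via this hook, so overriding
# it makes downloads trust the corporate certificate chain without skipping verification.
ssl._create_default_https_context = lambda: ssl.create_default_context(
    cafile=CORPORATE_CA_BUNDLE
)
```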
### Versions
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:29:51) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19042-SP0
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_win64_mkl conda-forge
[conda] cudatoolkit 11.6.0 hc0ea762_10 conda-forge
[conda] libblas 3.9.0 16_win64_mkl conda-forge
[conda] libcblas 3.9.0 16_win64_mkl conda-forge
[conda] liblapack 3.9.0 16_win64_mkl conda-forge
[conda] liblapacke 3.9.0 16_win64_mkl conda-forge
[conda] mkl 2022.1.0 h6a75c08_874 conda-forge
[conda] mkl-devel 2022.1.0 h57928b3_875 conda-forge
[conda] mkl-include 2022.1.0 h6a75c08_874 conda-forge
[conda] numpy 1.23.3 py310h4a8f9c9_0 conda-forge
[conda] pytorch 1.12.1 py3.10_cuda11.6_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.12.1 py310_cu116 pytorch
[conda] torchvision 0.13.1 py310_cu116 pytorch
cc @nairbv @NicolasHug @vmoens @jdsgomes
| 8 |
3,814 | 91,604 |
wrong assert message
|
oncall: quantization, triaged
|
https://github.com/pytorch/pytorch/blob/cce577b39154b501705f32ee0392c77eee43820b/torch/ao/quantization/utils.py#L357-L359
In the assert condition, `2**31` is used, which is `2147483648`.
However, the error message says `4294967296`, which is `2**32`.
I think the assert message is wrong.
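A minimal sketch of the kind of consistency being asked for, with hypothetical variable names (the actual code in `torch/ao/quantization/utils.py` differs):
```python
# Keep the bound and the message in sync by deriving both from one constant.
QVALUE_BOUND = 2**31

assert quant_min >= -QVALUE_BOUND and quant_max <= QVALUE_BOUND - 1, (
    f"quantization range must fit in a signed 32-bit integer "
    f"(+/- {QVALUE_BOUND}), got [{quant_min}, {quant_max}]"
)
```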
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 1 |
3,815 | 91,629 |
vmap + nn.SyncBatchNorm.convert_sync_batchnorm
|
oncall: distributed, module: data parallel, module: ddp, module: functorch
|
I ran into this problem with `vmap` and `functorch`. I am currently trying out different parallelization schemes, i.e. `dataparallel` and `distributeddataparallel`. `vmap` and `functorch` work fine when I use `model= nn.DataParallel(model)`, but when I use `distributeddataparallel` the error mentioned in pytorch/functorch#867 comes up again (specifically [this](https://github.com/pytorch/functorch/issues/867#issuecomment-1154273349), which was then fixed in pytorch/functorch#958 ), despite the fact that the model is in `.eval()` mode. To apply a quick fix, I used `replace_all_batch_norm_modules_` to patch the batchnorms, but then received this error
```
RuntimeError: functorch functions (vmap, grad, vjp, etc.) currently do not support the use of autograd.Function. Please rewrite your function to not use autograd.Function while we work on fixing this
```
I am not sure what this error implies/means, but my hunch (which might very well be wrong) is that the problem is coming from `model = nn.SyncBatchNorm.convert_sync_batchnorm(model)` that is required for `distributeddataparallel`. Do you have any suggested quick fix or am I making mistakes somewhere?
(I asked the same question in pytorch/functorch#958 at [here](https://github.com/pytorch/functorch/pull/958#issuecomment-1369177208))
EDIT: potentially similar discussion pytorch/functorch#207
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @zou3519 @Chillee @samdow @soumith
| 2 |
3,816 | 91,599 |
`mul(CSC, CSC)` fails with layout mismatch between the inputs and the output.
|
module: sparse, triaged
|
### 🐛 Describe the bug
```python
In [1]: import torch
In [2]: x = torch.rand(3, 3).to_sparse_csr()
<ipython-input-2-a111f877c399>:1: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /home/nik/git/Quansight/pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:54.)
x = torch.rand(3, 3).to_sparse_csr()
In [3]: x * x
Out[3]:
tensor(crow_indices=tensor([0, 3, 6, 9]),
col_indices=tensor([0, 1, 2, 0, 1, 2, 0, 1, 2]),
values=tensor([0.7678, 0.0257, 0.1134, 0.4288, 0.0411, 0.0229, 0.3167,
0.1143, 0.1028]), size=(3, 3), nnz=9,
layout=torch.sparse_csr)
In [4]: x_csc = x.to_sparse_csc()
In [5]: x_csc * x_csc
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [5], in <cell line: 1>()
----> 1 x_csc * x_csc
RuntimeError: Expected result Tensor to be of format CSR
```
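A possible workaround sketch, assuming layout conversions between CSC and CSR are available on this build (they may not be):
```python
# Hypothetical workaround: do the elementwise multiply in CSR, then convert back.
x_csr = x_csc.to_sparse_csr()
y_csr = x_csr * x_csr          # CSR * CSR works, per the example above
y_csc = y_csr.to_sparse_csc()  # assumes CSR -> CSC conversion is supported
print(y_csc.layout)
```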
### Versions
Current master.
cc @pearu @cpuhrsch @amjames @bhosmer
| 0 |
3,817 | 91,593 |
Division by zero error when running torch.nn.functional.lp_pool1d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
When I run `torch.nn.functional.lp_pool1d` with the second argument (`norm_type`) set to zero, it results in a division by zero:
```
import torch
import numpy as np
arg_1_tensor = torch.rand([3, 7], dtype=torch.float64)
arg_1 = arg_1_tensor.clone()
arg_2 = 0
arg_3 = 2
arg_4 = 3
arg_5 = False
try:
    res = torch.nn.functional.lp_pool1d(arg_1,arg_2,arg_3,arg_4,arg_5,)
except Exception as e:
    print("Error:"+str(e))
```
The log message is:
```
Error:float division by zero
```
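For reference, a minimal sketch of the same call with a nonzero norm type, which should not hit the division by zero (argument order follows the positional call above):
```python
import torch

x = torch.rand([3, 7], dtype=torch.float64)
# norm_type=2, kernel_size=2, stride=3, ceil_mode=False
out = torch.nn.functional.lp_pool1d(x, 2, 2, 3, False)
print(out.shape)  # expected: torch.Size([3, 2])
```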
### Versions
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 Ti
Nvidia driver version: 525.60.11
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.12.0
[pip3] torchaudio==0.12.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 1.12.0 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.12.0 py39_cu113 pytorch
[conda] torchvision 0.13.0 py39_cu113 pytorch
| 6 |
3,818 | 91,590 |
Crashes of linalg.ldl_solve on different edge cases not coming from linalg.ldl_factor
|
module: crash, triaged, module: linear algebra, module: edge cases
|
## Double free on negative indices on CPU
```python
import torch
import numpy as np
arg_1_tensor = torch.neg(torch.rand([4, 5, 5], dtype=torch.float32))
arg_1 = arg_1_tensor.clone()
arg_2_tensor = torch.randint(-2048,1,[4, 5], dtype=torch.int32)
arg_2 = arg_2_tensor.clone()
arg_3_tensor = torch.rand([4, 5, 1], dtype=torch.float32)
arg_3 = arg_3_tensor.clone()
arg_4 = False
try:
    res = torch.linalg.ldl_solve(arg_1,arg_2,arg_3,hermitian=arg_4,)
except Exception as e:
    print("Error:"+str(e))
```
## Segfault on neg + CPU
```python
import torch
import numpy as np
arg_1_tensor = torch.neg(torch.rand([2, 2, 5, 5], dtype=torch.complex64))
arg_1 = arg_1_tensor.clone()
arg_2_tensor = torch.randint(-128,512,[2, 2, 5], dtype=torch.int32)
arg_2 = arg_2_tensor.clone()
arg_3_tensor = torch.rand([2, 2, 5, 1], dtype=torch.complex64)
arg_3 = arg_3_tensor.clone()
arg_4 = True
try:
    res = torch.linalg.ldl_solve(arg_1,arg_2,arg_3,hermitian=arg_4,)
except Exception as e:
    print("Error:"+str(e))
```
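For contrast, a sketch of the documented pairing, where the pivots come from `torch.linalg.ldl_factor` rather than arbitrary random integers (this illustrates the intended usage, not a fix for the crashes above):
```python
import torch

A = torch.randn(4, 5, 5)
A = A + A.mT  # symmetrize each matrix in the batch
B = torch.rand(4, 5, 1)

LD, pivots = torch.linalg.ldl_factor(A)
X = torch.linalg.ldl_solve(LD, pivots, B)
print((A @ X - B).abs().max())  # residual should be near zero
```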
| 2 |
3,819 | 91,582 |
Softmax function slows down for data with large range
|
module: performance, module: cuda, triaged
|
### 🐛 Describe the bug
For two tensors with the same shape, but different initial range, `torch.softmax` has significantly different launch latency.
I have taken a look at [the source code](https://github.com/pytorch/pytorch/blob/2b52db9c953d063db7b46c12f4df35b47aca4381/aten/src/ATen/native/cuda/PersistentSoftmax.cuh#L69), but couldn't find any clue.
For people who do not know how `softmax` is implemented, let me briefly explain the core part (for the case dim < 1024):
step 1: each thread of the block loads data into a buffer from global memory (VRAM) for the selected size.
step 2: calculate the max value along the target dimension using warp reduction.
step 3: calculate `::exp(value - max_value)` -- compiled as `__expf` if the `--fast_math` flag is passed -- for each data point in the buffer and gather the sum along the target dimension using warp reduction.
step 4: apply `::exp(value - max_value) / sum` to each data point in the buffer and save to the output tensor.
P.S. `exp(x - max_value) / sum(exp(x - max_value)) = (exp(x) * exp(-max_value)) / (exp(-max_value) * sum(exp(x))) = exp(x) / sum(exp(x))`, so we get a mathematically identical result with much higher numerical stability by subtracting the max value.
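As a reference point, a short PyTorch-level sketch (not the CUDA kernel itself) of the max-subtraction scheme described in steps 1-4 above:
```python
import torch

def stable_softmax(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # step 2: max over the reduction dimension
    x_max = x.max(dim=dim, keepdim=True).values
    # step 3: exp of the shifted values and their sum
    e = torch.exp(x - x_max)
    # step 4: normalize
    return e / e.sum(dim=dim, keepdim=True)

x = torch.randn(8, 8, 512, 512) * 100
print((stable_softmax(x) - torch.softmax(x, dim=-1)).abs().max())
```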
My first guess was the max_value part, because `__expf(x)` may have different latency when the value varies. So I implemented the softmax kernel myself in CUDA C, and I found no difference in launch time at all with my own implementation (which was also faster than the PyTorch version). Now I am really confused about this.
Any help is welcome.
Test code:
```python
import random
import numpy as np
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)


def main():
    seed = 8282
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.set_device(0)
    torch.backends.cudnn.benchmark = True
    torch.backends.cuda.matmul.allow_tf32 = False
    torch.backends.cudnn.allow_tf32 = False

    batch_size = 8
    N = 100
    src_seq_len = 512
    tgt_seq_len = 512
    num_heads = 8
    dtype = torch.float16

    x = torch.zeros((batch_size, num_heads, src_seq_len, tgt_seq_len))
    x = x.to(0, dtype=dtype)

    scale = 1
    x.uniform_(-scale, scale)
    cost = 0
    with torch.inference_mode():
        for _ in range(N):
            start.record()
            out = torch.softmax(x, dim=-1)
            end.record()
            torch.cuda.synchronize()
            cost += start.elapsed_time(end)
    print(cost)

    scale = 100
    x.uniform_(-scale, scale)
    cost = 0
    with torch.inference_mode():
        for _ in range(N):
            start.record()
            out = torch.softmax(x, dim=-1)
            end.record()
            torch.cuda.synchronize()
            cost += start.elapsed_time(end)
    print(cost)


if __name__ == '__main__':
    main()
```
Result of above code:
```bash
10.987520195543766
33.92409551143646
```
### Versions
PyTorch version: 1.13.0a0+08820cb
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.23.2
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.0a0+08820cb
[pip3] torch-model-archiver==0.7.0
[pip3] torch-tensorrt==1.2.0a0
[pip3] torchserve==0.7.0
[pip3] torchtext==0.13.0a0
[pip3] torchvision==0.14.0a0
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2020.4 h726a3e6_304 conda-forge
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.13.0a0+08820cb pypi_0 pypi
[conda] torch-model-archiver 0.7.0 pypi_0 pypi
[conda] torch-tensorrt 1.2.0a0 pypi_0 pypi
[conda] torchserve 0.7.0 pypi_0 pypi
[conda] torchtext 0.13.0a0 pypi_0 pypi
[conda] torchvision 0.14.0a0 pypi_0 pypi
cc @ngimel
| 0 |
3,820 | 91,581 |
LBFGS wolfe exceeds the maximum allowed iterations
|
module: optimizer, triaged, actionable
|
### 🐛 Describe the bug
We can specify the maximum number of evaluations LBFGS may use every step via `max_eval`.
When using strong Wolfe line search, a call to `_strong_wolfe` is made, which returns the number of evaluations done during the call. After that, a condition checks whether the number of evaluations is below `max_eval`. Note that if `max_eval=3`, `_strong_wolfe` can still run 25 iterations and return 25. In theory `_strong_wolfe` has a parameter `max_ls` to limit exactly that, but LBFGS doesn't pass it, so the default value of 25 is used.
This makes `max_eval` inaccurate and, for small values, almost useless.
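For context, a sketch of how the affected configuration is typically set up (the model and values here are only illustrative):
```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.LBFGS(
    model.parameters(),
    max_eval=3,                      # the budget that is not respected per step
    line_search_fn="strong_wolfe",   # triggers the _strong_wolfe call discussed above
)

def closure():
    optimizer.zero_grad()
    loss = model(torch.randn(32, 10)).pow(2).mean()
    loss.backward()
    return loss

optimizer.step(closure)
```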
Interestingly, if you install `1.10.1+cu113` from `https://download.pytorch.org/whl/torch_stable.html` you can see a fix was done:
```python
loss, flat_grad, t, ls_func_evals = _strong_wolfe(
obj_func, x_init, t, d, loss, flat_grad, gtd, max_ls=min(25, max_eval - current_evals)
)
```
In version 1.12 we don't see the usage of `max_ls` and we see:
```python
loss, flat_grad, t, ls_func_evals = _strong_wolfe(
obj_func, x_init, t, d, loss, flat_grad, gtd)
```
I can't find any commit with the fix mentioned above, yet somehow it was released and then reverted(?).
Checking out commit `302ee7bfb6`, which is tagged as `1.10.1`, doesn't show this change either.
### Versions
PyTorch version: 1.12.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.0
[pip3] torch==1.12.0+cu116
[pip3] torch-model-archiver==0.5.3b20220226
[pip3] torch-workflow-archiver==0.2.4b20220513
[pip3] torchaudio==0.12.0
[pip3] torchfile==0.1.0
[pip3] torchserve==0.6.0b20220513
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0
[conda] No relevant packages
cc @vincentqb @jbschlosser @albanD @janeyx99
| 1 |
3,821 | 91,577 |
[RFC] FP8 dtype introduction to PyTorch
|
oncall: quantization, triaged
|
### 🚀 The feature, motivation and pitch
Proposal of adding native fp8 dtypes in PyTorch.
Motivation and details in rfcs PR: https://github.com/pytorch/rfcs/pull/51
### Alternatives
_No response_
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 14 |
3,822 | 91,574 |
Add BlockWise Distribution Support to the torch.distributions Package
|
module: distributions, triaged
|
### 🚀 The feature, motivation and pitch
I am requesting the addition of BlockWise distribution support to the torch.distributions package in PyTorch. By "BlockWise distribution", I am referring to distributions for which the event dimensions have separate, distinct and independent distributions.
For example, consider an n-dimensional event space, where each dimension is independently distributed according to a separate distribution. A BlockWise distribution would allow users to specify and sample from this type of distribution using the torch.distributions API.
This type of distribution can be useful in a variety of applications, such as factorizing a complex distribution into a product of simpler distributions, or modeling distributions over high-dimensional event spaces. The particular use case I have in mind is to create separate distributions for a radial and angular coordinate (say a MixtureSameFamily and a Uniform distribution to create multiple rings), then use a Transform (Bijector) to recover cartesian coordinates.
It would be great if PyTorch could provide support for BlockWise distributions through the torch.distributions package.
Here is how my particular example would look like
```python
import numpy as np
import torch
import torch.distributions as tfd
def rings(modes: int = 4, mode_width: float = 0.1, inner_radius: float = 1., outer_radius=10.):
    """
    Concentric rings distribution
    :param modes: Number of concentric rings in the mixture
    :param inner_radius: Size of the smallest ring.
    :param outer_radius: Size of the largest ring.
    :param mode_width: Width of the modes (radial coordinate only)
    """
    r = torch.linspace(inner_radius, outer_radius, modes)
    mixture = tfd.Categorical(probs=torch.ones(modes), validate_args=False)
    component = tfd.Normal(loc=r, scale=mode_width, validate_args=False)
    r_distribution = tfd.MixtureSameFamily(mixture, component, validate_args=False)
    theta_distribution = tfd.Uniform(low=-np.pi, high=np.pi, validate_args=False)
    return tfd.TransformedDistribution(
        base_distribution=tfd.BlockWise([r_distribution, theta_distribution]),
        transforms=PolarToCartesianTransform()
    )
```
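For reference, a minimal sketch (not an existing `torch.distributions` API) of what such a `BlockWise` could look like, assuming independent scalar component distributions concatenated along a single event dimension:
```python
import torch
import torch.distributions as tfd


class BlockWise(tfd.Distribution):
    """Concatenates independent univariate distributions into one event dimension."""

    def __init__(self, distributions, validate_args=False):
        self.distributions = list(distributions)
        batch_shape = self.distributions[0].batch_shape
        event_shape = torch.Size([len(self.distributions)])
        super().__init__(batch_shape, event_shape, validate_args=validate_args)

    def sample(self, sample_shape=torch.Size()):
        # Stack one sample per component along the new event dimension.
        return torch.stack([d.sample(sample_shape) for d in self.distributions], dim=-1)

    def log_prob(self, value):
        # Independence: the joint log-density is the sum of per-component densities.
        return sum(d.log_prob(value[..., i]) for i, d in enumerate(self.distributions))
```
With something like this, the `rings` example above would still need a custom `PolarToCartesianTransform`, which is likewise not part of `torch.distributions`.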
Thank you for considering this feature request!
### Alternatives
_No response_
### Additional context
_No response_
cc @fritzo @neerajprad @alicanb @nikitaved
| 1 |
3,823 | 91,570 |
Security policy impractical / lacks contact information?
|
high priority, module: docs, triaged, security
|
### 📚 The doc issue
Hi, happy new year! :tada:
I have a potential security issue to report, but https://github.com/pytorch/pytorch/security/policy does not answer the question of where to send reports; it only links to a page, [Meta/Facebook(!) Bug Bounty Program Info](https://www.facebook.com/whitehat), with a wall of text where it's not clear which parts apply to Meta/Facebook only and which to PyTorch. On a side note, I do not have a Facebook account. So this seems rather impractical, and I can now choose whether to (a) just report right to the public issue tracker or (b) not report at all. Please reconsider this approach, thank you! If you'd like to reach out off-GitHub, my profile has my e-mail address for contact.
Best, Sebastian
### Suggest a potential alternative/fix
_No response_
cc @ezyang @gchanan @zou3519 @svekars @carljparker
| 3 |
3,824 | 93,498 |
torch.compiled mish function is x5 slower than eager (CPU)
|
triaged, bug, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
I wanted to test the benefits of the `torch.compile` function in PyTorch 2. As I don't have access to a powerful GPU, I tried to speed up computations on CPU. So, I tested whether inductor could match PyTorch's handwritten [mish activation](https://pytorch.org/docs/stable/generated/torch.nn.Mish.html).
However, the compiled function is 5x slower than the handwritten eager function or the builtin mish function. The results are the same:
- With input tensors of 1000 or 10000000 elements
- When using 1 or 4 cores (my laptop has 4 physical cores)
Is this the expected behaviour? Will the CUDA path be affected too? Thanks!
Here is the code:
```python
import torch
from torch import nn
import torch.nn.functional as F
import time

torch.set_num_interop_threads(1)
torch.set_num_threads(1)


def mish(x: torch.Tensor):
    return x * torch.tanh(F.softplus(x))


ops = [
    F.mish,
    mish,
    torch.jit.script(mish),
    # torch.compile(mish, fullgraph=True),
    torch.compile(mish, mode="max-autotune", fullgraph=True),
    # torch.compile(mish, mode="reduce-overhead", fullgraph=True),
]

t = torch.randn(10000000)
torch._inductor.config.debug = True

# Warm up
for _ in range(10):
    for op in ops:
        _ = op(t)

res = {}
N = 10
for i, op in enumerate(ops):
    start = time.time_ns()
    for _ in range(N):
        _ = op(t)
    end = time.time_ns()
    res[f"{i}_{op}"] = f"{(end - start) / 1e6 / N:.2f} ms"
print(res)
```
The result for a input of 10000000 elements:
```
{'0_<function mish at 0x7f3f452a8f70>': '214.37 ms', # F,mish
'1_<function mish at 0x7f3f7a7d2e50>': '224.24 ms', # eager mish
'2_<torch.jit.ScriptFunction object at 0x7f3f44325f90>': '216.41 ms', # TS mish
'3_<function mish at 0x7f3f4266a1f0>': '1228.75 ms'} # Inductor mish
```
If the input tensor has 1000 elements, all functions take 0.01ms except inductor that takes 0.05ms
Here is the C++ code gen:
```
from ctypes import c_void_p, c_long
import torch
import random
from torch import empty_strided, as_strided, device
from torch._inductor.codecache import AsyncCompile

aten = torch.ops.aten
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
async_compile = AsyncCompile()

kernel_cpp_0 = async_compile.cpp('''
#include "/tmp/torchinductor_victor/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h"
extern "C" void kernel(const float* __restrict__ in_ptr0,
                       float* __restrict__ out_ptr0)
{
    #pragma GCC ivdep
    for(long i0=0; i0<10000000; i0+=1)
    {
        {
            {
                auto tmp0 = in_ptr0[i0];
                auto tmp1 = static_cast<float>(20);
                auto tmp2 = tmp0 > tmp1;
                auto tmp3 = std::exp(tmp0);
                auto tmp4 = std::log1p(tmp3);
                auto tmp5 = tmp2 ? tmp0 : tmp4;
                auto tmp6 = std::tanh(tmp5);
                auto tmp7 = tmp0 * tmp6;
                out_ptr0[i0] = tmp7;
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile


def call(args):
    arg0_1, = args
    args.clear()
    buf0 = empty_strided((10000000, ), (1, ), device='cpu', dtype=torch.float32)
    kernel_cpp_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()))
    del arg0_1
    return (buf0, )


if __name__ == "__main__":
    from torch._dynamo.testing import rand_strided
    from torch._inductor.utils import print_performance
    arg0_1 = rand_strided((10000000, ), (1, ), device='cpu', dtype=torch.float32)
    print_performance(lambda: call([arg0_1]))
```
My environment is:
- Torch version: 2.0.0.dev20221231+cpu
- CPU: Intel® Core™ i5-7300HQ CPU @ 2.50GHz × 4
- OS: Ubuntu 20.04.3 LTS
### Error logs
_No response_
### Minified repro
_No response_
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 4 |
3,825 | 91,566 |
Build Error: OpenMP library could not be found. Proceeding might lead to highly sub-optimal performance.
|
module: build, triaged, module: mkldnn, module: third_party
|
### 🐛 Describe the bug
Error Message:
```
Building wheel torch-2.0.0a0+git8992eec
-- Building version 2.0.0a0+git8992eec
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/mnt/c/Users/ZJ/Raw/pytorch/torch -DCMAKE_PREFIX_PATH=/home/u/anaconda3/lib/python3.9/site-packages;/home/u/anaconda3 -DNUMPY_INCL
UDE_DIR=/home/u/anaconda3/lib/python3.9/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/home/u/anaconda3/bin/python -DPYTHON_INCLUDE_DIR=/home/u/anaconda3/include/python3.9 -DPYTHON_LIBRARY=/home/u/anaconda3/lib/libpython3.9.a
-DTORCH_BUILD_VERSION=2.0.0a0+git8992eec -DUSE_CUDA=0 -DUSE_NUMPY=True -DUSE_ROCM=0 /mnt/c/Users/ZJ/Raw/pytorch
CMake Warning (dev) at /home/u/anaconda3/share/cmake-3.22/Modules/CMakeDependentOption.cmake:84 (message):
Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
CMakeLists.txt:245 (cmake_dependent_option)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /home/u/anaconda3/share/cmake-3.22/Modules/CMakeDependentOption.cmake:84 (message):
Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
CMakeLists.txt:276 (cmake_dependent_option)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could not find ccache. Consider installing ccache to speed up compilation.
-- std::exception_ptr is supported.
-- Turning off deprecation warning due to glog.
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
--
-- 3.13.0.0
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/mnt/c/Users/ZJ/Raw/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- MKL libraries: /home/u/anaconda3/lib/libmkl_intel_lp64.so;/home/u/anaconda3/lib/libmkl_intel_thread.so;/home/u/anaconda3/lib/libmkl_core.so;/home/u/anaconda3/lib/libiomp5.so;/usr/lib/x86_64-linux-gnu/libpthread.a;/usr/lib/x86_64-
linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/libdl.a
-- MKL include directory: /home/u/anaconda3/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: /home/u/anaconda3/lib/libiomp5.so
-- Brace yourself, we are building NNPACK
-- NNPACK backend is x86-64
-- Failed to find LLVM FileCheck
-- git version: v1.6.1 normalized to 1.6.1
-- Version: 1.6.1
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES -- failed to compile
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK -- success
CMake Warning (dev) at /home/u/anaconda3/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:85 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES)
CMake Warning (dev) at /home/u/anaconda3/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:85 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
-- Could NOT find OpenMP (missing: OpenMP_C_FOUND OpenMP_CXX_FOUND)
CMake Warning at third_party/fbgemm/CMakeLists.txt:93 (message):
OpenMP is not supported by the compiler
CMake Warning at third_party/fbgemm/CMakeLists.txt:186 (message):
==========
CMake Warning at third_party/fbgemm/CMakeLists.txt:187 (message):
CMAKE_BUILD_TYPE = Release
CMake Warning at third_party/fbgemm/CMakeLists.txt:188 (message):
CMAKE_CXX_FLAGS_DEBUG is -g
CMake Warning at third_party/fbgemm/CMakeLists.txt:189 (message):
CMAKE_CXX_FLAGS_RELEASE is -O3 -DNDEBUG
CMake Warning at third_party/fbgemm/CMakeLists.txt:190 (message):
==========
** AsmJit Summary **
ASMJIT_DIR=/mnt/c/Users/ZJ/Raw/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_TEST=FALSE
ASMJIT_TARGET_TYPE=STATIC
ASMJIT_DEPS=pthread;rt
ASMJIT_LIBS=asmjit;pthread;rt
ASMJIT_CFLAGS=-DASMJIT_STATIC
ASMJIT_PRIVATE_CFLAGS=-Wall;-Wextra;-Wconversion;-fno-math-errno;-fno-threadsafe-statics;-fno-semantic-interposition;-DASMJIT_STATIC
ASMJIT_PRIVATE_CFLAGS_DBG=
ASMJIT_PRIVATE_CFLAGS_REL=-O2;-fmerge-all-constants
-- Using third party subdirectory Eigen.
-- Found PythonInterp: /home/u/anaconda3/bin/python (found suitable version "3.9.13", minimum required is "3.0")
-- Found PythonLibs: /home/u/anaconda3/lib/libpython3.9.a (found suitable version "3.9.13", minimum required is "3.0")
-- Using third_party/pybind11.
-- pybind11 include dirs: /mnt/c/Users/ZJ/Raw/pytorch/cmake/../third_party/pybind11/include
-- summary of build options:
Install prefix: /mnt/c/Users/ZJ/Raw/pytorch/torch
Target system: Linux
Compiler:
C compiler: /usr/bin/clang-14
CFLAGS:
-- Gloo build as SHARED library
-- Found PythonInterp: /home/u/anaconda3/bin/python (found version "3.9.13")
-- Found PythonLibs: /home/u/anaconda3/lib/libpython3.9.a (found version "3.9.13")
Generated: /mnt/c/Users/ZJ/Raw/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: /mnt/c/Users/ZJ/Raw/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: /mnt/c/Users/ZJ/Raw/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
--
-- ******** Summary ********
-- CMake version : 3.22.1
-- CMake command : /home/u/anaconda3/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/clang++-14
-- C++ compiler version : 14.0.0
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH : /home/u/anaconda3/lib/python3.9/site-packages;/home/u/anaconda3
-- CMAKE_INSTALL_PREFIX : /mnt/c/Users/ZJ/Raw/pytorch/torch
-- CMAKE_MODULE_PATH : /mnt/c/Users/ZJ/Raw/pytorch/cmake/Modules
--
-- ONNX version : 1.12.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
--
-- ******** Summary ********
-- CMake version : 3.22.1
-- CMake command : /home/u/anaconda3/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/clang++-14
-- C++ compiler version : 14.0.0
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
-- CMAKE_PREFIX_PATH : /home/u/anaconda3/lib/python3.9/site-packages;/home/u/anaconda3
-- CMAKE_INSTALL_PREFIX : /mnt/c/Users/ZJ/Raw/pytorch/torch
-- CMAKE_MODULE_PATH : /mnt/c/Users/ZJ/Raw/pytorch/cmake/Modules
--
-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor
-- Adding -DNDEBUG to compile flags
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Found a library with LAPACK API (mkl).
disabling CUDA because NOT USE_CUDA is set
-- USE_CUDNN is set to 0. Compiling without cuDNN support
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- Will build oneDNN Graph
-- MKLDNN_CPU_RUNTIME = OMP
-- cmake version: 3.22.1
CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:36 (cmake_policy):
The OLD behavior for policy CMP0025 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
-- DNNL_TARGET_ARCH: X64
-- DNNL_LIBRARY_NAME: dnnl
CMake Warning (dev) at /home/u/anaconda3/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES)
CMake Warning (dev) at /home/u/anaconda3/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
-- Could NOT find OpenMP (missing: OpenMP_C_FOUND OpenMP_CXX_FOUND)
CMake Error at third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:118 (message):
OpenMP library could not be found. Proceeding might lead to highly
sub-optimal performance.
Call Stack (most recent call first):
third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
```
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.4.0
[conda] blas 1.0 mkl
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39h6c91a56_3
[conda] numpy-base 1.21.5 py39ha15fc14_3
[conda] numpydoc 1.4.0 py39h06a4308_0
```
cc @malfet @seemethere @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 1 |
3,826 | 91,565 |
min/max not supported for Long dtype on MPS
|
triaged, module: mps
|
### 🐛 Describe the bug
```python
import torch

dev = torch.device('mps')
indices = torch.tensor([[0, 1, 2], [0, 1, 2]])
value = torch.tensor([1, 2, 3])
a = torch.sparse_coo_tensor(indices, value, (4, 4),
                            dtype=torch.float32,
                            device=dev).to_dense()
print(a)
```
```
RuntimeError Traceback (most recent call last)
Cell In [16], line 5
3 indices = torch.tensor([[0, 1, 2], [0, 1, 2]])
4 value = torch.tensor([1, 2, 3])
----> 5 a = torch.sparse_coo_tensor(indices, value, (4, 4),
6 dtype=torch.float32,
7 device=dev).to_dense()
8 print(a)
RuntimeError: input_t.scalar_type() != ScalarType::Long INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm":1196, please report a bug to PyTorch. min/max not supported for Long dtype on MPS
```
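Until this is fixed, a possible workaround sketch (an assumption on my side that the goal is just to end up with the dense tensor on MPS, and that the assert is only hit by the sparse path when it runs on the MPS device): build and densify on CPU, then move the result.
```python
import torch

dev = torch.device('mps')
indices = torch.tensor([[0, 1, 2], [0, 1, 2]])
value = torch.tensor([1, 2, 3])
# construct and densify on CPU, then move the dense float32 tensor to MPS
a = torch.sparse_coo_tensor(indices, value, (4, 4), dtype=torch.float32).to_dense().to(dev)
print(a)
```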
### Versions
```
torch 2.0.0.dev20221231
torchaudio 2.0.0.dev20221231
torchvision 0.15.0.dev20221231
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
3,827 | 91,562 |
`torch::jit::optimize_for_inference` doesn't preserve exported methods when calling `freeze`
|
oncall: jit
|
I came across this issue while trying to optimize a JIT scripted model from python in my C++ app. I'm developing against `libtorch==1.13.1` and using `torch==1.13.1` in python in the CUDA+cuDNN dev container. The model was converted with `torch.jit.script` and had an additional `torch.jit.export` exported method named "inference".
When trying to run `torch::jit::optimize_for_inference(model, {"inference"})` in my C++ app, I get an error saying:
```
terminate called after throwing an instance of 'c10::Error'
what(): Method 'inference' is not defined.
```
This stems from the fact that during model optimization, the model is frozen but the vector `other_methods` is not passed into the `preserved_attrs` parameter of `freeze`. I don't have a full `libtorch` development environment set up but have written my own version of `optimize_for_inference` with the proposed change of `frozen_mod = freeze(module, other_methods, true);` and it appears to work well. If I have time/can get a dev environment set up before you all get to it, I'm happy to make a PR.
https://github.com/pytorch/pytorch/blob/77c2a8a11f7b5164c255b5b49dbc66a3f6533e9d/torch/csrc/jit/api/module.cpp#L502
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
3,828 | 91,557 |
Segmentation fault when running torch.nn.AdaptiveMaxPool3d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
If torch.nn.AdaptiveMaxPool3d is given very large list elements for the output size, it results in a segmentation fault.
```
import torch
import numpy as np
arg_1_0 = 125091515651
arg_1_1 = 125091515651
arg_1_2 = 125091515651
arg_1 = [arg_1_0,arg_1_1,arg_1_2,]
arg_class = torch.nn.AdaptiveMaxPool3d(arg_1,)
arg_2_0_tensor = torch.rand([2, 3, 5, 6, 7], dtype=torch.float64)
arg_2_0 = arg_2_0_tensor.clone()
arg_2 = [arg_2_0,]
try:
    res = arg_class(*arg_2)
except Exception as e:
    print("Error:"+str(e))
```
The log message is:
```
Segmentation fault (core dumped)
```
### Versions
Collecting environment information...
PyTorch version: 1.8.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.27
Python version: 3.8.16 (default, Dec 7 2022, 01:12:13) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.8.0
[pip3] torchaudio==0.13.0+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0+cu116
[conda] Could not collect
| 0 |
3,829 | 91,556 |
Overflow when running torch.nn.AdaptiveMaxPool3d on torch 1.12.0 and 1.13.1
|
triaged, module: edge cases
|
### 🐛 Describe the bug
When running torch.nn.AdaptiveMaxPool3d with very large elements in the output size list, it overflows.
```
import torch
import numpy as np
arg_1_0 = 125091515651
arg_1_1 = 125091515651
arg_1_2 = 125091515651
arg_1 = [arg_1_0,arg_1_1,arg_1_2,]
arg_class = torch.nn.AdaptiveMaxPool3d(arg_1,)
arg_2_0_tensor = torch.rand([2, 3, 5, 6, 7], dtype=torch.float64)
arg_2_0 = arg_2_0_tensor.clone()
arg_2 = [arg_2_0,]
try:
    res = arg_class(*arg_2)
except Exception as e:
    print("Error:"+str(e))
```
Log message:
```
Error:Storage size calculation overflowed with sizes=[2, 3, 125091515651, 125091515651, 125091515651]
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.27
Python version: 3.8.16 (default, Dec 7 2022, 01:12:13) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.0
[pip3] torchaudio==0.13.0+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0+cu116
[conda] Could not collect
| 1 |
3,830 | 91,553 |
Segmentation fault when running torch.nn.AdaptiveMaxPool2d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
If torch.nn.AdaptiveMaxPool2d is given very large list elements for the output size, it results in a segmentation fault:
```
import torch
import numpy as np
arg_1_0 = 125091515651
arg_1_1 = 125091515651
arg_1 = [arg_1_0,arg_1_1,]
arg_class = torch.nn.AdaptiveMaxPool2d(arg_1,)
arg_2_0_tensor = torch.rand([1, 3, 5, 6], dtype=torch.float64)
arg_2_0 = arg_2_0_tensor.clone()
arg_2 = [arg_2_0,]
try:
    res = arg_class(*arg_2)
except Exception as e:
    print("Error:"+str(e))
```
The log message is:
```
Segmentation fault (core dumped)
```
Here is the link to [gist](https://colab.research.google.com/gist/nimashiri/973e5abe10001083c7ebce01171a11da/tensorflow-python-ops-gen_list_ops-tensor_list_from_tensor.ipynb)
### Versions
Collecting environment information...
PyTorch version: 1.8.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.0.194
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 Ti
Nvidia driver version: 510.108.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.19.5
[pip3] pytorch-lightning==1.5.6
[pip3] pytorch-ranger==0.1.1
[pip3] torch==1.8.0
[pip3] torch-geometric==2.0.4
[pip3] torch-tb-profiler==0.4.0
[conda] Could not collect
| 1 |
3,831 | 91,552 |
Overflow when running torch.nn.AdaptiveMaxPool2d
|
triaged, module: pooling, module: edge cases
|
### 🐛 Describe the bug
Overflow when running torch.nn.AdaptiveMaxPool2d with very large list elements:
```
import torch
import numpy as np
arg_1_0 = 125091515651
arg_1_1 = 125091515651
arg_1 = [arg_1_0,arg_1_1,]
arg_class = torch.nn.AdaptiveMaxPool2d(arg_1,)
arg_2_0_tensor = torch.rand([1, 3, 5, 6], dtype=torch.float64)
arg_2_0 = arg_2_0_tensor.clone()
arg_2 = [arg_2_0,]
try:
    res = arg_class(*arg_2)
except Exception as e:
    print("Error:"+str(e))
```
The log message is:
```
Error:Storage size calculation overflowed with sizes=[1, 3, 125091515651, 125091515651]
```
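For context, a quick sketch of the arithmetic behind the error message (my own back-of-the-envelope check, not taken from PyTorch internals):
```python
import math

sizes = [1, 3, 125091515651, 125091515651]
numel = math.prod(sizes)        # ~4.7e22 elements requested
print(numel * 8 > 2**63 - 1)    # True: the float64 byte count cannot fit in int64
```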
### Versions
Collecting environment information...
PyTorch version: 1.12.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.27
Python version: 3.8.16 (default, Dec 7 2022, 01:12:13) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.0
[pip3] torchaudio==0.13.0+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0+cu116
[conda] Could not collect
| 1 |
3,832 | 93,497 |
[Inductor] The Way of Input Mutation Handling Conflicts with CPP Kernel Declaration
|
triaged, oncall: pt2, module: cpu inductor
|
Input mutation handling results in aliasing of input and output buffers, while the CPP kernel always assumes these buffers do not overlap via the `__restrict__` qualifier. The two conflict and may result in correctness issues due to compiler optimization. Take the test case `test_input_mutation1` in test_torchinductor.py as an example:
```python
def fn(a):
    b = a + 1
    a.copy_(b)
    c = a + 2
    return a * b / c
```
The function above produces the following kernel code, where `in_ptr0` and `out_ptr1` are aliases. Should we make them an in-place buffer instead?
```c++
extern "C" void kernel(const float* __restrict__ in_ptr0,
float* __restrict__ out_ptr1,
float* __restrict__ out_ptr2)
{
for(long i0=0; i0<4; i0+=1)
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + 16*i0);
auto tmp1 = at::vec::Vectorized<float>(static_cast<float>(1));
auto tmp2 = tmp0 + tmp1;
auto tmp3 = tmp2 * tmp2;
auto tmp4 = at::vec::Vectorized<float>(static_cast<float>(2));
auto tmp5 = tmp2 + tmp4;
auto tmp6 = tmp3 / tmp5;
tmp2.store(out_ptr1 + 16*i0);
tmp6.store(out_ptr2 + 16*i0);
}
#pragma omp simd simdlen(8)
for(long i0=64; i0<64; i0+=1)
{
auto tmp0 = out_ptr1[i0];
auto tmp1 = static_cast<float>(1);
auto tmp2 = tmp0 + tmp1;
auto tmp3 = tmp2 * tmp2;
auto tmp4 = static_cast<float>(2);
auto tmp5 = tmp2 + tmp4;
auto tmp6 = tmp3 / tmp5;
out_ptr1[i0] = tmp2;
out_ptr2[i0] = tmp6;
}
}
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,833 | 91,545 |
Adding label smoothing option to `nn.BCELoss` and `nn.BCEWithLogitsLoss`?
|
module: nn, module: loss, triaged, actionable
|
### 🚀 The feature, motivation and pitch
Hello, I was looking at BCELoss and CrossEntropyLoss and found that while the latter has the option to add label_smoothing, the former does not.
### Alternatives
Seeing as the reasoning behind label smoothing (i.e. the logit values having to go to $\pm\infty$ to approach 1 or 0) should also apply to sigmoid outputs (which are used with BCELoss), it stands to reason that label smoothing can and should be implemented for BCELoss as well? (For example: if the label value is 1, with label smoothing of 0.2, the target would become 0.8.)
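For reference, a minimal user-land sketch of what this could look like today (my own workaround following the 1 -> 1 - eps convention described above, not an existing PyTorch option):
```python
import torch
import torch.nn as nn

def smooth_binary_targets(targets: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    # 1 -> 1 - eps and 0 -> eps, mirroring the convention in this request
    return targets * (1.0 - eps) + (1.0 - targets) * eps

criterion = nn.BCEWithLogitsLoss()
logits = torch.randn(8, 1)
targets = torch.randint(0, 2, (8, 1)).float()
loss = criterion(logits, smooth_binary_targets(targets))
```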
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 9 |
3,834 | 91,543 |
Python 3.11.1, even with nightly version of PyTorch: ERROR: No matching distribution found for torch
|
oncall: binaries, triaged
|
### 🚀 The feature, motivation and pitch
```
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com, https://download.pytorch.org/whl/cu117
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
```
### Alternatives
_No response_
### Additional context
See https://stackoverflow.com/questions/74967443/error-no-matching-distribution-found-for-torch
cc @ezyang @seemethere @malfet
| 1 |
3,835 | 91,542 |
`torch.compile` frees computation graph in a GAN training setup and tries to call `backward` a second time
|
module: autograd, triaged, oncall: pt2, module: aotdispatch
|
### 🐛 Describe the bug
Reported by @edeyneka in the discussion board in [this topic](https://discuss.pytorch.org/t/adversarial-training-with-torch-compile/168933).
Ekaterina used the [DCGAN tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html) and provided a minimal, executable code snippet to reproduce the issue [here](https://discuss.pytorch.org/t/adversarial-training-with-torch-compile/168933/3?u=ptrblck) (thanks again for providing the code snippet and reporting the issue).
A smaller code snippet is attached:
```python
import torch
import torch.nn as nn
def train(netD, netG, x):
    # train netD
    fake = netG(x)
    outD_only = netD(fake.detach())
    outD_only.mean().backward()
    # confirm the backward call does not touch netG
    print(["{}, {}".format(name, p.grad) for name, p in netG.named_parameters()])

    # train netG
    outD_attached = netD(fake)
    outD_attached.mean().backward()
    # confirm netG now has valid gradients
    print(["{}, {}".format(name, p.grad) for name, p in netG.named_parameters()])
## Eager
# setup
netG = nn.Sequential(
nn.Linear(1, 1),
nn.ReLU(),
nn.Linear(1, 1))
netD = nn.Sequential(
nn.Linear(1, 1),
nn.ReLU(),
nn.Linear(1, 1))
x = torch.randn(1, 1)
train(netD, netG, x)
# ['0.weight, None', '0.bias, None', '2.weight, None', '2.bias, None']
# ['0.weight, tensor([[6.7547e-05]])', '0.bias, tensor([0.0002])', '2.weight, tensor([[0.0008]])', '2.bias, tensor([0.0009])']
## Compile
# setup
netG = nn.Sequential(
nn.Linear(1, 1),
nn.ReLU(),
nn.Linear(1, 1))
netD = nn.Sequential(
nn.Linear(1, 1),
nn.ReLU(),
nn.Linear(1, 1))
x = torch.randn(1, 1)
train_compiled = torch.compile(train, mode="default")
train_compiled(netD, netG, x)
# ['0.weight, tensor([[0.]])', '0.bias, tensor([0.])', '2.weight, tensor([[0.]])', '2.bias, tensor([0.])'] # grads seems to be already created
# RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```
which simulates a GAN training routine.
The expected logic would be:
* train the discriminator using a real sample (let's ignore this forward/backward pass here as it's not failing)
* train the discriminator using a detached fake sample (created by the generator)
* the `outD_only.mean().backward()` should not touch `netG` at all since `fake` was detached in `outD_only = netD(fake.detach())`
* the eager mode execution also verifies this assumption by printing `None` gradients in `netG`
* train the generator by using the attached `fake` tensor in `netD`
* this works in eager mode but fails when `torch.compile` is used with the "Trying to backward a second time" error (a possible workaround sketch is shown below).
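A possible (unverified) workaround sketch is to compile the generator and discriminator individually and keep the detach/backward bookkeeping of `train` in eager mode. Whether this avoids the error on the nightly above is an assumption, not something I have confirmed:
```python
# compile the submodules instead of the whole training step...
netG_c = torch.compile(netG)
netD_c = torch.compile(netD)
# ...and run the training step itself (detach + two backward calls) in eager
train(netD_c, netG_c, x)
```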
### Versions
PyTorch version: 2.0.0.dev20221227+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:49:35) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.1
[pip3] numpydoc==1.5.0
[pip3] torch==2.0.0.dev20221227+cu118
[pip3] torchtriton==2.0.0+0d7e753227
[conda] numpy 1.24.1 pypi_0 pypi
[conda] numpydoc 1.5.0 pypi_0 pypi
[conda] torch 2.0.0.dev20221227+cu118 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 7 |
3,836 | 91,537 |
Unclear how to change compiler used by `torch.compile`
|
module: docs, triaged, oncall: pt2
|
### 📚 The doc issue
It is not clear from https://pytorch.org/tutorials//intermediate/torch_compile_tutorial.html, nor from the docs in `torch.compile`, nor even from looking through `_dynamo/config.py`, how one can change the compiler used by PyTorch.
Right now I am seeing the following issue. My code:
```python
import torch
@torch.compile
def f(x):
    return 0.5 * x
f(torch.tensor(1.0))
```
<details><summary>This produces the following error message (click to toggle):</summary>
```
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/codecache.py:445, in CppCodeCache.load(cls, source_code)
444 try:
--> 445 subprocess.check_output(cmd, stderr=subprocess.STDOUT)
446 except subprocess.CalledProcessError as e:
File /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:421, in check_output(timeout, *popenargs, **kwargs)
419 kwargs['input'] = empty
--> 421 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
422 **kwargs).stdout
File /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:526, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
525 if check and retcode:
--> 526 raise CalledProcessError(retcode, process.args,
527 output=stdout, stderr=stderr)
528 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['g++', '/tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.cpp', '-shared', '-fPIC', '-Wall', '-std=c++17', '-Wno-unused-variable', '-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include', '-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/torch/csrc/api/include', '-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/TH', '-I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/THC', '-I/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10', '-lgomp', '-march=native', '-O3', '-ffast-math', '-fno-finite-math-only', '-fopenmp', '-D', 'C10_USING_CUSTOM_GENERATED_MACROS', '-o/tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.so']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
CppCompileError Traceback (most recent call last)
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/output_graph.py:676, in OutputGraph.call_user_compiler(self, gm)
675 else:
--> 676 compiled_fn = compiler_fn(gm, self.fake_example_inputs())
677 _step_logger()(logging.INFO, f"done compiler function {name}")
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py:1032, in wrap_backend_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs)
1031 else:
-> 1032 compiled_gm = compiler_fn(gm, example_inputs, **kwargs)
1034 return compiled_gm
File ~/venvs/main/lib/python3.10/site-packages/torch/__init__.py:1190, in _TorchCompileInductorWrapper.__call__(self, model_, inputs_)
1189 with self.cm:
-> 1190 return self.compile_fn(model_, inputs_)
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:398, in compile_fx(model_, example_inputs_, inner_compile)
393 with overrides.patch_functions():
394
395 # TODO: can add logging before/after the call to create_aot_dispatcher_function
396 # in torch._functorch/aot_autograd.py::aot_module_simplified::aot_function_simplified::new_func
397 # once torchdynamo is merged into pytorch
--> 398 return aot_autograd(
399 fw_compiler=fw_compiler,
400 bw_compiler=bw_compiler,
401 decompositions=select_decomp_table(),
402 partition_fn=functools.partial(
403 min_cut_rematerialization_partition, compiler="inductor"
404 ),
405 )(model_, example_inputs_)
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/optimizations/training.py:78, in aot_autograd.<locals>.compiler_fn(gm, example_inputs)
77 with enable_aot_logging():
---> 78 cg = aot_module_simplified(gm, example_inputs, **kwargs)
79 counters["aot_autograd"]["ok"] += 1
File ~/venvs/main/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:2355, in aot_module_simplified(mod, args, fw_compiler, bw_compiler, partition_fn, decompositions, hasher_type, static_argnums)
2353 full_args.extend(args)
-> 2355 compiled_fn = create_aot_dispatcher_function(
2356 functional_call,
2357 full_args,
2358 aot_config,
2359 )
2361 # TODO: There is something deeply wrong here; compiled_fn running with
2362 # the boxed calling convention, but aot_module_simplified somehow
2363 # historically returned a function that was not the boxed calling
2364 # convention. This should get fixed...
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/utils.py:88, in dynamo_timed.<locals>.time_wrapper(*args, **kwargs)
87 t0 = time.time()
---> 88 r = func(*args, **kwargs)
89 latency = time.time() - t0
File ~/venvs/main/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:2052, in create_aot_dispatcher_function(flat_fn, flat_args, aot_config)
2050 # You can put more passes here
-> 2052 compiled_fn = compiler_fn(flat_fn, fake_flat_tensor_args, aot_config)
2054 if not hasattr(compiled_fn, '_boxed_call'):
File ~/venvs/main/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:1307, in aot_wrapper_dedupe(flat_fn, flat_args, aot_config, compiler_fn)
1306 if ok:
-> 1307 return compiler_fn(flat_fn, leaf_flat_args, aot_config)
1309 # Strategy 2: Duplicate specialize.
1310 #
1311 # In Haskell types, suppose you have:
(...)
1343 # }
1344 # keep_arg_mask = [True, True, False, True]
File ~/venvs/main/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:957, in aot_dispatch_base(flat_fn, flat_args, aot_config)
956 with context(), track_graph_compiling(aot_config, "inference"):
--> 957 compiled_fw = aot_config.fw_compiler(fw_module, flat_args)
959 @wraps(compiled_fw)
960 def new_fn(args):
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/utils.py:88, in dynamo_timed.<locals>.time_wrapper(*args, **kwargs)
87 t0 = time.time()
---> 88 r = func(*args, **kwargs)
89 latency = time.time() - t0
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:373, in compile_fx.<locals>.fw_compiler(model, example_inputs)
372 fixed = len(example_inputs) - num_example_inputs
--> 373 return inner_compile(
374 model,
375 example_inputs,
376 num_fixed=fixed,
377 cudagraphs=cudagraphs,
378 graph_id=graph_id,
379 )
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py:588, in wrap_compiler_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs)
587 else:
--> 588 compiled_fn = compiler_fn(gm, example_inputs, **kwargs)
590 return compiled_fn
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/debug.py:223, in DebugContext.wrap.<locals>.inner(*args, **kwargs)
222 with DebugContext():
--> 223 return fn(*args, **kwargs)
File /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:79, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
78 with self._recreate_cm():
---> 79 return func(*args, **kwds)
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:140, in compile_fx_inner(gm, example_inputs, cudagraphs, num_fixed, is_backward, graph_id)
139 graph.run(*example_inputs)
--> 140 compiled_fn = graph.compile_to_fn()
142 if cudagraphs:
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/graph.py:538, in GraphLowering.compile_to_fn(self)
537 def compile_to_fn(self):
--> 538 return self.compile_to_module().call
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/utils.py:88, in dynamo_timed.<locals>.time_wrapper(*args, **kwargs)
87 t0 = time.time()
---> 88 r = func(*args, **kwargs)
89 latency = time.time() - t0
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/graph.py:527, in GraphLowering.compile_to_module(self)
525 print(code)
--> 527 mod = PyCodeCache.load(code)
528 for name, value in self.constants.items():
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/codecache.py:468, in PyCodeCache.load(cls, source_code)
467 mod.key = key
--> 468 exec(code, mod.__dict__, mod.__dict__)
469 # another thread might set this first
File /tmp/torchinductor_mcranmer/a3/ca3u2aiw5tfkcpspspezlhxlp4nhdcj55j5s4mdeny7h57cnudzz.py:30
13 kernel_cpp_0 = async_compile.cpp('''
14 #include "/tmp/torchinductor_mcranmer/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h"
15 extern "C" void kernel(const float* __restrict__ in_ptr0,
(...)
26 }
27 ''')
---> 30 async_compile.wait(globals())
31 del async_compile
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/codecache.py:663, in AsyncCompile.wait(self, scope)
662 if isinstance(result, (Future, TritonFuture)):
--> 663 scope[key] = result.result()
664 pbar.update(1)
File /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py:458, in Future.result(self, timeout)
457 elif self._state == FINISHED:
--> 458 return self.__get_result()
459 else:
File /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
File /opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/codecache.py:640, in AsyncCompile.cpp.<locals>.task()
639 def task():
--> 640 return CppCodeCache.load(source_code).kernel
File ~/venvs/main/lib/python3.10/site-packages/torch/_inductor/codecache.py:447, in CppCodeCache.load(cls, source_code)
446 except subprocess.CalledProcessError as e:
--> 447 raise exc.CppCompileError(cmd, e.output) from e
449 cls.cache[key] = cls._load_library(output_path)
CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.cpp -shared -fPIC -Wall -std=c++17 -Wno-unused-variable -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/TH -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/THC -I/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10 -lgomp -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -D C10_USING_CUSTOM_GENERATED_MACROS -o/tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.so
Output:
clang: error: the clang compiler does not support '-march=native'
clang: error: unsupported option '-fopenmp'
clang: error: unsupported option '-fopenmp'
The above exception was the direct cause of the following exception:
BackendCompilerFailed Traceback (most recent call last)
Cell In [15], line 1
----> 1 f(torch.tensor(1.0))
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:212, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
210 dynamic_ctx.__enter__()
211 try:
--> 212 return fn(*args, **kwargs)
213 finally:
214 set_eval_frame(prior)
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:333, in catch_errors_wrapper.<locals>.catch_errors(frame, cache_size)
330 return hijacked_callback(frame, cache_size, hooks)
332 with compile_lock:
--> 333 return callback(frame, cache_size, hooks)
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:480, in convert_frame.<locals>._convert_frame(frame, cache_size, hooks)
478 counters["frames"]["total"] += 1
479 try:
--> 480 result = inner_convert(frame, cache_size, hooks)
481 counters["frames"]["ok"] += 1
482 return result
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:103, in wrap_convert_context.<locals>._fn(*args, **kwargs)
101 torch.fx.graph_module._forward_from_src = fx_forward_from_src_skip_result
102 try:
--> 103 return fn(*args, **kwargs)
104 finally:
105 torch._C._set_grad_enabled(prior_grad_mode)
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/utils.py:88, in dynamo_timed.<locals>.time_wrapper(*args, **kwargs)
86 compilation_metrics[key] = []
87 t0 = time.time()
---> 88 r = func(*args, **kwargs)
89 latency = time.time() - t0
90 # print(f"Dynamo timer: key={key}, latency={latency:.2f} sec")
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:339, in convert_frame_assert.<locals>._convert_frame_assert(frame, cache_size, hooks)
336 global initial_grad_state
337 initial_grad_state = torch.is_grad_enabled()
--> 339 return _compile(
340 frame.f_code,
341 frame.f_globals,
342 frame.f_locals,
343 frame.f_builtins,
344 compiler_fn,
345 one_graph,
346 export,
347 hooks,
348 frame,
349 )
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:400, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, hooks, frame)
398 for attempt in itertools.count():
399 try:
--> 400 out_code = transform_code_object(code, transform)
401 orig_code_map[out_code] = code
402 break
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py:341, in transform_code_object(code, transformations, safe)
338 instructions = cleaned_instructions(code, safe)
339 propagate_line_nums(instructions)
--> 341 transformations(instructions, code_options)
343 fix_vars(instructions, code_options)
345 dirty = True
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:387, in _compile.<locals>.transform(instructions, code_options)
374 nonlocal output
375 tracer = InstructionTranslator(
376 instructions,
377 code,
(...)
385 mutated_closure_cell_contents,
386 )
--> 387 tracer.run()
388 output = tracer.output
389 assert output is not None
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1684, in InstructionTranslator.run(self)
1682 def run(self):
1683 _step_logger()(logging.INFO, f"torchdynamo start tracing {self.f_code.co_name}")
-> 1684 super().run()
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:538, in InstructionTranslatorBase.run(self)
533 try:
534 self.output.push_tx(self)
535 while (
536 self.instruction_pointer is not None
537 and not self.output.should_exit
--> 538 and self.step()
539 ):
540 pass
541 except BackendCompilerFailed:
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:501, in InstructionTranslatorBase.step(self)
499 if not hasattr(self, inst.opname):
500 unimplemented(f"missing: {inst.opname}")
--> 501 getattr(self, inst.opname)(inst)
503 return inst.opname != "RETURN_VALUE"
504 except BackendCompilerFailed:
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1750, in InstructionTranslator.RETURN_VALUE(self, inst)
1745 _step_logger()(
1746 logging.INFO,
1747 f"torchdynamo done tracing {self.f_code.co_name} (RETURN_VALUE)",
1748 )
1749 log.debug("RETURN_VALUE triggered compile")
-> 1750 self.output.compile_subgraph(self)
1751 self.output.add_output_instructions([create_instruction("RETURN_VALUE")])
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/output_graph.py:529, in OutputGraph.compile_subgraph(self, tx, partial_convert, reason)
512 self.add_output_instructions(random_calls_instructions)
514 if (
515 stack_values
516 and all(
(...)
526
527 # optimization to generate better code in a common case
528 self.add_output_instructions(
--> 529 self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
530 + [create_instruction("UNPACK_SEQUENCE", len(stack_values))]
531 )
532 else:
533 graph_output_var = self.new_var("graph_out")
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/output_graph.py:600, in OutputGraph.compile_and_call_fx_graph(self, tx, rv, root)
598 assert_no_fake_params_or_buffers(gm)
599 with tracing(self.tracing_context):
--> 600 compiled_fn = self.call_user_compiler(gm)
601 compiled_fn = disable(compiled_fn)
603 counters["stats"]["unique_graphs"] += 1
File ~/venvs/main/lib/python3.10/site-packages/torch/_dynamo/output_graph.py:681, in OutputGraph.call_user_compiler(self, gm)
679 except Exception as e:
680 compiled_fn = gm.forward
--> 681 raise BackendCompilerFailed(self.compiler_fn, e) from e
682 return compiled_fn
BackendCompilerFailed: debug_wrapper raised CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.cpp -shared -fPIC -Wall -std=c++17 -Wno-unused-variable -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/TH -I/Users/mcranmer/venvs/main/lib/python3.10/site-packages/torch/include/THC -I/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10 -lgomp -march=native -O3 -ffast-math -fno-finite-math-only -fopenmp -D C10_USING_CUSTOM_GENERATED_MACROS -o/tmp/torchinductor_mcranmer/p4/cp42uf272g2qggmogzazkui7he4vnm4ftyfi2ghvyudtmaxxi25x.so
Output:
clang: error: the clang compiler does not support '-march=native'
clang: error: unsupported option '-fopenmp'
clang: error: unsupported option '-fopenmp'
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
</details>
(For the record, I am on macOS 13.1 with an M1 processor, Python 3.10.8, PyTorch 2.0)
I installed the `brew` version of clang, which does in fact have `openmp` support. However, I have no idea how to force its use, as the compiler used by `torch.compile` seems to default to the one that comes with macOS (at `/usr/bin/g++`), which does not have `openmp` support.
### Suggest a potential alternative/fix
It would be great if the docs specify how I can tweak the compilation settings, at the very least how to specify a compiler to use.
(For what it's worth, I did try specifying `CXX` as an environment variable, which was my first reflex - it might be a good idea to simply check for this as others might try that first as well.)
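Skimming `torch/_inductor/config.py`, `cpp.cxx` looks like the relevant knob; a hedged sketch follows. The attribute name, its tuple-of-candidates semantics, and the Homebrew LLVM path are all assumptions about this particular nightly, and overriding the compiler may still leave flags like `-march=native`/`-fopenmp` in place:
```python
import torch._inductor.config as inductor_config

# assumption: cpp.cxx is a tuple of candidate compilers tried in order;
# point it at an OpenMP-capable clang++ (the path below is machine-specific)
inductor_config.cpp.cxx = ("/opt/homebrew/opt/llvm/bin/clang++",)
```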
cc @svekars @carljparker @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
3,837 | 91,536 |
The speed of matrix inversion is relatively slow for many small matrices
|
module: performance, module: cuda, triaged, module: linear algebra
|
### 🐛 Describe the bug
I found that `torch.linalg.inv` is relatively slow for many (= 100,000) small (< 32 x 32) matrices compared to CuPy.
The complete benchmark code is uploaded here: https://gist.github.com/yoshipon/dc7e14635d48656c767d47132351eaf6
As an excerpt, `torch.linalg.inv` for `size = (100000, 4, 4)` takes `639 µs` with the following code:
```
%%timeit x = torch.randn(*size, dtype=dtype, device="cuda"); torch.cuda.synchronize()
y = torch.linalg.inv(x)
torch.cuda.synchronize()
```
On the other hand, CuPy's version takes only `288 µs` (2x faster) with the following code:
```
%%timeit x = torch.randn(*size, dtype=dtype, device="cuda"); torch.cuda.synchronize()
y = cupy_inv(x)
torch.cuda.synchronize()
```
Note that `cupy_inv` uses `*getrfBatched` and `*getriBatched` of cuBLAS and is defined as follows:
```
import cupy as cp
import torch

def cupy_inv(x_):
    x = cp.from_dlpack(x_)
    return torch.from_dlpack(cp.linalg.inv(x))
```
I would appreciate it if `torch.linalg.inv` could be sped up because it is quite important for my research field of multichannel audio signal processing.
Multichannel (microphone array) signal processing is important for many audio applications, including distant speech recognition [1], speech enhancement [2], and source separation [3].
[1] T. Ochiai, et al. "Multichannel end-to-end speech recognition." ICML, 2017.
[2] L. Drude, et al. "Unsupervised training of neural mask-based beamforming." INTERSPEECH, 2019.
[3] Y. Bando, et al. "Neural full-rank spatial covariance analysis for blind source separation." IEEE SP Letters, 2021.
Because we usually invert more than 100K small matrices per training sample in a forward pass, this often becomes a bottleneck in training speed.
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.991
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.4.0
[pip3] pytorch-ignite==0.4.10
[pip3] torch==1.13.1+cu117
[pip3] torchaudio==0.13.1+cu117
[pip3] torchvision==0.14.1+cu117
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39h6c91a56_3
[conda] numpy-base 1.21.5 py39ha15fc14_3
[conda] numpydoc 1.4.0 py39h06a4308_0
[conda] pytorch-ignite 0.4.10 pypi_0 pypi
[conda] torch 1.13.1+cu117 pypi_0 pypi
[conda] torchaudio 0.13.1+cu117 pypi_0 pypi
[conda] torchvision 0.14.1+cu117 pypi_0 pypi
cc @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 2 |
3,838 | 91,535 |
When dist.broadcast float32 to int64, it will silently generate wrong results
|
oncall: distributed
|
### 🐛 Describe the bug
I find that when we use torch.distributed.broadcast, if the tensor on another GPU has a different dtype, it will silently generate wrong results.
```python
import torch
import torch.distributed as dist

def run(rank, world_size=2):
    dist.init_process_group('gloo', rank=rank, world_size=world_size)
    if rank == 0:
        n_id = torch.arange(6.).to(rank)
        print(n_id.dtype)
    else:
        n_id = torch.zeros(14, dtype=torch.int64).to(rank)
    dist.broadcast(n_id, src=0)
    print(f'rank {rank} {n_id} ')
```
Outputs:
```
torch.float32
rank 1 tensor([4575657221408423936, 4629700418010611712, 4656722015783223296,
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0], device='cuda:1')
rank 0 tensor([0., 1., 2., 3., 4., 5.], device='cuda:0')
```
Maybe we could add a dtype sanity check on dist.broadcast and other distribution functions.
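As a stopgap on the user side, here is a sketch of such a check (the helper name and the use of `all_gather_object` are illustrative only, not a proposed API):
```python
import torch
import torch.distributed as dist

def checked_broadcast(tensor, src=0):
    # Gather (dtype, shape) from every rank and make sure they all agree before broadcasting.
    meta = (str(tensor.dtype), tuple(tensor.shape))
    metas = [None] * dist.get_world_size()
    dist.all_gather_object(metas, meta)
    if any(m != metas[src] for m in metas):
        raise RuntimeError(f"broadcast metadata mismatch across ranks: {metas}")
    dist.broadcast(tensor, src=src)
```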
### Versions
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[pip3] torch-geometric==2.1.0.post1
[pip3] torch-quiver==0.1.0
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.13
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.11.0
[pip3] torchvision==0.11.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.3.0 h06a4308_520
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.2 py37h20f2e39_0
[conda] numpy-base 1.21.2 py37h79a1101_0
[conda] pytorch 1.10.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-geometric 2.1.0.post1 pypi_0 pypi
[conda] torch-quiver 0.1.0 pypi_0 pypi
[conda] torch-scatter 2.0.9 pypi_0 pypi
[conda] torch-sparse 0.6.13 pypi_0 pypi
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.11.0 py37 pytorch
[conda] torchvision 0.11.0 py37_cu113 pytorch
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
3,839 | 91,533 |
Cannot cast float64 to float32
|
needs reproduction, module: crash, triaged, module: macos, module: numpy
|
### 🐛 Describe the bug
When I try to cast a float64 tensor to float32, it raises a segfault; but nn.Linear seems to require float32 data, so I am stuck here.
```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

# X, y are float64 NumPy arrays; batch_size, lr, epochs are defined earlier in the script
X = torch.from_numpy(X)
print(X.dtype)
y = torch.tensor(y, dtype=torch.float32)
train_data = TensorDataset(X, y)
train_loader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True)
net = nn.Sequential(
nn.Linear(X.shape[1], 16),
nn.ReLU(),
nn.Linear(16, 1)
)
trainer = torch.optim.Adam(net.parameters(), lr=lr)
loss = nn.MSELoss()
for epoch in range(epochs):
for _X, _y in train_loader:
trainer.zero_grad()
l = loss(net(_X), _y)
l.backward()
trainer.step()
with torch.no_grad():
l = loss(net(X), y)
print(f'epoch {epoch} train loss: {l}')
```
i got
```
torch.float64
Traceback (most recent call last):
File "5_train_mlp.py", line 69, in <module>
l = loss(net(_X), _y)
File "/Users/ben/miniconda3/envs/autogluon/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/ben/miniconda3/envs/autogluon/lib/python3.7/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/Users/ben/miniconda3/envs/autogluon/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/ben/miniconda3/envs/autogluon/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Float but found Double
```
Then I try to make the cast:
```python
X = torch.from_numpy(X).float()
print(X.dtype)
# same as above...
```
Most of the time it returns:
```
48207 Segmentation fault: 11 python 5_train_mlp.py
```
The peculiar thing is that, just once, it didn't throw any exception but instead gave me a warning:
```
torch.float32
/Users/ben/miniconda3/envs/autogluon/lib/python3.7/site-packages/torch/nn/modules/loss.py:530: UserWarning: Using a target size (torch.Size([16])) that is different to the input size (torch.Size([16, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
/Users/ben/miniconda3/envs/autogluon/lib/python3.7/site-packages/torch/nn/modules/loss.py:530: UserWarning: Using a target size (torch.Size([3])) that is different to the input size (torch.Size([3, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
/Users/ben/miniconda3/envs/autogluon/lib/python3.7/site-packages/torch/nn/modules/loss.py:530: UserWarning: Using a target size (torch.Size([723])) that is different to the input size (torch.Size([723, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
epoch 0 train loss: 36856.08203125
epoch 1 train loss: 446.7268371582031
epoch 2 train loss: 45.26797866821289
epoch 3 train loss: 43.33668518066406
...
```
So, I wonder what's wrong here and how I can fix it?
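For reference, the conversion being attempted is usually written as below (a sketch with stand-in array shapes; the `unsqueeze` only addresses the broadcasting warning in the log, not the segfault itself):
```python
import numpy as np
import torch

X_np = np.random.rand(723, 8)   # stand-in for the real float64 features
y_np = np.random.rand(723)      # stand-in for the real targets

X = torch.from_numpy(X_np).float()                # float64 -> float32 copy
y = torch.from_numpy(y_np).float().unsqueeze(1)   # shape (N,) -> (N, 1) to match nn.Linear's output
```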
### Versions
❯ python collect_env.py
[1] 50310 segmentation fault python collect_env.py
versions used:
torch 1.12.1
Python 3.7.12
cc @malfet @albanD @mruberry @rgommers
| 3 |
3,840 | 91,524 |
functorch.so is installed back into the source directory
|
module: build, triaged, module: functorch
|
### 🐛 Describe the bug
It installs this file:
```
/usr/ports/misc/py-pytorch/work/pytorch-v1.13.1/functorch/functorch.so
```
where ```/usr/ports/misc/py-pytorch/work/pytorch-v1.13.1``` is the source directory it was built from.
cmake args:
-DPSIMD_SOURCE_DIR=/usr/ports/misc/py-pytorch/work/pytorch-v1.13.1/third_party/psimd -DFREEBSD_PYTHON_VER=3.9 -DPYTHON_EXECUTABLE:STRING=/usr/local/bin/python3.9 -DBUILD_PYTHON:BOOL=true -DCMAKE_C_COMPILER:STRING="cc" -DCMAKE_CXX_COMPILER:STRING="c++" -DCMAKE_C_FLAGS:STRING="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " -DCMAKE_C_FLAGS_DEBUG:STRING="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " -DCMAKE_C_FLAGS_RELEASE:STRING="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG" -DCMAKE_CXX_FLAGS:STRING="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " -DCMAKE_CXX_FLAGS_DEBUG:STRING="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " -DCMAKE_CXX_FLAGS_RELEASE:STRING="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG" -DCMAKE_EXE_LINKER_FLAGS:STRING=" -lexecinfo -fstack-protector-strong " -DCMAKE_MODULE_LINKER_FLAGS:STRING=" -lexecinfo -fstack-protector-strong " -DCMAKE_SHARED_LINKER_FLAGS:STRING=" -lexecinfo -fstack-protector-strong " -DCMAKE_INSTALL_PREFIX:PATH="/usr/local" -DCMAKE_BUILD_TYPE:STRING="Release" -DTHREADS_HAVE_PTHREAD_ARG:BOOL=YES -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=YES -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DFETCHCONTENT_FULLY_DISCONNECTED:BOOL=ON -DBUILD_CUSTOM_PROTOBUF:BOOL=OFF -DUSE_CUDA:BOOL=OFF -DUSE_ROCM:BOOL=OFF -DUSE_NNPACK:BOOL=OFF -DUSE_QNNPACK:BOOL=OFF -DUSE_PYTORCH_QNNPACK:BOOL=OFF -DUSE_FBGEMM:BOOL=OFF -GNinja -DPython_ADDITIONAL_VERSIONS=3.9 -DBOOST_PYTHON_SUFFIX:STRING=39
Version: 1.13.1
Python-3.9
FreeBSD 13.1
### Versions
```
$ python3.9 ./work/pytorch-v1.13.1/torch/utils/collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: freebsd13
GCC version: Could not collect
Clang version: 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)
CMake version: version 3.24.3
Libc version: N/A
Python version: 3.9.16 (main, Dec 18 2022, 01:15:32) [Clang 13.0.0 (git@github.com:llvm/llvm-project.git llvmorg-13.0.0-0-gd7b669b3a (64-bit runtime)
Python platform: FreeBSD-13.1-STABLE-amd64-64bit-ELF
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 510.60.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] mypy-protobuf==3.3.0
[pip3] numpy==1.23.4
[conda] Could not collect
```
cc @malfet @seemethere @zou3519 @Chillee @samdow @soumith
| 0 |
3,841 | 91,509 |
[functorch] make batch norm docs point to UX limitations
|
triaged, module: functorch
|
The UX limitations docs haven't been pulled over yet, but the batch norm docs have. Where the batch norm docs talk about why batch norm doesn't generally work, they should point to the UX limitations section on in-place mutations.
See https://github.com/pytorch/pytorch/pull/89213#discussion_r1058373807
cc @zou3519 @Chillee @soumith
| 0 |
3,842 | 91,508 |
Update map_nt to take into account size and strides
|
triaged, module: nestedtensor, bug
|
# Summary
map_nt doesn't check that the input is contiguous, only that the numel of the input matches that of the buffer; this is in order to allow the unary ops to be run on transposed inputs. We should also update map_nt to construct the output tensor with the same stride and offset information.
cc @cpuhrsch @jbschlosser @bhosmer @mikaylagawarecki
| 0 |
3,843 | 91,504 |
torch.jit.script ERR: RuntimeError: Can't redefine method: forward on class: __torch__.SoSadModule
|
oncall: jit
|
### 🐛 Describe the bug
I used torch.jit.script to script my model, but got an error; that is fine. But when I then tried to script one of its submodules, I got "RuntimeError: Can't redefine method: forward on class: __torch__.SoSadModule (of Python compilation unit at: 0x530d560)".
A simple reproduction follows; this is my nn.Module:
```
import torch
from torch import nn
class SoSadModule(nn.Module):
def __init__(self,):
nn.Module.__init__(self)
def forward(self, x):
return nn.Softmax(x)
class SadModule(nn.Module):
def __init__(self, use_skip: bool):
nn.Module.__init__(self)
self.use_skip = use_skip
self.layer = nn.Linear(2, 2)
self.layer_softmax = SoSadModule()
def forward(self, x):
if self.use_skip:
x_input = x
x = self.layer(x)
if self.use_skip:
x = x + x_input
self.layer_softmax(x)
return x
```
and I run this code:
```
model = SadModule(False)
try:
torch.jit.script(model)
except:
pass
torch.jit.script(model.layer_softmax)
```
I got a message that I didn't expect:
```
Traceback (most recent call last):
File "pr.py", line 38, in <module>
torch.jit.script(model.layer_softmax)
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_script.py", line 1286, in script
return torch.jit._recursive.create_script_module(
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_recursive.py", line 480, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_recursive.py", line 546, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_recursive.py", line 397, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError: Can't redefine method: forward on class: __torch__.SoSadModule (of Python compilation unit at: 0x3f45600)
```
If I script its submodule directly, like this:
```
model = SadModule(False)
# try:
# script_model = torch.jit.script(model)
# except:
# pass
torch.jit.script(model.layer_softmax)
```
I get the error that I expected, pointing at what is not supported by script:
```
Traceback (most recent call last):
File "pr.py", line 38, in <module>
torch.jit.script(model.layer_softmax)
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_script.py", line 1286, in script
return torch.jit._recursive.create_script_module(
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_recursive.py", line 480, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_recursive.py", line 546, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/_recursive.py", line 397, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
File "/home/disk2/users/chenfuyuan/icode/project_repository/cont_time_classify/torch/jit/annotations.py", line 142, in check_fn
raise torch.jit.frontend.FrontendError(
torch.jit.frontend.FrontendError: Cannot instantiate class 'Softmax' in a script function:
File "pr.py", line 14
def forward(self, x):
return nn.Softmax(x)
~~~~~~~~~~ <--- HERE
```
Has JIT had a lasting effect on my model? Or is there an API to undo this side effect, so that I can JIT a model multiple times without the error "Can't redefine method: forward on class: __torch__.SoSadModule"?
### Versions
pytorch: 1.9.1 and pytorch: 2.0.0.dev and all version of pytorch now.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
3,844 | 91,497 |
DISABLED test_tensor_requires_grad (test_jit.TestScript)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_tensor_requires_grad) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_tensor_requires_grad`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 5 |
3,845 | 91,495 |
DISABLED test_rand (test_jit.TestScript)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_rand) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_rand`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 10 |
3,846 | 91,494 |
DISABLED test_optional_tensor (test_jit.TestScript)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_optional_tensor) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_optional_tensor`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 7 |
3,847 | 91,493 |
DISABLED test_prim_grad_undefined (test_jit.TestScript)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_prim_grad_undefined) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_prim_grad_undefined`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 7 |
3,848 | 91,492 |
DISABLED test_requires_grad_loop (test_jit.TestScript)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_requires_grad_loop) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_requires_grad_loop`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 8 |
3,849 | 91,489 |
DISABLED test_successful (jit.test_freezing.TestMKLDNNReinplacing)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_successful) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_successful`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 5 |
3,850 | 91,488 |
DISABLED test_switch_inputs_to_inplace (jit.test_freezing.TestMKLDNNReinplacing)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_switch_inputs_to_inplace) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_switch_inputs_to_inplace`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 4 |
3,851 | 91,486 |
DISABLED test_always_alive_values (jit.test_freezing.TestMKLDNNReinplacing)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_always_alive_values) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_always_alive_values`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 5 |
3,852 | 91,484 |
DISABLED test_optional_list (test_jit.TestScript)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_optional_list) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_optional_list`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 7 |
3,853 | 91,482 |
DISABLED test_tensor_as_tensor_shape_prop (test_jit.TestScript)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_tensor_as_tensor_shape_prop) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_tensor_as_tensor_shape_prop`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 8 |
3,854 | 91,481 |
DISABLED test_merge_liveness (jit.test_freezing.TestMKLDNNReinplacing)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_merge_liveness) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_merge_liveness`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 2 |
3,855 | 91,471 |
Clean up nt impl duplicates where one can
|
triaged, better-engineering, module: nestedtensor, release notes: nested tensor
|
# Summary
There is overlap between some of the NestedTensor implementations defined in NestedTensorMath.cpp and NestedTensorTransformerFunctions.cpp. Also, since NestedTensorMath was the only cpp file for a while, some of the implementations would be better suited to different files for organizational purposes. This issue is used to track the cleanup of the code base.
cc @cpuhrsch @jbschlosser @bhosmer @mikaylagawarecki
| 0 |
3,856 | 91,470 |
torch.compile loud error on functorch transforms
|
triaged, oncall: pt2, module: functorch, module: dynamo
|
## Issue description
I would have expected these to fall back to eager mode, given that functorch transforms have try-catch blocks and context managers that dynamo should bail out of.
## Code example
```py
def f(x):
y = x * x * x * x * x * 1. * 1. * 1.
return y
def g(x):
return torch.vmap(f)(x)
cg = torch.compile(backend='aot_eager')(g)
x = torch.randn(5)
# Very loud error: dynamo/fake_tensor needs to peek into storage()
# result = cg(x)
expected = g(x)
def g(x):
return torch.func.grad(f)(x)
cg = torch.compile(backend='aot_eager')(g)
x = torch.randn([])
expected = g(x)
# Very loud error: dynamo needs to peek into storage()
# result = cg(x)
assert torch.allclose(result, expected)
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @Chillee @samdow @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 2 |
3,857 | 91,469 |
torch.compile with aotautograd does not support double backwards
|
module: autograd, triaged, enhancement, oncall: pt2, module: aotdispatch
|
## Code example
The second-order gradients are completely bogus:
```py
def f(x):
# Just to make sure there isn't a limit or something
y = x * x * x * x * x * 1 * 1 * 1. * 1. * 1.
print("graph_break")
gx, = torch.autograd.grad(y, x, create_graph=True)
ggx, = torch.autograd.grad(gx, x)
return gx, ggx
x = torch.tensor(1., requires_grad=True)
of = torch.compile(backend='aot_eager')(f)
gx, ggx = f(x)
ogx, oggx = of(x)
assert torch.allclose(gx, ogx)
# BUG: second-order gradients are completely bogus. assertion failed
assert torch.allclose(ggx, oggx)
```
## Fix
Easy short-term fix is likely marking the autograd.Function backward pass as once_differentiable and adding a better error message
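For illustration, a minimal sketch of that kind of guard on a custom autograd.Function (the class here is a made-up stand-in, not the actual AOTAutograd-generated code):
```python
import torch
from torch.autograd.function import once_differentiable

class CompiledRegion(torch.autograd.Function):  # illustrative stand-in
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    @once_differentiable  # double backward now errors loudly instead of being silently wrong
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out
```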
cc @ezyang @gchanan @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 4 |
3,858 | 91,468 |
torch.compile incorrect when imperative autograd APIs are used
|
high priority, module: autograd, triaged, oncall: pt2, module: aotdispatch
|
## Issue description
torch.compile may be silently incorrect when Tensor.retain_grad or Tensor.register_hook are involved.
## Code example
Example with retain_grad:
```py
# Test 1:
# Tensor.retain_grad inside function
# on intermediate, and on output
# See if the grad can be used :P
def f(x):
y = x.clone()
y.retain_grad()
z = y.clone()
z.retain_grad()
return z, y
of = torch.compile(backend='aot_eager')(f)
x = torch.randn([], requires_grad=True)
z, y = of(x)
z.clone().backward()
# inspect z.grad, y.grad
# Bug: they do not exist. May lead to silent correctness problems
```
Example with register_hook:
```py
# If you register a hook on an intermediate, it won't work.
def f(x):
y = x * x
z = y * x * x * x * 1 * 1 * 1. * 1. * 1.
print("graph_break")
y.register_hook(lambda x: x if x is None else 3.14 * x)
result, = torch.autograd.grad(z, x)
return result
of = torch.compile(backend='aot_eager')(f)
x = torch.tensor(1., requires_grad=True)
expected = f(x)
result = of(x)
# assertion failed
assert torch.allclose(result, expected)
```
cc @ezyang @gchanan @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
3,859 | 91,467 |
DISABLED test_fs_preserve_sharing (__main__.TestMultiprocessing)
|
module: multiprocessing, triaged, skipped, module: dynamo
|
Platforms: dynamo
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_fs_preserve_sharing%2CTestMultiprocessing)).
It looks like the temp `/dev/shm/torch_*` files are not cleaned up properly when running with dynamo
cc @VitalyFedyunin @ejguan @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 2 |
3,860 | 91,447 |
Degenerate ranges are allowed in NumPy, but not in PyTorch.
|
triaged, module: numpy
|
### 🐛 Describe the bug
```python
import torch as t
t.arange(5, 0)
```
Result:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: upper bound and larger bound inconsistent with step sign
```
I can understand why this behavior was implemented, but it seems wrong for a few reasons:
1. NumPy, whose API heavily inspires that of PyTorch, allows this:
```python
>>> import numpy as np
>>> np.arange(5, 0)
array([], dtype=int64)
```
2. Most programming languages (including Python) are perfectly happy letting you construct degenerate ranges.
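For example, plain Python gives an empty range here:
```python
>>> list(range(5, 0))
[]
```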
### Versions
```
Collecting environment information...
/home/brennan/.local/lib/python3.10/site-packages/torch/cuda/__init__.py:146: UserWarning:
NVIDIA GeForce RTX 3080 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3080 Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.10 (x86_64)
GCC version: (Ubuntu 12.2.0-3ubuntu1) 12.2.0
Clang version: 15.0.2-1
CMake version: version 3.24.2
Libc version: glibc-2.36
Python version: 3.10.7 (main, Nov 24 2022, 19:45:47) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-26-generic-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU
Nvidia driver version: 515.86.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-boto3-ec2==1.23.0.post1
[pip3] mypy-boto3-elbv2==1.23.0.post1
[pip3] mypy-boto3-resourcegroupstaggingapi==1.23.0.post1
[pip3] mypy-boto3-route53==1.23.0.post1
[pip3] mypy-boto3-s3==1.23.0.post1
[pip3] mypy-boto3-sts==1.23.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] torch==1.12.1
[conda] Could not collect
```
cc @mruberry @rgommers
| 1 |
3,861 | 91,439 |
Pytorch2.0 doesn't support compiling GRU and RNN model
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
I am trying to speed up a model by compiling it with PyTorch 2.0. However, GRU modules cannot be compiled.
Following is the MWE:
```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
class Model(nn.Module):
def __init__(self):
super().__init__()
self.gru = nn.GRU(16, 16, batch_first=True)
def forward(self, x):
return self.gru(x)
def test():
device = torch.device('cuda')
model = Model().to(device)
model = torch.compile(model)
x = torch.rand(1024, 20, 16).to(device)
model(x)
```
The error message says:
```plain
Traceback (most recent call last):
File "example.py", line 23, in <module>
test()
File "example.py", line 20, in test
model(x)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1482, in _call_impl
return forward_call(*args, **kwargs)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 83, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 212, in _fn
return fn(*args, **kwargs)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 333, in catch_errors
return callback(frame, cache_size, hooks)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 480, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 103, in _fn
return fn(*args, **kwargs)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 88, in time_wrapper
r = func(*args, **kwargs)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
return _compile(
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 400, in _compile
out_code = transform_code_object(code, transform)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 387, in transform
tracer.run()
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1684, in run
super().run()
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 538, in run
and self.step()
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 501, in step
getattr(self, inst.opname)(inst)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1750, in RETURN_VALUE
self.output.compile_subgraph(self)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 553, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 600, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/miniconda3/envs/pytorch2/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 681, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised NotImplementedError: could not find kernel for aten._cudnn_rnn_flatten_weight.default at dispatch key DispatchKey.Meta
```
The code runs smoothly if I use nn.LSTM, but I get the same error if I use nn.RNN.
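As a possible stopgap until there is a meta kernel for `_cudnn_rnn_flatten_weight`, one thing to try (a sketch, untested against this exact nightly) is to keep the GRU call out of the compiled graph with `torch._dynamo.disable`:
```python
import torch
import torch._dynamo as dynamo

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = torch.nn.GRU(16, 16, batch_first=True)

    @dynamo.disable  # graph-break around the op that currently has no meta kernel
    def _run_gru(self, x):
        return self.gru(x)

    def forward(self, x):
        return self._run_gru(x)

model = torch.compile(Model().cuda())
out, h = model(torch.rand(1024, 20, 16, device="cuda"))
```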
### Versions
```plain
PyTorch version: 2.0.0.dev20221227+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA TITAN RTX
Nvidia driver version: 522.25
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.0.dev20221227+cu117
[pip3] torchaudio==2.0.0.dev20221222+cu117
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20221222+cu117
[conda] numpy 1.24.1 pypi_0 pypi
[conda] torch 2.0.0.dev20221227+cu117 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20221222+cu117 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.15.0.dev20221222+cu117 pypi_0 pypi
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 7 |
3,862 | 91,437 |
using Tensor subclass between vmap layers
|
triaged, module: __torch_dispatch__, tensor subclass, module: functorch
|
### 🐛 Describe the bug
Trying to use Tensor subclass between vmap layers:
```python
import torch
import functorch
from torch.utils._pytree import tree_map
class CustomTensor(torch.Tensor):
def __new__(cls, elem):
self = torch.Tensor._make_wrapper_subclass(
cls, elem.size(),
dtype=elem.dtype, layout=elem.layout,
device=elem.device, requires_grad=elem.requires_grad,
strides=elem.stride(), storage_offset=elem.storage_offset())
self.elem = elem
return self
def __repr__(self):
return f'{type(self).__name__}({repr(self.elem)})'
@classmethod
def __torch_dispatch__(cls, func, types, args=(), kargs=None):
print('dispatch', func, types)
if kargs is None:
kargs = {}
memo = {}
unwrap = lambda x : x.elem if type(x) is cls else x
wrap = lambda x : cls(x) if torch.overrides.is_tensor_like(x) else x
args = tree_map(unwrap, args)
kargs = tree_map(unwrap, kargs)
with torch.overrides.enable_reentrant_dispatch():
ans = func(*args, **kargs)
ans = tree_map(wrap, ans)
return ans
def f(x):
def g(y):
return y + y
a = functorch.vmap(g)(CustomTensor(x))
return a.elem
functorch.vmap(f)(torch.tensor([[1., 2.], [3., 4.]]))
```
This outputs the following:
```
dispatch aten.add.Tensor (<class '__main__.CustomTensor'>,)
Traceback (most recent call last):
File "/home/alan/test.py", line 38, in <module>
functorch.vmap(f)(torch.tensor([[1., 2.], [3., 4.]]))
File "/home/alan/env/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 420, in wrapped
return _flat_vmap(
File "/home/alan/env/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/home/alan/env/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 605, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/home/alan/test.py", line 36, in f
a = functorch.vmap(g)(CustomTensor(x))
File "/home/alan/env/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 420, in wrapped
return _flat_vmap(
File "/home/alan/env/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/home/alan/env/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 605, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/home/alan/test.py", line 35, in g
return y + y
File "/home/alan/test.py", line 29, in __torch_dispatch__
ans = func(*args, **kargs)
File "/home/alan/env/lib/python3.10/site-packages/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Either your tensor may have escaped from inside a function being vmapped and this is a user error (see https://pytorch.org/functorch/stable/ux_limitations.html), or there is an internal functorch error in `gen_vmap_plumbing` Please file an issue if it looks like the latter
```
without enable_reentrant_dispatch, it also throws:
```
RuntimeError: NYI: querying is_contiguous inside of vmap for memory_format other than torch.contiguous_format
```
### Versions
PyTorch version: 2.0.0.dev20221227+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.10.7 (main, Sep 7 2022, 15:22:19) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 512.72
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.0.dev20221227+cu116
[pip3] torchaudio==2.0.0.dev20221227+cu116
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20221227+cu116
[conda] Could not collect
cc @Chillee @ezyang @zou3519 @albanD @samdow @msaroufim @soumith
| 2 |
3,863 | 91,435 |
Batch_first attribute in quantizable multiheadattention
|
oncall: transformer/mha
|
### 🐛 Describe the bug
I think that in torch.ao.nn.quantizable.modules.activation.MultiheadAttention, the from_float method does not pass on the batch_first attribute.
```python
@classmethod
def from_float(cls, other):
assert type(other) == cls._FLOAT_MODULE
assert hasattr(other, 'qconfig'), "The float module must have 'qconfig'"
# Setting the dropout to 0.0!
observed = cls(other.embed_dim, other.num_heads, other.dropout,
(other.in_proj_bias is not None),
(other.bias_k is not None),
other.add_zero_attn, other.kdim, other.vdim)
```
Because of this, one cannot set the batch_first format when quantizing a MultiheadAttention layer.
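A sketch of the kind of change `from_float` would need (untested; the keyword name is taken from `nn.MultiheadAttention`):
```python
observed = cls(other.embed_dim, other.num_heads, other.dropout,
               (other.in_proj_bias is not None),
               (other.bias_k is not None),
               other.add_zero_attn, other.kdim, other.vdim,
               batch_first=other.batch_first)
```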
### Versions
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.10
Python version: 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-84-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
Nvidia driver version: 455.23.04
cuDNN version: Probably one of the following:
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] pytorch-wpe==0.0.1
[pip3] torch==1.13.0
[pip3] torch-complex==0.4.3
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorch-wpe 0.0.1 pypi_0 pypi
[conda] torch 1.13.0 pypi_0 pypi
[conda] torch-complex 0.4.3 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.13.0 pypi_0 pypi
[conda] torchvision 0.14.0 pypi_0 pypi
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 0 |
3,864 | 91,417 |
[bazel] error: use of undeclared identifier 'cudaGraphDebugDotPrint'
|
module: build, module: cuda, triaged
|
### 🐛 Describe the bug
I’m trying to build with bazel from the tip of the master (`67c53d50e5b34a536cce2ad98b9ca5f65ff8a34d` at the time of writing) and I'm getting this error
```
external/pytorch/aten/src/ATen/cuda/CUDAGraph.cpp:259:27: error: use of undeclared identifier 'cudaGraphDebugDotPrint'
C10_CUDA_CHECK_WARN(cudaGraphDebugDotPrint(graph_, debug_path.c_str(), 1<<10)); // most verbose output
```
I looked at the PR that introduced this code at https://github.com/pytorch/pytorch/pull/85519/files and I fail to understand where cudaGraphDebugDotPrint is defined…
Note that I'm building slightly patched version (with Cruise internal patches, primarily related to build).
I haven't tried to repro this on the unpatched OSS version, but CI didn't catch it...
### Versions
master `67c53d50e5b34a536cce2ad98b9ca5f65ff8a34d`
cc @malfet @seemethere @ngimel
| 6 |
3,865 | 91,406 |
Pytorch clang-tidy header-filter is still broken
|
module: build, triaged
|
### 🐛 Describe the bug
Header-Filter for Clang-Tidy is broken. Currently, almost no headers are analyzed by the clang-tidy analyzer. I tried to fix this in a recent PR by extending the header filter to C10, but ultimately ended up making the same mistake. The current regex is invalid because LLVM does not support lookahead/lookbehinds in it's regex engine.
### Versions
I tried to fix clang-tidy's header filter specified in .clang-tidy, and it worked when I ran the regex through a Python code snippet. However, I have now realized that even the original regex was actually broken this whole time, because LLVM does NOT support regex with lookaheads or lookbehinds: https://stackoverflow.com/questions/71797349/is-it-possible-to-ignore-a-header-with-clang-tidy . This means that we are not currently running any clang-tidy static analysis on ANY of our header files, which is allowing a lot of performance issues to permeate the code base (especially in templates).
Not sure what the best way to reconstruct the current regex is without lookaheads. Any recommendations from folks? I can try to positively include matching folders for all but the currently excluded folders, but that seems error prone, as new folders will likely not be added as refactoring of the codebase continues.
cc @malfet @seemethere
| 3 |
3,866 | 91,396 |
[JIT] Zero-channel conv2d cannot be applied with `optimize_for_inference`
|
oncall: jit
|
### 🐛 Describe the bug
```python
import torch
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.conv2d = torch.nn.Conv2d(0, 1, kernel_size=(1, 1))
def forward(self, v0):
return self.conv2d(v0)
# jit ~ trace
m = MyModule()
m.eval()
inp = torch.randn(1, 0, 1, 1)
print(f"{m(inp).shape = }") # [1, 0, 1, 1]
jit_m = torch.jit.trace(m, torch.randn(1, 0, 1, 1))
print(f"{jit_m(inp).shape = }") # [1, 0, 1, 1]
# Zero-channel conv2d failed when doing `optimize_for_inference`.
opt_m = torch.jit.optimize_for_inference(jit_m) # ❌ compiler failure!
print(f"{opt_m(inp).shape = }")
```
A zero-channel conv2d layer, which is legal in both eager mode and JIT and basically returns an empty tensor, is however invalid (i.e., cannot be compiled) with `torch.jit.optimize_for_inference`.
```
/miniconda3/lib/python3.9/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
m(inp).shape = torch.Size([1, 0, 1, 1])
tensor([], size=(1, 0, 1, 1), grad_fn=<ConvolutionBackward0>)
jit_m(inp).shape = torch.Size([1, 0, 1, 1])
Traceback (most recent call last):
File "test.py", line 54, in <module>
opt_m = torch.jit.optimize_for_inference(jit_m)
File "/miniconda3/lib/python3.9/site-packages/torch/jit/_freeze.py", line 217, in optimize_for_inference
torch._C._jit_pass_optimize_for_inference(mod._c, other_methods)
RuntimeError: could not create a descriptor for a dilated convolution forward propagation primitive
```
This looks like a dispatch bug that should be fixed by falling back to some default implementation.
*Of course, this zero-channel conv2d bug is not user-facing, since it was found automatically by a fuzzer -- but fixing it could improve the PyTorch compiler's robustness and keep code compatible between the interpreter and the compiler.
### Versions
<details><summary><b>Env </b> <i>[click to expand]</i></summary>
<div>
```python
"""
Collecting environment information...
PyTorch version: 1.14.0.dev20221202+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] onnx2torch==1.5.3
[pip3] torch==1.14.0.dev20221202+cu117
[pip3] torchaudio==0.14.0.dev20221203+cu117
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20221203+cpu
[conda] numpy 1.23.3 pypi_0 pypi
[conda] onnx2torch 1.5.3 pypi_0 pypi
[conda] torch 1.14.0.dev20221202+cu117 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20221203+cu117 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.15.0.dev20221203+cpu pypi_0 pypi
"""
```
</div>
</details>
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
3,867 | 91,395 |
PyObject preservation and resurrection for `StorageImpl`
|
module: internals, triaged, enhancement, better-engineering
|
It may be useful for `StorageImpl` to support PyObject preservation and resurrection.
TensorImpl has this behavior. When a reference count to the PyObject for a tensor goes to zero, but its associated TensorImpl is still being used, the ownership is flipped so that the TensorImpl hangs onto the PyObject, preventing it from getting deallocated. If the PyObject ever needs to be resurrected again, ownership is flipped again and the PyObject is given back to Python. The Python object will retain all of the properties it had before, since it was never deallocated. See #55686
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 18 |
3,868 | 91,387 |
getting issue 'typeindex' file not found in Littorch-Lite/install/include/ATen/core/custom_class.h
|
oncall: mobile
|
### 🐛 Describe the bug
In LibTorch-Lite/install/include/ATen/core/custom_class.h:
```
#pragma once
#include <typeindex>
#include <memory>
```
### Versions
Hello Team,
I am using pod 'LibTorch-Lite', '1.13.0.1' in the Podfile of an iOS app written in Swift. While building the app I get the error "'typeindex' file not found" in the file LibTorch-Lite/install/include/ATen/core/custom_class.h at line number 3.
Kindly help.
| 2 |
3,869 | 91,375 |
Internal Assert failed
|
module: cuda, triaged, module: cuda graphs
|
### 🐛 Describe the bug
I am using an EncoderDecoder model from Hugging Face with bert-base-uncased as both the encoder and the decoder. I get the error below when running this code:
```python
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

training_args = Seq2SeqTrainingArguments(
    output_dir="./",
    learning_rate=5e-5,
    evaluation_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    predict_with_generate=True,
    overwrite_output_dir=True,
    save_total_limit=3,
    fp16=True,
)

trainer = Seq2SeqTrainer(
    model=bert2bert,
    tokenizer=tokenizer,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train,
    eval_dataset=test,
)
```
```
RuntimeError: false INTERNAL ASSERT FAILED at "../c10/cuda/CUDAGraphsC10Utils.h":73, please report a bug to PyTorch. Unknown CUDA graph CaptureStatus32508
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.27
Python version: 3.8.16 (default, Dec 7 2022, 01:12:13) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.13.0+cu116
[pip3] torchaudio==0.13.0+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0+cu116
[conda] Could not collect
cc @ngimel @mcarilli @ezyang
| 0 |
3,870 | 91,374 |
[RFC] `quantile` should work for `float16`/`half` on the GPU
|
module: cuda, triaged, enhancement, module: half
|
### 🚀 The feature, motivation and pitch
The `torch.quantile`/`torch.Tensor.quantile` function should work with `float16` (half precision) inputs on the GPU.
Currently,
```python
import torch
a = torch.randn(10, 10, dtype=torch.float16, device="cuda")
b = torch.tensor(0.25, dtype=torch.float16, device="cuda")
a.quantile(b)
# or
torch.quantile(a, b)
```
fails with
```python
RuntimeError: quantile() input tensor must be either float or double dtype
```
Instead, it should just work.
Also the same goes for the scalar versions (for consistency with `float32` and `float64` versions)
```python
a.quantile(0.25)
# or
torch.quantile(a, 0.25)
```
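Until this is supported natively, a possible user-side workaround (a sketch; it pays for an extra float32 copy and a downcast) is to upcast, compute, and downcast:
```python
import torch

a = torch.randn(10, 10, dtype=torch.float16, device="cuda")
q = torch.quantile(a.float(), 0.25).to(a.dtype)
```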
### Alternatives
_No response_
### Additional context
I was asked to resubmit this as an RFC. See #91156.
cc @ngimel
| 0 |
3,871 | 91,408 |
`NotImplementedError` when using `torch.distributed.launch`
|
oncall: distributed, module: functorch
|
First of all, really thanks for the nice work and huge contribution.
I tried to get the per-sample grads following the instructions here https://pytorch.org/functorch/1.13/notebooks/per_sample_grads.html, and it worked for me in CPU mode and single-GPU mode. However, I encounter the error:
```NotImplementedError: Cannot access storage of TensorWrapper```
when I switch to the multi-GPU mode by calling
```python -m torch.distributed.launch --nproc_per_node=2 train.py```
The Traceback shows the Error comes from the forward function when it is called from `grad_and_value()`. Note again, the same code works fine for single-GPU mode.
The whole code is as follows:
```python
model.eval()
fmodel, params, buffers = make_functional_with_buffers(model)
for batch_idx, (input, target) in enumerate(loader_plot):
input, target = input.cuda(), target.cuda()
if mixup_fn is not None:
input, target = mixup_fn(input, target)
if args.channels_last:
input = input.contiguous(memory_format=torch.channels_last)
def compute_loss_stateless_model(params, buffers, sample, target):
batch = sample.unsqueeze(0)
targets = target.unsqueeze(0)
predictions = fmodel(params, buffers, batch)
loss = loss_fn(predictions, targets)
return loss
ft_compute_grad = grad_and_value(compute_loss_stateless_model)
ft_compute_sample_grad = vmap(ft_compute_grad, in_dims=(None, None, 0, 0))
with torch.no_grad():
per_sample_opt = ft_compute_sample_grad(params, buffers, input, target) # Error comes from here
losses = per_sample_opt[1].cpu().numpy()
sample_grads = per_sample_opt[0]
device = sample_grads[0].device
global_grad_norm = torch.zeros(len(losses), device=device)
for layer_grads in sample_grads:
dims = list(range(1, layer_grads.dim()))
global_grad_norm.add_(layer_grads.pow(2).sum(dim=dims))
global_grad_norm = torch.sqrt(global_grad_norm).cpu().numpy()
losses_plot = losses if losses_plot is None else np.append(losses_plot, losses)
gradnorm_plot = global_grad_norm if gradnorm_plot is None else np.append(gradnorm_plot, global_grad_norm)
torch.cuda.synchronize()
output_dir = get_outdir(args.output if args.output else './output/train', 'grad_figures')
np.savez('{}/{}_{}.npz'.format(output_dir,epoch,args.local_rank),
loss = losses_plot, norm = gradnorm_plot)
```
The Error information is as follows:
```
Traceback (most recent call last):
File "train.py", line 1139, in <module>
main()
File "train.py", line 856, in main
train_metrics = train_one_epoch(
File "train.py", line 947, in train_one_epoch
per_sample_opt = ft_compute_sample_grad(params, buffers, input, target)
File "/usr/local/lib/python3.8/dist-packages/functorch/_src/vmap.py", line 362, in wrapped
return _flat_vmap(
File "/usr/local/lib/python3.8/dist-packages/functorch/_src/vmap.py", line 35, in fn
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/functorch/_src/vmap.py", line 489, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/functorch/_src/vmap.py", line 35, in fn
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/functorch/_src/eager_transforms.py", line 1111, in wrapper
output = func(*args, **kwargs)
File "train_grad2.py", line 938, in compute_loss_stateless_model
predictions = fmodel(params, buffers, batch)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/functorch/_src/make_functional.py", line 282, in forward
return self.stateless_model(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 1060, in forward
self.require_forward_param_sync = False
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/profiler.py", line 493, in __exit__
torch.ops.profiler._record_function_exit(self.handle)
File "/usr/local/lib/python3.8/dist-packages/torch/_ops.py", line 442, in __call__
return self._op(*args, **kwargs or {})
NotImplementedError: Cannot access storage of TensorWrapper
```
Thanks in advance if you could help me to find the solution. Best
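In case it helps triage: one thing worth trying (a sketch only; it assumes `model` is wrapped in DistributedDataParallel, as the traceback through torch/nn/parallel/distributed.py suggests, and it is not verified to fix the error) is to build the functional model from the underlying module rather than the DDP wrapper:
```python
# Unwrap DDP before calling make_functional_with_buffers (illustrative)
base_model = model.module if hasattr(model, "module") else model
base_model.eval()
fmodel, params, buffers = make_functional_with_buffers(base_model)
```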
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @zou3519 @Chillee @samdow @soumith
| 4 |
3,872 | 91,368 |
PyTorch memory leak reference cycle in for loop, Mac M1
|
triaged, module: arm, module: mps
|
### 🐛 Describe the bug
I am facing a memory leak when iteratively updating tensors in PyTorch on my Mac M1 GPU using the PyTorch mps interface. The following is a minimal reproducible example that replicates the behavior:
```python
import torch
def leak_example(p1, ubar, device):
t1 = torch.cat((torch.diff(ubar, dim=0), torch.zeros_like(ubar[:1,:,:,:], dtype = torch.float32)), dim = 0)
u1 = p1 + 2 * (t1)
B = torch.rand_like(u1, device = device)
mask = u1 < B
a1 = u1
a1[~mask] = torch.rand_like(a1)[~mask]
return a1
if torch.cuda.is_available(): # cuda gpus
device = torch.device("cuda")
elif torch.backends.mps.is_available(): # mac gpus
device = torch.device("mps")
torch.set_grad_enabled(False)
p1 = torch.rand(5, 5, 224, 224, device = device)
ubar = torch.rand(5, 5, 224, 224, device = device)
for i in range(10000):
p1 = leak_example(p1, ubar, device)
```
My Mac's GPU memory steadily grows when I execute this loop. I have tried running it on a CUDA GPU in Google Colab and the output of `print(torch.cuda.memory_allocated()/1024**2)` remains constant, though the GPU's Active memory, Non-releasable memory, and Allocated memory do appear to increase as the loop progresses (the output of `torch.cuda.memory_summary()`).
I have tried detaching and cloning the tensors and using weakrefs, to no avail. Interestingly, if I don't reassign the output of `leak_example` to `p1`, the behavior disappears, so it really seems related to the recursive assignment. Also, if I drop the `torch.diff()` assignment, the memory increase stalls out at some point, so it may be related to this function. Does anyone have any idea how I could resolve this?
### Versions
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.15 (main, Nov 24 2022, 08:28:41) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-12.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.0
[conda] numpy 1.23.4 py39h1398885_0
[conda] numpy-base 1.23.4 py39h90707a3_0
[conda] torch 1.13.0 pypi_0 pypi
cc @malfet @kulinseth @albanD @DenisVieriu97 @razarmehr @abhudev
| 4 |
3,873 | 91,363 |
MPS backend does not accept int64 model weights or input data
|
triaged, module: mps
|
### 🐛 Describe the bug
I am trying to send my pretrained model to the M2 GPU with the new MPS backend from PyTorch, but I am getting a TypeError saying that int64 tensors are not supported in the MPS backend.
Steps to reproduce the problem
- Setup Enformer pretrained model from EleutherAI via: https://github.com/lucidrains/enformer-pytorch
- Send model to GPU and try to run inference:
```python
class EnformerInference:
def __init__(self, data_path: str, model_path="EleutherAI/enformer-official-rough"):
if torch.backends.mps.is_available():
print("Using Apple Silicon GPU")
device = torch.device("mps")
else:
print("Using CPU")
device = torch.device("cpu")
self.device = device
self.model = Enformer.from_pretrained(model_path).to(device)
self.data = EnformerDataLoader(pd.read_csv(data_path, sep="\t"))
def forward(self, x: torch.Tensor) -> torch.Tensor:
return self.model(x.to(self.device))
```
- Error: `TypeError: Operation 'abs_out_mps()' does not support input type 'int64' in MPS backend.`
I have checked all my input tensors and they are of type float32. The weights of the Enformer model, on the other hand, are not all float32; some are int64. I have tried to recast the weights of my model to float32 using the following code:
```python
weights = self.model.state_dict()
for name, param in weights.items():
weights[name] = param.to(torch.float32)
self.model = self.model.load_state_dict(weights)
```
However, this did not preserve the original PyTorch pretrained model object.
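For reference, a minimal sketch of an in-place recast that avoids reassigning the module (`load_state_dict` mutates the module and returns a key-matching report, not the module itself); I'm assuming here that float32 is acceptable for all floating-point weights:
```python
import torch

weights = self.model.state_dict()
for name, param in weights.items():
    # only cast floating-point entries; integer buffers should stay integral
    if param.is_floating_point():
        weights[name] = param.to(torch.float32)
self.model.load_state_dict(weights)  # self.model is updated in place
```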
### Versions
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:25:29) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.0
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] numpy 1.24.0 py310h5d7c261_0 conda-forge
[conda] pytorch 1.13.1 py3.10_0 pytorch
[conda] torchaudio 0.13.1 py310_cpu pytorch
[conda] torchvision 0.14.1 py310_cpu pytorch
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 5 |
3,874 | 91,360 |
Offer `get_buffer` and `get_submodule` in `ScriptModule`?
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
This is something in-between a feature request / documentation issue / bug report.
The documentation of [`ScriptModule`](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule) states that the class offers [`get_buffer`](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.get_buffer) and [`get_submodule`](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.get_submodule). This doesn't seem to be the case though, because trying to access them errors with
```
RuntimeError: get_buffer is not supported on ScriptModules
```
because they are not in the allow list here:
https://github.com/pytorch/pytorch/blob/71d50f4f8955cde773e30b207bfeb98616206823/torch/jit/_script.py#L943
Since `ScriptModules` apparently offer access to the buffers and modules, it would feel more consistent if these methods were allowed, right? Furthermore, they could offer extra safety when trying to access non-existing buffers/modules. In particular, I was running into a bug because setting a buffer like the following is possible without an error:
```py
# Attempting to set a "known" buffer, but due to a typo it didn't have an effect.
# Unfortunately this doesn't seem to error, which hides the fact that there is a typo.
my_script_module.buffer_with_typo = torch.tensor(True)
```
I was hoping that I can use `get_buffer` instead, which (hopefully) would perform a check if the buffer actually exists:
```py
# Would hope that this gives a clear error if the accessed buffer doesn't exist.
my_script_module.get_buffer("buffer_with_typo").copy_(torch.tensor(True))
```
Since `get_buffer` isn't supported at all, this isn't possible though.
For now I'm resorting to `hasattr` checks to make this a bit more robust.
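For example, the kind of helper I'm using as a workaround (just a sketch; it relies only on `named_buffers`, which does appear to be supported on ScriptModules):
```python
def set_buffer(module, name, value):
    # fail loudly on typos instead of silently creating a new attribute
    buffers = dict(module.named_buffers())
    if name not in buffers:
        raise AttributeError(f"{name!r} is not a registered buffer")
    buffers[name].copy_(value)

# set_buffer(my_script_module, "buffer_with_typo", torch.tensor(True))  # raises
```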
### Alternatives
Adapt the documentation to avoid suggesting the functions are usable.
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
3,875 | 91,358 |
[JIT] .backward() not supported by JIT trace
|
oncall: jit
|
# 🐛 Describe the bug
I want to export my pytorch model to onnx format. MyModel wraps an existing model and, for a given input, computes the wrapped model’s output and its gradient with respect to the input. (Virtual env: Python 3.9, CUDA 11.7.1, and PyTorch 1.13.1.)
The following code achieves what I want within PyTorch:
cc. @t-vi, @apaszke, @fritzo.
Code example
The following example:
```
import torch
import torch.nn as nn

# the base model we want to wrap
class BaseModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv1(x)

# the wrapper model we want to export using ONNX
class MyModel(nn.Module):
    def __init__(self, forward_model):
        super().__init__()
        self.forward_model = forward_model

    def forward(self, x, dy):
        y = self.forward_model(x)
        y.backward(gradient=dy)
        dx = x.grad
        return y, dx

model = BaseModel()
MyModel = MyModel(model)

x = torch.randn(1, 2, 5, 5, requires_grad=True)  # input
dy = torch.ones(1, 1, 5, 5)  # output sensitivity

# compute the wrapped model's output and the input sensitivity
# (i.e., the gradient w.r.t. the input and the given output sensitivity)
y, dx = MyModel(x, dy)

torch.onnx.export(MyModel, (x, dy), "MyModel.onnx", input_names=["x", "dy"], output_names=["y", "dx"])
```
throws an error on a recent PyTorch version:
```
Traceback (most recent call last):
File "MyModel.py", line 117, in <module>
torch.onnx.export(MyModel, (x, dy), "MyModel.onnx", input_names=["x", "dy"], output_names=["y", "dx"])
File "~/my-venv/lib/python3.9/site-packages/torch/onnx/__init__.py", line 316, in export
return utils.export(model, args, f, export_params, verbose, training,
File "~/my-venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 107, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "~/my-venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 724, in _export
_model_to_graph(model, args, verbose, input_names,
File "~/my-venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 493, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "~/my-venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 437, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "~/my-venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 388, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "~/my-venv/lib/python3.9/site-packages/torch/jit/_trace.py", line 1166, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "~/my-venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "~/my-venv/lib/python3.9/site-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "~/my-venv/lib/python3.9/site-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "~/my-venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "~/my-venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "pullback_onnx.py", line 94, in forward
y.backward(gradient=dy)
File "~/my-venv/lib/python3.9/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "~/my-venv/lib/python3.9/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: Found an unsupported argument type in the JIT tracer. File a bug report.
```
# Versions
```
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Dec 6 2022, 18:36:13) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 Ti
Nvidia driver version: 525.60.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.8.4.post0
[pip3] torch==1.13.1
[pip3] torch-model-archiver==0.6.1
[pip3] torch-ort==1.13.1
[pip3] torch-workflow-archiver==0.2.5
[pip3] torchaudio==0.13.1
[pip3] torchinfo==1.7.1
[pip3] torchmetrics==0.10.0
[pip3] torchserve==0.6.1
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.1
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
3,876 | 91,356 |
nop_partitioner for AOTAutograd
|
triaged, oncall: pt2, module: aotdispatch
|
### 🚀 The feature, motivation and pitch
Hello everyone, I've got an idea I'd like to share with you. It is about graph partitioners in AOTAutograd.
Currently, there are two partitioners defined (default_partition and min_cut_rematerialization_partition), which always produce two separate graphs, one for the forward pass and a second for the backward pass. I would like to have an option to get the whole joint graph, containing both FWD and BWD, so that my torch.compile backend can make the most of its optimization opportunities.
I tried to implement such a "nop_partitioner" myself that would just pass the joint graph back, and I also tried changing default_partition to gather more outputs into a single graph, but it always resulted in some kind of crash. I assume that the current AOTAutograd code has hidden assumptions about this mechanism that do not allow this kind of operation (or my programming skills just suck, that's also possible).
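For illustration, a rough sketch of the kind of partitioner I have in mind (assuming the functorch.compile convention, as I understand it, where a `partition_fn(joint_module, joint_inputs, *, num_fwd_outputs)` returns a forward and a backward `fx.GraphModule`; the exact signature and the empty-backward trick are my assumptions and may be exactly where those hidden assumptions bite):
```python
import torch.fx as fx

# sketch of a "nop" partitioner: hand the whole joint graph to the backend as the
# "forward" module and return a trivial, empty "backward" module
def nop_partition(joint_module: fx.GraphModule, _joint_inputs, *, num_fwd_outputs):
    empty_graph = fx.Graph()
    empty_graph.output(())
    return joint_module, fx.GraphModule(joint_module, empty_graph)
```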
Is it possible for you to create such a "nop_partitioner" as part of the PT2.0 release so that it could be used by everyone?
Thanks,
--mb
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,877 | 91,353 |
Perf reduction due to munmap with dataloader pinning thread ?
|
module: dataloader, triaged
|
### 🐛 Describe the bug
As shown below, I am observing 10+ ms bubbles in the GPU streams, and they do not appear on every step. I added NVTX markers to https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/pin_memory.py#L39 and, through analysis of the OS runtime libraries in Nsight, I found that this line leads to a long munmap operation, so I suspect the bubble problem is caused by this.
Although I am not completely sure that the gaps are caused by munmap, I inspected the timeline of all the gaps, and the corresponding CPU stacks all show a long munmap.
Related issue:
https://github.com/pytorch/pytorch/issues/77139
Models:
centerpoint https://github.com/tianweiy/CenterPoint
c3d https://github.com/xinge008/Cylinder3D

### Versions
Collecting environment information...
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.31
Python version: 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:49:35) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-4.14.0_1-0-0-44-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.82.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.13.0
[pip3] torch-scatter==2.0.9
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.5 py38h7042d01_0 conda-forge
[conda] pytorch 1.13.0 py3.8_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-scatter 2.0.9 pypi_0 pypi
[conda] torchaudio 0.13.0 py38_cu117 pytorch
[conda] torchvision 0.14.0 py38_cu117 pytorch
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 4 |
3,878 | 91,352 |
Internal error during ONNX export, diagnostic unusable
|
module: onnx, triaged
|
### 🐛 Describe the bug
I encounter this internal error during export of a relatively simple network using LSTM.
The same network can be jit.traced and executed just fine (well, after a PyTorch patch I submitted in issue #91351).
I tried to create a simple repro case, but then it exports successfully. I am still working on a repro.
But this issue is more general; I have encountered it in many cases. How do I debug this?
The message is rather cryptic, and the unique ID printed out (3555) seems to be internal to the C++ library; there is nothing in the ONNX graph printed on the exception that would help me connect that ID with the graph.
```
File "/git/NeMo/nemo/core/classes/exportable.py", line 71, in export
out, descr, out_example = model._export(
File "/git/NeMo/nemo/core/classes/exportable.py", line 178, in _export
torch.onnx.export(
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1529, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1115, in _model_to_graph
graph = _optimize_graph(
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 675, in _optimize_graph
_C._jit_pass_lint(graph)
RuntimeError: 0 INTERNAL ASSERT FAILED at "/opt/pytorch/pytorch/torch/csrc/jit/ir/ir.cpp":590, please report a bug to PyTorch. 3555 not in scope
```
### Versions
Collecting environment information...
PyTorch version: 1.14.0a0+410ce96
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
GPU models and configuration: GPU 0: NVIDIA RTX A5000
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] k2==1.23.2.dev20221221+cuda11.8.torch1.14.0a0
[pip3] numpy==1.22.2
[pip3] pytorch-lightning==1.8.6
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.14.0a0+410ce96
[pip3] torch-tensorrt==1.3.0a0
[pip3] torchaudio==0.14.0
[pip3] torchmetrics==0.11.0
[pip3] torchvision==0.15.0a0
[conda] Could not collect
| 5 |
3,879 | 91,347 |
Remove redundant logics
|
triaged, oncall: pt2
|
In the SymInt class, the member data_ (an int64_t) can be cast to SymNodeImpl* by bit operations as follows:
https://github.com/pytorch/pytorch/blob/f471770fd40cae2065a8d066932ddf59e12a758d/c10/core/SymInt.h#L91-L99
It seems that `~MASK` always makes the 61st bit of `unextended_bits` 0, which would make `(unextended_bits ^ sign_bit_mask) - sign_bit_mask` equal to `unextended_bits` itself, not a sign-extended value (0b111........).
I am unsure whether there are special circumstances that make the sign extension really happen, or whether just commenting out L94-L96 is fine.
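For reference, a standalone illustration (my own, not taken from the SymInt code) of the `(x ^ m) - m` idiom that L94-L96 implement; it sign-extends an n-bit value only when the value's top bit is set:
```python
def sign_extend(x: int, nbits: int) -> int:
    m = 1 << (nbits - 1)
    return (x ^ m) - m

assert sign_extend(0b0101, 4) == 5    # top bit clear: value unchanged
assert sign_extend(0b1111, 4) == -1   # top bit set: interpreted as negative
```
So the question is whether bit 61 of `unextended_bits` can ever still be 1 after the masking above.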
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
3,880 | 91,334 |
PyTorch 1.13 conda package with CUDA requires too many unnecessary packages
|
oncall: binaries, module: cuda, triaged
|
With the switch from cudatoolkit to pytorch-cuda came a lot of very large "dependencies" that aren't actually dependencies and need to be eliminated.
[Related discussion](https://discuss.pytorch.org/t/large-cuda-dependencies-in-pytorch-1-13-conda-installation/165068).
cc @ezyang @seemethere @malfet @ngimel
| 8 |
3,881 | 91,325 |
Support for saving multiple storages/tensors that view same data as different dtypes
|
feature, module: serialization, triaged
|
### 🚀 The feature, motivation and pitch
Saving and loading multiple tensors or storages that view the same data with different dtypes is not currently possible:
```python
>>> import torch
>>> t0 = torch.randn(10)
>>> t1 = t0.view(torch.int)
>>> torch.save([t0, t1], 'tmp.pt')
...
RuntimeError: Cannot save multiple tensors or storages that view the same data as different types
```
In the past, it would have been pretty difficult to add support for this because of the way storage types were defined (different Python and C++ classes for each storage dtype).
But now that storages have been refactored so that they all use the same untyped storage class underneath, enabling serialization of views of the same data with different dtypes should be much more straightforward.
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @ezyang
| 1 |
3,882 | 91,323 |
Expand torch.utils._pytree.tree_map
|
feature, triaged, actionable, module: pytree, module: functorch
|
`tree_map(f, tree)` maps f over every leaf of tree. However, at many call sites in the codebase, we do something like a "tree zip", where we want to map a function `f(x, y)` over two trees, tree1 and tree2. JAX's tree_map has these semantics: https://jax.readthedocs.io/en/latest/_autosummary/jax.tree_util.tree_map.html
We should update our tree_map to be similar; e.g. `tree_map(f, tree, *rest)`.
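A minimal sketch of the proposed semantics built on the existing flatten/unflatten helpers (the function name and the lack of structure/error checking here are placeholders):
```python
from torch.utils._pytree import tree_flatten, tree_unflatten

def tree_map_multi(fn, tree, *rests):
    # "tree zip": apply fn leaf-wise across several trees with the same structure
    leaves, spec = tree_flatten(tree)
    rest_leaves = [tree_flatten(r)[0] for r in rests]
    return tree_unflatten([fn(*xs) for xs in zip(leaves, *rest_leaves)], spec)

# e.g. tree_map_multi(torch.add, {"a": x1, "b": y1}, {"a": x2, "b": y2})
```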
cc @soumith @Chillee @samdow
| 0 |
3,883 | 91,311 |
Profiler is not properly recording seq number when any key above autograd is used
|
oncall: profiler
|
This is because we don't record the function on redispatch, meaning that the profiler is ONLY seeing the top key.
Anything changing the top key (dispatch subclass, dispatch mode, functorch, etc.) will break the assignment.
https://github.com/pytorch/pytorch/blob/55749b9c41857050d4e875d4613c9ba7cddeae02/aten/src/ATen/core/dispatch/Dispatcher.h#L645
cc @robieta @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb
| 0 |
3,884 | 91,310 |
AOT Autograd should allow backend compilers to see input mutations
|
triaged, oncall: pt2, module: functorch
|
Today, AOT Autograd has special handling for input mutations. Given a user code / captured graph like this:
```
def user_f(x):
    x.mul_(2)
    out = x * 3
    return out
```
AOT Autograd will split it into (1) a purely functional graph for the compiler, and (2) an epilogue that runs after, and applies any global side effects (aka input mutations):
```
# gets sent to a compiler
def compiled_functional_graph(x):
    x_updated = x.mul(2)
    out = x_updated * 3
    return x_updated, out

# wrapper that aot autograd calls
def wrapper_run_compiled_graph_and_epilogue(x):
    x_updated, out = compiled_functional_graph(x)
    x.copy_(x_updated)
    return out
```
This technically leaves some performance on the table: an out-of-place mul() + a copy_() takes twice as many operations as an inplace mul.
Once we start running functionalization all the time in AOT Autograd, and once we capture the optimizer step in a graph that gets sent to AOT Autograd, then this will likely have a larger impact on the performance of optimizer graphs: An optimizer graph consists of a large number of mutations to graph inputs (the parameters), and all of those mutations will now take 2 operations instead of one, and require extra memory - their out-of-place equivalent, and a copy_() call.
The most general way to fix this would be to give backend compilers the ability to indicate to AOT Autograd that they're ok with seeing input mutations in their graph and handling them.
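Concretely, a backend that opted in would receive something like the following instead of the functionalized graph plus epilogue shown above (a sketch of the idea, not an existing API):
```python
# hypothetical graph handed directly to an opted-in backend
def compiled_graph_with_input_mutation(x):
    x.mul_(2)        # the input mutation stays visible to the compiler
    out = x * 3
    return out
```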
One complication is that AOT Autograd also runs min-cut-partitioning and DCE, which either won't work, or will require even more special handling if there are input mutations in the graph. A simpler version of this github issue would be to only allow input mutations in the graph when we are not generating a backward graph and running partitioning, since graphs capturing the optimizer step should (mostly) fall into this bucket.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @zou3519 @Chillee @samdow
| 1 |
3,885 | 91,309 |
iSTFT produces RuntimeError with center=False and Blackman/Bartlett/Hann windows
|
triaged, module: fft
|
### 🐛 Describe the bug
When using the `torch.istft` function with the option `center=False` and windows produced by `torch.blackman_window`, `torch.hann_window`, or `torch.bartlett_window`, a `RuntimeError` is produced with the message
`window overlap add min: 1`.
The expected behavior is that no error is produced.
```python
import math
import torch
f = 1500.0 # exactly periodic
fs = 48000
nfft = 1024
hop = 256
dtype = torch.float64
original = torch.sin(2 * math.pi * f / fs * torch.arange(2 * fs, dtype=dtype))
# torch
# - use `center=False`
# - zero-pad the front of the signal
windows = {
"hamming": torch.hamming_window(nfft, dtype=dtype),
"kaiser": torch.kaiser_window(nfft, dtype=dtype),
"blackman": torch.blackman_window(nfft, dtype=dtype),
"bartlett": torch.bartlett_window(nfft, dtype=dtype),
"hann": torch.hann_window(nfft, dtype=dtype),
}
for center in [True, False]:
for win_name, win in windows.items():
# center == True
try:
PT = torch.stft(
original,
n_fft=nfft,
hop_length=hop,
window=win,
return_complex=True,
pad_mode="constant",
center=center,
)
recon_pt = torch.istft(
PT, n_fft=nfft, hop_length=hop, window=win, center=center
)
err = float(abs(recon_pt - original[: recon_pt.shape[0]]).max())
print(
f"{win_name=:9s} {center=} {err=:.6e}",
)
except RuntimeError as e:
print(
f"{win_name=:9s} {center=} Exception!!!",
)
print(e)
```
Output
```
win_name=hamming center=True err=7.771561e-16
win_name=kaiser center=True err=7.771561e-16
win_name=blackman center=True err=6.661338e-16
win_name=bartlett center=True err=5.551115e-16
win_name=hann center=True err=6.661338e-16
win_name=hamming center=False err=6.217249e-15
win_name=kaiser center=False err=3.472223e-12
win_name=blackman center=False Exception!!!
istft(CPUComplexDoubleType[513, 372], n_fft=1024, hop_length=256, win_length=1024, window=torch.DoubleTensor{[1024]}, center=0, normalized=0, onesided=None, length=None, return_complex=0) window overlap add min: 1
[ CPUBoolType{} ]
win_name=bartlett center=False Exception!!!
istft(CPUComplexDoubleType[513, 372], n_fft=1024, hop_length=256, win_length=1024, window=torch.DoubleTensor{[1024]}, center=0, normalized=0, onesided=None, length=None, return_complex=0) window overlap add min: 1
[ CPUBoolType{} ]
win_name=hann center=False Exception!!!
istft(CPUComplexDoubleType[513, 372], n_fft=1024, hop_length=256, win_length=1024, window=torch.DoubleTensor{[1024]}, center=0, normalized=0, onesided=None, length=None, return_complex=0) window overlap add min: 1
[ CPUBoolType{} ]
```
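For context, here is a standalone sketch (mine, not part of the repro) of the overlap-added squared-window envelope that the "window overlap add min" check presumably looks at; with `center=False`, windows whose endpoints are zero (hann, bartlett, blackman) make the envelope exactly zero at the signal edges, while hamming and kaiser keep it positive:
```python
import torch

nfft, hop, n_frames = 1024, 256, 372
win = torch.hann_window(nfft, dtype=torch.float64)
envelope = torch.zeros((n_frames - 1) * hop + nfft, dtype=torch.float64)
for i in range(n_frames):
    envelope[i * hop:i * hop + nfft] += win ** 2
print(envelope.min())  # 0 at the edges, which is what the error reports
```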
### Versions
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.1 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.8 (main, Nov 24 2022, 08:08:27) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-12.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[conda] numpy 1.23.4 py310hb93e574_0
[conda] numpy-base 1.23.4 py310haf87e8b_0
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchaudio 0.13.1 pypi_0 pypi
cc @mruberry @peterbell10
| 17 |
3,886 | 91,305 |
`nn.TransformerEncoderLayer` fastpath (BetterTransformer) is slower than the normal path when no mask is provided
|
oncall: transformer/mha
|
### 🐛 Describe the bug
Reproduction:
<details>
<summary>script</summary>
```python
import argparse
import torch
import torch.nn as nn
from tqdm import tqdm
def get_parser():
parser = argparse.ArgumentParser()
parser.add_argument(
"--num-batches",
type=int,
default=50,
help="",
)
parser.add_argument(
"--batch-size",
type=int,
default=64,
help="",
)
parser.add_argument(
"--max-seqlen",
type=int,
default=256,
help="",
)
parser.add_argument(
"--use-half",
action="store_true",
)
parser.add_argument(
"--device-id",
type=int,
help="GPU device id"
)
return parser
def timing_cuda(model, num_batches, inputs):
start_event = torch.cuda.Event(enable_timing=True)
end_event = torch.cuda.Event(enable_timing=True)
start_event.record()
for _ in range(num_batches):
_ = model(inputs)
end_event.record()
torch.cuda.synchronize()
return (start_event.elapsed_time(end_event)) / num_batches
def benchmark(num_batches: int, batch_size: int, max_seqlen: int, use_half: bool, device_id: int):
layers_vanilla = []
layers_bt = []
for i in range(12):
vanilla_layer = nn.TransformerEncoderLayer(d_model=768, nhead=12) # as bert-base-uncased
bt_layer = nn.TransformerEncoderLayer(d_model=768, nhead=12)
vanilla_layer.norm2.eps = 2e-5 # disable fastpath
assert vanilla_layer.norm1.eps != vanilla_layer.norm2.eps
layers_vanilla.append(vanilla_layer)
layers_bt.append(bt_layer)
vanilla_model = nn.Sequential(*layers_vanilla)
bt_model = nn.Sequential(*layers_bt)
inputs = torch.rand(batch_size, max_seqlen, 768)
if use_half is True:
vanilla_model = vanilla_model.half()
bt_model = bt_model.half()
inputs = inputs.half()
vanilla_model = vanilla_model.eval().to(f"cuda:{device_id}")
bt_model = bt_model.eval().to(f"cuda:{device_id}")
inputs = inputs.to(f"cuda:{device_id}")
# Warmup
_ = vanilla_model(inputs)
torch.cuda.synchronize()
_ = bt_model(inputs)
torch.cuda.synchronize()
vanilla_time = timing_cuda(vanilla_model, num_batches, inputs)
bt_time = timing_cuda(bt_model, num_batches, inputs)
return vanilla_time, bt_time
if __name__ == "__main__":
parser = get_parser()
args = parser.parse_args()
BATCH_SIZES = [32, 64, 128]
SEQ_LEN = [16, 32, 64, 128, 256]
output_file = open("log_transformerencoderlayer.py", "w")
output_file.write(
"num_batches, batch_size, seq_len, use half, Vanilla time (ms), BT time (ms), Speedup\n"
)
for bs in tqdm(BATCH_SIZES, desc="batch size"):
for seq_len in tqdm(SEQ_LEN, desc="sequence length"):
max_seqlen = seq_len
vanilla_time, bt_time = benchmark(
args.num_batches,
bs,
max_seqlen,
args.use_half,
args.device_id
)
speedup = vanilla_time / bt_time
output_file.write(
"{},{},{},{},{},{},{}\n".format(
args.num_batches,
bs,
seq_len,
args.use_half,
f"{vanilla_time:.2f}",
f"{bt_time:.2f}",
f"{speedup:.3f}",
)
)
output_file.close()
```
</details>
Results (fp32):
|num_batches|batch_size|seq_len|use half|Vanilla time (ms)|BT time (ms)|Speedup|
|-----------|----------|-------|--------|-----------------|------------|-------|
|50 |32 |16 |False |7.49 |6.60 |1.135 |
|50 |32 |32 |False |10.40 |11.58 |0.898 |
|50 |32 |64 |False |20.70 |22.54 |0.918 |
|50 |32 |128 |False |37.61 |40.72 |0.924 |
|50 |32 |256 |False |75.08 |80.72 |0.930 |
|50 |64 |16 |False |10.25 |11.27 |0.909 |
|50 |64 |32 |False |20.34 |22.01 |0.925 |
|50 |64 |64 |False |37.12 |39.86 |0.931 |
|50 |64 |128 |False |73.38 |78.53 |0.934 |
|50 |64 |256 |False |139.59 |149.46 |0.934 |
|50 |128 |16 |False |20.23 |21.80 |0.928 |
|50 |128 |32 |False |36.78 |39.54 |0.930 |
|50 |128 |64 |False |73.09 |78.23 |0.934 |
|50 |128 |128 |False |139.19 |148.47 |0.937 |
|50 |128 |256 |False |275.27 |294.23 |0.936 |
Results (fp16):
|num_batches|batch_size|seq_len|use half|Vanilla time (ms)|BT time (ms)|Speedup|
|-----------|----------|-------|--------|-----------------|------------|-------|
|50 |32 |16 |True |5.73 |5.68 |1.009 |
|50 |32 |32 |True |5.64 |5.58 |1.011 |
|50 |32 |64 |True |5.62 |5.54 |1.015 |
|50 |32 |128 |True |5.64 |5.59 |1.009 |
|50 |32 |256 |True |9.71 |11.12 |0.873 |
|50 |64 |16 |True |5.70 |5.61 |1.015 |
|50 |64 |32 |True |5.64 |5.57 |1.013 |
|50 |64 |64 |True |5.85 |5.56 |1.051 |
|50 |64 |128 |True |10.25 |11.67 |0.879 |
|50 |64 |256 |True |20.08 |22.01 |0.913 |
|50 |128 |16 |True |5.62 |5.56 |1.012 |
|50 |128 |32 |True |5.62 |5.61 |1.001 |
|50 |128 |64 |True |10.73 |12.12 |0.885 |
|50 |128 |128 |True |21.66 |23.01 |0.942 |
|50 |128 |256 |True |41.35 |44.50 |0.929 |
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @HamidShojanazeri @mikekgfb @younesbelkada
### Versions
The above is run on a single A100-80GB.
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.24.3
Libc version: glibc-2.31
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.0.221
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA DGX Display
GPU 4: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.1
[pip3] torchaudio==2.0.0.dev20221220+cu116
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20221220+cu116
[conda] numpy 1.23.4 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20221220+cu116 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.15.0.dev20221220+cu116 pypi_0 pypi
| 4 |
3,887 | 91,300 |
[🚀 Feature Request] pdf and sampling from Alpha-stable distribution
|
module: distributions, feature, triaged
|
### 🚀 The feature, motivation and pitch
I'm a long-time user of PyTorch but a first-time contributor, and this is my first issue. I propose implementing the pdf and sampling of the alpha-stable distribution in PyTorch.
Recently, my team published a paper at a NeurIPS 2022 workshop titled [Score-Based Generative Models using Lévy Processes](https://openreview.net/forum?id=ErzyBArv6Ue). In this study, the Gaussian noise in the diffusion model is replaced with alpha-stable noise. The alpha-stable distribution is unfortunately not supported by PyTorch, so I implemented this feature in a separate repository named [torchlevy](https://github.com/UNIST-LIM-Lab/torchlevy). However, it would be preferable if the pdf and sampling features were included in PyTorch so that other AI researchers can use them quickly and readily.
### How to implement
- The implementation of the pdf and the sampling method is based on Zolotarev's theorem ([Stable Diffusion](https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID2894444_code545.pdf?abstractid=2894444&mirid=1), page 7); a rough sketch of the sampler is shown after this list.
- I want to add a new file under `torch/distributions` for these features.
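A rough sketch of the sampler for the symmetric case (beta = 0), using the standard Chambers-Mallows-Stuck construction (parameter validation and numerical safeguards are omitted):
```python
import math
import torch

def sample_symmetric_stable(alpha: float, size) -> torch.Tensor:
    # Chambers-Mallows-Stuck sampler for the symmetric alpha-stable law (beta = 0)
    u = (torch.rand(size) - 0.5) * math.pi   # Uniform(-pi/2, pi/2)
    w = -torch.log(torch.rand(size))         # Exp(1)
    return (torch.sin(alpha * u) / torch.cos(u) ** (1.0 / alpha)
            * (torch.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
```
For alpha = 2 this reduces to a Gaussian (variance 2) and for alpha = 1 to a standard Cauchy, which is a useful sanity check.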
### Issue
- PDF calculation requires numerical integration, which is not supported in PyTorch. In [torchlevy](https://github.com/UNIST-LIM-Lab/torchlevy), I used another Python module called torchquad for efficient numerical integration, but I'm not sure I can use this module in PyTorch. If I cannot use this module, I want to implement only sampling from the alpha-stable distribution.
cc [@fritzo](https://github.com/fritzo) [@neerajprad](https://github.com/neerajprad) [@alicanb](https://github.com/alicanb) [@nikitaved](https://github.com/nikitaved)
### Alternatives
_No response_
### Additional context
I would be happy to work on solving this when the issue has been discussed further.
cc @fritzo @neerajprad @alicanb @nikitaved
| 7 |
3,888 | 91,293 |
torch.fx tracer emits type error when tracing module that directly contains and uses the torch.cat() function
|
oncall: fx
|
### 🐛 Describe the bug
### Problem:
When I use torch.fx's tracer to trace an `nn.Module` that uses the `torch.cat()` function directly (please see the `MyCat` module below), the tracing fails with the following type error message:
```
TypeError: cat() received an invalid combination of arguments - got (Proxy, int), but expected one of:
* (tuple of Tensors tensors, int dim, *, Tensor out)
* (tuple of Tensors tensors, name dim, *, Tensor out)
```
Interestingly however, when I use the torch.fx tracer to trace a nested `nn.Module` that uses `torch.cat()` both in the module itself as well as in the child module nested within (please see the `NestedCat` module below), the tracing completes with no error messages emitted. I am not sure if this is the expected behavior (and if so, the reasoning and mechanism behind this), or if this is a bug. I have also found that others seem to have very similar issues, and a fix in one GitHub issue ([https://github.com/pytorch/pytorch/issues/34294](https://github.com/pytorch/pytorch/issues/34294)) seems to address this problem; however, when the following code is run, I observed the aforementioned behavior and the problem still persists. If I understand correctly, I think the correct behavior should be that the tracing for both of the following modules should complete without emitting any error messages.
### Related issues:
[https://github.com/pytorch/pytorch/issues/79715](https://github.com/pytorch/pytorch/issues/79715)
[https://github.com/pytorch/pytorch/issues/34294](https://github.com/pytorch/pytorch/issues/34294) (Seems to contain an (incomplete) fix?)
[https://github.com/pytorch/pytorch/pull/43248](https://github.com/pytorch/pytorch/pull/43248)
### Code for reproducing the issue:
The `graph = tracer.trace(nested_cat)` line completes successfully with no problem, but the `graph = tracer.trace(my_cat)` line emits the type error messages.
```py
import torch
from torch import nn
from torch import fx
class MyCat(nn.Module):
def __init__(self):
super().__init__()
def forward(self, inputs):
cat = torch.cat(inputs, 1)
return cat
class NestedCat(nn.Module):
def __init__(self):
super().__init__()
self.my_cat_0 = MyCat()
def forward(self, x):
my_list = [self.my_cat_0([x])]
cat = torch.cat(my_list, 1)
return cat
tracer = fx.Tracer()
nested_cat = NestedCat()
graph = tracer.trace(nested_cat)
my_cat = MyCat()
graph = tracer.trace(my_cat)
```
### Versions
```
Windows 10
Python 3.9.10
CUDA 11.6
torch 1.13.1+cu117
torchaudio 0.13.1+cu117
torchvision 0.14.1+cu117
numpy 1.22.3
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 8 |
3,889 | 91,280 |
custom Function that supports functorch jvp doesn't work with in-place
|
triaged, module: forward ad, module: functorch
|
https://github.com/pytorch/pytorch/pull/91222#discussion_r1054910483
cc @zou3519 @Chillee @samdow @soumith
| 0 |
3,890 | 91,278 |
Keep getting ChildFailedError Error
|
oncall: distributed
|
### 🐛 Describe the bug
I'm running a slightly modified [run_clm.py script](https://github.com/huggingface/transformers/blob/v4.24.0/examples/pytorch/language-modeling/run_clm.py) with a varying number of A100 GPUs (4-8) on a single node, and keep getting the ChildFailedError right after training/evaluation ends.
I’m running [GPT2 (smallest model)](https://huggingface.co/gpt2) on the [OpenWebText dataset](https://huggingface.co/datasets/openwebtext).
### I'm running my code from a shell script as follow:
> GPU=1,2,3,4,5
export TORCH_CPP_LOG_LEVEL=INFO NCCL_DEBUG=INFO
export CUDA_VISIBLE_DEVICES=$GPU
>
> torchrun \
--standalone \
--nnodes=1 \
--nproc_per_node=${NUM_GPU} \
run_clm.py \
--model_name_or_path ${MODEL} \
--dataset_name ${DS_NAME} \
--preprocessing_num_workers 16 \
--logging_steps 5000 \
--save_steps ${SAVE_STEPS} \
--do_eval \
--per_device_eval_batch_size ${EVAL_BATCH} \
--seed ${RANDOM} \
--evaluation_strategy steps \
--logging_dir ${OUTPUT_DIR} \
--output_dir ${OUTPUT_DIR} \
--overwrite_output_dir \
--ddp_timeout 324000 \
### And getting the follow error:
> 100%|██████████| 2209/2209 [39:03<00:00, 1.06s/it]WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 4041 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 4042 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 4043 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 4045 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 3 (pid: 4044) of binary: /venv/bin/python3
Traceback (most recent call last):
File "/venv/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/venv/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 719, in main
run(args)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
[0]:
time : 2022-12-21_21:57:29
host : december-ds-2h4b6-5hpkj
rank : 3 (local_rank: 3)
exitcode : -9 (pid: 4044)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 4044
============================================================
### Full log is attached:
[debug_flags_log.txt](https://github.com/pytorch/pytorch/files/10282140/debug_flags_log.txt)
### Notes:
1. The error occurs in training and in evaluation.
2. In order to rule out a timeout, I deliberately set a very high timeout value.
3. I tried to run using torchrun and using torch.distributed.launch and faced the same issue.
4. The number of samples in my training/eval set doesn't matter; the issue remains.
5. I track my memory usage and OOM is not the case here (kinda wish it was).
6. The error occurs only in the distributed setup. When not using distributed, or when using it with a single GPU, the problem doesn't appear.
Would really appreciate any help on this issue since I'm really stuck on my research till then :(
### Versions
According to running `pip freeze --local`:
**torch==1.10.2+cu113
torchaudio==0.10.2+cu113
torchvision==0.11.3+cu113
nvidia-ml-py==11.495.46
multiprocess==0.70.14
packaging==21.3
transformers==4.25.1
huggingface-hub==0.11.1**
absl-py==1.0.0
accelerate==0.15.0
aiohttp==3.8.3
aiosignal==1.3.1
async-timeout==4.0.2
attrs==22.1.0
blessed==1.19.1
cachetools==5.0.0
certifi==2021.10.8
charset-normalizer==2.0.12
click==8.1.3
datasets==2.7.1
deepspeed==0.6.0
dill==0.3.6
docker-pycreds==0.4.0
evaluate==0.4.0
fairscale==0.4.6
filelock==3.8.2
frozenlist==1.3.3
fsspec==2022.11.0
gitdb==4.0.10
GitPython==3.1.29
google-auth==2.6.0
google-auth-oauthlib==0.4.6
gpustat==1.0.0
grpcio==1.44.0
hjson==3.0.2
idna==3.3
importlib-metadata==4.11.2
joblib==1.2.0
Markdown==3.3.6
model-compression-research @ file:///
multidict==6.0.3
ninja==1.10.2.3
nltk==3.8
numpy==1.22.3
oauthlib==3.2.0
pandas==1.5.2
pathtools==0.1.2
Pillow==9.0.1
pkg_resources==0.0.0
promise==2.3
protobuf==3.19.4
psutil==5.9.0
py-cpuinfo==8.0.0
pyarrow==10.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==3.0.7
python-dateutil==2.8.2
pytz==2022.7
PyYAML==6.0
regex==2022.10.31
requests==2.27.1
requests-oauthlib==1.3.1
responses==0.18.0
rsa==4.8
scikit-learn==1.2.0
scipy==1.9.3
sentencepiece==0.1.97
sentry-sdk==1.12.0
setproctitle==1.3.2
shortuuid==1.0.11
six==1.16.0
sklearn==0.0.post1
smmap==5.0.0
tensorboard==2.8.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
threadpoolctl==3.1.0
tokenizers==0.13.2
tqdm==4.63.0
typing_extensions==4.1.1
urllib3==1.26.13
wandb==0.13.7
wcwidth==0.2.5
Werkzeug==2.0.3
xxhash==3.1.0
yarl==1.8.2
zipp==3.7.0
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 6 |
3,891 | 91,274 |
torch.compile for calling func(**kwargs)
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
See code:
```python
def func(a, b):
return a + b
def func1(**kwargs):
return func(a=1, **kwargs)
c_f = torch.compile(func1, fullgraph=True)
c_f(b=torch.rand([2]))
```
At the moment, when you compile with `torch.compile(func, fullgraph=True)`, it throws `Unsupported: missing: BUILD_MAP_UNPACK_WITH_CALL`. When you compile in normal mode, it produces a graph break, because the Python opcode for "func(... **kwargs)" is not supported.
I know that torchscript does not support **kwargs. However, I wonder if torch.compile(fullgraph) is going to support this?
### Versions
Pytorch version: pre-release version 2.0.0a0+fb
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,892 | 93,493 |
tensor.to_sparse() handling indices incorrectly under dynamo/fake tensor
|
triaged, bug, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
to_sparse() is returning a FakeTensor where the indices attribute has the wrong shape/size.
### Error logs
_No response_
### Minified repro
Repro 1:
```python
import torch
import torch._dynamo
import logging
# torch._dynamo.config.verbose = True
# torch._dynamo.config.log_level = logging.DEBUG
def fn():
x = torch.rand((7, 3))
return x.to_sparse().indices().stride()
print(fn()) # returns (21, 1)
print(torch._dynamo.optimize("eager")(fn)()) # returns (1, 1)
```
Repro 2:
```python
import torch
import torch._dynamo
import logging
# torch._dynamo.config.verbose = True
# torch._dynamo.config.log_level = logging.DEBUG
fake_mode = torch._subclasses.fake_tensor.FakeTensorMode()
print("~~ fake mode below")
with fake_mode:
x = torch.rand((7, 3))
y = x.to_sparse()
print(f"fake mode indices size: {y.indices().size()}") # torch.Size([2, 0]); should be [2, 21]
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,893 | 91,263 |
Make quant_min/quant_max required for observer/fake_quant
|
oncall: quantization, triaged
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/pull/91159#discussion_r1053901160
### Versions
master
cc @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 0 |
3,894 | 91,252 |
Open file leak when dataloader is using persistent_workers and pin_memory AND you create multiple dataloaders.
|
module: dataloader, triaged, module: data
|
### 🐛 Describe the bug
As the title says: if you create a dataloader multiple times, and the dataloader is using pin_memory and persistent_workers, then there is a file handle leak (OSError: [Errno 24] Too many open files).
This is because of this line in torch.utils.data.Dataloader:
``` python
if self._persistent_workers and self._pin_memory:
import atexit
for w in self._workers:
atexit.register(_MultiProcessingDataLoaderIter._clean_up_worker, w)
```
When (self._persistent_workers and self._pin_memory) is True, we register a clean-up function. The atexit.register function opens a file that stays open until the whole program exits. So if we keep creating dataloaders, we keep opening files, hence the issue.
Minimal code to reproduce:
``` python
import torch
from tqdm import tqdm
import numpy as np
class DummyDataSet:
def __len__(self):
return 5000
def __getitem__(self, item):
return np.zeros(10)
class Trainer:
def __init__(self):
self.trainloader = torch.utils.data.DataLoader(
DummyDataSet(),
batch_size=100,
shuffle=True,
drop_last=True,
prefetch_factor=20,
pin_memory=True,
num_workers=20,
persistent_workers=True)
def train(self):
for _ in range(100):
for d in self.trainloader:
pass
if __name__ == '__main__':
for _ in tqdm(range(1000)):
t = Trainer()
t.train()
```
run, to show number of open files:
```bash
ls /proc/proc_id/fd/ | wc -l
```
Temporary solution: don't create many dataloaders or don't use dataloaders with this specific arguments.
(Why would someone use a Trainer() in a for loop? The training is stochastic; this way we can collect multiple training runs and get a more accurate evaluation.)
### Versions
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.0
[pip3] torch==1.12.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0
[pip3] torchviz==0.0.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.20.0 pypi_0 pypi
[conda] pytorch 1.12.0 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.13.0 py37 pytorch
[conda] torchvision 0.13.0 py37_cu113 pytorch
[conda] torchviz 0.0.2 pypi_0 pypi
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 2 |
3,895 | 91,251 |
Potential bug found with pybind11 dec_ref while gil released
|
triaged, module: pybind, shadow review, bug
|
### 🐛 Describe the bug
I tried updating pybind11 in this PR: https://github.com/pytorch/pytorch/pull/91248/ and in the new version, we added an assert to make sure that the GIL is held during all inc_ref / dec_ref calls on the Python object's underlying reference count. This seems to be failing on one of the tests: https://github.com/pytorch/pytorch/actions/runs/3750738878/jobs/6371337256. Since the GIL isn't held here, this could cause invalid reference counts to develop, which could corrupt the Python / GC state and lead to memory leaks and/or crashes.
### Versions
Ran on master https://github.com/pytorch/pytorch/actions/runs/3750738878/jobs/6371337256
cc @ezyang
| 5 |
3,896 | 91,249 |
Use dynamo to detect incorrect op schemas automatically
|
triaged, oncall: pt2, module: dynamo
|
A known issue that's bitten people in the past is when someone adds a new PyTorch operator to the dispatcher that either mutates or aliases its inputs, but doesn't add proper alias annotations to the op's [schema](https://github.com/pytorch/pytorch/blob/15af4b1ceea3b689354ad664675b930486804c8e/test/test_schema_check.py#L331).
This will cause silent correctness issues across the stack, both when using autograd and when using functionalization (which runs by default when using `torch.compile()`).
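For concreteness, a hedged example of the kind of annotation that's needed when registering a custom in-place op from Python (the namespace and op name here are made up; the important part is the `(a!)` alias annotation marking the argument that is mutated and returned as an alias):
```python
import torch

lib = torch.library.Library("myops", "DEF")  # hypothetical custom-op namespace

# correct: `(a!)` tells the dispatcher that `self` is mutated and aliased by the output
lib.define("my_relu_(Tensor(a!) self) -> Tensor(a!)")
lib.impl("my_relu_", lambda self: self.relu_(), "CPU")

# incorrect (no alias annotations): autograd and functionalization will silently
# treat the op as functional, which is the failure mode described above
# lib.define("my_relu_(Tensor self) -> Tensor")
```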
We now have some [infra](https://github.com/pytorch/pytorch/blob/master/test/test_schema_check.py#L331) for automatically detecting improperly annotated schemas using a `TorchDispatchMode`, which runs for a handful of ATen ops. However, it doesn't "automatically" run for custom user operators, so improperly annotated custom ops can still cause models to silently break when using `torch.compile()`.
There's an open question of what the UX should look like though, because these checks are expensive to run. (Yet another) new debugging knob? Have it run automatically in the `aot_eager` backend?
One option would be to run this mode automatically under the `torch.compile(backend="aot_eager")`, likely inside of aot_autograd's tracing. The `aot_eager` backend is getting more debug checks added to it soon, so there's some precedent for using it as a catch-all for adding extra asserts.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 5 |
3,897 | 91,245 |
Segmentation faults in DataLoader (in latest torch version).
|
needs reproduction, module: dataloader, triaged
|
### 🐛 Describe the bug
I'm experiencing segmentation faults in the dataloader.
# To Reproduce
This happens with `num_workers=16` or `12`, `8`, `4`, `3`.
The Dataloader doesn't segfault when num_workers isn't set.
# Expected behavior
No segfaults
# Relevant snippet from code
```python
dataloader = torch.utils.data.DataLoader(train_data, shuffle=True,
batch_size=batch_size,
num_workers=16)
```
# Trace
```
ERROR: Unexpected segmentation fault encountered in worker.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[20], line 38
36 errG = criterion(output, labels)
37 errG.backward()
---> 38 D_G_z2 = output.mean().item()
39 optimizerG.step()
41 # Save Losses for plotting later
File /lab/mlenv/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py:66, in _set_SIGCHLD_handler.<locals>.handler(signum, frame)
63 def handler(signum, frame):
64 # This following call uses `waitid` with WNOHANG from C side. Therefore,
65 # Python can still get and update the process status successfully.
---> 66 _error_if_any_worker_fails()
67 if previous_handler is not None:
68 assert callable(previous_handler)
RuntimeError: DataLoader worker (pid 139803) is killed by signal: Segmentation fault.
```
I've skimmed through earlier issues but those were closed and apparently fixed.
kind regards
Max
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 525.60.11
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.0
[pip3] torch==1.13.1
[pip3] torchvision==0.14.1
[conda] Could not collect
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 10 |
3,898 | 91,244 |
first class dims leak memory
|
triaged, module: functorch
|
The new PyCode_Get* functions return new references, so the corresponding decrefs are missing and we leak memory right now.
https://github.com/pytorch/pytorch/blob/15af4b1ceea3b689354ad664675b930486804c8e/functorch/csrc/dim/dim.cpp#L1447
cc @zdevito
cc @zou3519 @Chillee @samdow @soumith
| 0 |
3,899 | 93,492 |
ONNX export question (using torchdynamo)
|
triaged, bug, oncall: pt2
|
### 🐛 Describe the bug
I would like to export a model so that it can be run as an ONNX file. My model fails with both the scripting and the tracing options. Using PyTorch 2.0 (which uses TorchDynamo), I can convert my model to an optimized one that runs. Is there a way to export the model to ONNX without a dependency on trace/script?
`Model_dynamo = torch.compile(model, backend="onnxrt_cpu", mode="max-autotune")`
Is there an export to ONNX from here, something like the following?
`exported_model = torch._dynamo.export(Model_dynamo, input)`
### Error logs
no error
### Minified repro
_No response_
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 8 |
3,900 | 91,229 |
JIT mishandles torch.round() in PyTorch 1.10.1
|
oncall: jit
|
### 🐛 Describe the bug
Behavior of `round()` is different when used in a function annotated with `@torch.jit.script`. Per documentation, `round()` should break ties to even; this is the behavior observed in a regular Python function. The JIT sometimes emits code that breaks ties by rounding up, as in the following example:
```python
import torch
import time
@torch.jit.script
def round_jit(x: torch.Tensor, inv_scale: float):
x_scaled = x / inv_scale
return x_scaled.round()
def round_baseline(x: torch.Tensor, inv_scale: float):
x_scaled = x / inv_scale
return x_scaled.round()
print(f'Torch version: {torch.__version__}')
inv_scale = 1
x0 = torch.zeros([4, 128, 16, 16], dtype=torch.float32)
x0 = x0 * 0 + 0.5
for n in range(2,3):
for c in range(2,65):
for h in range(2,9):
for w in range(2,9):
x = x0[1:n, 1:c, 1:h, 1:w]
rbl = round_baseline(x, inv_scale)
rji = round_jit(x, inv_scale)
time.sleep(1) # Give time for the JIT to compile...
x = x0[0:1, 0:1, 0:1, 0:1]
rbl = round_baseline(x, inv_scale)
rji = round_jit(x, inv_scale)
print(f'Input: {x}')
print(f'Output (baseline): {rbl}')
print(f'Output (JIT): {rji}')
assert rbl.equal(rji)
```
```
Torch version: 1.10.1+cu113
Input: tensor([[[[0.5000]]]])
Output (baseline): tensor([[[[0.]]]])
Output (JIT): tensor([[[[1.]]]])
Traceback (most recent call last):
File "test-jit.py", line 31, in <module>
assert rbl.equal(rji)
AssertionError
```
Disclaimer: This bug was discovered as part of work I performed as an IBM employee.
### Versions
PyTorch version: 1.10.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.6 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10)
Clang version: 9.0.1 (Red Hat 9.0.1-2.module+el8.2.0+5494+7b8075cf)
CMake version: version 3.20.2
Libc version: glibc-2.3.4
Python version: 3.6.8 (default, Jan 14 2022, 11:04:20) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)] (64-bit runtime)
Python platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-redhat-8.6-Ootpa
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.10.1+cu113
[pip3] torchaudio==0.10.1+cu113
[pip3] torchvision==0.11.2+cu113
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |