Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
4,601 | 86,276 |
JIT misses the `as_tuple` argument for the `nonzero` API
|
oncall: jit
|
### 🐛 Describe the bug
JIT misses the `as_tuple` argument for the `nonzero` API:
```py
import torch
def fn(input):
    as_tuple = False
    return torch.nonzero(input, as_tuple=as_tuple, )
inp = torch.tensor(1., dtype=torch.float32)
print(fn(inp))
jit_fn = torch.jit.script(fn)
print(jit_fn(inp))
```
```
tensor([], size=(1, 0), dtype=torch.int64)
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::nonzero(Tensor self) -> Tensor:
Keyword argument as_tuple unknown.
aten::nonzero.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!):
Argument out not provided.
The original call is:
def fn(input):
as_tuple = False
return torch.nonzero(input, as_tuple=as_tuple, )
~~~~~~~~~~~~~ <--- HERE
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,602 | 86,274 |
TransformerEncoder/TransformerDecoder has the same initial parameters for all layers
|
triaged, oncall: transformer/mha
|
### 🐛 Describe the bug
Example code (based on [documentation](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html#torch.nn.TransformerEncoder)):
```python
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
```
Then you have:
```python
(transformer_encoder.layers[0].linear1.weight == transformer_encoder.layers[1].linear1.weight).all() == True
```
Etc.
This is because:
```python
self.layers = _get_clones(encoder_layer, num_layers)
```
where `_get_clones` uses `deepcopy`.
The same problem exists for `TransformerDecoder`.
This is unexpected behavior to me but maybe it is expected and just not well documented?
Note that `Transformer` itself does not have this problem because it calls `_reset_parameters` after all the layers are assigned.
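For anyone hitting this, a minimal workaround sketch, assuming the goal is independently initialized layers (it mirrors what `nn.Transformer._reset_parameters` does internally, but is not an official `TransformerEncoder` API):
```python
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

# Re-initialize every cloned layer independently so the layers no longer
# start from identical weights.
for layer in transformer_encoder.layers:
    for p in layer.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)

# The first two layers now differ:
w0 = transformer_encoder.layers[0].linear1.weight
w1 = transformer_encoder.layers[1].linear1.weight
print((w0 == w1).all())  # tensor(False)
```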
### Versions
(I have an outdated version but I checked the source code of the current version.)
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 4 |
4,603 | 86,271 |
Autograd is not working on iOS
|
module: autograd, triaged, oncall: mobile, module: ios
|
### 🐛 Describe the bug
Hello!
I've built the latest PyTorch from source with `BUILD_MOBILE_AUTOGRAD : ON` on iOS. However, when I try to reproduce the simplest autograd example, I get an error.
```c++
{
  torch::AutoGradMode enable_grad(true);
  auto x = torch::ones({2, 2}, torch::requires_grad());
  auto y = x + 2;
  y = y.sum();
  y.backward();
}
```
ERROR_LOG: `element 0 of tensors does not require grad and does not have a grad_fn`
My initial task is to collect the gradients from the forward pass in order to preprocess the input (to perform the forward pass again and get more robust softmax scores).
It seems like autograd is not working.
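For comparison, the equivalent eager-mode example in Python does produce gradients; this is a reference for what the C++ snippet above is expected to do (not part of the original report):
```python
import torch

# Same computation as the C++ snippet: gradients should flow back to x.
with torch.enable_grad():
    x = torch.ones(2, 2, requires_grad=True)
    y = (x + 2).sum()
    y.backward()
    print(x.grad)  # a 2x2 tensor of ones
```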
### Versions
BUILD FLAGS
-- TORCH_VERSION : 1.13.0
-- CAFFE2_VERSION : 1.13.0
-- BUILD_CAFFE2 : OFF
-- BUILD_CAFFE2_OPS : OFF
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_TENSOREXPR_BENCHMARK: OFF
-- BUILD_NVFUSER_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : OFF
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : OFF
-- BUILD_SHARED_LIBS : OFF
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : ON
-- BUILD_TEST : OFF
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : ON
-- BUILD_LITE_INTERPRETER: ON
-- CROSS_COMPILING_MACOSX :
-- INTERN_BUILD_MOBILE : ON
-- TRACING_BASED : OFF
-- USE_BLAS : OFF
-- USE_LAPACK : 0
-- USE_ASAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FBGEMM : OFF
-- USE_FAKELOWP : OFF
-- USE_KINETO : OFF
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : ON
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : OFF
-- USE_FFTW : OFF
-- USE_MKL :
-- USE_MKLDNN : OFF
-- USE_UCC : OFF
-- USE_ITT : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : OFF
-- USE_OBSERVERS : OFF
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : OFF
-- USE_TBB : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : OFF
-- USE_PYTORCH_QNNPACK : ON
-- USE_XNNPACK : ON
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : OFF
-- USE_DEPLOY : OFF
-- Public Dependencies : caffe2::Threads
-- Private Dependencies : pthreadpool;cpuinfo;pytorch_qnnpack;XNNPACK;fp16;fmt::fmt-header-only
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : OFF
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 8 |
4,604 | 86,270 |
Autocast with BF16 on CPU slows down model more than 2X
|
module: performance, triaged, module: bfloat16, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
Trying to run inference with autocast and bf16 slows down a Hugging Face BERT model immensely; I would expect inference to be accelerated instead.
This makes it hard to just include autocast in an inference script and have it run bfloat16 or fp16 depending on the availability of CUDA, since the model might end up slowed down.
The benchmark is very simple.
Code to replicate:
```python
import time

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

bert_model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased').eval()
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
inp_single = "Bloomberg has decided to publish a new report on global economic situation."
tensor_single = tokenizer(
    inp_single,
    max_length=128,
    pad_to_max_length=True,
    add_special_tokens=True,
    return_tensors="pt",
)

with torch.no_grad():
    start = time.time()
    for i in range(30):
        bert_model(tensor_single['input_ids'])
    duration = time.time() - start
print(f'Normal time: {duration}')

with torch.autocast(device_type='cpu', enabled=True, dtype=torch.bfloat16):
    with torch.no_grad():
        # Warmup
        for i in range(5):
            bert_model(tensor_single['input_ids'])
        start = time.time()
        for i in range(30):
            bert_model(tensor_single['input_ids'])
        duration = time.time() - start
print(f'Autocast time: {duration}')
```
Output:
```
Normal time: 2.8249216079711914
Autocast time: 7.403575658798218
```
System Info:
Torch Version: 1.10.0
Transformers Version: 4.9.1
CPU Info:
```
/bin/bash /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
stepping : 7
microcode : 0x500320a
cpu MHz : 3202.860
cache size : 36608 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
: : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer a/proc/cpuinfo
```
### Versions
Collecting environment information...
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.26
Python version: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:59:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.102-99.473.amzn2.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] numpydoc==1.1.0
[pip3] torch==1.10.0
[pip3] torch-model-archiver==0.5.0b20211117
[pip3] torch-workflow-archiver==0.2.0b20211118
[pip3] torchaudio==0.10.0
[pip3] torchserve==0.5.0b20211117
[pip3] torchtext==0.11.0
[pip3] torchvision==0.11.1
[conda] blas 1.0 mkl
[conda] captum 0.4.1 0 pytorch
[conda] cudatoolkit 11.1.1 h6406543_9 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.21.2 py38h20f2e39_0
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] numpydoc 1.1.0 py_1 conda-forge
[conda] pytorch 1.10.0 py3.8_cuda11.1_cudnn8.0.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-model-archiver 0.5.0 py38_0 pytorch
[conda] torch-workflow-archiver 0.2.0 py38_0 pytorch
[conda] torchaudio 0.10.0 py38_cu111 pytorch
[conda] torchserve 0.5.0 py38_0 pytorch
[conda] torchtext 0.11.0 py38 pytorch
[conda] torchvision 0.11.1 py38_cu111 pytorch
cc @VitalyFedyunin @ngimel @mcarilli @ptrblck
| 4 |
4,605 | 86,265 |
TORCH_WARN is executed just once per set of parameters
|
module: logging, triaged
|
### 🐛 Describe the bug
Set up the fix in https://github.com/pytorch/pytorch/pull/86255 to repro this one, as otherwise you won't even see the warnings.
It seems that `TORCH_WARN` acts as `TORCH_WARN_ONCE` given a fixed set of arguments. To repro this, add the lines `TORCH_WARN("HI"); TORCH_CHECK(false);` in, say, `FunctionsManual.cpp` in the function `svd_backwards` after the second `if` statement. Then run `pytest test/test_ops_gradients.py -vk test_fn_grad_linalg_svd_cpu`. This runs two tests, both of which fail, but the warning is triggered only in the first test, not in the second.
More generally, it seems that if `TORCH_WARN` has already printed a given parameter in a previous run, it will not print it again. More concretely, if you do `TORCH_WARN(a);` with, say, `a = 0`, then `a` will be printed, but if the warning is later hit again with `a = 0`, it will not be printed. To see this, replace `TORCH_WARN("HI")` in the code above with `TORCH_WARN(gS)`.
cc @kurtamohler @peterbell10
### Versions
After patching in https://github.com/pytorch/pytorch/pull/86255
| 7 |
4,606 | 86,261 |
ONNX export of any TorchScript submodule (scripted or traced) fails with "Modules that are called during a trace must be registered as submodules of the thing being traced"
|
oncall: jit, module: onnx
|
### 🐛 Describe the bug
This is a re-spin of #47887, to make sure it gets the attention it deserves.
This bug prevents any use of scripted modules in ONNX export.
Here is an extended repro; it illustrates that neither script nor trace works, and that the memory address the exporter complains about does not belong to the original unscripted module.
```
import torch

class Inner(torch.nn.Module):
    def forward(self, x):
        return (x, )

class Outer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        i = Inner()
        self.inner = torch.jit.script(i)
        # self.inner = torch.jit.trace(i, torch.zeros(1))
        print(hex(id(i)), hex(id(self.inner)))

    def forward(self, x):
        return self.inner(x)

x = torch.zeros(1)
m = Outer()
print(hex(id(m)), hex(id(m.inner)))
for idx, mm in enumerate(m.modules()):
    print(idx, '->', mm)

torch.onnx.export(m, (x,), 'test.onnx')
```
### Versions
root@30f44d0dcd3f:/git/radtts# python collect_env.py
Collecting environment information...
PyTorch version: 1.13.0a0+d321be6
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.23.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA RTX A5000
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] k2==1.19.dev20220911+cuda11.7.torch1.13.0a0
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.7.5
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.0a0+d321be6
[pip3] torch-tensorrt==1.2.0a0
[pip3] torchaudio==0.13.0
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.14.0a0
[conda] k2 1.19.dev20220911+cuda11.7.torch1.13.0a0 pypi_0 pypi
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2020.4 h726a3e6_304 conda-forge
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch-lightning 1.7.5 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.13.0a0+d321be6 pypi_0 pypi
[conda] torch-tensorrt 1.2.0a0 pypi_0 pypi
[conda] torchaudio 0.13.0 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.14.0a0 pypi_0 pypi
| 7 |
4,607 | 93,709 |
Support guard on thread number
|
triaged, enhancement, oncall: pt2, module: dynamo
|
As mentioned in https://github.com/pytorch/torchdynamo/pull/1452 and discussed in https://github.com/pytorch/torchdynamo/pull/1452#discussion_r986774789, the thread count is similar to the shapes that can be specialized on to get better performance. Supporting a guard on the thread count would ensure that the specialized kernels are recompiled when the thread count changes.
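For illustration, a minimal sketch of the kind of guard being requested, using only the public `torch` threading APIs (the helper names here are illustrative, not an existing dynamo API):
```python
import torch

def make_thread_count_guard():
    # Record the intra-op thread count the kernel was specialized for; the
    # guard fails (forcing recompilation) once that count changes.
    expected = torch.get_num_threads()
    def guard():
        return torch.get_num_threads() == expected
    return guard

guard = make_thread_count_guard()
print(guard())           # True: still running with the original thread count
torch.set_num_threads(1)
print(guard())           # False whenever the thread count actually changed
```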
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 0 |
4,608 | 93,708 |
[Inductor] Support float32 accumulation type for float16 sum on CPU
|
triaged, oncall: pt2, module: cpu inductor
|
Currently, the float16 sum on CPU uses a float16 accumulation type (https://github.com/pytorch/torchdynamo/pull/1452). We should use a float32 accumulation type instead to reduce numerical errors.
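For illustration, a small eager-mode sketch of why the accumulation dtype matters (eager PyTorch may already use a wider accumulator internally; this issue is specifically about the Inductor-generated CPU kernel):
```python
import torch

# Summing many small float16 values: once the running sum grows, adding 0.01
# to it in float16 loses precision, while a float32 accumulator keeps it.
x = torch.full((100_000,), 0.01, dtype=torch.float16)

print(x.sum())                      # float16 result
print(x.sum(dtype=torch.float32))   # input is upcast before reducing, i.e. float32 accumulation
```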
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,609 | 86,225 |
[Feature] Dispatching PyTorch Distributed Collectives
|
oncall: distributed, feature, triaged, module: c10d
|
### 🚀 The feature, motivation and pitch
### Motivations:
- This feature proposes a solution from the PyTorch Distributed team to the following use case: customers will have their users' code "just work" on machines that have GPUs as well as machines that lack GPUs, without changing the c10d backend specification.
- The proposed solution also aims at better supporting customers with both GPU collectives and CPU collectives.
- Existing library specification by users will be honored and will not require change.
- Improve PyTorch Distributed collective extensibility and support for third-party extensions.
### Proposal:
PyTorch Distributed Team proposes to dispatch collective operations to meet the above requirements.
### Details:
The c10d library of PyTorch has been relying on the call of `init_process_group(backend="xyz")` to know which backend to prepare for the user.
While this has provided users a choice when there were backends with similar functionalities (such as NCCL and Gloo for CUDA tensors), which is less needed now, it also ties the user's later operations to the capability of the specified backend. For example, if the user specifies `"nccl"` as the backend, it is expected that all later collective operations are on CUDA tensors.
To make the life of backend developers easier while supporting a growing diversity in collective needs, there is a need for PyTorch to dispatch collectives of given tensor types to the correct backend.
The infrastructure for supporting dispatchability already exists today. PyTorch core has a dispatcher internally that figures out which kernel implementation to call for a given tensor type. For example, it can switch between the CPU and CUDA implementations of a `torch.add` operator, depending on the [torch.device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.torch.device) attribute of the input tensor (`'cpu' `or `'cuda'`). While this capability is mainly built for ATen operators today, it can be extended to c10d operations, and there has been effort in achieving that.
The dispatch capability makes it possible for PyTorch to have a default solution for a tensor type, rather than fully relying on the user to get such knowledge via the `init_process_group(backend)` call. We expect that the backend argument in this API would become optional after PyTorch has the dispatching capability. Note that this does not break backward compatibility with respect to current usage of this API.
Users may still use the backend argument to specify their preference. The difference is that the effectiveness of syntax `backend='xyz'` would be limited to the tensor type(s) backend xyz can support. For example, `backend='nccl'` would be understood by PyTorch as: "for CUDA tensors (rather than all tensors), the user's preferred backend is NCCL." This would leave the non-CUDA preference floating. If the user later passes in a CPU tensor, PyTorch can still use its default preferred solution for CPU. This helps us achieve 0 lines of code change in the first use case identified in the Motivation section.
For design completeness, users can specify multiple backends in a single command. This is only for usability: `backend='cuda:backend1,cpu:backend2'`.
### Example
Here is a basic example we would be able to support. **Note: this is just a sample and the specifics are subject to change.**
```python
import torch
import torch.distributed as dist
import os
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"
os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
if __name__ == "__main__":
    dist.init_process_group()  # The backend argument is optional

    # We will use the same process group for both collectives
    t = torch.ones(1)
    output = [torch.zeros(1)]
    dist.all_gather(output, t)  # Called with CPU backend (default GLOO)
    print(output)

    t_cuda = torch.ones(1, device="cuda:0")
    cuda_output = [torch.zeros(1, device="cuda:0")]
    dist.all_gather(cuda_output, t_cuda)  # Called with CUDA backend (default NCCL)
    print(cuda_output)

    # Output:
    # >> [tensor([1.])]
    # >> [tensor([1.], device='cuda:0')]
```
### Timeline
We are targeting this feature for the next release after 1.13. Please follow along with this issue, as changes will be checked into the nightly trunk and the feature may be ready before the official release.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @kwen2501
| 0 |
4,610 | 86,204 |
How to perform unstructured interpolation
|
triaged, module: interpolation
|
### 🚀 The feature, motivation and pitch
My feature request is very simple; I'm not sure if there already exists some approach or implementation to achieve this functionality.
In SciPy, there are the scipy.interpolate.NearestNDInterpolator and scipy.interpolate.LinearNDInterpolator classes for unstructured interpolation, i.e., given a set of sparse points distributed non-uniformly in the spatial domain, interpolate values at arbitrary query points. However, torch currently seems to support only simple grid-structured interpolation like grid_sample.
### Alternatives
I have found an implementation of this in 1D, but I am not sure whether it is efficient and supports GPU, or how to extend it to 2D.
https://github.com/aliutkus/torchinterp1d
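For reference, a brute-force nearest-neighbor sketch that works in any dimension using existing torch ops (this is not an existing PyTorch API, and it materializes the full distance matrix, so it only suits moderate point counts):
```python
import torch

def nearest_nd_interpolate(points, values, queries):
    # points:  (N, D) scattered sample locations
    # values:  (N,)   values at those locations
    # queries: (M, D) locations to interpolate at
    dists = torch.cdist(queries, points)  # (M, N) pairwise Euclidean distances
    idx = dists.argmin(dim=1)             # index of the nearest known point
    return values[idx]

points = torch.rand(1000, 2)
values = torch.sin(points.sum(dim=1))
queries = torch.rand(50, 2)
print(nearest_nd_interpolate(points, values, queries).shape)  # torch.Size([50])
```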
### Additional context
_No response_
| 0 |
4,611 | 86,194 |
path in WORKSPACE
|
triaged, module: bazel
|
### 🐛 Describe the bug
When I try to get the dependencies with `bazel query 'deps(//:torch)' --output graph > graph.in`, it fails with the following output.
```
Loading: 0 packages loaded
ERROR: Error fetching repository: java.io.IOException: The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist.
ERROR: /workspace/pytorch/c10/cuda/BUILD.bazel:4:15: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//c10/cuda:cuda'
Loading: 15 packages loaded
ERROR: /workspace/pytorch/BUILD.bazel:371:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_nvrtc'
ERROR: /workspace/pytorch/BUILD.bazel:371:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_nvrtc'
ERROR: /workspace/pytorch/BUILD.bazel:371:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_nvrtc'
ERROR: /workspace/pytorch/BUILD.bazel:1473:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:caffe2_cuda'
ERROR: /workspace/pytorch/BUILD.bazel:1473:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:caffe2_cuda'
ERROR: /root/.cache/bazel/_bazel_root/f1885fd2033fa9405d092b9e0e17aa9b/external/tensorpipe/BUILD.bazel:163:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '@tensorpipe//:tensorpipe_cuda'
ERROR: /workspace/pytorch/BUILD.bazel:398:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_cuda_cpp'
ERROR: /workspace/pytorch/BUILD.bazel:398:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_cuda_cpp'
ERROR: /workspace/pytorch/BUILD.bazel:398:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_cuda_cpp'
ERROR: /workspace/pytorch/BUILD.bazel:422:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_cuda'
ERROR: /workspace/pytorch/BUILD.bazel:422:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_cuda'
ERROR: /workspace/pytorch/BUILD.bazel:422:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:aten_cuda'
ERROR: /workspace/pytorch/BUILD.bazel:1650:11: no such package '@cuda//': The repository's path is "/usr/local/cuda" (absolute: "/usr/local/cuda") but this directory does not exist. and referenced by '//:torch'
ERROR: Evaluation of query "deps(//:torch)" failed: errors were encountered while computing transitive closure
Loading: 17 packages loaded
Loading: 17 packages loaded
```
I think the reason is that the paths for "cuda" and "cudnn" in WORKSPACE are not correct.
https://github.com/pytorch/pytorch/blob/master/WORKSPACE#L191
### Versions
Collecting environment information...
PyTorch version: 1.13.0a0
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-74-generic-x86_64-with-debian-buster-sid
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
Nvidia driver version: 450.102.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.13.0a0+gitunknown
[pip3] torchaudio==0.7.0a0+a853dff
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.8.0a0+0f911ec
[pip3] torchvision==0.8.0a0
[conda] _pytorch_select 0.1 cpu_0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.0.221 h6bb024c_0
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37he7a7128_2
[conda] numpy-base 1.21.5 py37hf524024_2
[conda] pytorch 1.7.1 py3.7_cuda11.0.221_cudnn8.0.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0a0+gitunknown pypi_0 pypi
[conda] torchaudio 0.7.2 py37 pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.8.1 py37 pytorch
[conda] torchvision 0.8.2 cpu_py37ha229d99_0
| 1 |
4,612 | 86,192 |
fmt/src/os.cc: error: unknown type name 'error_code'; did you mean 'std::error_code'?
|
module: build, triaged
|
### 🐛 Describe the bug
I get the following error when building from master branch source:
```
cmake \
-D BUILD_SHARED_LIBS=ON \
-D CMAKE_BUILD_TYPE:STRING=Release \
-D PYTHON_EXECUTABLE:PATH=`which python3` \
-D CMAKE_INSTALL_PREFIX:PATH=/usr/home/user/pytorch/build/install ..
```
```
$ make -j8
Consolidate compiler generated dependencies of target clog
Consolidate compiler generated dependencies of target kineto_api
Consolidate compiler generated dependencies of target asmjit
Consolidate compiler generated dependencies of target fmt
Consolidate compiler generated dependencies of target libprotobuf
[ 0%] Built target defs.bzl
[ 0%] Built target libkineto_defs.bzl
[ 0%] Built target clog
Consolidate compiler generated dependencies of target libprotobuf-lite
[ 0%] Building CXX object third_party/kineto/libkineto/CMakeFiles/kineto_api.dir/src/ThreadUtil.cpp.o
[ 0%] Building CXX object third_party/fmt/CMakeFiles/fmt.dir/src/format.cc.o
[ 0%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/core/builder.cpp.o
[ 0%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/generated_enum_util.cc.o
[ 0%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/arena.cc.o
[ 0%] Building CXX object c10/CMakeFiles/c10.dir/core/Allocator.cpp.o
[ 0%] Building C object third_party/foxi/CMakeFiles/foxi_loader.dir/foxi/onnxifi_loader.c.o
[ 0%] Linking C static library ../../lib/libfoxi_loader.a
[ 0%] Built target foxi_loader
[ 0%] Building CXX object third_party/fmt/CMakeFiles/fmt.dir/src/os.cc.o
[ 0%] Generating ATen declarations_yaml
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/generated_enum_util.cc:31:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/generated_enum_util.h:36:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/message_lite.h:45:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/stubs/common.h:39:
In file included from /usr/include/c++/v1/iostream:37:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/generated_enum_util.cc:31:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/generated_enum_util.h:36:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/message_lite.h:45:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/stubs/common.h:39:
In file included from /usr/include/c++/v1/iostream:37:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:40:
In file included from /usr/include/xlocale.h:37:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/arena.cc:31:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/arena.h:55:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/arena_impl.h:39:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/stubs/common.h:39:
In file included from /usr/include/c++/v1/iostream:37:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/arena.cc:31:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/arena.h:55:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/arena_impl.h:39:
In file included from /home/user/pytorch/third_party/protobuf/src/google/protobuf/stubs/common.h:39:
In file included from /usr/include/c++/v1/iostream:37:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:40:
In file included from /usr/include/xlocale.h:37:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/fmt/src/format.cc:8:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:192:
In file included from /usr/include/c++/v1/__locale:21:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/fmt/src/format.cc:8:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:192:
In file included from /usr/include/c++/v1/__locale:40:
In file included from /usr/include/xlocale.h:37:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/kineto/libkineto/src/ThreadUtil.cpp:22:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:192:
In file included from /usr/include/c++/v1/__locale:21:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/kineto/libkineto/src/ThreadUtil.cpp:22:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:192:
In file included from /usr/include/c++/v1/__locale:40:
In file included from /usr/include/xlocale.h:37:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/c10/core/Allocator.cpp:1:
In file included from /home/user/pytorch/c10/core/Allocator.h:6:
In file included from /home/user/pytorch/c10/core/Device.h:3:
In file included from /home/user/pytorch/c10/core/DeviceType.h:11:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
In file included from /usr/local/include/fmt/locale.h:1:
In file included from /usr/local/include/fmt/xchar.h:14:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:204:
/usr/include/c++/v1/streambuf:141:5: error: unknown type name 'locale'
locale pubimbue(const locale& __loc) {
^
/usr/include/c++/v1/streambuf:141:27: error: unknown type name 'locale'
locale pubimbue(const locale& __loc) {
^
/usr/include/c++/v1/streambuf:149:5: error: unknown type name 'locale'
locale getloc() const { return __loc_; }
^
/usr/include/c++/v1/streambuf:153:48: error: unknown type name 'streamsize'
basic_streambuf* pubsetbuf(char_type* __s, streamsize __n)
^
/usr/include/c++/v1/streambuf:157:41: error: incomplete type 'std::ios_base' named in nested name specifier
pos_type pubseekoff(off_type __off, ios_base::seekdir __way,
^~~~~~~~~~
/usr/include/c++/v1/iosfwd:106:24: note: forward declaration of 'std::ios_base'
class _LIBCPP_TYPE_VIS ios_base;
^
In file included from /home/user/pytorch/c10/core/Allocator.cpp:1:
In file included from /home/user/pytorch/c10/core/Allocator.h:6:
In file included from /home/user/pytorch/c10/core/Device.h:3:
In file included from /home/user/pytorch/c10/core/DeviceType.h:11:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
In file included from /usr/local/include/fmt/locale.h:1:
In file included from /usr/local/include/fmt/xchar.h:14:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:204:
/usr/include/c++/v1/streambuf:158:25: error: incomplete type 'std::ios_base' named in nested name specifier
ios_base::openmode __which = ios_base::in | ios_base::out)
^~~~~~~~~~
/usr/include/c++/v1/iosfwd:106:24: note: forward declaration of 'std::ios_base'
class _LIBCPP_TYPE_VIS ios_base;
^
In file included from /home/user/pytorch/c10/core/Allocator.cpp:1:
In file included from /home/user/pytorch/c10/core/Allocator.h:6:
In file included from /home/user/pytorch/c10/core/Device.h:3:
In file included from /home/user/pytorch/c10/core/DeviceType.h:11:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
In file included from /usr/local/include/fmt/locale.h:1:
In file included from /usr/local/include/fmt/xchar.h:14:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:204:
/usr/include/c++/v1/streambuf:163:25: error: incomplete type 'std::ios_base' named in nested name specifier
ios_base::openmode __which = ios_base::in | ios_base::out)
^~~~~~~~~~
/usr/include/c++/v1/iosfwd:106:24: note: forward declaration of 'std::ios_base'
class _LIBCPP_TYPE_VIS ios_base;
^
In file included from /home/user/pytorch/third_party/fmt/src/os.cc:13:
In file included from /usr/local/include/fmt/os.h:12:
In file included from /usr/include/c++/v1/clocale:38:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/third_party/fmt/src/os.cc:13:
In file included from /usr/local/include/fmt/os.h:19:
In file included from /usr/include/xlocale.h:37:
/usr/local/include/fmt/locale.h:2:2: warning: fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead [-W#warnings]
#warning fmt/locale.h is deprecated, include fmt/format.h or fmt/xchar.h instead
^
In file included from /home/user/pytorch/c10/core/Allocator.cpp:1:
In file included from /home/user/pytorch/c10/core/Allocator.h:6:
In file included from /home/user/pytorch/c10/core/Device.h:3:
In file included from /home/user/pytorch/c10/core/DeviceType.h:11:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
In file included from /usr/local/include/fmt/locale.h:1:
In file included from /usr/local/include/fmt/xchar.h:14:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:204:
/usr/include/c++/v1/streambuf:172:5: error: unknown type name 'streamsize'
streamsize in_avail() {
^
/usr/include/c++/v1/streambuf:200:5: error: unknown type name 'streamsize'
streamsize sgetn(char_type* __s, streamsize __n)
^
/usr/include/c++/v1/streambuf:200:38: error: unknown type name 'streamsize'
streamsize sgetn(char_type* __s, streamsize __n)
^
/usr/include/c++/v1/streambuf:228:5: error: unknown type name 'streamsize'
streamsize sputn(const char_type* __s, streamsize __n)
^
/usr/include/c++/v1/streambuf:228:44: error: unknown type name 'streamsize'
streamsize sputn(const char_type* __s, streamsize __n)
^
/usr/include/c++/v1/streambuf:261:18: error: unknown type name 'streamsize'
void __pbump(streamsize __n) { __nout_ += __n; }
^
/usr/include/c++/v1/streambuf:271:30: error: unknown type name 'locale'
virtual void imbue(const locale& __loc);
^
/usr/include/c++/v1/streambuf:274:53: error: unknown type name 'streamsize'
virtual basic_streambuf* setbuf(char_type* __s, streamsize __n);
^
/usr/include/c++/v1/streambuf:275:46: error: incomplete type 'std::ios_base' named in nested name specifier
virtual pos_type seekoff(off_type __off, ios_base::seekdir __way,
^~~~~~~~~~
/usr/include/c++/v1/iosfwd:106:24: note: forward declaration of 'std::ios_base'
class _LIBCPP_TYPE_VIS ios_base;
^
In file included from /home/user/pytorch/c10/core/Allocator.cpp:1:
In file included from /home/user/pytorch/c10/core/Allocator.h:6:
In file included from /home/user/pytorch/c10/core/Device.h:3:
In file included from /home/user/pytorch/c10/core/DeviceType.h:11:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
In file included from /usr/local/include/fmt/locale.h:1:
In file included from /usr/local/include/fmt/xchar.h:14:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:204:
/usr/include/c++/v1/streambuf:276:30: error: incomplete type 'std::ios_base' named in nested name specifier
ios_base::openmode __which = ios_base::in | ios_base::out);
^~~~~~~~~~
/usr/include/c++/v1/iosfwd:106:24: note: forward declaration of 'std::ios_base'
class _LIBCPP_TYPE_VIS ios_base;
^
In file included from /home/user/pytorch/c10/core/Allocator.cpp:1:
In file included from /home/user/pytorch/c10/core/Allocator.h:6:
In file included from /home/user/pytorch/c10/core/Device.h:3:
In file included from /home/user/pytorch/c10/core/DeviceType.h:11:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
In file included from /usr/local/include/fmt/locale.h:1:
In file included from /usr/local/include/fmt/xchar.h:14:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:204:
/usr/include/c++/v1/streambuf:278:30: error: incomplete type 'std::ios_base' named in nested name specifier
ios_base::openmode __which = ios_base::in | ios_base::out);
^~~~~~~~~~
/usr/include/c++/v1/iosfwd:106:24: note: forward declaration of 'std::ios_base'
class _LIBCPP_TYPE_VIS ios_base;
^
/home/user/pytorch/third_party/fmt/src/os.cc:262:25: error: unknown type name 'error_code'; did you mean 'std::error_code'?
void file::dup2(int fd, error_code& ec) FMT_NOEXCEPT {
^~~~~~~~~~
std::error_code
/usr/include/c++/v1/system_error:315:24: note: 'std::error_code' declared here
class _LIBCPP_TYPE_VIS error_code
^
In file included from /home/user/pytorch/c10/core/Allocator.cpp:1:
In file included from /home/user/pytorch/c10/core/Allocator.h:6:
In file included from /home/user/pytorch/c10/core/Device.h:3:
In file included from /home/user/pytorch/c10/core/DeviceType.h:11:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:214:
In file included from /usr/include/c++/v1/__locale:21:
In file included from /usr/local/include/fmt/locale.h:1:
In file included from /usr/local/include/fmt/xchar.h:14:
In file included from /usr/local/include/fmt/format.h:3099:
In file included from /usr/local/include/fmt/format-inl.h:22:
In file included from /usr/include/c++/v1/locale:204:
/usr/include/c++/v1/streambuf:282:13: error: unknown type name 'streamsize'
virtual streamsize showmanyc();
^
/home/user/pytorch/third_party/fmt/src/os.cc:265:26: error: use of undeclared identifier 'error_code'; did you mean 'std::error_code'?
if (result == -1) ec = error_code(errno);
^
/usr/include/c++/v1/system_error:315:24: note: 'std::error_code' declared here
class _LIBCPP_TYPE_VIS error_code
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
[ 0%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/core/codeholder.cpp.o
/home/user/pytorch/third_party/fmt/src/format.cc:28:61: error: variable cannot be defined in an explicit instantiation; if this declaration is meant to be a variable definition, remove the 'template' keyword
template struct FMT_INSTANTIATION_DEF_API detail::basic_data<void>;
~~~~~~~~~ ^
/home/user/pytorch/third_party/fmt/src/format.cc:28:51: error: no member named 'basic_data' in namespace 'fmt::detail'
template struct FMT_INSTANTIATION_DEF_API detail::basic_data<void>;
~~~~~~~~^
/home/user/pytorch/third_party/fmt/src/format.cc:28:61: error: expected ';' at end of declaration
template struct FMT_INSTANTIATION_DEF_API detail::basic_data<void>;
^
;
/home/user/pytorch/third_party/fmt/src/format.cc:28:61: error: expected unqualified-id
/home/user/pytorch/third_party/fmt/src/format.cc:41:51: error: variable cannot be defined in an explicit instantiation; if this declaration is meant to be a variable definition, remove the 'template' keyword
template FMT_API std::string detail::grouping_impl<char>(locale_ref);
~~~~~~~~~ ^
/home/user/pytorch/third_party/fmt/src/format.cc:41:38: error: no member named 'grouping_impl' in namespace 'fmt::detail'
template FMT_API std::string detail::grouping_impl<char>(locale_ref);
~~~~~~~~^
/home/user/pytorch/third_party/fmt/src/format.cc:41:51: error: expected ';' at end of declaration
template FMT_API std::string detail::grouping_impl<char>(locale_ref);
^
;
/home/user/pytorch/third_party/fmt/src/format.cc:41:51: error: expected unqualified-id
/home/user/pytorch/third_party/fmt/src/format.cc:42:31: error: explicit instantiation of 'thousands_sep_impl' does not refer to a function template, variable template, member function, member class, or static data member
template FMT_API char detail::thousands_sep_impl(locale_ref);
^
/usr/local/include/fmt/format-inl.h:111:15: note: candidate template ignored: could not match 'thousands_sep_result<type-parameter-0-0>' against 'char'
FMT_FUNC auto thousands_sep_impl(locale_ref loc) -> thousands_sep_result<Char> {
^
/home/user/pytorch/third_party/fmt/src/format.cc:47:61: error: explicit instantiation of 'vformat_to' does not refer to a function template, variable template, member function, member class, or static data member
template FMT_API FMT_BUFFER_CONTEXT(char)::iterator detail::vformat_to(
^
/usr/local/include/fmt/format.h:2909:6: note: candidate template ignored: could not match 'void' against 'fmt::appender'
void vformat_to(
^
/home/user/pytorch/third_party/fmt/src/format.cc:63:51: error: variable cannot be defined in an explicit instantiation; if this declaration is meant to be a variable definition, remove the 'template' keyword
template FMT_API std::string detail::grouping_impl<wchar_t>(locale_ref);
~~~~~~~~~ ^
/home/user/pytorch/third_party/fmt/src/format.cc:63:38: error: redefinition of 'grouping_impl'
template FMT_API std::string detail::grouping_impl<wchar_t>(locale_ref);
^
/home/user/pytorch/third_party/fmt/src/format.cc:41:38: note: previous definition is here
template FMT_API std::string detail::grouping_impl<char>(locale_ref);
^
/home/user/pytorch/third_party/fmt/src/format.cc:63:51: error: expected ';' at end of declaration
template FMT_API std::string detail::grouping_impl<wchar_t>(locale_ref);
^
;
/home/user/pytorch/third_party/fmt/src/format.cc:63:51: error: expected unqualified-id
/home/user/pytorch/third_party/fmt/src/format.cc:64:34: error: explicit instantiation of 'thousands_sep_impl' does not refer to a function template, variable template, member function, member class, or static data member
template FMT_API wchar_t detail::thousands_sep_impl(locale_ref);
^
/usr/local/include/fmt/format-inl.h:111:15: note: candidate template ignored: could not match 'thousands_sep_result<type-parameter-0-0>' against 'wchar_t'
FMT_FUNC auto thousands_sep_impl(locale_ref loc) -> thousands_sep_result<Char> {
^
/home/user/pytorch/third_party/kineto/libkineto/src/ThreadUtil.cpp:52:32: error: use of undeclared identifier 'SYS_gettid'
_sysTid = (int32_t)syscall(SYS_gettid);
^
20 errors generated.
--- c10/CMakeFiles/c10.dir/core/Allocator.cpp.o ---
*** [c10/CMakeFiles/c10.dir/core/Allocator.cpp.o] Error code 1
make[2]: stopped in /usr/home/user/pytorch/build
*** [all] Error code 6
make: stopped in /usr/home/user/pytorch/build
1 error
make: stopped in /usr/home/user/pytorch/build
```
Thanks
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: freebsd13
GCC version: Could not collect
Clang version: 13.0.0 (git@github.com:llvm/llvm-project.git llvmorg-13.0.0-0-gd7b669b3a303)
CMake version: version 3.24.0
Libc version: N/A
Python version: 3.9.13 (main, Aug 11 2022, 01:10:20) [Clang 11.0.1 (git@github.com:llvm/llvm-project.git llvmorg-11.0.1-0-g43ff75f2c (64-bit runtime)
Python platform: FreeBSD-13.1-RELEASE-p1-amd64-64bit-ELF
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] pytorchvideo==0.1.5
[conda] Could not collect
cc @malfet @seemethere
| 1 |
4,613 | 93,707 |
Dynamo shouldn't name getitem variables getitem; instead it should derive the name from the variable that was getitem'ed
|
triaged, oncall: pt2
|
This would make return statements like
```
return (self_encoder_dropout_1, getitem_5, getitem_10, getitem_13, getitem_16, getitem_19, getitem_22)
```
a lot more informative
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
4,614 | 93,706 |
When you call tensor.size(), dynamo returns a tuple, instead of a torch.Size
|
triaged, oncall: pt2
|
I have this fragment of code, based off of the bug report https://github.com/pytorch/torchdynamo/issues/1462
```
elif input_ids is not None:
    input_shape = input_ids.size()
    input_ids = input_ids.view(-1, input_shape[-1])
elif inputs_embeds is not None:
    input_shape = inputs_embeds.size()[:-1]
else:
    err_msg_prefix = "decoder_" if self.is_decoder else ""
    raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds ")
print("early input shape: ", input_shape)
```
What I have done is insert the print. When I run the model normally, this print says:
```
early input shape: torch.Size([2, 1024])
```
however, when it's run under dynamo, I don't get a torch.Size
```
early input shape: (2, 1024)
```
This is unexpected and can lead to behavior divergence.
I can try to minimize the repro if you like, but I'm guessing it is very obvious that we are not doing this correctly.
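For context, a small eager-mode illustration of why the `tuple` vs `torch.Size` distinction can matter (this only demonstrates the divergence; it does not run dynamo):
```python
import torch

t = torch.zeros(2, 1024)

size_obj = t.size()            # torch.Size, a tuple subclass with extra methods
plain_tuple = tuple(size_obj)  # what dynamo reportedly returns instead

print(isinstance(size_obj, torch.Size))     # True
print(isinstance(plain_tuple, torch.Size))  # False -> isinstance checks diverge
print(size_obj.numel())                     # 2048; only available on torch.Size
```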
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
4,615 | 86,162 |
torch.nn.functional.one_hot only works for int64
|
module: nn, triaged
|
### 🐛 Describe the bug
I tried applying torch.nn.functional.one_hot to int32 and int16 tensors and got "[RuntimeError: one_hot is only applicable to index tensor.](https://github.com/pytorch/pytorch/blob/8d1ff9fc5dc70bdc65a83748c01cddf187728452/aten/src/ATen/native/Onehot.cpp#L6)"
It seems that it only works with int64 tensors. Is this behaviour expected? If so, why?
Thanks!
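For reference, the usual workaround until other index dtypes are supported is to cast to int64 first (minimal sketch):
```python
import torch
import torch.nn.functional as F

labels_i32 = torch.tensor([0, 2, 1], dtype=torch.int32)

# one_hot currently requires an int64 ("index") tensor, so cast before the call.
one_hot = F.one_hot(labels_i32.long(), num_classes=3)
print(one_hot)
```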
### Versions
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
4,616 | 86,152 |
MPSNDArray.mm:782: failed assertion; buffer is not large enough on Mac M1 MPS
|
module: memory usage, triaged, module: regression, module: mps
|
### 🐛 Describe the bug
This appears to be a duplicate of, or a regression of, #77886. I'm trying to use MPS with OpenAI's whisper. This had been failing, but I saw that `aten::index_put_impl` is now supported, so I tried again:
```python
import whisper
import torch

device = torch.device('mps')
# NOTE: the original report omits how `model` is constructed
# (presumably via whisper.load_model on the MPS device).
model.transcribe("test dictation.m4a", language='english', verbose=True, fp16=False)
```
Which leads to a crash:
```
/usr/local/opt/miniforge3/lib/python3.9/site-packages/whisper/decoding.py:628: UserWarning: The operator 'aten::repeat_interleave.self_int' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
audio_features = audio_features.repeat_interleave(self.n_group, dim=0)
/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:782: failed assertion `[MPSNDArray, initWithBuffer:descriptor:] Error: buffer is not large enough. Must be 201452 bytes
```
I'm running `torch-1.13.0.dev20221003`
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20221003
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.23.3
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:00:33) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.0
[pip3] torch==1.13.0.dev20221003
[pip3] torchaudio==0.13.0.dev20221003
[pip3] torchvision==0.14.0.dev20221003
[conda] numpy 1.22.0 py39h61a45d2_0 conda-forge
[conda] torch 1.13.0.dev20221003 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20221003 pypi_0 pypi
[conda] torchvision 0.14.0.dev20221003 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 17 |
4,617 | 86,131 |
Debuggability++: Share instructions for building exotic CI configurations
|
module: ci, triaged
|
### π The feature, motivation and pitch
We know how to build PyTorch on various exotic CI configurations (@ezyang has them [documented internally](https://fb.workplace.com/groups/1274980366011614/posts/2317382101771430/?comment_id=2317519805090993)).
It would be great to make these more visible to people. In particular, if an exotic CI job fails, it would be good to link directly to instructions for how to build in that configuration. This could be done in HUD, or directly in the logfiles (although logfiles are at risk of not being seen by users). Is anyone in Dev Infra interested in helping make this happen? I think it would be a great benefit for anyone working inside core PyTorch.
Practically, this would be two stages:
1. **Make the info available** externally. Perhaps copy/paste it to CONTRIBUTING.md or the wiki
2. **Surface the info** by sharing it in HUD under failing workflows
### Why do this?
This is a usability improvement that will help people debug their failing builds, making them less likely to decide to `merge -f` a broken change
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
4,618 | 86,128 |
[TorchDispatch] Scalar Only Inputs Gets Matched To Tensor Schema
|
triaged, module: __torch_dispatch__
|
### π Describe the bug
When you call `torch.div` with scalar-only inputs, the schema that gets matched to is `aten::div.Tensor(Tensor self, Tensor other) -> Tensor`.
```
import torch
from torch.utils._python_dispatch import TorchDispatchMode
class DebugMode(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args=(), kwargs=None):
print(func, args, kwargs)
return func(*args, **kwargs)
with DebugMode():
print(torch.div(240, 8))
```
> aten.div.Tensor (240, 8) {}
tensor(30.)
I would expect the scalar inputs to get promoted to Tensors (lift_fresh) before the `aten.div` is invoked, or for a scalar schema to get matched to (probably the former).
Also, I couldn't find the tests for scalar only inputs to Binary torch functions.
Related: https://github.com/pytorch/torchdynamo/issues/1445
### Versions
master
cc @Chillee @ezyang @zou3519 @albanD @samdow
| 6 |
4,619 | 86,124 |
torch.jit.trace throwing Invalid name for qualified name eror
|
oncall: jit
|
### π Describe the bug
While running `torch.jit.trace`, the function throws an error and explicitly asks to open a bug. Based on the name in the last line of the trace, it looks like something is wrong with the format, but I am not sure whether this is a bug, as stated in the error message, or whether I can tweak something to change the name.
Code:
```
module = torch.jit.trace(model, test_input)
```
Trace
```
File "model_generator.py", line 89, in pytorch_to_fp32onnx
module = torch.jit.trace(model, test_input)
File "/home/hcsuser/envs/latest/lib/python3.7/site-packages/torch/jit/_trace.py", line 750, in trace
_module_class,
File "/home/hcsuser/envs/latest/lib/python3.7/site-packages/torch/jit/_trace.py", line 942, in trace_module
module = make_module(mod, _module_class, _compilation_unit)
File "/home/hcsuser/envs/latest/lib/python3.7/site-packages/torch/jit/_trace.py", line 568, in make_module
return _module_class(mod, _compilation_unit=_compilation_unit)
File "/home/hcsuser/envs/latest/lib/python3.7/site-packages/torch/jit/_trace.py", line 1072, in __init__
tmp_module, lambda module: (), share_types=False, is_tracing=True
File "/home/hcsuser/envs/latest/lib/python3.7/site-packages/torch/jit/_recursive.py", line 451, in create_script_module
concrete_type = get_module_concrete_type(nn_module, share_types)
File "/home/hcsuser/envs/latest/lib/python3.7/site-packages/torch/jit/_recursive.py", line 408, in get_module_concrete_type
concrete_type = concrete_type_builder.build()
RuntimeError: atom.size() > 0INTERNAL ASSERT FAILED at "../aten/src/ATen/core/qualified_name.h":25, please report a bug to PyTorch. Invalid name for qualified name: '__torch__..model_wrapper.TigerViT'
```
### Versions
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.26
Python version: 3.7.5 (default, Dec 9 2021, 17:04:37) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-1083-azure-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 9.1.85
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0
[pip3] torchvision==0.13.1
[conda] Could not collect
| 0 |
4,620 | 86,120 |
TransformerEncoder src_key_padding_mask does not work in eval()
|
high priority, triaged, oncall: transformer/mha
|
### π Describe the bug
Described here: https://discuss.pytorch.org/t/model-eval-predicts-all-the-same-or-nearly-same-outputs/155025/12
Starting with version 1.12, TransformerEncoder uses two different code paths, which perform differently between training and evaluation mode. In training, the `why_not_sparsity_fast_path` will be set, and the sparsity fast path will not be used. During eval(), if using a `src_key_padding_mask`, output will be modified:
https://github.com/pytorch/pytorch/blob/b04b2fa9aa52cacbdc9aaaf477d55b0af845ce81/torch/nn/modules/transformer.py#L271
```
if (not why_not_sparsity_fast_path) and (src_key_padding_mask is not None):
convert_to_nested = True
output = torch._nested_tensor_from_mask(output, src_key_padding_mask.logical_not(), mask_check=False)
src_key_padding_mask_for_layers = None
```
The output is usually some constant value. It seems like there might be some misalignment between the expected `src_key_padding_mask` and the `logical_not()` operation. Perhaps the mask inversion should not be performed.
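A minimal sketch (not from the original report; sizes, mask values, and hyperparameters are arbitrary) to observe the train-vs-eval divergence with a padding mask:

```python
import torch
import torch.nn as nn

# dropout=0.0 so the only difference between train() and eval() is the code path.
layer = nn.TransformerEncoderLayer(d_model=16, nhead=2, dropout=0.0, batch_first=True)
enc = nn.TransformerEncoder(layer, num_layers=2)

src = torch.randn(2, 5, 16)
# True marks padded positions, per the documented src_key_padding_mask convention.
pad_mask = torch.tensor([[False, False, False, True, True],
                         [False, False, False, False, True]])

enc.train()
out_train = enc(src, src_key_padding_mask=pad_mask)

enc.eval()  # eval() takes the sparsity fast path discussed above
with torch.no_grad():
    out_eval = enc(src, src_key_padding_mask=pad_mask)

# Compare non-padded positions; per the report, the eval() output degenerates
# to a near-constant value.
print(out_train[0, :3].mean(), out_eval[0, :3].mean())
```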
### Versions
Python version: 3.8.13 (default, Mar 28 2022, 06:59:08) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] segmentation-models-pytorch==0.3.0
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.1+cu116
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] numpy 1.23.3 pypi_0 pypi
[conda] segmentation-models-pytorch 0.3.0 pypi_0 pypi
[conda] torch 1.12.1+cu116 pypi_0 pypi
[conda] torchaudio 0.12.1+cu116 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.13.1+cu116 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @jbschlosser @bhosmer @cpuhrsch @erichan1
| 9 |
4,621 | 86,116 |
JIT fails to trace binary cross entropy with a strange error msg
|
oncall: jit
|
### π Describe the bug
JIT fails to trace binary cross entropy with a strange error msg
```py
import torch
def fn(input, target):
weight = torch.tensor(64.0)
reduction = "mean"
return torch.nn.functional.binary_cross_entropy(input, target, weight=weight, reduction=reduction)
input = torch.tensor([0.1, 0.1, 0.1], dtype=torch.float32)
target = torch.tensor([0.15, 0.15, 0.15], dtype=torch.float32)
inputs = (input, target)
print(fn(*inputs))
new_fn = torch.jit.trace(fn, inputs)
print(new_fn(*inputs))
```
```
tensor(27.8364)
RuntimeError: expected int at position 0, but got: Tensor
```
I think the input here is valid, and JIT should be able to trace it.
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,622 | 86,112 |
`cdist` should succeed when `p` is integer in JIT
|
oncall: jit
|
### π Describe the bug
`cdist` should succeed when `p` is an integer in JIT. The [doc](https://pytorch.org/docs/stable/generated/torch.cdist.html) of `cdist` doesn't mention that `p` must be a float, and the non-JIT function succeeds with an integer `p`. However, the JIT version fails with "Expected a value of type 'float' for argument 'p' but instead found type 'int'."
```py
import torch
def fn(x1, x2):
return torch.cdist(x1, x2, p=2)
x1 = torch.rand([2, 2], dtype=torch.float64)
x2 = torch.rand([2, 2], dtype=torch.float64)
print(fn(x1, x2))
jit_fn = torch.jit.script(fn)
print(jit_fn(x1, x2))
```
```
tensor([[0.4998, 0.8013],
[0.8789, 0.8492]], dtype=torch.float64)
RuntimeError:
cdist(Tensor x1, Tensor x2, float p=2., str compute_mode="use_mm_for_euclid_dist_if_necessary") -> Tensor:
Expected a value of type 'float' for argument 'p' but instead found type 'int'.
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,623 | 86,111 |
When will the torch.sparse module be usable?
|
module: sparse, triaged
|
### π The feature, motivation and pitch
Hello,
I tried to use the ```torch.sparse``` package for creating neural networks, but it utterly failed due to many errors and incompatibilities, generally with the following error message:
```python
NotImplementedError: Could not run 'aten::is_coalesced' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::is_coalesced' is only available for these backends: [SparseCPU, SparseCUDA, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
```
Even when using the functions from [this source](https://pytorch.org/docs/stable/sparse.html)
This includes:
- batch addition and multiplication
- activation functions such as ReLU or Sigmoid on sparse tensors
- even creating simple loss functions with ```torch.sparse.sum``` gives an error
When can we expect to have a working version of ```torch.sparse``` that can be used for actual modelling, or do we have to stay with tensorflow?
### Alternatives
Use tensorflow
### Additional context
_No response_
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 1 |
4,624 | 86,110 |
JIT returns a tensor with a different datatype than the no-gradient case and the normal function
|
oncall: jit
|
### π Describe the bug
JIT returns a tensor with a different datatype than both the no-gradient case and the normal (non-JIT) function:
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input):
res = torch.abs(input, )
return torch.div(res, torch.tensor(7, dtype=torch.float32, device='cpu'))
fn = M().to('cpu')
input = torch.tensor(11, dtype=torch.bfloat16)
v = fn(input.clone().requires_grad_())
print(v.dtype)
jit_fn = torch.jit.script(fn)
jv1 = jit_fn(input.clone())
print(jv1.dtype)
jv2 = jit_fn(input.clone().requires_grad_())
print(jv2.dtype)
```
```
torch.float32
torch.float32
torch.bfloat16
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 1 |
4,625 | 86,107 |
`F.affine_grid` crashes on MPS
|
triaged, module: mps
|
### π Describe the bug
`F.affine_grid` crashes on MPS. Code to reproduce:
```python
import torch
import torch.nn.functional as F
dev_cpu = torch.device('cpu')
dev_mps = torch.device('mps')
device=dev_cpu
laf = torch.tensor([[2, 0, 4.], [0, 2., 5.]], device=device)
grid = F.affine_grid(laf.view(1,2,3), [1, 3, 3, 3], align_corners=False)
print ("cpu grid:", grid)
laf=laf.to(dev_mps)
grid = F.affine_grid(laf.view(1,2,3), [1, 3, 3, 3], align_corners=False)
print ("mps grid:", grid)
```
Output:
```
cpu grid: tensor([[[[2.6667, 3.6667],
[4.0000, 3.6667],
[5.3333, 3.6667]],
[[2.6667, 5.0000],
[4.0000, 5.0000],
[5.3333, 5.0000]],
[[2.6667, 6.3333],
[4.0000, 6.3333],
[5.3333, 6.3333]]]])
-:27:11: error: invalid input tensor shapes, indices shape and updates shape must be equal
-:27:11: note: see current operation: %25 = "mps.scatter_along_axis"(%23, %arg1, %24, %1) {mode = 6 : i32} : (tensor<27xf32>, tensor<3xf32>, tensor<9xi32>, tensor<i32>) -> tensor<27xf32>
/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1267: failed assertion `Error: MLIR pass manager failed'
zsh: abort python fail_affine_grid.py
```
Expected output: don't crash.
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20221002
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.6
CMake version: version 3.22.3
Libc version: N/A
Python version: 3.9.4 | packaged by conda-forge | (default, May 10 2021, 22:10:52) [Clang 11.1.0 ] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] pytest-mypy==0.9.0
[pip3] torch==1.13.0.dev20221002
[pip3] torch-dimcheck==0.0.1
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.10.0
[conda] numpy 1.21.2 pypi_0 pypi
[conda] torch 1.13.0.dev20221002 pypi_0 pypi
[conda] torch-dimcheck 0.0.1 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchvision 0.10.0 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 3 |
4,626 | 86,097 |
[Activation Checkpointing] Investigate pin_memory for CPU offload
|
oncall: distributed, module: checkpoint, triaged
|
### π The feature, motivation and pitch
@awgu had a good point here: https://github.com/pytorch/pytorch/pull/85459#discussion_r979263528 that we shouldn't assume we have unlimited space in the pinned memory region. Right now `save_on_cpu` hardcodes `pin_memory=True`; we should investigate the performance implications of this and improve our intuition.
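A rough micro-benchmark sketch (not part of the original discussion; assumes a CUDA device and an arbitrary tensor size) of the trade-off in question, comparing device-to-host copies into pageable vs. pinned CPU buffers:

```python
import time
import torch

# Compare D2H copy time into a pageable vs. a pinned CPU buffer.
x = torch.randn(64, 1024, 1024, device="cuda")
for pin in (False, True):
    dst = torch.empty(x.shape, dtype=x.dtype, pin_memory=pin)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    dst.copy_(x, non_blocking=True)
    torch.cuda.synchronize()
    print(f"pin_memory={pin}: {time.perf_counter() - t0:.4f}s")
```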
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
4,627 | 86,076 |
Figure out the future of Metal backend given the existence of MPS
|
module: build, triaged, module: arm, module: mps
|
### π Describe the bug
We simultaneously have a Metal backend and an MPS backend in PyTorch, and it's not clear to me that we need both of them. In particular, if we want to deploy to iOS, MPS should be usable as long as you can target iOS 9+. Well, I don't know if anyone has actually tried this in reality. And there might be deployment considerations (as binary size matters for the mobile target). We are also still using Metal in prod here at Meta.
Any comments from folks about whether we should get rid of Metal in favor of MPS? Or if we don't get rid of it, should there be better interop between the two? E.g., not having a separate Tensor type for MPS versus Metal. The Metal support in PT is very limited, and a few years ago @xta0 was complaining about how views are not done correctly; MPS *does* do views correctly.
cc @malfet @seemethere @kulinseth @albanD @DenisVieriu97 @razarmehr @abhudev for perspectives on MPS
cc @dreiss @xta0 @SS-JIA for perspectives on Metal
### Versions
master
| 9 |
4,628 | 86,074 |
torch.remainder and torch.fmod produce wrong results
|
module: numerical-stability, triaged, module: correctness (silent)
|
### π Describe the bug
For some inputs, torch.remainder and torch.fmod produce wrong results, especially for integer datatypes. When the int32 input is converted to float32, they produce correct results.
I suspect this might be caused by type promotion.
Reproduce code:
```
import torch
import numpy as np
input = torch.tensor(1693446850, dtype=torch.int32)
other = torch.tensor([7285], dtype=torch.int16)
# test the apis with int input (wrong results)
r = torch.remainder(input, other)
print(r)
r = torch.fmod(input, other)
print(r)
r = np.fmod(input.numpy(), other.numpy())
print(r)
input = input.to(torch.float32)
# test the apis with float input (correct results)
r = torch.remainder(input, other)
print(r)
r = torch.fmod(input, other)
print(r)
r = np.fmod(input.numpy(), other.numpy())
print(r)
```
Output:
```
tensor([3895], dtype=torch.int16)
tensor([-3390], dtype=torch.int16)
[4890]
tensor([4952.])
tensor([4952.])
[4952.]
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220923+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-176-generic-x86_64-with-debian-buster-sid
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.13.0.dev20220923+cpu
[pip3] torchaudio==0.13.0.dev20220923+cpu
[pip3] torchvision==0.14.0.dev20220923+cpu
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.13.0.dev20220923+cpu pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cpu pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cpu pypi_0 pypi
```
| 1 |
4,629 | 86,055 |
partial view/reshaping
|
triaged, enhancement, module: python frontend
|
### π The feature, motivation and pitch
The current reshaping of tensors like [A x B*C x D] to [A x B x C x D] requires passing the sizes of the new dimensions:
`.view(A, B, C, D)`
`.view(-1, B, C, D)`
The user needs to find the exact values of A, B, C, D, or at least 3 of them.
Is it possible to add dimension keeping, similar to the dimension auto-calculation with `-1`?
Proposed key `0`: like `-1`, but for keeping the dimension size.
For the example above:
`.view(0, B, -1, 0)`
keep A and D as is; add dim B and calculate dim C from the tensor size and the given B.
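For the concrete [A x B*C x D] example, a similar effect is already achievable today with `Tensor.unflatten`, which only needs the split sizes for the dimension being expanded (minimal sketch; the sizes are arbitrary):

```python
import torch

A, B, C, D = 2, 3, 4, 5
x = torch.randn(A, B * C, D)

# Split dim 1 into (B, C) without spelling out A, C, or D:
# -1 lets PyTorch infer C from the tensor size and the given B,
# and all other dimensions are kept as-is.
y = x.unflatten(1, (B, -1))
print(y.shape)  # torch.Size([2, 3, 4, 5])
```

The proposed `0` key would generalize this kind of dimension keeping to arbitrary `.view` calls.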
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
4,630 | 86,048 |
Significantly worse MPS performance between torch 1.13.0.dev20220922 and torch 1.13.0.dev20220930
|
module: performance, triaged, module: mps
|
### π Describe the bug
This is running Stable Diffusion code. A basic Stable Diffusion example used to run at somewhere around 1.6 it/s on an Apple M1 Max machine with torch 1.13.0.dev20220922. But once I upgraded to the latest torch 1.13.0.dev20220930 version, the same code (with everything else the same) takes around 8 s/it (notice that it now takes 8 seconds for each iteration, as opposed to 1.6 iterations per second).
I have two separate conda environments with the two torch nightly versions and the timing difference is consistent across multiple runs.
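The report does not include the exact script being timed; a rough sketch of this kind of workload, assuming the `diffusers` package (model id, prompt, and step count are placeholders, not taken from the report), would be:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative only: a basic Stable Diffusion run on the MPS backend.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("mps")
image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=50).images[0]
```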
### Versions
PyTorch version: 1.13.0.dev20220930
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.24.1
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:00:33) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220930
[pip3] torchaudio==0.13.0.dev20220930
[pip3] torchvision==0.14.0.dev20220930
[conda] numpy 1.23.3 py39hcb4b507_0 conda-forge
[conda] pytorch 1.13.0.dev20220930 py3.9_0 pytorch-nightly
[conda] torchaudio 0.13.0.dev20220930 py39_cpu pytorch-nightly
[conda] torchvision 0.14.0.dev20220930 py39_cpu pytorch-nightly
cc @VitalyFedyunin @ngimel @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
4,631 | 86,020 |
Functorch memory_efficient_fusion gives wrong output batch size
|
module: docs, triaged, module: functorch
|
### π Describe the bug
Run the following script:
```python
import torch
from functorch.compile import memory_efficient_fusion
import timm
x = torch.randn(5, 3, 224, 224).cuda()
y = torch.randn(6, 3, 224, 224).cuda()
net = timm.create_model('resnet50').cuda()
net = memory_efficient_fusion(net)
net.eval()
with torch.jit.fuser('fuser2'):
xout = net(x)
yout = net(y)
print(xout.shape, yout.shape)
```
Got:
```
torch.Size([5, 1000]) torch.Size([5, 1000])
```
Expecting:
```
torch.Size([5, 1000]) torch.Size([6, 1000])
```
Note that `yout` has the wrong batch size. Commenting out the memory_efficient_fusion (using eager mode) gives the expected result.
### Versions
Was working fine on 9/28
torch_commit: https://github.com/pytorch/pytorch/commit/614d6f19e3d30cac0d77059e738d1f25d75eb408
torchdynamo_commit: https://github.com/pytorch/torchdynamo/commit/6ead5cae0d1234aa64db06fe230ef56e12ec76fe
torchvision_commit: https://github.com/pytorch/vision/commit/a35be97a6e6725c83032315a8f5e5f6911c9ef41
First saw error on 9/29
torch_commit: https://github.com/pytorch/pytorch/commit/3cfc61b84659cea435411a546eca6a891584247f
torchdynamo_commit: https://github.com/pytorch/torchdynamo/commit/6ead5cae0d1234aa64db06fe230ef56e12ec76fe
torchvision_commit: https://github.com/pytorch/vision/commit/a35be97a6e6725c83032315a8f5e5f6911c9ef41
cc @svekars @carljparker @zou3519 @Chillee @samdow @soumith @ngimel @ptrblck
| 8 |
4,632 | 85,968 |
Quantized version of Sigmoid doesn't have _get_name method
|
oncall: quantization, triaged
|
### π Describe the bug
Hello, I'm interested in FX quantization of PyTorch models. In my project I need to register a module by its class name. For example:
I determine the module class from the GraphModule by calling node.target. For the quantized version of ConvBnReLU the output would be QuantizedConvBnReLU.
Except for quantized Sigmoid, my method works for any quantized module because of the "Quantized" prefix in the module class name.
I checked the implementations of the quantized activations, and Sigmoid doesn't define the _get_name method, unlike every other quantized activation implementation.
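A rough sketch of the prefix-based check described above (the helper name is made up for illustration; the Sigmoid behaviour follows the report and may differ across releases):

```python
import torch.nn.quantized as nnq

def is_quantized_by_name(mod) -> bool:
    # Heuristic from the report: rely on the "Quantized" prefix in _get_name().
    return mod._get_name().startswith("Quantized")

print(nnq.Linear(4, 4)._get_name())  # "QuantizedLinear" -> detected
# nnq.Sigmoid does not override _get_name, so it falls back to the default
# nn.Module behaviour and reports just "Sigmoid", which this check misses.
```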
### Versions
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 4 |
4,633 | 85,960 |
Discrepancy in output shape for batch_norm inference mode between CUDA and CPU
|
module: nn, module: cuda, triaged, actionable
|
### π Describe the bug
For CPU inputs `torch.native_batch_norm` returns 0-sized tensors for the saved mean/invstd outputs, but for CUDA inputs it returns a copy of the `running_mean` argument and the inverse of the standard deviation.
What's the reason for this discrepancy, and what's the correct output? Can we align the behavior of the two backends?
CPU code path:
https://github.com/pytorch/pytorch/blob/8f4edf1e1dc9419f0bab66a67c8f149d7b53fc25/aten/src/ATen/native/Normalization.cpp#L684-L687
CUDA code path:
https://github.com/pytorch/pytorch/blob/fe87ae692f813934d1a74d000fd1e3b546c27ae2/aten/src/ATen/native/cuda/Normalization.cu#L440-L443
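A quick sketch to observe the shape discrepancy directly (assuming the behavior described above; the second half requires a CUDA device):

```python
import torch

x = torch.randn(2, 3, 4)
mean, var = torch.zeros(3), torch.ones(3)

# training=False -> inference mode; the last two outputs are the saved stats.
cpu_out = torch.native_batch_norm(x, None, None, mean, var, False, 0.1, 1e-5)
print([t.shape for t in cpu_out])

if torch.cuda.is_available():
    cuda_out = torch.native_batch_norm(
        x.cuda(), None, None, mean.cuda(), var.cuda(), False, 0.1, 1e-5
    )
    print([t.shape for t in cuda_out])
```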
### Versions
Latest master.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are @ngimel
| 7 |
4,634 | 85,949 |
[Distributed] Loading distributed checkpoint with FSDP fails with varying key errors (pos.embedding, shared.weight)
|
oncall: distributed
|
### π Describe the bug
Using 1.13.0.dev20220928+cu116 and FSDP, I save out a distributed checkpoint.
Saving proceeds smoothly, and the distributed checkpoint is created.
However, when attempting to load the just-saved checkpoint, I receive varying key errors depending on the model that was saved.
DeepVit:
~~~
Traceback (most recent call last): (RANK 3)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/utils.py", line 140, in reduce_scatter
local_data = map_fun()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/state_dict_loader.py", line 89, in local_step
local_plan = planner.create_local_plan()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/default_planner.py", line 85, in create_local_plan
return create_default_local_load_plan(self.state_dict, self.metadata)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/default_planner.py", line 131, in create_default_local_load_plan
md = metadata.state_dict_metadata[fqn]
KeyError: 'pos_embedding'
~~~
T5:
~~~
Traceback (most recent call last): (RANK 3)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/utils.py", line 140, in reduce_scatter
local_data = map_fun()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/state_dict_loader.py", line 89, in local_step
local_plan = planner.create_local_plan()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/default_planner.py", line 85, in create_local_plan
return create_default_local_load_plan(self.state_dict, self.metadata)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/distributed/_shard/checkpoint/default_planner.py", line 131, in create_default_local_load_plan
md = metadata.state_dict_metadata[fqn]
KeyError: 'shared.weight'
~~~
I'm unclear on whether we need to make changes to load properly, but this code used to work fine two months ago. Saving works nicely as before; only the loading errors out.
Here's the code used to load:
~~~
def load_distributed_model_checkpoint(model, rank, cfg):
if cfg.checkpoint_type == StateDictType.LOCAL_STATE_DICT:
print(f"loading distributed checkpoint, rank {rank}...")
folder_name = (
cfg.dist_checkpoint_root_folder
+ "/"
+ cfg.dist_checkpoint_folder
+ "-"
+ cfg.model_name
)
checkdir = Path.cwd() / folder_name
if not checkdir.exists():
if rank == 0:
print(f"No checkpoint directory found...skipping")
return
if rank == 0:
load_timer = Timer()
load_timer.start()
reader = FileSystemReader(checkdir)
with FSDP.state_dict_type(
model,
StateDictType.LOCAL_STATE_DICT,
):
state_dict = model.state_dict()
load_state_dict(state_dict, reader)
if rank == 0:
load_timer.interval(name="load of state dict ")
model.load_state_dict(state_dict)
print(f"--> local state loaded on rank {rank}")
if rank == 0:
load_timer.stop()
return
~~~
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220928+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1020-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220928+cu116
[pip3] torch-model-archiver==0.5.3b20220226
[pip3] torch-workflow-archiver==0.2.4b20220513
[pip3] torchserve==0.6.0b20220513
[pip3] torchtext==0.13.1
[pip3] torchvision==0.14.0.dev20220928+cu116
[pip3] vit-pytorch==0.35.8
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] captum 0.5.0 0 pytorch
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.3 py39hba7629e_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0.dev20220928+cu116 pypi_0 pypi
[conda] torch-model-archiver 0.5.3 py39_0 pytorch
[conda] torch-workflow-archiver 0.2.4 py39_0 pytorch
[conda] torchserve 0.6.0 py39_0 pytorch
[conda] torchtext 0.13.1 py39 pytorch
[conda] torchvision 0.14.0.dev20220928+cu116 pypi_0 pypi
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
4,635 | 85,939 |
CUDA OOM issue when running tests in CI
|
high priority, module: ci, triaged
|
### π Describe the bug
There have been reports of OOM issues when running CUDA tests in CI. For example,
* https://github.com/pytorch/pytorch/actions/runs/3146997338/jobs/5116296083
As a first step, we need to gather more examples to see whether the same test always fails, or whether it's a more widespread issue because Linux CUDA test_ops are now run in parallel (https://github.com/pytorch/pytorch/pull/85528, in 3 processes)
### Versions
CUDA 11.6 (and may be 10.2 too)
cc @ezyang @gchanan @zou3519 @seemethere @malfet @pytorch/pytorch-dev-infra @clee2000
| 16 |
4,636 | 85,932 |
Setup ssh sometimes fails
|
module: ci, triaged
|
Example failure: https://github.com/pytorch/pytorch/actions/runs/3154700893/jobs/5132596332
```
Prepare all required actions
Getting action download info
Download action repository 'seemethere/add-github-ssh-key@v1' (SHA:105f7619adc4054f5f1be5f79ebd354d82384638)
Run ./.github/actions/setup-ssh
Run seemethere/add-github-ssh-key@v1
Grabbing public ssh keys from https://github.com/albanD.keys
Error: Server Error
```
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 3 |
4,637 | 85,909 |
Steam Deck Core Dump
|
high priority, module: rocm, triaged
|
### π Describe the bug
The following happened on my new Steam Deck.
OK, there's obviously no driver, but I think a core dump is a bit excessive (I can fall back to CPU)
```
Python 3.10.2 (main, Jan 15 2022, 19:56:27) [GCC 11.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> if torch.cuda.is_available() and torch.version.hip:
... print('We have ROCm')
... else:
... print('CPU Only')
...
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (core dumped)
(134)(deck@steamdeck ~)$
```
### Versions
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1
cc @ezyang @gchanan @zou3519 @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
| 0 |
4,638 | 85,893 |
TorchScript does not recognize mix-in types with `Enum`
|
oncall: jit
|
### π Describe the bug
When inheriting from str and Enum to create a `StrEnum` class (similar to `IntEnum`) https://docs.python.org/3/library/enum.html#others, TorchScript does not recognize that these enums are comparable with `str` types. Below is an example that uses `(str, Enum)`, which works fine in eager mode but cannot be scripted.
```python
class Color(str, Enum):
RED = "1"
GREEN = "2"
@torch.jit.script
def test_str_enum(x: str) -> str:
if x == Color.GREEN.value:
return "green"
if x == Color.RED:
return "red"
```
```
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::eq.Tensor(Tensor self, Tensor other) -> (Tensor):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'str'.
aten::eq.Scalar(Tensor self, Scalar other) -> (Tensor):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'str'.
aten::eq.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!)):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'str'.
aten::eq.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!)):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'str'.
aten::eq.int_list(int[] a, int[] b) -> (bool):
Expected a value of type 'List[int]' for argument 'a' but instead found type 'str'.
aten::eq.device(Device a, Device b) -> (bool):
Expected a value of type 'Device' for argument 'b' but instead found type 'Enum<__torch__.Color>'.
aten::eq.bool(bool a, bool b) -> (bool):
Expected a value of type 'bool' for argument 'a' but instead found type 'str'.
aten::eq.enum(AnyEnumType a, AnyEnumType b) -> (bool):
Expected a value of type 'AnyEnumType' for argument 'a' but instead found type 'str'.
aten::eq.int(int a, int b) -> (bool):
Expected a value of type 'int' for argument 'a' but instead found type 'str'.
aten::eq.complex(complex a, complex b) -> (bool):
Expected a value of type 'complex' for argument 'a' but instead found type 'str'.
aten::eq.float(float a, float b) -> (bool):
Expected a value of type 'float' for argument 'a' but instead found type 'str'.
aten::eq.int_float(int a, float b) -> (bool):
Expected a value of type 'int' for argument 'a' but instead found type 'str'.
aten::eq.float_int(float a, int b) -> (bool):
Expected a value of type 'float' for argument 'a' but instead found type 'str'.
aten::eq.float_complex(float a, complex b) -> (bool):
Expected a value of type 'float' for argument 'a' but instead found type 'str'.
aten::eq.complex_float(complex a, float b) -> (bool):
Expected a value of type 'complex' for argument 'a' but instead found type 'str'.
aten::eq(Scalar a, Scalar b) -> (bool):
Expected a value of type 'number' for argument 'a' but instead found type 'str'.
aten::eq.str(str a, str b) -> (bool):
Expected a value of type 'str' for argument 'b' but instead found type 'Enum<__torch__.Color>'.
aten::eq.float_list(float[] a, float[] b) -> (bool):
Expected a value of type 'List[float]' for argument 'a' but instead found type 'str'.
aten::eq.Tensor_list(Tensor[] a, Tensor[] b) -> (bool):
Expected a value of type 'List[Tensor]' for argument 'a' but instead found type 'str'.
aten::eq.bool_list(bool[] a, bool[] b) -> (bool):
Expected a value of type 'List[bool]' for argument 'a' but instead found type 'str'.
aten::eq.str_list(str[] a, str[] b) -> (bool):
Expected a value of type 'List[str]' for argument 'a' but instead found type 'str'.
eq(float a, Tensor b) -> (Tensor):
Expected a value of type 'float' for argument 'a' but instead found type 'str'.
eq(int a, Tensor b) -> (Tensor):
Expected a value of type 'int' for argument 'a' but instead found type 'str'.
The original call is:
File "torch_script_loss.py", line 71
if x == Color.GREEN.value:
return "green"
if x == Color.RED:
~~~~~~~~~~~~~~ <--- HERE
return "red"
```
### Versions
Collecting environment information...
PyTorch version: 1.12.0a0+2c916ef
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.112
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.942
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] pytorch-ignite==0.4.8
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+2c916ef
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchinfo==1.6.5
[pip3] torchtext==0.12.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h05e7239_0 conda-forge
[conda] pytorch-ignite 0.4.8 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+2c916ef pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchinfo 1.6.5 pypi_0 pypi
[conda] torchtext 0.12.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
| 0 |
4,639 | 85,889 |
High occupation on GPU 0 when converting Tensor to multi GPU
|
module: performance, module: cuda, triaged
|
### π Describe the bug
I'm using distributed GPU training and `args.device = torch.device(f'cuda:{args.local_rank}' if torch.cuda.is_available() else 'cpu')` to set the device. When I move the model to the GPU with `model.to(args.device)`, it works well. However, when I move data to the GPU during training with the following code:
```python
if isinstance(x, list):
for x_i in range(len(x)):
x[x_i] = x[x_i].to(args.device, non_blocking=True)
label[x_i] = label[x_i].to(args.device, non_blocking=True)
x = torch.cat(x, dim=0)
label = torch.cat(label, dim=0)
else:
x = x.to(args.device, non_blocking=True) # .cuda()
label = label.to(args.device, non_blocking=True) # .cuda()
```
This makes `GPU 0` have a much higher memory occupation than the other GPUs, as shown in the following picture (`3488` vs `1443`). All the tensors are moved from CPU to the correct GPUs, which means the code assigned `args.device` correctly.

But when I set the device with `torch.cuda.set_device(args.local_rank)` at the beginning of my code and move CPU tensors to the GPU with `x.cuda(non_blocking=True)`, all the GPUs have the same memory usage of `1443`.
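For reference, a condensed sketch of the working variant described above (names follow the snippet in this report):

```python
# Pin the CUDA context to this process's GPU before any model or tensor is
# moved, so no extra context gets created on GPU 0.
torch.cuda.set_device(args.local_rank)
model = model.cuda()

# Inside the training loop:
x = x.cuda(non_blocking=True)
label = label.cuda(non_blocking=True)
```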
### Versions
PyTorch version: 1.8.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:18) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-4.19.126-1.jdcloud.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 10.2.89
GPU models and configuration:
GPU 0: Tesla P40
GPU 1: Tesla P40
GPU 2: Tesla P40
GPU 3: Tesla P40
Nvidia driver version: 470.57.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.8.1
[pip3] torchaudio==0.8.0a0+e4e171a
[pip3] torchvision==0.9.1
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 10.2.89 h713d32c_10 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224 defaults
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch 1.8.1 py3.8_cuda10.2_cudnn7.6.5_0 pytorch
[conda] torchaudio 0.8.1 py38 pytorch
[conda] torchvision 0.9.1 py38_cu102 pytorch
cc @VitalyFedyunin @ngimel
| 3 |
4,640 | 93,703 |
Collect Operator Coverage
|
triaged, oncall: pt2
|
ATen operators are decomposed at the FX level (torch/_decomp/decompositions.py). The goal is to collect decomposed-operator coverage for both FX and TorchInductor.
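A starting-point sketch (assumption: the existing `torch._decomp.decomposition_table` is the source of truth for the FX-level decompositions):

```python
from torch._decomp import decomposition_table

# Enumerate which ATen overloads currently have FX-level decompositions.
decomposed = sorted(str(op) for op in decomposition_table)
print(f"{len(decomposed)} overloads have decompositions")
print(decomposed[:10])
```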
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,641 | 85,879 |
DISABLED test_aot_autograd_exhaustive_as_strided_scatter_cpu_float32 (__main__.TestEagerFusionOpInfoCPU)
|
triaged, module: flaky-tests, skipped, module: functorch
|
### π Describe the bug
Platforms: mac, macos
It looks like another side effect of https://github.com/pytorch/pytorch/pull/85794 and https://github.com/pytorch/pytorch/pull/85820 is that test_aot_autograd_exhaustive_as_strided_scatter_cpu_float32 (main.TestEagerFusionOpInfoCPU) is now flaky. Here are some examples:
* https://hud.pytorch.org/pytorch/pytorch/commit/0b93afb112d48bb6d89a1e183a90b403560e84e4
* https://hud.pytorch.org/pytorch/pytorch/commit/29c78266c046c7f83e7d84fc764af47e62ae9542
* https://hud.pytorch.org/pytorch/pytorch/commit/5deeb09d4e3001adfd3d04139b4a330915069ea7
There is also an occurrence on Windows, but I won't disable the test there yet until I see more of them there:
* https://hud.pytorch.org/pytorch/pytorch/commit/29c78266c046c7f83e7d84fc764af47e62ae9542
### Versions
Functorch on Mac
cc @zou3519 @Chillee @samdow @soumith
| 1 |
4,642 | 85,877 |
JIT model could return 'NaN' gradient after the first execution
|
oncall: jit
|
### π Describe the bug
JIT model could return 'NaN' gradient after the first execution
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input):
input = torch.nn.functional.relu(input)
input = torch.cos(input)
return torch.acos(input, )
input_tensor = torch.tensor([-1., -2.], dtype=torch.float32)
device = 'cuda'
m = M().to(device)
input = input_tensor.to(device)
from torch.autograd.functional import jacobian
input1 = input.clone().requires_grad_()
jac1 = jacobian(m, input1)
print(jac1)
jit_m = torch.jit.script(m)
# jit_m = torch.jit.trace(m, input.clone())
input2 = input.clone().requires_grad_()
input_temp = input.clone().requires_grad_()
jit_m(input_temp) # without this, it will output the same gradient
jac2 = jacobian(jit_m, input2)
print(jac2)
```
```
tensor([[0., 0.],
[0., 0.]], device='cuda:0')
tensor([[nan, nan],
[nan, nan]], device='cuda:0')
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220923+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 1 |
4,643 | 85,852 |
`torch.mm` produces wrong result on cpu when using in-place computation
|
module: docs, triaged
|
### π Describe the bug
If I use `out` to specify the output tensor to be the same as the input tensor, `torch.mm` gives a wrong result on CPU. Using `torch.mm` on CPU without `out`, or on CUDA, gives the same correct result.
Here is the minimized code to reproduce the bug:
```
import torch
torch.manual_seed(420)
torch.cuda.manual_seed_all(420)
A = torch.rand(50, 50)
B = torch.clone(A).cuda()
C = torch.clone(A)
torch.mm(C, C, out=C)
print("cpu in place:", C) # CPU (in place) gives wrong results
D = torch.mm(A, A)
print("cpu:", D)
print(torch.allclose(C, D)) # False
torch.mm(B, B, out=B)
print("GPU in place:", B) # GPU gives right answer
print(torch.allclose(B.cpu(), D)) # True
```
outputs:
```
cpu in place: tensor([[1.2465e+01, 1.1267e+01, 1.2204e+01, ..., 8.7870e+02, 1.0072e+04,
1.0345e+04],
[1.2971e+01, 1.1560e+01, 1.1516e+01, ..., 8.3331e+02, 9.5578e+03,
9.8192e+03],
[1.3755e+01, 1.3913e+01, 1.3279e+01, ..., 9.6774e+02, 1.1126e+04,
1.1423e+04],
...,
[1.2447e+01, 1.1978e+01, 1.1767e+01, ..., 8.3779e+02, 9.6226e+03,
9.8825e+03],
[1.2835e+01, 1.0712e+01, 1.0594e+01, ..., 7.8456e+02, 8.9946e+03,
9.2391e+03],
[1.1591e+01, 9.3684e+00, 1.0208e+01, ..., 7.8547e+02, 9.0087e+03,
9.2529e+03]])
cpu: tensor([[12.4649, 11.2666, 12.2036, ..., 10.9338, 12.4515, 11.9901],
[12.9706, 11.5600, 11.5161, ..., 10.0883, 11.9705, 11.6685],
[13.7549, 13.9132, 13.2790, ..., 11.9514, 16.0034, 13.1333],
...,
[12.4474, 11.9785, 11.7670, ..., 9.8250, 13.5130, 10.8965],
[12.8352, 10.7118, 10.5938, ..., 9.0758, 12.7024, 10.3281],
[11.5908, 9.3684, 10.2079, ..., 9.5821, 11.1185, 9.8437]])
False
GPU in place: tensor([[12.4649, 11.2666, 12.2036, ..., 10.9338, 12.4515, 11.9901],
[12.9706, 11.5600, 11.5161, ..., 10.0883, 11.9705, 11.6685],
[13.7549, 13.9132, 13.2790, ..., 11.9514, 16.0034, 13.1333],
...,
[12.4474, 11.9785, 11.7670, ..., 9.8250, 13.5130, 10.8965],
[12.8352, 10.7118, 10.5938, ..., 9.0758, 12.7024, 10.3281],
[11.5908, 9.3684, 10.2079, ..., 9.5821, 11.1185, 9.8437]],
device='cuda:0')
True
```
### Versions
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @svekars @holly1238
| 2 |
4,644 | 85,851 |
Print a warning when user specifies a qconfig for some node and the qconfig is not supported by BackendConfig
|
oncall: quantization, triaged
|
### π Describe the bug
Today, when a user provides a qconfig that doesn't match the BackendConfig, we just ignore the qconfig. It would be better to emit a warning message so that the user knows the qconfig is ignored for that operator.
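One possible shape for such a warning (illustrative sketch only; the helper and its call site are hypothetical and not the actual prepare_fx internals):

```python
import warnings

def warn_ignored_qconfig(node_name: str, op_name: str) -> None:
    # Emitted at the point where the unsupported qconfig is dropped.
    warnings.warn(
        f"QConfig specified for '{node_name}' is ignored because op "
        f"'{op_name}' is not supported by the current BackendConfig."
    )
```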
### Versions
master
cc @jianyuh @raghuramank100 @jamesr66a @vkuzo
| 0 |
4,645 | 85,841 |
Setting the cuda device when using start_processes in Jupyter on Ampere leads to CUDA reinitialization error
|
module: multiprocessing, module: cuda, triaged
|
### π Describe the bug
When using a non-Ampere GPU this issue does not arise (the tested environment for success was two Tesla T4s on GCP). However, when testing on an Ampere-class GPU system, I cannot start multi-GPU processes; it leads to a re-initialization error. The tested environment for this was two RTX A4000s from runpod.io.
To recreate, run the following code in a Jupyter Notebook cell:
```python
import torch
import os
from contextlib import contextmanager
from torch.multiprocessing import start_processes
@contextmanager
def patch_environment(**kwargs):
"""
A context manager that will add each keyword argument passed to `os.environ` and remove them when exiting.
Will convert the values in `kwargs` to strings and upper-case all the keys.
"""
for key, value in kwargs.items():
os.environ[key.upper()] = str(value)
yield
for key in kwargs:
del os.environ[key.upper()]
def training_loop():
device = torch.device("cuda", int(os.environ.get("LOCAL_RANK", -1)))
torch.cuda.set_device(device)
print(device)
def launch_func(index, *args):
with patch_environment(
local_rank=str(index),
rank=str(index)
):
training_loop()
with patch_environment(
world_size=2,
master_addr="127.0.0.1",
):
start_processes(launch_func, nprocs=2, start_method="fork")
```
The desired effect is that each process prints the device that was set; instead, I receive the following trace:
```bash
---------------------------------------------------------------------------
ProcessRaisedException Traceback (most recent call last)
/tmp/ipykernel_14417/2989602212.py in <module>
4 master_addr="127.0.0.1",
5 ):
----> 6 start_processes(launch_func, nprocs=2, start_method="fork")
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
196
197 # Loop on join until it returns True or raises an exception.
--> 198 while not context.join():
199 pass
200
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
158 msg = "\n\n-- Process %d terminated with the following error:\n" % error_index
159 msg += original_trace
--> 160 raise ProcessRaisedException(msg, error_index, failed_process.pid)
161
162
ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/tmp/ipykernel_14417/1434743469.py", line 6, in launch_func
training_loop()
File "/tmp/ipykernel_14417/3423764464.py", line 3, in training_loop
torch.cuda.set_device(device)
File "/opt/conda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 314, in set_device
torch._C._cuda_setDevice(device)
File "/opt/conda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
"Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
### Versions
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-47-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA RTX A4000
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.12.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37he7a7128_2
[conda] numpy-base 1.21.5 py37hf524024_2
[conda] pytorch 1.12.0 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.13.0 py37 pytorch
[conda] torchvision 0.13.0 py37_cu113 pytorch
cc @VitalyFedyunin @ngimel
| 1 |
4,646 | 85,834 |
[primTorch] Need to update data-dependent check policy
|
triaged, module: primTorch
|
### π Describe the bug
Excerpt from #81128 that checks whether the target values are within bounds for the `nll_loss` reference.
```python
num_classes = input.shape[1] if input.ndim > 1 else input.shape[0]
valid_classes_mask = torch.logical_and(
(flat_target >= 0), (flat_target < num_classes)
)
class_check = torch.all(torch.logical_or(ignore_classes_mask, valid_classes_mask))
# TODO: This check does not work with FakeTensor inputs
# Explicit cast for class_check to bool; See Issue #78071
utils.check(
isinstance(target, FakeTensor) or bool(class_check.item()),
lambda: "A target class is out-of-bounds and not the ignore index.",
)
```
### Versions
Pytorch upstream commit: a4bd89b267e81dc2a23ed767e1efc30fcfb7c665
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
| 2 |
4,647 | 85,831 |
[FSDP] `use_orig_params=True` Follow-Ups & Known Issues
|
oncall: distributed, triaged, module: fsdp
|
This issue tracks some follow-ups and known issues with `use_orig_params=True`.
For classification, we define a few tags:
- [<==] Supported in existing FSDP / with `use_orig_params=False` but unsupported with `use_orig_params=True`
- [===] Supported in existing FSDP / with `use_orig_params=False` and supported with `use_orig_params=True`
- [!!!] Unsupported in existing FSDP / with `use_orig_params=False` and unsupported with `use_orig_params=True`
If a [To Do] tag is combined with [===], then that means the "supported with `use_orig_params=True`" is not yet true but is expected to become true.
---
**Activation Checkpointing Compatibility**
- [To Do] Non-reentrant activation checkpointing is untested.
**`sync_module_states=True`**
- [To Do]
https://github.com/pytorch/pytorch/blob/1655b47a384d5e6ba31420b5daee1c029a821387/torch/distributed/fsdp/fully_sharded_data_parallel.py#L1498-L1499
**[<==] `named_parameters()` between Pre-Forward and Post-Forward**
- ~For `use_orig_params=True`, calling `fsdp_model.named_parameters()` between the pre-forward and post-forward (specifically, after the unshard and before the reshard) will result in **nothing being returned**. This is because the original parameters are switched to being _plain_ `Tensor` views into the underlying `FlatParameter`, not `nn.Parameter` views. Since they are plain `Tensor` views, they are no longer registered and hence not returned by `named_parameters()`.~
- For `use_orig_params=False`, doing so returns the `FlatParameter`s because they are always registered.
- ~Given the existing design, this is a hard blocker. However, we are currently unaware of any users requesting for this, and this only affects `named_parameters()` -- the parameters are still directly accessible.~
- To support TorchDynamo + FSDP, we actually register the plain `Tensor`s by manually modifying the module's `_parameters`. This means that `named_parameters()` returns the expected generator, only the parameters are `Tensor`s instead of `nn.Parameter`s, and the references should not be relied upon across iterations.
**[!!!] Parameter Changes**
- The user may change the parameter variable, i.e. `module.param = new_param`. In that case, FSDP needs to update its internal reference to the parameter variable and related metadata.
- Since from the experience with DDP, parameter change is not an important ask, we **do not plan to formally support this**. We can provide some warning to prevent silent errors.
- If we were to support this, then FSDP needs to update its internal reference to the parameter variable and related metadata. Moreover, FSDP should keep `weakref`s to the parameters to enable immediate garbage collection of the old parameter, assuming that there are no issues when using `weakref`s. Finally, FSDP should handle shared parameters becoming non-shared and vice versa.
```
# Shared becoming non-shared
lin = nn.Linear(5, 5)
lin.weight = lin.bias
fsdp_model = FSDP(lin, use_orig_params=True)
lin.weight = nn.Parameter(torch.randn((5, 5)))
# Non-shared becoming shared
lin = nn.Linear(5, 5)
fsdp_model = FSDP(lin, use_orig_params=True)
lin.weight = lin.bias
```
**[===] Gradient Changes**
- The user may change the gradient variable, i.e. `module.param.grad = new_grad`. Unlike for parameters, `new_grad` may be `None`. This typically encodes the intention to zero the gradient while saving memory and can happen via `optim.zero_grad(set_to_none=True)` or `module.zero_grad(set_to_none=True)`.
- A `None` gradient and a zero gradient are identical with respect to gradient computation, but they are not identical with respect to optimizer computation. The optimizer still steps on parameters with zero gradient, which means that optimizer states may update, while the optimizer does not step on parameters with `None` gradient (see the sketch after this list).
- [Done] To address the difference between `None` and zero gradient, FSDP should maintain a mask over the original parameters that indicates whether it is `None` or not. Since the `flat_param.grad` interfaces with gradient computation, it is fine for it to treat `None` and zero identically, but when pointing the `orig_param.grad`s into `flat_param.grad`, FSDP must use the mask to set `orig_param.grad = None` as needed.
- [<==] FSDP can only propagate changes to the original parameters and their gradients to the underlying `FlatParameter`s and their gradients during FSDP-owned logic. This exposes windows during which those changes are not reflected. For example, if the user calls `optim.zero_grad(set_to_none=True)`, the memory frees cannot happen until the pre-forward, when execution enters FSDP-owned logic.
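A minimal sketch (plain optimizer, for illustration only) of the `None`-vs-zero distinction above:
```python
import torch

param_zero = torch.nn.Parameter(torch.ones(3))
param_none = torch.nn.Parameter(torch.ones(3))
opt = torch.optim.Adam([param_zero, param_none], lr=0.1)

param_zero.grad = torch.zeros_like(param_zero)  # explicit zero gradient
param_none.grad = None                          # gradient left as None

opt.step()

print(opt.state[param_zero])    # state was created ('step', 'exp_avg', 'exp_avg_sq')
print(param_none in opt.state)  # False: the None-grad parameter was skipped entirely
```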
**[===] Optimizer Checkpointing**
- [To Do] We need to implement optimizer state checkpointing for `use_orig_params=True`, where the optimizer states are now per original parameter instead of per `FlatParameter`. Notably, some parameters may be unused and hence not have state. The design for implementing this is still under discussion.
**Performance Benchmarking**
- ~[To Do] Compare performance of `use_orig_params=True` vs. `False` on some representative workloads.~
- Benchmarking using the `transformer_framework` repo shows parity for T5-11B, RegNet-9B, and DeepViT-8B.
**`.data`**
- The entire approach to `use_orig_params=True` relies on setting the original parameters' `.data` in a controlled way. Ultimately, we want to migrate from this to a more robust approach such as a tensor subclass. We are investing in this `.data` approach to unblock in the near-term and since it is still a good exercise to uncover edge cases.
---
Some engineering follow-ups:
- Formalize contract for parameter and gradient writeback. Then, document it for users, and harden unit tests following that contract.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 2 |
4,648 | 85,815 |
string interning for dispatcher operator names
|
triaged, module: dispatch
|
## The Problem
Sometimes in a boxed fallback we need to branch on whether an OperatorHandle is a specific operator. In https://github.com/pytorch/pytorch/pull/85374, functorch's grad transform needs to handle lift_fresh and alias differently. The current thing we do in that PR is we check the operator handle for the name of the operator and then do a string comparison, but string comparisons are slow and affect our microbenchmarks.
The traditional way to differentiate between operators in the dispatcher is: your subsystem gets a dispatch key and you register an override for the (operator, dispatch key) and each override behaves differently. However, this doesn't apply cleanly to functorch: functorch's grad transform is not a dispatch key, it works through a combination of two functorch-specific dispatch keys (DynamicLayer{Front, Back}Mode) and also re-uses the existing autograd keys (Autograd{Backend}, ADInplaceAndView).
## Proposal 1: Hack this together inside of functorch
- functorch DynamicLayerFront can get a separate registration per kernel (instead of using a boxed fallback)
- the registration per kernel passes an enum around, where the enum gets a different value per ATen operator
- functorch's logic then can branch on the value of the enum
## Proposal 2: Intern operator names
- Give FunctionSchema `interned_name` and `interned_overload_name` fields (a rough sketch of the idea is below).
- These probably need to be optional and would be a part of some enum somewhere that we put together at compile time.
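A rough Python sketch of the interning idea (illustration only; the real thing would live in the C++ dispatcher, and the names here are made up):
```python
from enum import IntEnum, auto


class InternedOp(IntEnum):
    UNKNOWN = 0
    LIFT_FRESH = auto()
    ALIAS = auto()


# The slow string lookup happens once, e.g. at registration time.
_INTERN_TABLE = {
    "aten::lift_fresh": InternedOp.LIFT_FRESH,
    "aten::alias": InternedOp.ALIAS,
}


def intern(qualified_name: str) -> InternedOp:
    return _INTERN_TABLE.get(qualified_name, InternedOp.UNKNOWN)


def boxed_fallback(op: InternedOp) -> None:
    # Hot path: a cheap integer comparison instead of an operator-name string comparison.
    if op == InternedOp.LIFT_FRESH:
        pass  # handle lift_fresh specially here
```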
## Discussion
I don't really like either solution. Proposal 1 sounds like a hack, and Proposal 2 shouldn't be necessary in the traditional sense of the dispatcher.
Thoughts, @ezyang @samdow @bdhirsh ?
| 2 |
4,649 | 85,813 |
TorchScript error for `Enum` inside a module
|
oncall: jit
|
### π Describe the bug
```python
from enum import Enum
import torch
class Paint(torch.nn.Module):
class Color(Enum):
RED = "1"
GREEN = "2"
@torch.jit.script
def enum_fn(x: Paint.Color, y: Paint.Color) -> bool:
if x == Paint.Color.RED:
return True
return x == y
```
The code works fine if `Paint` does not inherit from `nn.Module`, but here is the output when it does (a sketch of the working, non-nested variant follows the output):
```
Traceback (most recent call last):
File "torch_script_loss.py", line 16, in <module>
def enum_fn(x: Paint.Color, y: Paint.Color) -> bool:
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py", line 1322, in script
fn = torch._C._jit_script_compile(
RuntimeError:
attribute lookup is not defined on python value of type 'type':
File "torch_script_loss.py", line 17
@torch.jit.script
def enum_fn(x: Paint.Color, y: Paint.Color) -> bool:
if x == Paint.Color.RED:
~~~~~~~~~~~ <--- HERE
return True
return x == y
```
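For reference, a minimal variant with the enum defined outside of any `nn.Module` (a sketch of the kind of code that scripts without error):
```python
from enum import Enum

import torch


class Color(Enum):
    RED = "1"
    GREEN = "2"


@torch.jit.script
def enum_fn(x: Color, y: Color) -> bool:
    if x == Color.RED:
        return True
    return x == y
```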
### Versions
PyTorch version: 1.12.0a0+2c916ef
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.112
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.942
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] pytorch-ignite==0.4.8
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+2c916ef
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchinfo==1.6.5
[pip3] torchtext==0.12.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h05e7239_0 conda-forge
[conda] pytorch-ignite 0.4.8 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+2c916ef pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchinfo 1.6.5 pypi_0 pypi
[conda] torchtext 0.12.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
| 0 |
4,650 | 85,805 |
`vector_norm` will trigger "Tracing failed sanity checks" for JIT when ord is boolean tensor
|
oncall: jit
|
### π Describe the bug
`vector_norm` will trigger "Tracing failed sanity checks" for JIT when ord is boolean tensor
```py
import torch
def f(A):
ord = torch.tensor(True, dtype=torch.bool)
return torch.linalg.vector_norm(A, ord=ord, )
A = torch.tensor(1.0, dtype=torch.float64)
torch.jit.trace(f, A)
```
```
torch.jit._trace.TracingCheckError: Tracing failed sanity checks!
encountered an exception while running the trace with test inputs.
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220923+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,651 | 85,804 |
JIT fails to trace `sparse.mm` with a strange error
|
oncall: jit
|
### π Describe the bug
JIT fails to trace `sparse.mm` with a strange error: "RuntimeError: Unsupported value kind: Tensor".
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
mat1_tensor = torch.randint(0, 1, [20, 1], dtype=torch.int32)
mat1 = mat1_tensor.to_sparse()
self.mat1 = mat1
def forward(self, mat2):
return torch.sparse.mm(self.mat1, mat2, )
mat2_tensor = torch.rand([1, 20], dtype=torch.float32)
f = M()
torch.jit.trace(f, mat2_tensor)
```
```
RuntimeError: Unsupported value kind: Tensor
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220923+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,652 | 85,792 |
TorchScript causes range_check error after a few iterations of forward-backward passes
|
oncall: jit
|
### π Describe the bug
I came across an error caused by a TorchScript model after two iterations of forward-backward passes.
The following script can reproduce the error.
The serialized model can be downloaded from: [repro_model.bin.zip](https://github.com/pytorch/pytorch/files/9662120/repro_model.bin.zip)
```python
import torch
m = torch.jit.load("repro_model.bin")
print("Graph: {}".format(m.graph))
x1 = torch.randn(1, 300, 1024, requires_grad=True).cuda()
x2 = torch.randn(1, 300, 1024, requires_grad=True).cuda()
for i in range(0, 3):
y = m((x1, x2))
tgt = torch.randn_like(y[0])
y[0].backward(tgt)
print("Iteration {} finished".format(i))
```
The output of the script is shown below.
Note that the first two iterations successfully finished, and the third iteration produced an error.
The state of the model should be the same at the beginning of every iteration.
Once a pair of forward/backward passes is successfully finished, it is expected that the following iteration also succeeds.
```bash
Graph: graph(%self : __torch__.test_module,
%merged_input.1 : (Tensor, Tensor)):
%46 : Tensor = prim::Constant[value={1e-12}]() # :0:0
%23 : bool = prim::Constant[value=1]() # :0:0
%17 : NoneType = prim::Constant()
%11 : int = prim::Constant[value=1]()
%7 : bool = prim::Constant[value=0]() # :0:0
%6 : float = prim::Constant[value=0.10000000000000001]() # :0:0
%14 : int = prim::Constant[value=6]() # :0:0
%19 : int = prim::Constant[value=-1]() # :0:0
%36 : int = prim::Constant[value=2]() # :0:0
%_input.1 : Tensor, %_input_tensor.1 : Tensor = prim::TupleUnpack(%merged_input.1)
%_hidden_states.1 : Tensor = aten::dropout(%_input.1, %6, %7) # :0:0
%_x.1 : Tensor = aten::add(%_hidden_states.1, %_input_tensor.1, %11) # :0:0
%_x0.1 : Tensor = aten::to(%_x.1, %14, %7, %7, %17) # :0:0
%20 : int[] = prim::ListConstruct(%19)
%_u.1 : Tensor = aten::mean(%_x0.1, %20, %23, %17) # :0:0
%30 : Tensor = aten::sub(%_x0.1, %_u.1, %11) # :0:0
%34 : Tensor = aten::sub(%_x0.1, %_u.1, %11) # :0:0
%37 : Tensor = aten::pow(%34, %36) # :0:0
%38 : int[] = prim::ListConstruct(%19)
%_s.1 : Tensor = aten::mean(%37, %38, %23, %17) # :0:0
%48 : Tensor = aten::add(%_s.1, %46, %11) # :0:0
%50 : Tensor = aten::sqrt(%48) # :0:0
%_x1.1 : Tensor = aten::div(%30, %50) # :0:0
%55 : (Tensor) = prim::TupleConstruct(%_x1.1)
return (%55)
Iteration 0 finished
Iteration 1 finished
Traceback (most recent call last):
File "run_graph_test.py", line 10, in <module>
y = m((x1, x2))
File "/.../python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: vector::_M_range_check: __n (which is 18446744073709551615) >= this->size() (which is 3)
```
### Versions
I used PyTorch 1.12.0 with CUDA 11.3 on NVIDIA A100.
```bash
$ python collect_env.py
Collecting environment information...
PyTorch version: 1.12.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.4 (Ootpa) (x86_64)
GCC version: (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
Clang version: 11.0.1 (Red Hat 11.0.1-1.module+el8.4.0+12483+89b287b0)
CMake version: version 3.24.1
Libc version: glibc-2.28
Python version: 3.8.14 (default, Sep 16 2022, 16:31:55) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)] (64-bit runtime)
Python platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: A100-SXM-80GB
GPU 1: A100-SXM-80GB
GPU 2: A100-SXM-80GB
GPU 3: A100-SXM-80GB
GPU 4: A100-SXM-80GB
GPU 5: A100-SXM-80GB
GPU 6: A100-SXM-80GB
GPU 7: A100-SXM-80GB
Nvidia driver version: 460.73.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.0+cu113
[conda] No relevant packages
```
| 2 |
4,653 | 85,791 |
nn.CrossEntropyLoss overflow with FP16 and minibatch
|
module: nn, module: loss, triaged, module: edge cases
|
### π Describe the bug
Using nn.CrossEntropyLoss with FP16 on a long sequence is stable. However, introducing a minibatch dimension leads to overflow, and `CrossEntropyLoss` outputs `inf`.
To reproduce:
```Python
import torch
ce = torch.nn.CrossEntropyLoss().cuda().half()
inp = torch.rand((20, 14749, 1025))
inp = inp.cuda().half()
t = torch.randint(low=0, high=14749, size=[20, 1025]).cuda()
loss = ce(inp, t)
print(loss)
ce = torch.nn.CrossEntropyLoss().cuda()
inp = torch.rand((20, 14749, 1025))
inp = inp.cuda()
loss = ce(inp, t)
print(loss)
ce.half()
inp = inp.cuda().half()
inp = inp.transpose(1,2)
inp = inp.flatten(start_dim=0, end_dim=1)
t = t.flatten(start_dim=0, end_dim=1)
loss = ce(inp, t)
print(loss)
```
The first loss would be `inf`. Both the second and the third would be correct.
### Versions
I tested on 1.8.2 and 1.12.1, both are the same.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
4,654 | 85,775 |
Timed out receiving the shared seed from the distribtued store on Rank 2
|
oncall: distributed, module: dataloader, triaged
|
### π Describe the bug
On an 8xA100-40GB SXM machine, with 6 workers per training process, after a few hours I get the following error:
```
contrastive_train-contrastive_train-1 | trainer.fit() [0/1942]
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1386, in fit
contrastive_train-contrastive_train-1 | self._train_loop()
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1512, in _train_loop
contrastive_train-contrastive_train-1 | for batch_idx, self.state.batch in enumerate(self._iter_dataloader(TrainerMode.TRAIN)):
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/composer/trainer/trainer.py", line 2194, in _iter_dataloader
contrastive_train-contrastive_train-1 | dataloader_iter = iter(self.state.dataloader)
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 444, in __iter__
contrastive_train-contrastive_train-1 | return self._get_iterator()
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 390, in _get_iterator
contrastive_train-contrastive_train-1 | return _MultiProcessingDataLoaderIter(self)
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1038, in __init__
contrastive_train-contrastive_train-1 | super(_MultiProcessingDataLoaderIter, self).__init__(loader)
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 623, in __init__
contrastive_train-contrastive_train-1 | self._shared_seed = loader._get_shared_seed()
contrastive_train-contrastive_train-1 | File "/root/micromamba/envs/video-rec/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 604, in _get_shared_seed
contrastive_train-contrastive_train-1 | raise RuntimeError("Timed out receiving the shared seed from the distribtued store "
contrastive_train-contrastive_train-1 | RuntimeError: Timed out receiving the shared seed from the distribtued store on Rank 2. (world_size=8, timeout=1800)
contrastive_train-contrastive_train-1 |
contrastive_train-contrastive_train-1 |
contrastive_train-contrastive_train-1 |
contrastive_train-contrastive_train-1 |
contrastive_train-contrastive_train-1 | ----------End global rank 2 STDERR----------ERROR:composer.cli.launcher:Global rank 0 (PID 162) exited with code -15
```
Here's what happens prior to this error:
- 1 GPU goes to 0%, the other 7 remain at 100% in `nvidia-smi`
- In `htop`, 7 cores (not always the same cores) out of my 124 cores are constantly at 100%, the rest are barely active. It's vaguely suspicious that the number of CPU cores at 100% is equal to the number of workers + 1
- The worker processes for the rank 2 GPU have died. I.e., if I send `kill -s SIGUSR1 <pid>` to a dataloader worker pid for the rank 2 GPU, I get a "no process with this pid" response (or whatever the exact error is). However, if I run that command with the pid for a dataloader worker of a different rank, I don't receive an error (a sketch of this liveness check is below)
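For reference, a sketch of the liveness check I mean (the PID is a placeholder; signal `0` just probes whether the process exists):
```python
import os

worker_pid = 12345  # placeholder: PID of a rank-2 dataloader worker

try:
    os.kill(worker_pid, 0)  # signal 0 sends nothing but fails if the PID is gone
    print("worker alive")
except ProcessLookupError:
    print("no process with this pid")
```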
### Versions
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-ranger==0.1.1
[pip3] torch==1.12.1+cu116
[pip3] torch-optimizer==0.1.0
[pip3] torchdata==0.4.1
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.13.1a0+bddbd7e
[conda] Could not collect
Pillow/Pillow-SIMD version: 7.0.0.post3
Postfix means using pillow-simd
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @SsnL @VitalyFedyunin @ejguan @NivekT
| 4 |
4,655 | 85,773 |
Conda Pytorch (Pytorch channel) in WSL2 Ubuntu can't find libcudnn shared objects
|
module: build, module: cuda, triaged
|
### π Describe the bug
This is on a fresh install of Ubuntu 22.04 in WSL. All I did before running this was an `apt-get upgrade`, plus downloading and installing the Linux Miniconda installer with `wget`. I did not download any CUDA packages from Nvidia, nor the Ubuntu `nvidia-cuda-toolkit` package.
```
conda create -n torch pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
(skip install text)
conda activate torch
python test.py
```
test.py:
```python
import torch
print(f"{torch.cuda.is_available()=}")
f = torch.nn.Conv2d(3, 8, 3, device="cuda")
X = torch.randn(2, 3, 4, 4, device="cuda")
Y = X @ X
print(f"{Y.shape=}")
print("matrix multiply works")
Y = f(X)
print(f"{Y.shape=}")
print("Conv2d works")
```
Console output;
```
torch.cuda.is_available()=True
Y.shape=torch.Size([2, 3, 4, 4])
matrix multiply works
Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory
Please make sure libcudnn_cnn_infer.so.8 is in your library path!
Aborted
```
The shared objects are where I would expect them to be:
```
$ ls $CONDA_PREFIX/lib/python3.10/site-packages/torch/lib
libc10_cuda.so libcudnn_adv_train.so.8 libcudnn_ops_train.so.8 libtorch_cpu.so libtorch_cuda.so
libc10.so libcudnn_cnn_infer.so.8 libcudnn.so.8 libtorch_cuda_cpp.so libtorch_global_deps.so
libcaffe2_nvrtc.so libcudnn_cnn_train.so.8 libcupti-53b4cc5d.so.11.3 libtorch_cuda_cu.so libtorch_python.so
libcudnn_adv_infer.so.8 libcudnn_ops_infer.so.8 libshm.so libtorch_cuda_linalg.so libtorch.so
```
This error does not happen when running Pytorch on the Windows host of this WSL container; on Windows, everything works fine.
This is a different bug from [#73487](https://github.com/pytorch/pytorch/issues/73487): that issue reports Pytorch not finding CUDA at all, while what I've found is Pytorch crashing on a call to, e.g., a forward pass of a Conv2d layer, with an error saying it's unable to load the libcudnn shared object.
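A small diagnostic sketch (illustration only) that checks whether the library the error message complains about can be dlopen'ed from Python:
```python
import ctypes

try:
    ctypes.CDLL("libcuda.so")  # the library named in the runtime error above
    print("libcuda.so was found by the dynamic loader")
except OSError as e:
    print(f"dlopen failed: {e}")  # matches the 'cannot open shared object file' symptom
```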
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.1 py310h1794996_0
[conda] numpy-base 1.23.1 py310hcba007f_0
[conda] pytorch 1.12.1 py3.10_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.12.1 py310_cu113 pytorch
[conda] torchvision 0.13.1 py310_cu113 pytorch
```
cc @malfet @seemethere @ngimel
| 11 |
4,656 | 93,701 |
Replace same with TestCase assertEqual
|
triaged, oncall: pt2
|
In TorchDynamo tests, we use the `same` function to check for accuracy. But in many cases, we can rely on TestCase's `assertEqual`, which has pre-defined atol/rtol for different dtypes. We should migrate the tests to `assertEqual` as much as possible. Thanks to @SherlockNoMad for pointing this out.
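A rough sketch of what such a migration could look like (the test class and tensors here are made up for illustration):
```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests


class ExampleAccuracyTest(TestCase):
    def test_compiled_matches_eager(self):
        ref = torch.randn(8, dtype=torch.float16)  # stand-in for the eager result
        res = ref.clone()                          # stand-in for the compiled result
        # Before: self.assertTrue(same(ref, res)) with the dynamo `same` helper.
        # After: assertEqual applies per-dtype default atol/rtol
        # (e.g. looser tolerances for float16 than for float64).
        self.assertEqual(ref, res)


if __name__ == "__main__":
    run_tests()
```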
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @williamwen42 if you are interested.
| 1 |
4,657 | 93,699 |
retro inductor OOM
|
triaged, oncall: pt2
|
## Repro
Another lucidrains model
` pip install retro-pytorch`
```python
import torch
from retro_pytorch import RETRO
import torchdynamo
retro = RETRO(
chunk_size = 64, # the chunk size that is indexed and retrieved (needed for proper relative positions as well as causal chunked cross attention)
max_seq_len = 2048, # max sequence length
enc_dim = 896, # encoder model dim
enc_depth = 2, # encoder depth
dec_dim = 796, # decoder model dim
dec_depth = 12, # decoder depth
dec_cross_attn_layers = (3, 6, 9, 12), # decoder cross attention layers (with causal chunk cross attention)
heads = 8, # attention heads
dim_head = 64, # dimension per head
dec_attn_dropout = 0.25, # decoder attention dropout
dec_ff_dropout = 0.25, # decoder feedforward dropout
use_deepnet = True # turn on post-normalization with DeepNet residual scaling and initialization, for scaling to 1000 layers
).cuda()
seq = torch.randint(0, 20000, (2, 2048 + 1)).cuda() # plus one since it is split into input and labels for training
retrieved = torch.randint(0, 20000, (2, 32, 2, 128)).cuda() # retrieved tokens - (batch, num chunks, num retrieved neighbors, retrieved chunk with continuation)
@torchdynamo.optimize("inductor")
def train():
loss = retro(seq, retrieved, return_loss = True)
loss.backward()
train()
```
## Logs
```
(dynamo) ubuntu@ip-172-31-31-152:~/tests$ python retro.py
torchdynamo.symbolic_convert: [WARNING] Graph break: missing: BUILD_MAP_UNPACK_WITH_CALL from user code at File "retro.py", line 26, in train
loss = retro(seq, retrieved, return_loss = True)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 618, in forward
pos_emb = rearrange(pos_emb, 'n d -> 1 n d')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py", line 487, in rearrange
return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method ConstDictVariable() __contains__ [GetAttrVariable(UserDefinedClassVariable(), framework_name)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py", line 231, in _apply_recipe
backend = get_backend(tensor)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/_backends.py", line 42, in get_backend
if BackendSubclass.framework_name not in _backends:
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(_lru_cache_wrapper) [UserDefinedObjectVariable(TransformRecipe), ShapeVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py", line 233, in <graph break in _apply_recipe>
_reconstruct_from_shape(recipe, backend.shape(tensor))
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.symbolic_convert: [WARNING] Graph break: missing: BUILD_MAP_UNPACK_WITH_CALL from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 87, in forward
out = self.fn(x, *args, **kwargs) + residual
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 276, in forward
k_pos_emb = repeat(k_pos_emb, 'b h n d -> b h (r n) d', r = num_retrieved)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py", line 537, in repeat
return reduce(tensor, pattern, reduction='repeat', **axes_lengths)
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
torchinductor.lowering: [WARNING] using triton random, expect difference from eager
Traceback (most recent call last):
File "retro.py", line 29, in <module>
train()
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "retro.py", line 24, in train
@torchdynamo.optimize("inductor")
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 564, in forward
def forward(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 618, in <graph break in forward>
pos_emb = rearrange(pos_emb, 'n d -> 1 n d')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 627, in <graph break in forward>
encoder_retrieved_mask = rearrange(mask, 'b k r n -> (b k r) n')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 404, in forward
def forward(self, x, *, encoder = None, encoder_retrieved_mask = None, context_mask = None, retrieved = None):
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 406, in <graph break in forward>
self_attn_pos_emb = self.rotary_pos_emb(seq_len, device = device)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 418, in <graph break in forward>
cross_attn_q_pos_emb = self.rotary_pos_emb(self.chunk_size, device = device, offset = self.chunk_size - 1) # need to add extra chunk size, since it will be shifted
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 430, in <graph break in forward>
x = attn(x, pos_emb = self_attn_pos_emb)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/retro_pytorch/retro_pytorch.py", line 85, in forward
def forward(self, x, *args, **kwargs):
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 904, in forward
return compiled_f(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 895, in new_func
return compiled_fn(args)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 228, in g
return f(*args)
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 354, in forward
fw_outs = call_func_with_args(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 250, in call_func_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 228, in g
return f(*args)
File "/home/ubuntu/torchdynamo/torchinductor/compile_fx.py", line 125, in run
compiled_fn = cudagraphify_impl(model, new_inputs, static_input_idxs)
File "/home/ubuntu/torchdynamo/torchinductor/compile_fx.py", line 150, in cudagraphify_impl
static_inputs = [
File "/home/ubuntu/torchdynamo/torchinductor/compile_fx.py", line 151, in <listcomp>
static_input(x) if idx not in static_input_idxs else inputs[idx]
File "/home/ubuntu/torchdynamo/torchinductor/compile_fx.py", line 143, in static_input
buffer = torch.zeros(needed_size, dtype=x.dtype, device=x.device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 15.78 GiB total capacity; 13.71 GiB already allocated; 11.94 MiB free; 14.77 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
cc @ezyang @soumith @wconstab @ngimel @bdhirsh
| 0 |
4,658 | 93,698 |
imagen inductor errors
|
triaged, oncall: pt2
|
## Repro
Another lucidrains model
`pip install imagen-pytorch`
```python
import torch
from imagen_pytorch import Unet, Imagen
import torchdynamo
# unet for imagen
unet1 = Unet(
dim = 32,
cond_dim = 512,
dim_mults = (1, 2, 4, 8),
num_resnet_blocks = 3,
layer_attns = (False, True, True, True),
layer_cross_attns = (False, True, True, True)
)
unet2 = Unet(
dim = 32,
cond_dim = 512,
dim_mults = (1, 2, 4, 8),
num_resnet_blocks = (2, 4, 8, 8),
layer_attns = (False, False, False, True),
layer_cross_attns = (False, False, False, True)
)
# imagen, which contains the unets above (base unet and super resoluting ones)
imagen = Imagen(
unets = (unet1, unet2),
image_sizes = (64, 256),
timesteps = 1000,
cond_drop_prob = 0.1
).cuda()
# mock images (get a lot of this) and text encodings from large T5
text_embeds = torch.randn(4, 256, 768).cuda()
images = torch.randn(4, 3, 256, 256).cuda()
# feed images into imagen, training each unet in the cascade
@torchdynamo.optimize('inductor')
def train():
for i in (1, 2):
loss = imagen(images, text_embeds = text_embeds, unet_number = i)
loss.backward()
train()
# do the above for many many many many steps
# now you can sample an image based on the text embeddings from the cascading ddpm
images = imagen.sample(texts = [
'a whale breaching from afar',
'young girl blowing out candles on her birthday cake',
'fireworks with blue and green sparkles'
], cond_scale = 3.)
images.shape # (3, 3, 256, 256)
```
## Logs
```
(dynamo) ubuntu@ip-172-31-31-152:~/tests$ python imggen.py
Downloading: 100%|βββββββββββββββββββββ| 605/605 [00:00<00:00, 484kB/s]
The base dimension of your u-net should ideally be no smaller than 128, as recommended by a professional DDPM trainer https://nonint.com/2022/05/04/friends-dont-let-friends-train-small-diffusion-models/
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /opt/conda/envs/dynamo/lib/python3.8/functools.py from user code at File "imggen.py", line 45, in train
loss = imagen(images, text_embeds = text_embeds, unet_number = i)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2451, in forward
cond_images = maybe(cast_uint8_images_to_float)(cond_images)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 44, in maybe
@wraps(fn)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(callable) [NestedUserFunctionVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2457, in <graph break in forward>
unet = default(unet, lambda: self.get_unet(unet_number))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 67, in default
return d() if callable(d) else d
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(delattr) [NNModuleVariable(), ConstantVariable(str)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2457, in <lambda>
unet = default(unet, lambda: self.get_unet(unet_number))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1928, in get_unet
delattr(self, 'unets')
torchdynamo.symbolic_convert: [WARNING] Graph break: non-const NNModule method to from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1933, in <graph break in get_unet>
unet.to(self.device if unet_index == index else 'cpu')
torchdynamo.symbolic_convert: [WARNING] Graph break: missing: BUILD_MAP_UNPACK_WITH_CALL from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2470, in <graph break in forward>
check_shape(images, 'b c ...', c = self.channels)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops_exts/einops_exts.py", line 12, in check_shape
return rearrange(tensor, f"{pattern} -> {pattern}", **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py", line 487, in rearrange
return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(_lru_cache_wrapper) [UserDefinedObjectVariable(TransformRecipe), ShapeVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py", line 233, in _apply_recipe
_reconstruct_from_shape(recipe, backend.shape(tensor))
torchinductor.graph: [WARNING] Creating implicit fallback for:
target: aten.uniform_.default
args[0]: TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float32,
constant(0, torch.float32),
ranges=[4],
origins={zeros}
)
))
args[1]: 0.0
args[2]: 0.999
torchinductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.uniform_.default
torchinductor.graph: [WARNING] Creating implicit fallback for:
target: aten.randn_like.default
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cuda', torch.float32, size=[s0, s1, s2, s2], stride=[s1*s2**2, s2**2, s2, 1]))
))
kwargs: {'dtype': torch.float32, 'layout': torch.strided, 'device': device(type='cuda', index=0), 'pin_memory': False}
torchinductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.randn_like.default
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(ScriptFunction) [TensorVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2361, in <graph break in p_losses>
x_noisy, log_snr = noise_scheduler.q_sample(x_start = x_start, t = times, noise = noise)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 251, in q_sample
log_snr = self.log_snr(t).type(dtype)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(ScriptFunction) [TensorVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 214, in <graph break in get_condition>
return maybe(self.log_snr)(times)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 48, in inner
return fn(x)
torchdynamo.variables.builtin: [WARNING] incorrect arg count <bound method BuiltinVariable.call_dict of BuiltinVariable(dict)> missing a required argument: 'arg'
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(dict) [] {'text_embeds': TensorVariable(), 'text_mask': TensorVariable(), 'cond_images': ConstantVariable(NoneType), 'lowres_noise_times': ConstantVariable(NoneType), 'lowres_cond_img': ConstantVariable(NoneType), 'cond_drop_prob': ConstantVariable(float)} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2377, in <graph break in p_losses>
unet_kwargs = dict(
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in p_losses> /opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py line 2377
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/nn_module.py", line 53, in unpack_var_sequence
assert isinstance(
AssertionError: Unet
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2393, in <graph break in p_losses>
if self_cond and random() < 0.5:
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.symbolic_convert: [WARNING] Graph break: non-const NNModule method _get_item_by_idx from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/container.py", line 107, in __getitem__
return self._get_item_by_idx(self._modules.values(), idx)
torchinductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.uniform_.default
torchdynamo.symbolic_convert: [WARNING] Graph break: missing: BUILD_MAP_UNPACK_WITH_CALL from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1586, in <graph break in forward>
text_tokens = self.attn_pool(text_tokens)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 444, in forward
latents = repeat(self.latents, 'n d -> b n d', b = x.shape[0])
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py", line 537, in repeat
return reduce(tensor, pattern, reduction='repeat', **axes_lengths)
torchdynamo.convert_frame: [ERROR] WON'T CONVERT forward /opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py line 363
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 1637, in LOAD_CLOSURE
self.push(self.closure_cells[inst.argval])
KeyError: 'fn'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 375, in forward
q, k, v = rearrange_many((q, k, v), 'b n (h d) -> b h n d', h = h)
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(Always) [TensorVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 713, in <graph break in forward>
h = h * self.gca(h)
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in forward> /opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py line 702
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 1637, in LOAD_CLOSURE
self.push(self.closure_cells[inst.argval])
KeyError: 'fn'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 713, in <graph break in forward>
h = h * self.gca(h)
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.convert_frame: [ERROR] WON'T CONVERT forward /opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py line 923
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 1637, in LOAD_CLOSURE
self.push(self.closure_cells[inst.argval])
KeyError: 'fn'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 925, in forward
x, context = rearrange_many((x, context), 'b n ... -> b n (...)')
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.convert_frame: [ERROR] WON'T CONVERT forward /opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py line 750
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 1637, in LOAD_CLOSURE
self.push(self.closure_cells[inst.argval])
KeyError: 'fn'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 758, in forward
q, k, v = rearrange_many((q, k, v), 'b n (h d) -> b h n d', h = self.heads)
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in forward> /opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py line 497
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 1637, in LOAD_CLOSURE
self.push(self.closure_cells[inst.argval])
KeyError: 'fn'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 502, in <graph break in forward>
nk, nv = repeat_many(self.null_kv.unbind(dim = -2), 'd -> b 1 d', b = b)
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.graph: [WARNING] Creating implicit fallback for:
target: aten.pixel_shuffle.default
args[0]: TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float32,
silu(load(buf0, i3 + 8 * i2 + 64 * i1 + 32768 * i0) + load(arg1_1, i1)),
ranges=[s2, 512, 8, 8],
origins={silu}
)
))
args[1]: 2
torchinductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.pixel_shuffle.default
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.pixel_shuffle.default
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.pixel_shuffle.default
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: 'forward' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py:640)
reasons: ['___check_obj_id(self, 140496746038752)']
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(ScriptFunction) [TensorVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 280, in predict_start_from_noise
log_snr = self.log_snr(t)
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: 'forward' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/container.py:202)
reasons: ['___check_obj_id(self, 140496750308800)']
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: '<graph break in _apply_recipe>' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py:233)
reasons: ['len(___stack0[0]) == 3']
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to complex input striding
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: 'reduce' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py:355)
reasons: ["set(axes_lengths.keys()) == {'c'}"]
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: '_apply_recipe' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py:229)
reasons: ["tensor 'tensor' requires_grad mismatch. expected requires_grad=0"]
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: 'rearrange' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/einops.py:428)
reasons: ["set(axes_lengths.keys()) == {'c'}"]
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchinductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.pixel_shuffle.default
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: 'reshape' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops/_backends.py:83)
reasons: ['len(shape) == 4']
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
Traceback (most recent call last):
File "imggen.py", line 48, in <module>
train()
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "imggen.py", line 42, in train
@torchdynamo.optimize('inductor')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2435, in forward
def forward(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2451, in <graph break in forward>
cond_images = maybe(cast_uint8_images_to_float)(cond_images)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2457, in <graph break in forward>
unet = default(unet, lambda: self.get_unet(unet_number))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2470, in <graph break in forward>
check_shape(images, 'b c ...', c = self.channels)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2487, in <graph break in forward>
text_masks = default(text_masks, lambda: torch.any(text_embeds != 0., dim = -1))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2313, in p_losses
def p_losses(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2333, in <graph break in p_losses>
noise = default(noise, lambda: torch.randn_like(x_start))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2338, in <graph break in p_losses>
lowres_cond_img = maybe(self.normalize_img)(lowres_cond_img)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2361, in <graph break in p_losses>
x_noisy, log_snr = noise_scheduler.q_sample(x_start = x_start, t = times, noise = noise)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2373, in <graph break in p_losses>
noise_cond = noise_scheduler.get_condition(times)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2381, in <graph break in p_losses>
lowres_noise_times = self.lowres_noise_schedule.get_condition(lowres_aug_times),
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 2407, in <graph break in p_losses>
pred = unet.forward(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1475, in forward
def forward(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1524, in <graph break in forward>
time_hiddens = self.to_time_hiddens(time)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1552, in <graph break in forward>
text_keep_mask_embed = rearrange(text_keep_mask, 'b -> b 1 1')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1552, in <graph break in forward>
text_keep_mask_embed = rearrange(text_keep_mask, 'b -> b 1 1')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1574, in <graph break in forward>
text_mask = rearrange(text_mask, 'b n -> b n 1')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 1651, in <graph break in forward>
x = init_block(x, t, c)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 697, in forward
def forward(self, x, time_emb = None, cond = None):
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 709, in <graph break in forward>
h = self.cross_attn(h, context = cond) + h
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/einops_exts/torch.py", line 17, in forward
x = self.fn(x, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/imagen_pytorch/imagen_pytorch.py", line 764, in forward
k = torch.cat((nk, k), dim = -2)
RuntimeError: Output 0 of ReshapeAliasBackward0 is a view and its base or another view of its base has been modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
```
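For context, the failing line combines a view created inside a custom autograd Function with a later in-place update, which is the pattern the error message describes. Below is a minimal sketch of that pattern and of the clone-based workaround the message itself suggests — hypothetical code, not the actual imagen_pytorch kernels:

```python
import torch

class PassThrough(torch.autograd.Function):
    """Toy custom Function that returns its input as-is, so autograd treats
    the output as a view created inside the Function (see the error text)."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

x = torch.randn(4, requires_grad=True)
out = PassThrough.apply(x)

# Workaround suggested by the error message: clone the Function's output
# before reshaping/concatenating it with tensors that may be modified in place.
k = out.clone().reshape(2, 2)
```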
cc @ezyang @soumith @wconstab @ngimel @bdhirsh
| 0 |
4,659 | 93,697 |
Inductor stable baselines assertion errors
|
triaged, oncall: pt2
|
Follow-up to pytorch/torchdynamo#797.
`pip install stable-baselines3[extra]`
## Repro
```python
from stable_baselines3 import PPO
import torchdynamo
@torchdynamo.optimize("inductor")
def train():
model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
import time
tic = time.time()
train()
toc = time.time()
print(toc - tic)
```
Lots of assertion errors
## Logs
```
(dynamo) ubuntu@ip-172-31-31-152:~/tests$ python baseline.py
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method GetAttrVariable(UserDefinedObjectVariable(PPO), policy_aliases) __contains__ [ConstantVariable(str)] {} from user code at File "baseline.py", line 8, in train
model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/ppo/ppo.py", line 102, in __init__
super().__init__(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 75, in __init__
super().__init__(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 106, in __init__
self.policy_class = self._get_policy_from_name(policy)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 340, in _get_policy_from_name
if policy_name in self.policy_aliases:
torchdynamo.convert_frame: [ERROR] WON'T CONVERT get_device /opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/utils.py line 134
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/tensor.py", line 268, in create
assert (
AssertionError: torch.* op returned non-Tensor bool call_function <function is_available at 0x7f3904cc4310>
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/utils.py", line 151, in get_device
if device.type == th.device("cuda").type and not th.cuda.is_available():
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 83, in __init__
np.finfo(np.float32).max,
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 85, in <graph break in __init__>
np.finfo(np.float32).max,
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 80, in <graph break in __init__>
high = np.array(
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 90, in <graph break in __init__>
self.action_space = spaces.Discrete(2)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/spaces/discrete.py", line 17, in __init__
super(Discrete, self).__init__((), np.int64, seed)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/spaces/space.py", line 23, in __init__
self.dtype = None if dtype is None else np.dtype(dtype)
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in __init__> /opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py line 90
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/functions.py", line 34, in wrap_bound_arg
assert isinstance(val, VariableTracker), typestr(val)
AssertionError: type
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 91, in <graph break in __init__>
self.observation_space = spaces.Box(-high, high, dtype=np.float32)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT __init__ /opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py line 413
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/functions.py", line 34, in wrap_bound_arg
assert isinstance(val, VariableTracker), typestr(val)
AssertionError: type
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 440, in __init__
super().__init__(
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT __init__ /opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py line 269
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/functions.py", line 34, in wrap_bound_arg
assert isinstance(val, VariableTracker), typestr(val)
AssertionError: type
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 270, in __init__
super().__init__(*args, **kwargs)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.symbolic_convert: [WARNING] Graph break: Patched init cannot be inlined. from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 70, in __init__
super().__init__()
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /opt/conda/envs/dynamo/lib/python3.8/functools.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/torch_layers.py", line 44, in __init__
super().__init__(observation_space, get_flattened_obs_dim(observation_space))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/preprocessing.py", line 177, in get_flattened_obs_dim
return spaces.utils.flatdim(observation_space)
torchdynamo.symbolic_convert: [WARNING] Graph break: Patched init cannot be inlined. from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/torch_layers.py", line 22, in __init__
super().__init__()
torchdynamo.symbolic_convert: [WARNING] Graph break: Patched init cannot be inlined. from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 542, in _build
self._build_mlp_extractor()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 528, in _build_mlp_extractor
self.mlp_extractor = MlpExtractor(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/torch_layers.py", line 172, in __init__
super().__init__()
torchdynamo.symbolic_convert: [WARNING] Graph break: Patched init cannot be inlined. from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/container.py", line 85, in __init__
super(Sequential, self).__init__()
torchdynamo.symbolic_convert: [WARNING] Graph break: construct nn.Module: Linear from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 555, in <graph break in _build>
self.action_net = self.action_dist.proba_distribution_net(latent_dim=latent_dim_pi)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/distributions.py", line 271, in proba_distribution_net
action_logits = nn.Linear(latent_dim, self.action_dim)
torchdynamo.symbolic_convert: [WARNING] Graph break: construct nn.Module: Linear from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 559, in <graph break in _build>
self.value_net = nn.Linear(self.mlp_extractor.latent_dim_vf, 1)
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 568, in <graph break in _build>
self.features_extractor: np.sqrt(2),
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 569, in <graph break in _build>
self.mlp_extractor: np.sqrt(2),
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in _build> /opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py line 569
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 891, in BUILD_MAP
assert isinstance(k, ConstantVariable) or (
AssertionError
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 567, in <graph break in _build>
module_gains = {
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.symbolic_convert: [WARNING] Graph break: hasattr: GetAttrVariable(UserDefinedClassVariable(), step) from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/optim/optimizer.py", line 49, in __init__
self._hook_for_profile()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/optim/optimizer.py", line 130, in _hook_for_profile
hooked = getattr(self.__class__.step, "hooked", None)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /opt/conda/envs/dynamo/lib/python3.8/collections/__init__.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/optim/optimizer.py", line 56, in <graph break in __init__>
self.state = defaultdict(dict)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(list) [UserDefinedObjectVariable(generator)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/optim/optimizer.py", line 59, in <graph break in __init__>
param_groups = list(params)
torchdynamo.symbolic_convert: [WARNING] Graph break: generic_jump GetAttrVariable(TensorVariable(), is_leaf) from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/optim/optimizer.py", line 66, in <graph break in __init__>
self.add_param_group(param_group)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/optim/optimizer.py", line 309, in add_param_group
if not self.defaults.get('differentiable', None) and not (param.is_leaf or param.retains_grad):
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(time) [] {} from user code at File "baseline.py", line 8, in <graph break in train>
model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/ppo/ppo.py", line 310, in learn
return super().learn(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 239, in learn
total_timesteps, callback = self._setup_learn(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 425, in _setup_learn
self.start_time = time.time()
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /opt/conda/envs/dynamo/lib/python3.8/collections/__init__.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 429, in <graph break in _setup_learn>
self.ep_info_buffer = deque(maxlen=100)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /opt/conda/envs/dynamo/lib/python3.8/collections/__init__.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 430, in <graph break in _setup_learn>
self.ep_success_buffer = deque(maxlen=100)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method UserDefinedObjectVariable(RandomState) uniform [] {'low': ConstantVariable(float), 'high': ConstantVariable(float), 'size': ConstantVariable(tuple)} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 446, in <graph break in _setup_learn>
self._last_obs = self.env.reset() # pytype: disable=annotation-type-mismatch
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 63, in reset
obs = self.envs[env_idx].reset()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/monitor.py", line 79, in reset
return self.env.reset(**kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/wrappers/time_limit.py", line 27, in reset
return self.env.reset(**kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 162, in reset
self.state = self.np_random.uniform(low=-0.05, high=0.05, size=(4,))
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 447, in <graph break in _setup_learn>
self._last_episode_starts = np.ones((self.env.num_envs,), dtype=bool)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /opt/conda/envs/dynamo/lib/python3.8/os.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 459, in <graph break in _setup_learn>
self._logger = utils.configure_logger(self.verbose, self.tensorboard_log, tb_log_name, reset_num_timesteps)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/utils.py", line 210, in configure_logger
return configure(save_path, format_strings=format_strings)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/logger.py", line 596, in configure
folder = os.getenv("SB3_LOGDIR")
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(locals) [] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 243, in <graph break in learn>
callback.on_training_start(locals(), globals())
torchdynamo.symbolic_convert: [WARNING] Graph break: UnspecializedNNModuleVariable missing train from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 247, in <graph break in learn>
continue_training = self.collect_rollouts(self.env, callback, self.rollout_buffer, n_rollout_steps=self.n_steps)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 148, in collect_rollouts
self.policy.set_training_mode(False)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 215, in set_training_mode
self.train(mode)
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 151, in <graph break in collect_rollouts>
rollout_buffer.reset()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 360, in reset
self.observations = np.zeros((self.buffer_size, self.n_envs) + self.obs_shape, dtype=np.float32)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function args: NumpyVariable() from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 165, in <graph break in collect_rollouts>
obs_tensor = obs_as_tensor(self._last_obs, self.device)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/utils.py", line 451, in obs_as_tensor
return th.as_tensor(obs).to(device)
torchinductor.ir: [WARNING] DeviceCopy
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to multiple devices
torchdynamo.symbolic_convert: [WARNING] /opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/distributions/categorical.py <function Categorical.sample at 0x7f3903e27b80> [] {} missing a required argument: 'self'
torchdynamo.symbolic_convert: [WARNING] Graph break: arg mismatch inlining from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 166, in <graph break in collect_rollouts>
actions, values, log_probs = self.policy(obs_tensor)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 593, in forward
actions = distribution.get_actions(deterministic=deterministic)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/distributions.py", line 80, in get_actions
return self.sample()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/distributions.py", line 285, in sample
return self.distribution.sample()
torchdynamo.symbolic_convert: [WARNING] /opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/distributions/categorical.py <function Categorical.sample at 0x7f3903e27b80> [] {} missing a required argument: 'self'
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(islice) [UserDefinedObjectVariable(odict_values), ConstantVariable(int), ConstantVariable(NoneType)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/container.py", line 100, in _get_item_by_idx
return next(islice(iterator, idx, None))
torchdynamo.symbolic_convert: [WARNING] /opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/distributions/categorical.py <function Categorical.sample at 0x7f3903e27b80> [] {} missing a required argument: 'self'
torchdynamo.symbolic_convert: [WARNING] /opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/distributions/categorical.py <function Categorical.sample at 0x7f3903e27b80> [] {} missing a required argument: 'self'
torchdynamo.symbolic_convert: [WARNING] /opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/distributions/categorical.py <function Categorical.log_prob at 0x7f3903e27c10> [TensorVariable()] {} missing a required argument: 'value'
torchdynamo.symbolic_convert: [WARNING] Graph break: arg mismatch inlining from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/policies.py", line 594, in <graph break in forward>
log_prob = distribution.log_prob(actions)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/distributions.py", line 279, in log_prob
return self.distribution.log_prob(actions)
torchdynamo.symbolic_convert: [WARNING] /opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/distributions/categorical.py <function Categorical.log_prob at 0x7f3903e27c10> [TensorVariable()] {} missing a required argument: 'value'
torchdynamo.symbolic_convert: [WARNING] Graph break: Tensor.numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 167, in <graph break in collect_rollouts>
actions = actions.cpu().numpy()
torchinductor.ir: [WARNING] DeviceCopy
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to multiple devices
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method ConstantVariable(str) __contains__ [GetAttrVariable(GetAttrVariable(ConstantVariable(int64), dtype), char)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 175, in <graph break in collect_rollouts>
new_obs, rewards, dones, infos = env.step(clipped_actions)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/vec_env/base_vec_env.py", line 162, in step
return self.step_wait()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 43, in step_wait
obs, self.buf_rews[env_idx], self.buf_dones[env_idx], self.buf_infos[env_idx] = self.envs[env_idx].step(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/monitor.py", line 90, in step
observation, reward, done, info = self.env.step(action)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/wrappers/time_limit.py", line 18, in step
observation, reward, done, info = self.env.step(action)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 105, in step
assert self.action_space.contains(action), err_msg
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/gym/spaces/discrete.py", line 26, in contains
x.dtype.char in np.typecodes["AllInteger"] and x.shape == ()
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(locals) [] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 180, in <graph break in collect_rollouts>
callback.update_locals(locals())
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 184, in <graph break in collect_rollouts>
self._update_info_buffer(infos)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 475, in _update_info_buffer
dones = np.array([False] * len(infos))
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method ConstDictVariable() get [ConstantVariable(str)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 477, in <graph break in _update_info_buffer>
maybe_ep_info = info.get("episode")
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method ConstDictVariable() get [ConstantVariable(str)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 478, in <graph break in _update_info_buffer>
maybe_is_success = info.get("is_success")
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 189, in <graph break in collect_rollouts>
actions = actions.reshape(-1, 1)
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 436, in add
self.observations[self.pos] = np.array(obs).copy()
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 437, in <graph break in add>
self.actions[self.pos] = np.array(action).copy()
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 438, in <graph break in add>
self.rewards[self.pos] = np.array(reward).copy()
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 439, in <graph break in add>
self.episode_starts[self.pos] = np.array(episode_start).copy()
torchdynamo.symbolic_convert: [WARNING] Graph break: Tensor.numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 440, in <graph break in add>
self.values[self.pos] = value.clone().cpu().numpy().flatten()
torchinductor.ir: [WARNING] DeviceCopy
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to multiple devices
Traceback (most recent call last):
File "baseline.py", line 13, in <module>
train()
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "baseline.py", line 6, in train
@torchdynamo.optimize("inductor")
File "baseline.py", line 8, in <graph break in train>
model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/ppo/ppo.py", line 297, in learn
def learn(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 239, in learn
total_timesteps, callback = self._setup_learn(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 239, in <graph break in learn>
total_timesteps, callback = self._setup_learn(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 243, in <graph break in learn>
callback.on_training_start(locals(), globals())
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 243, in <graph break in learn>
callback.on_training_start(locals(), globals())
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 243, in <graph break in learn>
callback.on_training_start(locals(), globals())
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 126, in collect_rollouts
def collect_rollouts(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 148, in <graph break in collect_rollouts>
self.policy.set_training_mode(False)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 151, in <graph break in collect_rollouts>
rollout_buffer.reset()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 165, in <graph break in collect_rollouts>
obs_tensor = obs_as_tensor(self._last_obs, self.device)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 166, in <graph break in collect_rollouts>
actions, values, log_probs = self.policy(obs_tensor)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 167, in <graph break in collect_rollouts>
actions = actions.cpu().numpy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 175, in <graph break in collect_rollouts>
new_obs, rewards, dones, infos = env.step(clipped_actions)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 180, in <graph break in collect_rollouts>
callback.update_locals(locals())
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 180, in <graph break in collect_rollouts>
callback.update_locals(locals())
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 184, in <graph break in collect_rollouts>
self._update_info_buffer(infos)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 189, in <graph break in collect_rollouts>
actions = actions.reshape(-1, 1)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 204, in <graph break in collect_rollouts>
rollout_buffer.add(self._last_obs, actions, rewards, self._last_episode_starts, values, log_probs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 408, in add
def add(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 436, in <graph break in add>
self.observations[self.pos] = np.array(obs).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 436, in <graph break in add>
self.observations[self.pos] = np.array(obs).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 436, in <graph break in add>
self.observations[self.pos] = np.array(obs).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 437, in <graph break in add>
self.actions[self.pos] = np.array(action).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 437, in <graph break in add>
self.actions[self.pos] = np.array(action).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 437, in <graph break in add>
self.actions[self.pos] = np.array(action).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 438, in <graph break in add>
self.rewards[self.pos] = np.array(reward).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 438, in <graph break in add>
self.rewards[self.pos] = np.array(reward).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 438, in <graph break in add>
self.rewards[self.pos] = np.array(reward).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 439, in <graph break in add>
self.episode_starts[self.pos] = np.array(episode_start).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 439, in <graph break in add>
self.episode_starts[self.pos] = np.array(episode_start).copy()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/stable_baselines3/common/buffers.py", line 439, in <graph break in add>
self.episode_starts[self.pos] = np.array(episode_start).copy()
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
```
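For reference, the final `RuntimeError` is the standard autograd guard on `Tensor.numpy()`. A minimal illustration of the failure and of the `detach()` pattern the message itself suggests — a generic example, not a proposed patch to stable-baselines3 or to the dynamo-traced code:

```python
import torch

value = torch.ones(1, requires_grad=True)

# value.cpu().numpy()  # raises: Can't call numpy() on Tensor that requires grad.

# The detach-based variant the error message recommends:
flat = value.detach().cpu().numpy().flatten()
print(flat)  # [1.]
```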
cc @ezyang @soumith @wconstab @ngimel @bdhirsh
| 1 |
4,660 | 85,757 |
[ONNX] Conversion failed when using dict as input to a scripted module
|
module: onnx, triaged, onnx-triaged
|
### 🐛 Describe the bug
```python
import torch
from typing import Dict


class Module(torch.nn.Module):
    def forward(
        self, x: Dict[str, torch.Tensor]
    ) -> Dict[str, torch.Tensor]:
        return x


model = torch.jit.script(Module())
x = {"input": torch.zeros(1)}
torch.onnx.export(model, (x, {}), "out", opset_version=9)  # ERR
```
```
Traceback (most recent call last):
File "test.py", line 14, in <module>
torch.onnx.export(model, (x, {}), "out", opset_version=9) # ERR
File "/opt/homebrew/lib/python3.9/site-packages/torch/onnx/__init__.py", line 350, in export
return utils.export(
File "/opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py", line 163, in export
_export(
File "/opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py", line 1074, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py", line 747, in _model_to_graph
example_outputs = _get_example_outputs(model, args)
File "/opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py", line 630, in _get_example_outputs
example_outputs = model(*input_args, **input_kwargs)
File "/opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: forward() is missing value for argument 'x'. Declaration: forward(__torch__.Module self, Dict(str, Tensor) x) -> (Dict(str, Tensor))
```
Using a dict as input raises a runtime error when exporting a scripted module to ONNX.
It seems the inputs are not passed to the model as they should be, since in

> File .../torch/onnx/utils.py, line 630, in `_get_example_outputs`
> `example_outputs = model(*input_args, **input_kwargs)`

the variables `input_args` and `input_kwargs` are
```
input_args=()
input_kwargs={'input': tensor([0.])}
```
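Put differently, with the values above the call on line 630 effectively becomes the following (a small reconstruction using `model` and `x` from the repro; the variable names are taken from `_get_example_outputs`):

```python
# The dict argument is no longer kept as a positional arg; it was unpacked as kwargs.
input_args = ()
input_kwargs = {"input": torch.zeros(1)}

# model(*input_args, **input_kwargs)  # RuntimeError: forward() is missing value for argument 'x'

model({"input": torch.zeros(1)})      # the call that was presumably intended; returns the dict
```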
### Versions
PyTorch version: 1.12.0
Is debug build: False
OS: macOS 12.2.1 (arm64)
Python version: 3.9.13 (main, May 24 2022, 21:13:51) [Clang 13.1.6 (clang-1316.0.21.2)] (64-bit runtime)
| 1 |
4,661 | 93,696 |
Minifier should not produce repro with backward call if it is not necessary to trigger error
|
triaged, oncall: pt2
|
At the moment the minifier unconditionally calls `run_fwd_maybe_bwd`, but the backward call is unnecessary to trigger the error in many cases.
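A rough sketch of the suggested behavior — try a forward-only run first and keep the backward call in the generated repro only when it is actually needed. `run_fwd_maybe_bwd` is the existing helper referenced above; everything else here is an illustrative name, not the real minifier API:

```python
def choose_repro_mode(mod, args, run_fwd_maybe_bwd):
    """Decide whether the emitted repro script needs a backward pass.
    `run_fwd_maybe_bwd` is the existing helper, passed in to keep this sketch
    self-contained."""
    try:
        mod(*args)                  # forward-only attempt
    except Exception:
        return "forward_only"       # error already reproduces without backward
    run_fwd_maybe_bwd(mod, args)    # fall back to the current forward+backward path
    return "forward_and_backward"
```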
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,662 | 93,695 |
Composer inductor errors
|
triaged, module: fsdp, oncall: pt2
|
As a follow-up to https://github.com/pytorch/torchdynamo/issues/887, which worked with the eager backend.
## Repro
`pip install mosaicml`
```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from composer import Trainer
from composer.algorithms import ChannelsLast, CutMix, LabelSmoothing
from composer.models import mnist_model
import torchdynamo

transform = transforms.Compose([transforms.ToTensor()])

train_dataset = datasets.MNIST("data", download=True, train=True, transform=transform)
eval_dataset = datasets.MNIST("data", download=True, train=False, transform=transform)

train_dataloader = DataLoader(train_dataset, batch_size=128)
eval_dataloader = DataLoader(eval_dataset, batch_size=128)

trainer = Trainer(
    model=mnist_model(),
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    max_duration="2ep",
    algorithms=[
        ChannelsLast(),
        CutMix(alpha=1.0),
        LabelSmoothing(smoothing=0.1),
    ]
)

@torchdynamo.optimize("inductor")
def train():
    trainer.fit()

import time
tic = time.time()
train()
toc = time.time()
print(toc - tic)
```
Tried running the same file on inductor and a number of warnings come up (a small debugging sketch follows this list):
* import deepspeed is not working even though I don't think this program is using deepspeed?
* A few `UserDefinedObjectVariable` not a constant errors
* An assertion error on buffer strides
* numpy int64 not implemented
* keyerror _IPYTHON
* torch op returned non tensor
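A small sketch of how these could be narrowed down, reusing `trainer` from the repro above. The logs themselves suggest `torchdynamo.config.verbose=True`; treat this as a debugging aid rather than part of the repro:

```python
import torchdynamo

# Surface the full stack traces behind the "WON'T CONVERT" messages,
# as the log output recommends.
torchdynamo.config.verbose = True

@torchdynamo.optimize("inductor")
def train_verbose():
    trainer.fit()  # `trainer` is the Trainer constructed in the repro above

train_verbose()
```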
## Logs
```
(dynamo) ubuntu@ip-172-31-31-152:~/tests$ python comp.py
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to data/MNIST/raw/train-images-idx3-ubyte.gz
100%|██████████████████| 9912422/9912422 [00:00<00:00, 34569393.90it/s]
Extracting data/MNIST/raw/train-images-idx3-ubyte.gz to data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to data/MNIST/raw/train-labels-idx1-ubyte.gz
100%|█████████████████████| 28881/28881 [00:00<00:00, 139076571.55it/s]
Extracting data/MNIST/raw/train-labels-idx1-ubyte.gz to data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to data/MNIST/raw/t10k-images-idx3-ubyte.gz
100%|███████████████████| 1648877/1648877 [00:00<00:00, 7789551.97it/s]
Extracting data/MNIST/raw/t10k-images-idx3-ubyte.gz to data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to data/MNIST/raw/t10k-labels-idx1-ubyte.gz
100%|████████████████████████| 4542/4542 [00:00<00:00, 34637325.03it/s]
Extracting data/MNIST/raw/t10k-labels-idx1-ubyte.gz to data/MNIST/raw
/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py:852: UserWarning: No optimizer was specified. Defaulting to DecoupledSGDW(lr=0.1)
warnings.warn(('No optimizer was specified. Defaulting to '
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(EnumMeta) [EnumVariable(<enum 'TimeUnit'>)] {} from user code at File "comp.py", line 29, in train
trainer.fit()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1379, in fit
if self.state.max_duration <= self.state.timestamp.get(self.state.max_duration.unit) and not reset_time:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/time.py", line 559, in get
unit = TimeUnit(unit)
torchdynamo.symbolic_convert: [WARNING] Graph break: inline in skipfiles: info /opt/conda/envs/dynamo/lib/python3.8/logging/__init__.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1555, in _train_loop
log.info('Using precision %s', self.state.precision)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(EnumMeta) [EnumVariable(<enum 'Event'>)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1563, in <graph break in _train_loop>
self.engine.run_event(Event.FIT_START)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 210, in run_event
event = Event(event)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(int) [UserDefinedObjectVariable(Time)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 212, in <graph break in run_event>
self._debug_log(event, 'Running event')
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 419, in _debug_log
timestamp = f'[ep={int(self.state.timestamp.epoch)}][ba={int(self.state.timestamp.batch)}]'
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(sorted) [ListVariable()] {'key': NestedUserFunctionVariable()} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 315, in _run_algorithms
algorithms_to_run = self._compile(algorithms_to_run, event)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 369, in _compile
algorithms_to_run = passes(algorithms_to_run, event)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/passes.py", line 91, in sort_selective_backprop_first
return sort_to_front(algorithms, cls=SelectiveBackprop)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/passes.py", line 54, in sort_to_front
return sorted(list_to_sort, key=lambda x: not isinstance(x, cls))
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /opt/conda/envs/dynamo/lib/python3.8/collections/__init__.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 317, in <graph break in _run_algorithms>
trace = _setup_trace(algorithms_to_run, event)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 147, in _setup_trace
return OrderedDict([(f'{algo}/{event}', Trace(name=algo.__class__.__name__)) for algo in algorithms])
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(EnumMeta) [EnumVariable(<enum 'Event'>)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 382, in _run_callbacks
event = Event(event)
torchdynamo.symbolic_convert: [WARNING] Graph break: inline in skipfiles: __init__ /opt/conda/envs/dynamo/lib/python3.8/contextlib.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/engine.py", line 405, in <graph break in _run_callbacks>
ctx = cast(ContextManager, contextlib.nullcontext()) if marker is None else marker
torchdynamo.convert_frame: [ERROR] WON'T CONVERT run_event /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/callback.py line 87
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/base.py", line 144, in as_python_constant
raise NotImplementedError(f"{self} is not a constant")
NotImplementedError: GetAttrVariable(EnumVariable(<enum 'Event'>), value) is not a constant
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/callback.py", line 95, in run_event
event_cb = getattr(self, event.value)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in _train_loop> /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py line 1563
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 530, in IMPORT_NAME
value = __import__(
ModuleNotFoundError: No module named 'deepspeed'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1565, in <graph break in _train_loop>
use_grad_scaling = self._use_grad_scaling(self.state.precision, self.state.scaler)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT _use_grad_scaling /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py line 2435
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 530, in IMPORT_NAME
value = __import__(
ModuleNotFoundError: No module named 'deepspeed'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 2451, in _use_grad_scaling
if self.deepspeed_enabled:
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT deepspeed_enabled /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py line 1149
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 530, in IMPORT_NAME
value = __import__(
ModuleNotFoundError: No module named 'deepspeed'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1155, in deepspeed_enabled
return is_model_deepspeed(self.state.model)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT is_model_deepspeed /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/misc.py line 16
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 530, in IMPORT_NAME
value = __import__(
ModuleNotFoundError: No module named 'deepspeed'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/misc.py", line 19, in is_model_deepspeed
import deepspeed
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.symbolic_convert: [WARNING] Graph break: inline in skipfiles: debug /opt/conda/envs/dynamo/lib/python3.8/logging/__init__.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1514, in _spin_dataloaders
log.debug('Spinning the dataloaders')
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(int) [TensorVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchvision/datasets/mnist.py", line 138, in __getitem__
img, target = self.data[index], int(self.targets[index])
torchdynamo.symbolic_convert: [WARNING] Graph break: Tensor.numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchvision/datasets/mnist.py", line 142, in <graph break in __getitem__>
img = Image.fromarray(img.numpy(), mode="L")
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method GetAttrVariable(NumpyVariable(), __array_interface__) __getitem__ (ConstantVariable(str),) {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchvision/datasets/mnist.py", line 142, in <graph break in __getitem__>
img = Image.fromarray(img.numpy(), mode="L")
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/PIL/Image.py", line 2944, in fromarray
shape = arr["shape"]
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(Compose) [UserDefinedObjectVariable(Image)] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchvision/datasets/mnist.py", line 145, in <graph break in __getitem__>
img = self.transform(img)
torchdynamo.convert_frame: [ERROR] WON'T CONVERT to_tensor /opt/conda/envs/dynamo/lib/python3.8/site-packages/torchvision/transforms/functional.py line 122
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/tensor.py", line 268, in create
assert (
AssertionError: torch.* op returned non-Tensor dtype call_function <built-in function get_default_dtype>
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 142, in to_tensor
default_float_dtype = torch.get_default_dtype()
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [WARNING] torchdynamo hit config.cache_size_limit (64)
function: '__getitem__' (/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchvision/datasets/mnist.py:130)
reasons: ['index == 0']
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo.symbolic_convert: [WARNING] Graph break: hasattr: TensorVariable() from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchmetrics/metric.py", line 591, in __hash__
if hasattr(val, "__iter__") and not isinstance(val, Tensor):
torchdynamo.symbolic_convert: [WARNING] Graph break: hasattr: TensorVariable() from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchmetrics/metric.py", line 594, in <graph break in __hash__>
hash_vals.append(val)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function BuiltinVariable(hash) [TupleVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchmetrics/metric.py", line 596, in <graph break in __hash__>
return hash(tuple(hash_vals))
torchdynamo.convert_frame: [ERROR] WON'T CONVERT epoch_start /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/loggers/progress_bar_logger.py line 308
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/tensor.py", line 268, in create
assert (
AssertionError: torch.* op returned non-Tensor bool call_function <function is_available at 0x7f1edb2a0430>
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/loggers/progress_bar_logger.py", line 309, in epoch_start
if self.show_pbar and not self.train_pbar:
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT show_pbar /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/loggers/progress_bar_logger.py line 167
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/tensor.py", line 268, in create
assert (
AssertionError: torch.* op returned non-Tensor bool call_function <function is_available at 0x7f1edb2a0430>
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/loggers/progress_bar_logger.py", line 169, in show_pbar
return self._show_pbar and dist.get_local_rank() == 0
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT _get_distributed_config_var /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/dist.py line 75
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/tensor.py", line 268, in create
assert (
AssertionError: torch.* op returned non-Tensor bool call_function <function is_available at 0x7f1edb2a0430>
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/dist.py", line 81, in _get_distributed_config_var
if not dist.is_available():
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT _build_pbar /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/loggers/progress_bar_logger.py line 234
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 446, in LOAD_GLOBAL
value = self.f_globals[name]
KeyError: '__IPYTHON__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 567, in load_builtin
assert inst.argval in self.f_builtins
AssertionError
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/loggers/progress_bar_logger.py", line 244, in _build_pbar
position = None if is_notebook() else 1
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.ir: [WARNING] DeviceCopy | 0/469 [00:00
torchinductor.ir: [WARNING] DeviceCopy
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to multiple devices
torchdynamo.symbolic_convert: [WARNING] Graph break: hasattr: TensorVariable() from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/data_spec.py", line 197, in _default_get_num_samples_in_batch
if not hasattr(t, 'shape'):
torchdynamo.symbolic_convert: [WARNING] Graph break: hasattr: TensorVariable() from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/data_spec.py", line 202, in <graph break in _default_get_num_samples_in_batch>
dim0_sizes.append(t.shape[0])
torchdynamo.symbolic_convert: [WARNING] Graph break: setattr(UserDefinedObjectVariable) <function Metric.__setattr__ at 0x7f1eed36b550> from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1788, in _train_batch
metric.reset()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchmetrics/metric.py", line 402, in reset
self._update_called = False
torchdynamo.symbolic_convert: [WARNING] Graph break: setattr(UserDefinedObjectVariable) <function Metric.__setattr__ at 0x7f1eed36b550> from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torchmetrics/metric.py", line 411, in <graph break in reset>
setattr(self, attr, default.detach().clone().to(current_val.device))
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in _train_batch> /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py line 1788
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 530, in IMPORT_NAME
value = __import__(
ModuleNotFoundError: No module named 'deepspeed'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1795, in <graph break in _train_batch>
if self._use_closures():
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.ir: [WARNING] DeviceCopy
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to multiple devices
torchdynamo.convert_frame: [ERROR] WON'T CONVERT _use_closures /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py line 2496
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 530, in IMPORT_NAME
value = __import__(
ModuleNotFoundError: No module named 'deepspeed'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 2502, in _use_closures
if self.deepspeed_enabled:
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.symbolic_convert: [WARNING] Graph break: inline in skipfiles: __init__ /opt/conda/envs/dynamo/lib/python3.8/contextlib.py from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1856, in _train_microbatches
with context():
torchdynamo.variables.torch: [WARNING] Profiler will be ignored
torchdynamo.symbolic_convert: [WARNING] Graph break: copy.copy/copy.deepcopy called on non-tensor from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1905, in _train_microbatch
device_batch = deepcopy(self.state.batch)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_id with args (ListVariable(),) from user code at File "/opt/conda/envs/dynamo/lib/python3.8/copy.py", line 137, in deepcopy
d = id(x)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_id with args (ListVariable(),) from user code at File "/opt/conda/envs/dynamo/lib/python3.8/copy.py", line 202, in _deepcopy_list
memo[id(x)] = y
torchdynamo.symbolic_convert: [WARNING] Graph break: copy.copy/copy.deepcopy does not have 1 argument from user code at File "/opt/conda/envs/dynamo/lib/python3.8/copy.py", line 205, in <graph break in _deepcopy_list>
append(deepcopy(a, memo))
torchdynamo.symbolic_convert: [WARNING] Graph break: call_function UserDefinedObjectVariable(_default_get_num_samples_in_batch) [ListVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1907, in <graph break in _train_microbatch>
microbatch_num_samples = self._train_data_spec.get_num_samples_in_batch(self.state.batch)
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in _train_microbatch> /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py line 1907
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 530, in IMPORT_NAME
value = __import__(
ModuleNotFoundError: No module named 'deepspeed'
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1908, in <graph break in _train_microbatch>
sync_context = contextlib.nullcontext() if self.deepspeed_enabled else ddp_sync_context(
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT apply /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/algorithms/cutmix/cutmix.py line 219
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/base.py", line 144, in as_python_constant
raise NotImplementedError(f"{self} is not a constant")
NotImplementedError: UserDefinedObjectVariable(_GenericAlias) is not a constant
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/algorithms/cutmix/cutmix.py", line 220, in apply
input = state.batch_get_item(key=self.input_key)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT batch_get_item /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/state.py line 388
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/base.py", line 144, in as_python_constant
raise NotImplementedError(f"{self} is not a constant")
NotImplementedError: UserDefinedObjectVariable(_GenericAlias) is not a constant
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/state.py", line 404, in batch_get_item
return batch_get(self.batch, key)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT batch_get /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/batch_helpers.py line 12
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/base.py", line 144, in as_python_constant
raise NotImplementedError(f"{self} is not a constant")
NotImplementedError: UserDefinedObjectVariable(_GenericAlias) is not a constant
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/batch_helpers.py", line 40, in batch_get
if (isinstance(key, Sequence) and not isinstance(key, str) and _is_key_get_and_set_fn_pair(key)):
Set torchdynamo.config.verbose=True for more information
==========
torchinductor.ir: [WARNING] Using FallbackKernel: aten.randperm
torchdynamo.symbolic_convert: [WARNING] Graph break: numpy from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/algorithms/cutmix/cutmix.py", line 118, in cutmix_batch
cutmix_lambda = _gen_cutmix_coef(alpha)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/algorithms/cutmix/cutmix.py", line 325, in _gen_cutmix_coef
cutmix_lambda = np.random.beta(alpha, alpha)
torchdynamo.symbolic_convert: [WARNING] Graph break: partial tensor op: BuiltinVariable(getitem) [TensorVariable(), TupleVariable()] {} from user code at File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/algorithms/cutmix/cutmix.py", line 136, in <graph break in cutmix_batch>
X_cutmix[:, :, rx:rw, ry:rh] = X_cutmix[shuffled_idx, :, rx:rw, ry:rh]
torchinductor.compile_fx: [WARNING] skipping cudagraphs due to multiple devices
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in cutmix_batch> /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/algorithms/cutmix/cutmix.py line 136
due to:
Traceback (most recent call last):
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/fx/proxy.py", line 166, in create_arg
raise NotImplementedError(f"argument of type: {type(a)}")
NotImplementedError: argument of type: <class 'numpy.int64'>
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/algorithms/cutmix/cutmix.py", line 136, in <graph break in cutmix_batch>
X_cutmix[:, :, rx:rw, ry:rh] = X_cutmix[shuffled_idx, :, rx:rw, ry:rh]
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT batch_set_item /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/state.py line 406
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/base.py", line 144, in as_python_constant
raise NotImplementedError(f"{self} is not a constant")
NotImplementedError: UserDefinedObjectVariable(_GenericAlias) is not a constant
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/core/state.py", line 425, in batch_set_item
self.batch = batch_set(self.batch, key=key, value=value)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT batch_set /opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/batch_helpers.py line 61
due to:
Traceback (most recent call last):
File "/home/ubuntu/torchdynamo/torchdynamo/variables/base.py", line 144, in as_python_constant
raise NotImplementedError(f"{self} is not a constant")
NotImplementedError: UserDefinedObjectVariable(_GenericAlias) is not a constant
from user code:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/utils/batch_helpers.py", line 97, in batch_set
if (isinstance(key, Sequence) and not isinstance(key, str) and _is_key_get_and_set_fn_pair(key)):
Set torchdynamo.config.verbose=True for more information
==========
Traceback (most recent call last):
File "comp.py", line 33, in <module>
train()
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "comp.py", line 27, in train
@torchdynamo.optimize("inductor")
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1218, in fit
def fit(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1457, in <graph break in fit>
self._train_loop()
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1552, in _train_loop
def _train_loop(self) -> None:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1555, in <graph break in _train_loop>
log.info('Using precision %s', self.state.precision)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1613, in <graph break in _train_loop>
total_loss_dict = self._train_batch(use_grad_scaling)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1766, in _train_batch
def _train_batch(self, use_grad_scaling: bool) -> Dict[str, torch.Tensor]:
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1805, in <graph break in _train_batch>
self._train_microbatches(microbatches, total_loss_dict)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1835, in _train_microbatches
def _train_microbatches(self,
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1873, in <graph break in _train_microbatches>
microbatch_loss_dict = self._train_microbatch(use_grad_scaling, current_batch_size, is_final_microbatch)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1891, in _train_microbatch
def _train_microbatch(self, use_grad_scaling: bool, current_batch_size: int,
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1905, in <graph break in _train_microbatch>
device_batch = deepcopy(self.state.batch)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/trainer/trainer.py", line 1919, in <graph break in _train_microbatch>
self.state.outputs = self.state.model(self.state.batch)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/composer/models/tasks/classification.py", line 104, in forward
def forward(self, batch: Tuple[Tensor, Any]) -> Tensor:
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 904, in forward
return compiled_f(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 895, in new_func
return compiled_fn(args)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 228, in g
return f(*args)
File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 354, in forward
fw_outs = call_func_with_args(
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 250, in call_func_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/dynamo/lib/python3.8/site-packages/functorch/_src/aot_autograd.py", line 228, in g
return f(*args)
File "/home/ubuntu/torchdynamo/torchinductor/compile_fx.py", line 125, in run
compiled_fn = cudagraphify_impl(model, new_inputs, static_input_idxs)
File "/home/ubuntu/torchdynamo/torchinductor/compile_fx.py", line 165, in cudagraphify_impl
model(*inputs)
File "/tmp/torchinductor_ubuntu/tl/ctlxby7hf5bmlaxx2jqq3rgece36mjri4jn7smw54hxr7lgtrkaq.py", line 517, in call
assert buf5.stride() == (18432, 576, 24, 1)
AssertionError
train Epoch 0: 0%| | 0/469 [02:4
```
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @ezyang @soumith @wconstab @ngimel @bdhirsh
| 2 |
4,663 | 93,694 |
Minifier dumps checkpoints which don't actually reproduce the error
|
triaged, bug, oncall: pt2, module: minifier
|
Steps to reproduce:
1. Checkout 07d794094bd4eaf1ad946429044f1c742f278d6b of pytorch (at time of writing, this is tip of symbolic-shapes branch)
2. Checkout e50d6695990fda7a0dfde460c6c6a2f93e1c8d1d of torchdynamo (at time of writing, this is tip of accuracy-minifier branch)
3. Intentionally introduce a failure in pytorch
```
diff --git a/torch/_decomp/__init__.py b/torch/_decomp/__init__.py
index 9512b0f38a..7f3d29290e 100644
--- a/torch/_decomp/__init__.py
+++ b/torch/_decomp/__init__.py
@@ -100,7 +100,6 @@ def register_decomposition(aten_op, registry=None, *, disable_meta: bool = False
if op_overload in registry:
raise RuntimeError(f"duplicate registrations for {op_overload}")
registry[op_overload] = fn
- op_overload.py_impl(torch._C.DispatchKey.Meta)(fn)
# TODO: factor this logic into OpOverload or Library API
name = op_overload._schema.name
if op_overload._schema.overload_name:
```
4. Trigger the error and try to minify it `(cd ../torchdynamo && TORCHDYNAMO_REPRO_AFTER=dynamo TORCH_SHOW_CPP_STACKTRACES=1 AOT_FX_GRAPHS_JOINT=1 TORCHDYNAMO_DYNAMIC_SHAPES=1 AOT_DYNAMIC_SHAPES=1 time python benchmarks/torchbench.py --only BERT_pytorch --accuracy --backend aot_eager --training)`
This produces the checkpoint /tmp/minifier_ezyang/checkpoints/2.py:
```
import torch
import torchdynamo
from torch import tensor, device
import torch.fx as fx
from torchdynamo.testing import rand_strided
from math import inf
from torchdynamo.debug_utils import run_fwd_maybe_bwd
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, embedding):
return (embedding,)
mod = Repro().cuda()
opt_mod = torchdynamo.optimize("aot_eager")(mod)
args = [((2, 128, 768), (98304, 768, 1), torch.float32, 'cuda', True)]
args = [rand_strided(sh, st, dt, dev).requires_grad_(rg) for (sh, st, dt, dev, rg) in args]
with torch.cuda.amp.autocast(enabled=False):
ref = run_fwd_maybe_bwd(mod, args)
res = run_fwd_maybe_bwd(opt_mod, args)
```
However this script runs without reproducing the error. It looks like the final repro.py DOES reproduce the error, so I am confused why there's a checkpoint that doesn't do it.
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 0 |
4,664 | 85,745 |
[Quant] Remove or clarify the meaning of Nones in QConfig/BackendConfig
|
oncall: quantization, low priority, triaged
|
There are many instances of potential `None`s throughout QConfig/BackendConfig, e.g.
```
dtype_with_constraints.dtype
dtype_config.is_dynamic
qconfig.activation
qconfig.weight
get_target_activation_dtype_for_node(...) # in fx/prepare.py
```
We can probably remove the `Optional` around some of these, such as `dtype_config.is_dynamic`. For others, we should clearly document what `None` means.
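As a purely illustrative sketch (hypothetical names, not the actual `DTypeConfig` definition), the two options could look like:
```python
from dataclasses import dataclass
from typing import Optional

import torch

@dataclass
class DTypeConfigSketch:  # hypothetical stand-in, not the real BackendConfig class
    # Option "document it": None means "this backend places no constraint on the dtype".
    input_dtype: Optional[torch.dtype] = None
    # Option "remove the Optional": pick an explicit, documented default instead of None.
    is_dynamic: bool = False
```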
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 1 |
4,665 | 85,706 |
RuntimeError: [enforce fail at CPUAllocator.cpp:68] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 4860032 bytes. Error code 12 (Cannot allocate memory)
|
oncall: jit
|
### π Describe the bug
The interpreter fails after multiple executions on a set of images. It throws the following runtime error:
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/ai360_net.py", line 34, in forward
confidences = []
locations = []
x0 = (self.base_net).forward(x, )
~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_1 = self.extras
_2 = getattr(_1, "0")
File "code/__torch__/torch/nn/modules/container/___torch_mangle_189.py", line 47, in forward
input0 = (_0).forward(input, )
input1 = (_1).forward(input0, )
input2 = (_2).forward(input1, )
~~~~~~~~~~~ <--- HERE
input3 = (_3).forward(input2, )
input4 = (_4).forward(input3, )
File "code/__torch__/ai360_net/___torch_mangle_16.py", line 13, in forward
_0 = torch.add(x, (self.conv).forward(x, ), alpha=1)
else:
_0 = (self.conv).forward(x, )
~~~~~~~~~~~~~~~~~~ <--- HERE
return _0
File "code/__torch__/torch/nn/modules/container/___torch_mangle_15.py", line 23, in forward
_6 = getattr(self, "6")
_7 = getattr(self, "7")
input0 = (_0).forward(input, )
~~~~~~~~~~~ <--- HERE
input1 = (_1).forward(input0, )
input2 = (_2).forward(input1, )
File "code/__torch__/torch/nn/modules/conv/___torch_mangle_10.py", line 9, in forward
def forward(self: __torch__.torch.nn.modules.conv.___torch_mangle_10.Conv2d,
input: Tensor) -> Tensor:
_0 = (self).conv2d_forward(input, self.weight, )
~~~~~~~~~~~~~~~~~~~~ <--- HERE
return _0
def conv2d_forward(self: __torch__.torch.nn.modules.conv.___torch_mangle_10.Conv2d,
File "code/__torch__/torch/nn/modules/conv/___torch_mangle_10.py", line 29, in conv2d_forward
_11 = _4
else:
_11 = torch.conv2d(input, weight, None, [1, 1], [0, 0], [1, 1], 1)
~~~~~~~~~~~~ <--- HERE
return _11
Traceback of TorchScript, original code (most recent call last):
File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
def forward(self, input):
for module in self:
input = module(input)
~~~~~~ <--- HERE
return input
File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
def forward(self, input):
for module in self:
input = module(input)
~~~~~~ <--- HERE
return input
File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 345, in forward
def forward(self, input):
return self.conv2d_forward(input, self.weight)
~~~~~~~~~~~~~~~~~~~ <--- HERE
File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 341, in conv2d_forward
weight, self.bias, self.stride,
_pair(0), self.dilation, self.groups)
return F.conv2d(input, weight, self.bias, self.stride,
~~~~~~~~ <--- HERE
self.padding, self.dilation, self.groups)
RuntimeError: [enforce fail at CPUAllocator.cpp:68] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 4860032 bytes. Error code 12 (Cannot allocate memory)
### Versions
pod 'LibTorch'
Xcode Version 13.4.1 (13F100)
Device details: it is reproducible only on iPhone XR & iPhone X, iOS 15.6.1
| 0 |
4,666 | 85,699 |
Add vector-Jacobian-products for a subset of nvFuser-supported prims; add backward support for nvprims
|
triaged, open source, cla signed, module: nvfuser, module: primTorch, no-stale
|
NOTE: Currently, this PR is blocked until https://github.com/pytorch/pytorch/issues/85696 is resolved.
`vjp_implementations["prim_name"]` is a callable that computes vector-Jacobian products for given cotangents, forward results, and forward inputs. And this function is used in the `torch.autograd.Function.backward` method.
Here's an example trace for the `torch.sigmoid` function:
```py
import torch
from torch._prims.context import NvfuserPrimsMode, TorchRefsMode
from torch.fx.experimental.proxy_tensor import make_fx
a = torch.randn(3, 3, device='cuda', requires_grad=True)
g = torch.ones_like(a)
def func(a):
c = torch.sigmoid(a)
return torch.autograd.grad(c, a, g)
# torch._refs.sigmoid trace
with NvfuserPrimsMode(), TorchRefsMode():
make_fx(func)(a).graph.print_tabular()
```
```
opcode name target args kwargs
------------- ------------------- ------------------------- -------------------------- --------
placeholder a_1 a_1 () {}
call_function neg nvprims.neg.default (a_1,) {}
call_function detach aten.detach.default (neg,) {}
call_function exp nvprims.exp.default (neg,) {}
call_function detach_1 aten.detach.default (exp,) {}
call_function add nvprims.add.default (1.0, exp) {}
call_function detach_2 aten.detach.default (add,) {}
call_function div nvprims.div.default (1.0, add) {}
call_function detach_3 aten.detach.default (div,) {}
get_attr _tensor_constant0 _tensor_constant0 () {}
call_function is_same_size aten.is_same_size.default (div, _tensor_constant0) {}
call_function detach_4 aten.detach.default (detach_3,) {}
call_function detach_5 aten.detach.default (detach_3,) {}
get_attr _tensor_constant0_1 _tensor_constant0 () {}
call_function div_1 nvprims.div.default (_tensor_constant0_1, add) {}
get_attr _tensor_constant0_2 _tensor_constant0 () {}
call_function neg_1 nvprims.neg.default (_tensor_constant0_2,) {}
call_function mul nvprims.mul.default (neg_1, 1.0) {}
call_function pow_1 nvprims.pow.default (add, -2) {}
call_function mul_1 nvprims.mul.default (mul, pow_1) {}
call_function detach_6 aten.detach.default (detach_2,) {}
call_function detach_7 aten.detach.default (detach_2,) {}
call_function view_of nvprims.view_of.default (mul_1,) {}
call_function view_of_1 nvprims.view_of.default (mul_1,) {}
call_function detach_8 aten.detach.default (detach_1,) {}
call_function detach_9 aten.detach.default (detach_1,) {}
call_function mul_2 nvprims.mul.default (view_of_1, detach_9) {}
call_function detach_10 aten.detach.default (detach,) {}
call_function detach_11 aten.detach.default (detach,) {}
call_function neg_2 nvprims.neg.default (mul_2,) {}
output output output ((neg_2,),) {}
```
`aten.detach` and `aten.is_same_size`, which are called somewhere in the autograd engine code, appear in the trace, but other than that the trace consists of `nvprims` calls.
Thanks to @rdspring1 for implementing several VJP functions!
cc @kevinstephano @jjsjann123 @ezyang @mruberry @ngimel @Lezcano @fdrocha @peterbell10
| 5 |
4,667 | 85,698 |
AMP consumes 30x gpu memory with bmm
|
triaged, module: amp (automated mixed precision)
|
### π Describe the bug
AMP consumes about 30x the GPU memory when used with `bmm`.
Code:
```
import torch
import torch.nn as nn
class MyModule(nn.Module):
def __init__(self):
super().__init__()
base = torch.randn((41292, 64))
self.register_buffer("base", base)
def forward(self, tensor):
batch_size = tensor.shape[0]
tensor = tensor.unsqueeze(-1) # (B, 64, 1)
return self.base.expand(batch_size, -1, -1).bmm(tensor)
device = torch.device('cuda')
model = MyModule().to(device)
for use_amp in [True, False]:
for i in range(10):
with torch.autocast(
device_type="cuda", dtype=torch.float16, enabled=use_amp
):
output = model(torch.randn((1024, 64), requires_grad=True, device=device))
torch.cuda.empty_cache()
print("GPU Memory Used:", torch.cuda.memory_allocated())
```
Result:
```
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 5508148224
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
GPU Memory Used: 180702208
```
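A possible explanation (unverified) is that autocast casts the already-expanded `base` to fp16, which materializes a full `(B, 41292, 64)` half-precision copy (roughly 5.4 GB for B=1024) instead of keeping the expand as a zero-stride view. A minimal, untested workaround sketch under that assumption is to cast the small buffer before expanding:
```python
def forward(self, tensor):
    batch_size = tensor.shape[0]
    tensor = tensor.unsqueeze(-1)  # (B, 64, 1)
    base = self.base
    if torch.is_autocast_enabled():
        # Cast the small (41292, 64) buffer once; autocast then leaves the already-fp16
        # expanded view alone instead of materializing a (B, 41292, 64) copy.
        base = base.to(torch.get_autocast_gpu_dtype())
    return base.expand(batch_size, -1, -1).bmm(tensor)
```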
### Versions
```
Collecting environment information...
PyTorch version: 1.10.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.10
Python version: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-126-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 510.85.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] pytorch3d==0.6.2
[pip3] torch==1.10.0+cu113
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.10.0+cu113
[pip3] torchfile==0.1.0
[pip3] torchvision==0.11.1+cu113
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] cudatoolkit-dev 11.4.0 h5e8e339_5 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-fft 1.3.0 pypi_0 pypi
[conda] mkl-random 1.2.1 pypi_0 pypi
[conda] mkl-service 2.3.0 pypi_0 pypi
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.20.2 pypi_0 pypi
[conda] numpy-base 1.21.5 py37ha15fc14_3
[conda] pytorch-msssim 0.2.1 pypi_0 pypi
[conda] torch 1.10.0+cu113 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.10.0+cu113 pypi_0 pypi
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchtext 0.10.0 py37 pytorch
[conda] torchvision 0.11.1+cu113 pypi_0 pypi
```
cc @mcarilli @ptrblck
| 5 |
4,668 | 93,693 |
TorchInductor CUDA memory leak / memory corruption debugging master task
|
module: cuda, triaged, bug, oncall: pt2
|
There's been an increase in memory and corruption bugs surfaced in inductor tests, mostly through the op tests.
ex: [#1362](https://github.com/pytorch/torchdynamo/pull/1362) - among others.
Of all the currently enabled ops, these were detected as leaking memory through `compute-sanitizer`. There may be more leaks in other ops; we have not checked them all, only those enabled in the op info test as of https://github.com/pytorch/torchdynamo/pull/1340.
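For reference, a minimal sketch (assuming the `torchdynamo.optimize` API used elsewhere in this task) for exercising one of the listed ops, e.g. `__rmatmul__` in float16, through inductor in isolation so it can be run under `compute-sanitizer` externally:
```python
import torch
import torchdynamo

def fn(a, b):
    return b.__rmatmul__(a)  # same result as a @ b, but routed through __rmatmul__

opt_fn = torchdynamo.optimize("inductor")(fn)
a = torch.randn(8, 8, device="cuda", dtype=torch.float16)
b = torch.randn(8, 8, device="cuda", dtype=torch.float16)
out = opt_fn(a, b)
torch.cuda.synchronize()
```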
- [x] 1. ___rmatmul__ float16
- [x] 2. ___rmatmul__ float32
- [x] 3. ___rmatmul__ float64
- [ ] 4. __masked_cumprod float32
- [ ] 5. __masked_cumprod float64
- [ ] 6. __masked_cumprod int32
- [ ] 7. __masked_cumprod int64
- [ ] 8. __masked_cumsum float16
- [ ] 9. __masked_cumsum float32
- [ ] 10. __masked_cumsum float64
- [ ] 11. __masked_cumsum int32
- [ ] 12. __masked_cumsum int64
- [ ] 13. __masked_logsumexp float16
- [ ] 14. __masked_logsumexp float32
- [ ] 15. __masked_logsumexp int32
- [ ] 16. __masked_logsumexp int64
- [ ] 17. __masked_norm float16
- [ ] 18. __masked_norm float32
- [ ] 19. _allclose float16
- [ ] 20. _allclose float32
- [ ] 21. _allclose float64
- [ ] 22. _argwhere bool
- [ ] 23. _argwhere float16
- [ ] 24. _argwhere float32
- [ ] 25. _argwhere float64
- [ ] 26. _bernoulli float32
- [ ] 27. _bernoulli float64
- [ ] 28. _bincount int32
- [ ] 29. _bincount int64
- [ ] 30. _combinations bool
- [ ] 31. _combinations float16
- [ ] 32. _combinations float32
- [ ] 33. _combinations float64
- [ ] 34. _combinations int32
- [ ] 35. _combinations int64
- [ ] 36. _copysign float16
- [ ] 37. _copysign float32
- [ ] 38. _copysign float64
- [ ] 39. _corrcoef float16
- [ ] 40. _corrcoef float32
- [ ] 41. _cov float16
- [ ] 42. _cov float32
- [ ] 43. _cov float64
- [ ] 44. _cov int32
- [ ] 45. _dist float32
- [ ] 46. _dist float64
- [ ] 47. _equal bool
- [ ] 48. _erf float16
- [ ] 49. _erf float32
- [ ] 50. _fmax float16
- [ ] 51. _fmax float32
- [ ] 52. _fmin float16
- [ ] 53. _fmin float32
- [ ] 54. _fmin float64
- [ ] 55. _jiterator_2inputs_2outputs bool
- [ ] 56. _jiterator_4inputs_with_extra_args float16
- [ ] 57. _jiterator_4inputs_with_extra_args float32
- [ ] 58. _jiterator_4inputs_with_extra_args float64
- [ ] 59. _jiterator_binary float64
- [ ] 60. _jiterator_binary int32
- [ ] 61. _jiterator_binary int64
- [ ] 62. _jiterator_binary_return_by_ref bool
- [ ] 63. _jiterator_binary_return_by_ref int32
- [ ] 64. _jiterator_unary float16
- [ ] 65. _jiterator_unary float32
- [ ] 66. _linalg_cholesky float32
- [ ] 67. _linalg_inv float32
- [ ] 68. _linalg_inv float64
- [ ] 69. _linalg_norm float16
- [ ] 70. _linalg_norm float32
- [ ] 71. _linalg_qr float32
- [ ] 72. _linalg_qr float64
- [ ] 73. _logsumexp float32
- [ ] 74. _nn_functional__scaled_dot_product_attention float16
- [ ] 75. _nn_functional__scaled_dot_product_attention float32
- [ ] 76. _nn_functional__scaled_dot_product_attention float64
- [ ] 77. _nn_functional_feature_alpha_dropout_without_train float32
- [ ] 78. _nn_functional_fractional_max_pool2d float16
- [ ] 79. _nn_functional_fractional_max_pool2d float32
- [ ] 80. _nn_functional_fractional_max_pool2d float64
- [ ] 81. _nn_functional_fractional_max_pool3d float16
- [ ] 82. _nn_functional_fractional_max_pool3d float32
- [ ] 83. _nn_functional_fractional_max_pool3d float64
- [ ] 84. _nn_functional_hardswish float32
- [ ] 85. _nn_functional_interpolate_bicubic float16
- [ ] 86. _nn_functional_interpolate_bicubic float32
- [ ] 87. _nn_functional_interpolate_bicubic float64
- [ ] 88. _nn_functional_triplet_margin_with_distance_loss float16
- [ ] 89. _nn_functional_triplet_margin_with_distance_loss float32
- [ ] 90. _nn_functional_triplet_margin_with_distance_loss float64
- [ ] 91. _nn_functional_triplet_margin_with_distance_loss int32
- [ ] 92. _nn_functional_triplet_margin_with_distance_loss int64
- [ ] 93. _pca_lowrank float32
- [ ] 94. _rand_like float16
- [ ] 95. _rand_like float32
- [ ] 96. _rand_like float64
- [ ] 97. _randint_like float16
- [ ] 98. _randint_like float32
- [ ] 99. _randint_like float64
- [ ] 100. _randn_like float16
- [ ] 101. _randn_like float32
- [ ] 102. _randn_like float64
- [ ] 103. _svd_lowrank float32
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
4,669 | 85,671 |
nn.Embedding weights are not synced across processes with DistributedDataParallel when other parameters are present
|
oncall: distributed, module: ddp
|
### π Describe the bug
If our model only contains a single `nn.Embedding` and no other parameters, then running DistributedDataParallel on two RTX 3090s leads to the expected behavior: the embeddings are the same across both processes. However, adding a `nn.Linear` (which isn't used at all) leads to the embeddings only being locally updated on each process.
```python
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.multiprocessing as mp
import torch.distributed as dist
class TestModel(nn.Module):
def __init__(self):
super().__init__()
self.embedding = nn.Embedding(num_embeddings=2, embedding_dim=3)
self.embedding.weight.data.fill_(0)
# self.fc = nn.Linear(1, 1) # Note: uncommenting this results in the bug
def forward(self, idx):
return self.embedding(idx).mean(dim=-1)
def main_worker(rank):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
dist.init_process_group("gloo", rank=rank, world_size=2)
model = TestModel().to(rank)
model = torch.nn.parallel.DistributedDataParallel(model)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# dataset
ids = torch.arange(2)
targets = torch.arange(2).float() + 1
dataset = torch.utils.data.TensorDataset(ids, targets)
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset, shuffle=True)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False,
sampler=train_sampler, drop_last=True)
print(f"Rank={rank} before update:", rank, model.module.embedding.weight)
# 1 epoch of training
for (id, target) in train_loader:
preds = model(id.to(rank))
optimizer.zero_grad()
loss = F.mse_loss(preds, target.to(rank))
loss.backward()
optimizer.step()
print(f"Rank={rank} after update:", rank, model.module.embedding.weight)
dist.destroy_process_group()
if __name__ == '__main__':
mp.spawn(main_worker, nprocs=2)
```
Output with just `nn.Embedding`, which matches expected behavior:
```
Rank=1 before update: 1 Parameter containing:
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:1', requires_grad=True)
Rank=0 before update: 0 Parameter containing:
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:0', requires_grad=True)
Rank=1 after update: 1 Parameter containing:
tensor([[0.0333, 0.0333, 0.0333],
[0.0667, 0.0667, 0.0667]], device='cuda:1', requires_grad=True)
Rank=0 after update: 0 Parameter containing:
tensor([[0.0333, 0.0333, 0.0333],
[0.0667, 0.0667, 0.0667]], device='cuda:0', requires_grad=True)
```
Output when other parameters are present (embedding weights are not synced):
```
Rank=1 before update: 1 Parameter containing:
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:1', requires_grad=True)
Rank=1 after update: 1 Parameter containing:
tensor([[0.0000, 0.0000, 0.0000],
[0.1333, 0.1333, 0.1333]], device='cuda:1', requires_grad=True)
Rank=0 before update: 0 Parameter containing:
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:0', requires_grad=True)
Rank=0 after update: 0 Parameter containing:
tensor([[0.0667, 0.0667, 0.0667],
[0.0000, 0.0000, 0.0000]], device='cuda:0', requires_grad=True)
```
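An untested diagnostic sketch that makes the divergence explicit by gathering the embedding weights across ranks after training (gloo supports `all_gather` on CPU tensors); this assumes it runs where `model`, `rank`, and `dist` from the repro above are in scope:
```python
# Untested diagnostic sketch: compare the embedding weights across ranks explicitly.
w = model.module.embedding.weight.detach().cpu()
gathered = [torch.zeros_like(w) for _ in range(dist.get_world_size())]
dist.all_gather(gathered, w)
if rank == 0:
    print("max cross-rank difference:", (gathered[0] - gathered[1]).abs().max().item())
```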
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] _pytorch_select 0.1 cpu_0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libmklml 2019.0.5 h06a4308_0
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.3.0 py38h54f3939_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py38_cu113 pytorch
[conda] torchvision 0.12.0 py38_cu113 pytorch
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 4 |
4,670 | 85,663 |
[ONNX] torch/onnx is using rank to differentiate between ScalarType and TensorType
|
module: onnx, triaged, onnx-triaged
|
### π Describe the bug
ONNX uses the following condition:
https://github.com/pytorch/pytorch/blob/6a04df3ac85714b33dcb2af20d0eeea96d131d75/torch/csrc/jit/passes/onnx/scalar_type_analysis.cpp#L209
to differentiate between ScalarType and TensorType, which is revealed as a bug by the following case:
```python
import torch
from io import BytesIO
import onnx
class MulLeakyReLU(torch.nn.Module):
def __init__(self):
super().__init__()
self.m = torch.nn.LeakyReLU(negative_slope=0.1)
def forward(self, x):
return x * self.m(x)
f = BytesIO()
torch.onnx.export(
MulLeakyReLU(),
torch.randn((), dtype=torch.float64),
f,
verbose=True,
opset_version=14,
)
onnx_model = onnx.load_from_string(f.getvalue())
onnx.checker.check_model(onnx_model, full_check=True)
assert MulLeakyReLU()(torch.randn((), dtype=torch.float64)).dtype == torch.float64
onnx_output_type = onnx_model.graph.output[0].type.tensor_type.elem_type
assert (
onnx_output_type == onnx.TensorProto.DataType.DOUBLE
), f"Expected output to be double but the converted ONNX model outputs is {onnx.TensorProto.DataType.Name(onnx_output_type)}"
"""
Exported graph: graph(%input : Double(requires_grad=0, device=cpu)):
%/m/LeakyRelu_output_0 : Double(requires_grad=0, device=cpu) = onnx::LeakyRelu[alpha=0.10000000000000001, onnx_name="/m/LeakyRelu"](%input), scope: __main__.MulLeakyReLU::/torch.nn.modules.activation.LeakyReLU::m
%/Cast_output_0 : Float(requires_grad=0, device=cpu) = onnx::Cast[to=1, onnx_name="/Cast"](%input), scope: __main__.MulLeakyReLU:: # test.py:14:0
%/Cast_1_output_0 : Float(requires_grad=0, device=cpu) = onnx::Cast[to=1, onnx_name="/Cast_1"](%/m/LeakyRelu_output_0), scope: __main__.MulLeakyReLU:: # test.py:14:0
%5 : Float(requires_grad=0, device=cpu) = onnx::Mul[onnx_name="/Mul"](%/Cast_output_0, %/Cast_1_output_0), scope: __main__.MulLeakyReLU:: # test.py:14:0
return (%5)
Traceback (most recent call last):
File "test.py", line 29, in <module>
assert (
AssertionError: Expected output to be double but the converted ONNX model outputs is FLOAT
"""
```
NOTE: a tensor with no rank is recognized as a scalar.
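An untested workaround sketch following from that observation: exporting with a rank>=1 example input avoids the rank-0 "scalar" classification:
```python
# Untested sketch: a rank-1 example input avoids the rank-0 "scalar" path.
torch.onnx.export(
    MulLeakyReLU(),
    torch.randn((1,), dtype=torch.float64),
    f,
    opset_version=14,
)
```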
Actionable items:
1. Explore in tracing to see if we can get scalartype/tensortype information from upstream.
2. SymFloat might be a great help.
### Versions
Nightly
| 9 |
4,671 | 85,660 |
[functorch] CUDA Graph failure with AOTAutograd
|
triaged, module: cuda graphs, module: functorch
|
### π Describe the bug
We are having issues with one of the latest commits on functorch. Namely [PR 85301](https://github.com/pytorch/pytorch/pull/85301) is breaking our TIMM benchmark with CUDA Graphs:
Before PyTorch commit `5f623f5c4c759cada9f0dc3866b63c906178dbc1` it was working fine
```
Benchmarking in float32 precision. NCHW layout. torchscript disabled
Model adv_inception_v3 created, param count: 23834568
Running train benchmark on adv_inception_v3 for 40 steps w/ input size (3, 224, 224) and batch size 32.
/home/aaitzhan/git/pytorch-master/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
warnings.warn("The TorchScript type system doesn't support "
Train [8/40]. 199.36 samples/sec. 160.512 ms/step.
Train [16/40]. 196.29 samples/sec. 163.025 ms/step.
Train [24/40]. 196.33 samples/sec. 162.992 ms/step.
Train [32/40]. 196.01 samples/sec. 163.257 ms/step.
Train [40/40]. 196.39 samples/sec. 162.943 ms/step.
Train benchmark of adv_inception_v3 done. 196.38 samples/sec, 162.94 ms/sample
--result
{
"model": "adv_inception_v3",
"train_samples_per_sec": 196.38,
"train_step_time": 162.943,
"train_batch_size": 32,
"train_img_size": 224,
"param_count": 23.83
}
```
However on PyTorch commit `5f623f5c4c759cada9f0dc3866b63c906178dbc1` and newer it is failing:
```
Benchmarking in float32 precision. NCHW layout. torchscript disabled
Model adv_inception_v3 created, param count: 23834568
Running train benchmark on adv_inception_v3 for 40 steps w/ input size (3, 224, 224) and batch size 32.
/home/aaitzhan/git/pytorch-master/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
warnings.warn("The TorchScript type system doesn't support "
ERROR: "CUDA error: operation failed due to a previous error during capture
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1." while running benchmark.
```
Steps to reproduce:
```
git clone --recursive https://github.com/xwang233/pytorch-image-models.git
cd pytorch-image-models/
git checkout 094755c9ed1f92575ed74bc29d09fba11d96a3bc
TIMM_BENCHMARK_ENABLE_CUDAGRAPH=1 python -u ./benchmark.py --bench train --model adv_inception_v3 --img-size 224 -b 32 --fuser nvfuser --aot-autograd
```
CC @ezyang
### Versions
PyTorch upstream.
cc @mcarilli @ezyang @zou3519 @Chillee @samdow @soumith
| 10 |
4,672 | 85,656 |
[functorch] conv.{1, 2, 3}d should raise errors
|
triaged, module: functorch
|
### π Describe the bug
We discovered it in #85202.
`conv.{1, 2, 3}d` should raise a *RuntimeError* when the bias shape does not equal the number of output channels. Instead, in functorch, it throws a *ValueError*.
#### Current behavior
```python
In [3]: input = torch.rand((1, 1, 4, 4))
In [4]: weight = torch.rand((1, 1, 3, 2))
In [5]: bias = torch.rand((2,))
In [6]: vmap(torch.nn.functional.conv2d)(input, weight, bias)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 vmap(torch.nn.functional.conv2d)(input, weight, bias)
File ~/Documents/pytorch/functorch/_src/vmap.py:348, in vmap.<locals>.wrapped(*args, **kwargs)
345 @functools.wraps(func)
346 def wrapped(*args, **kwargs):
347 _check_out_dims_is_int_or_int_pytree(out_dims, func)
--> 348 batch_size, flat_in_dims, flat_args, args_spec = _process_batched_inputs(in_dims, args, func)
349 return _flat_vmap(
350 func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs
351 )
File ~/Documents/pytorch/functorch/_src/vmap.py:102, in _process_batched_inputs(in_dims, args, func)
99 if in_dim is not None and in_dim < 0:
100 flat_in_dims[i] = in_dim % arg.dim()
--> 102 return _validate_and_get_batch_size(flat_in_dims, flat_args), flat_in_dims, flat_args, args_spec
File ~/Documents/pytorch/functorch/_src/vmap.py:35, in _validate_and_get_batch_size(flat_in_dims, flat_args)
33 raise ValueError('vmap: Expected at least one Tensor to vmap over')
34 if batch_sizes and any(size != batch_sizes[0] for size in batch_sizes):
---> 35 raise ValueError(
36 f'vmap: Expected all tensors to have the same size in the mapped '
37 f'dimension, got sizes {batch_sizes} for the mapped dimension')
38 return batch_sizes[0]
ValueError: vmap: Expected all tensors to have the same size in the mapped dimension, got sizes [1, 1, 2] for the mapped dimension
```
#### Expected
```python
In [7]: torch.nn.functional.conv2d(input, weight, bias)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [7], in <cell line: 1>()
----> 1 torch.nn.functional.conv2d(input, weight, bias)
RuntimeError: Given weight of size [1, 1, 3, 2], expected bias to be 1-dimensional with 1 elements, but got bias of size [2] instead
```
The CI test raises *AssertionError* stating `Exception not raised`.
Link: https://github.com/pytorch/pytorch/actions/runs/3093401271/jobs/5005991218. Currently, these test cases are skipped in the `test_vmap` file.
### Versions
`1.13.0a0+git83c73f7`
cc @zou3519 @Chillee @samdow @soumith
| 0 |
4,673 | 85,652 |
CUDA allocator feature requests
|
module: cuda, triaged, module: CUDACachingAllocator
|
### π The feature, motivation and pitch
I'm working on torch for R, a project that exposes similar functionality as PyTorch for the R language.
Tensors are exposed to R as very thin wrappers around LibTorch tensors (basically just pointers to the actual LibTorch tensor), thus R is not aware at all of memory used by torch Tensors.
However, R is a garbage collected language and its GC is very lazy. It's only called when R really needs another chunk of memory from the OS. This causes tensors to keep dangling on the session using the valuable GPU memory. To solve this problem we've been using the `FreeMemoryCallbacks` functionality that will call the R GC whenever getting a free block fails:
https://github.com/pytorch/pytorch/blob/89896b8778b76685c4fc40d1f8ce337d36105c02/c10/cuda/CUDACachingAllocator.cpp#L607-L611
In theory this solves the problem, but it adds a lot of overhead (calling a full garbage collection is very slow) and the callbacks are called almost for every CUDA allocation if the session still didn't 'reserve' a good chunk of memory.
Like Python's garbage collector, R's is generational garbage collector, so you can choose to do faster collections. But with the `FreeMemoryCallbacks` API we don't have enough information to decide whether we should do a full collection or a faster one (we end up needing to do full collections to support peak memory usage) because
1. The callbacks don't know how much memory needs to be allocated, nor in which device.
2. Either all callbacks are called or no callback is called - as you can see the `trigger_free_memory_callbacks` implementation.
https://github.com/pytorch/pytorch/blob/89896b8778b76685c4fc40d1f8ce337d36105c02/c10/cuda/CUDACachingAllocator.cpp#L1285-L1292
### Alternatives
I'm proposing two alternatives, but open to other suggestions.
1. Passing `device` and `size` (or possibly `AllocParams`) to the callback `Execute` method defined in:
https://github.com/pytorch/pytorch/blob/89896b8778b76685c4fc40d1f8ce337d36105c02/c10/cuda/CUDACachingAllocator.h#L16-L20 This way we could query for how much memory the faster collection released and if necessary run a full collection.
2. Introduce some notion of priority to the callbacks and call them in order until it was possible to release enough memory. For example, modifying `trigger_free_memory_callback` to something like:
```c++
bool trigger_free_memory_callbacks(AllocParams& p) {
for (const auto& name : FreeCudaMemoryCallbacksRegistry()->Keys()) {
if (FreeCudaMemoryCallbacksRegistry()->Create(name)->Execute() && get_free_block(p)) {
return true;
}
}
return false;
}
```
Thank you very much for taking a look at this!
### Additional context
_No response_
cc @ngimel
| 1 |
4,674 | 85,642 |
Could not run 'aten::native_batch_norm' with arguments from the 'SparseCUDA' backend. using batch_norm
|
module: sparse, triaged
|
### π Describe the bug
Getting an error when training the model below with sparse data using batch_norm.
**PyTorch** version 1.12.1+cu113.
Looking at the stack trace, it happens at the BatchNorm step. Is there also a workaround for this if it is currently a limitation?
Model
```python
ModelCNN(
(batch_norm1): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout1): Dropout(p=0.1, inplace=False)
(dense1): Linear(in_features=1024, out_features=2048, bias=True)
(batch_norm_c1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout_c1): Dropout(p=0.1, inplace=False)
(conv1): Conv1d(256, 512, kernel_size=(5,), stride=(1,), padding=(2,), bias=False)
(ave_po_c1): AdaptiveAvgPool1d(output_size=4)
(batch_norm_c2): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout_c2): Dropout(p=0.1, inplace=False)
(conv2): Conv1d(512, 512, kernel_size=(3,), stride=(1,), padding=(1,))
(batch_norm_c2_1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout_c2_1): Dropout(p=0.3, inplace=False)
(conv2_1): Conv1d(512, 512, kernel_size=(3,), stride=(1,), padding=(1,))
....
```
Trace
```python
in forward(self, x)
143 def forward(self, x):
144
--> 145 x = self.batch_norm1(x)
146 x = self.dropout1(x)
147 x = F.celu(self.dense1(x), alpha=0.06)
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/batchnorm.py](https://localhost:8080/#) in forward(self, input)
177 bn_training,
178 exponential_average_factor,
--> 179 self.eps,
180 )
181
[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
2437
2438 return torch.batch_norm(
-> 2439 input, weight, bias, running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled
2440 )
2441
NotImplementedError: Could not run 'aten::native_batch_norm' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::native_batch_norm' is only available for these backends: [Dense, FPGA, VmapMode, UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:37386 [kernel]
CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:51977 [kernel]
MkldnnCPU: registered at aten/src/ATen/RegisterMkldnnCPU.cpp:690 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:133 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at ../aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_1.cpp:12753 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:481 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:324 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1064 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:89 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:137 [backend fallback]
```
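Regarding the workaround question: `aten::native_batch_norm` has no SparseCUDA kernel, so one untested option, assuming the sparse input is small enough to materialize, is to densify it before the first batch norm:
```python
import torch

bn = torch.nn.BatchNorm1d(1024).cuda()
x_sparse = torch.rand(8, 1024, device="cuda").to_sparse()

# bn(x_sparse)                  # raises NotImplementedError on the SparseCUDA backend
out = bn(x_sparse.to_dense())   # densifying first works, at the cost of extra memory
```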
### Versions
**PyTorch** version 1.12.1+cu113.
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 2 |
4,675 | 85,625 |
How to install pytorch with cuda 11.7 in anaconda environment?
|
triaged
|
### π The doc issue
I could not find a CUDA 11.7 version when using conda or pip.
### Suggest a potential alternative/fix
Add CUDA 11.7 builds to conda.
| 2 |
4,676 | 85,621 |
Gloo DDP SocketTimeout error on Windows
|
oncall: distributed, triaged, module: c10d
|
### π Describe the bug
Hi everyone,
I've developed this little POC using the [pytorch distributed][1] package: essentially a Trainer spawns N processes and orchestrates them using Python Pipes (it could also be Queues). Normally it should send data at every epoch, but in this POC the data is just sent once on process creation. The processes train a model through [DDP][2].
```python
import os
import signal
import socket
from contextlib import closing
from multiprocessing.connection import Connection, Pipe
from typing import List
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP
def init_process(rank, world_size, ddp_free_port, recv, train_data):
"""Initialize the distributed environment."""
torch.set_num_threads(1)
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = ddp_free_port
os.environ["RANK"] = str(rank)
os.environ["LOCAL_RANK"] = str(rank)
os.environ["WORLD_SIZE"] = str(world_size)
os.environ["NODE_RANK"] = "0"
dist.init_process_group("gloo", init_method=f"tcp://localhost:{ddp_free_port}", rank=rank, world_size=world_size)
Worker(recv, train_data).train()
class Worker:
def __init__(self, queue, train_dset):
self.rank = dist.get_rank()
self.world_size = dist.get_world_size()
self.queue: Connection = queue
self.train_dset = train_dset
self.model = torch.nn.Sequential(nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
self.model = DDP(self.model)
self.optimizer = optim.Adam(self.model.parameters(), lr=0.001)
def train(self):
loss_fn = nn.CrossEntropyLoss()
sampler = torch.utils.data.distributed.DistributedSampler(
self.train_dset, num_replicas=self.world_size, rank=self.rank, shuffle=True
)
train_loader = torch.utils.data.DataLoader(self.train_dset, sampler=sampler, batch_size=32)
while True:
epoch = self.queue.recv()
if epoch is False:
print(f"Rank-{self.rank} done!")
return
total_loss = 0
sampler.set_epoch(epoch)
for i, batch in enumerate(train_loader):
images, labels = batch
out = self.model(images.view(-1, 28 * 28))
loss = loss_fn(out, labels)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
total_loss += loss.item()
dist.barrier()
if self.rank == 0:
print(f"Epoch: {epoch}, Loss@rank-{self.rank}: {total_loss / len(train_loader):.4f}")
print(f"Rank-0 is telling the trainer that everything is done for the epoch {epoch}")
self.queue.send(True)
class Trainer:
def __init__(self, world_size: int, epochs: int = 5) -> None:
self.world_size = world_size
self.epochs = epochs
self.train_data = torchvision.datasets.MNIST(
"/tmp/data",
train=True,
download=True,
transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()]),
)
self.test_data = torchvision.datasets.MNIST(
"/tmp/data",
train=False,
download=True,
transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()]),
)
self.ddp_free_port = str(find_free_port())
def run(self):
"""Run the distributed environment."""
print("Start training")
queues = []
processes = []
for rank in range(self.world_size):
if rank == 0:
recv, send = Pipe(duplex=True)
else:
recv, send = Pipe(duplex=False)
p = mp.Process(
target=init_process,
args=(rank, self.world_size, self.ddp_free_port, recv, self.train_data),
daemon=True,
)
p.start()
queues.append(send)
processes.append(p.pid)
self.train(queues, processes)
def train(self, queues, processes):
for epoch in range(self.epochs):
for rank in range(self.world_size):
queues[rank].send(epoch)
print("Training waiting for rank-0")
queues[0].recv()
for rank in range(self.world_size):
queues[rank].send(False)
queues[rank].close()
os.kill(processes[rank], signal.SIGTERM)
def find_free_port():
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(("", 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
if __name__ == "__main__":
os.environ["LOGLEVEL"] = "DEBUG"
mp.set_start_method("spawn")
trainer = Trainer(world_size=16)
trainer.run()
print("Finished training")
```
I receive the following error, for every process spawned, randomly, if I increase the number of processes from 16 to 32, for example:
```python
...
Process Process-1:
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Program Files (x86)\Python38\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "c:\Users\belof\Desktop\temp\examples\ddp_cpu.py", line 27, in init_process
dist.init_process_group("gloo", init_method=f"tcp://localhost:{ddp_free_port}", rank=rank, world_size=world_size)
File "C:\Users\belof\Desktop\temp\.venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 602, in init_process_group
default_pg = _new_process_group_helper(
File "C:\Users\belof\Desktop\temp\.venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 703, in _new_process_group_helper
pg = ProcessGroupGloo(prefix_store, rank, world_size, timeout=timeout)
RuntimeError: Socket Timeout
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38\lib\multiprocessing\connection.py", line 312, in _recv_bytes
nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/Users/belof/Desktop/temp/examples/ddp_cpu.py", line 131, in <module>
trainer.run()
File "c:/Users/belof/Desktop/temp/examples/ddp_cpu.py", line 106, in run
self.train(queues, processes)
File "c:/Users/belof/Desktop/temp/examples/ddp_cpu.py", line 113, in train
queues[0].recv()
File "C:\Program Files (x86)\Python38\lib\multiprocessing\connection.py", line 250, in recv
buf = self._recv_bytes()
File "C:\Program Files (x86)\Python38\lib\multiprocessing\connection.py", line 321, in _recv_bytes
raise EOFError
EOFError
```
It seems to be something related to the Windows `spawn` start method and the queue references passed to the processes, but I don't really know what is happening here.
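As an aside (not a confirmed fix for this report): the Windows distributed notes generally steer the gloo backend towards a file-based rendezvous rather than `tcp://localhost:<port>`. A minimal sketch of that variant of the initialization, where the shared-file path is an assumption and must be reachable by every rank:
```python
import torch.distributed as dist

def init_process_group_file_store(rank: int, world_size: int):
    # Hedged sketch: a file:// rendezvous is often more robust than tcp:// on Windows.
    # C:/tmp/ddp_rendezvous is a hypothetical path; all ranks must be able to access it.
    dist.init_process_group(
        "gloo",
        init_method="file:///C:/tmp/ddp_rendezvous",
        rank=rank,
        world_size=world_size,
    )
```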
[1]: https://pytorch.org/docs/stable/distributed.html#
[2]: https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html
[3]: https://github.com/pytorch/pytorch/blob/master/torch/utils/collect_env.py
### Versions
```python
Collecting environment information...
PyTorch version: 1.12.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.8 (tags/v3.8.8:024d805, Feb 19 2021, 13:18:16) [MSC v.1928 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.931
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] pytorch-lightning==1.6.4
[pip3] torch==1.12.1
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.12.0
[conda] Could not collect
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 3 |
4,677 | 85,618 |
Build from source failed with error of different gpu architecture (compiler shows sm_30-related error but I use sm_86 GPU)
|
module: build, module: cuda, triaged
|
### π Describe the bug
## Purpose
I'd like to use PyTorch with CUDA 11.7.
## Summary
The compiler reports "Value 'sm_30' is not defined for option 'gpu-name'", but I am using 2x RTX 3090, which are sm_86 architecture.
## Detail
I tried to build by following the instructions [here](https://github.com/pytorch/pytorch#from-source). I made sure to install `magma-cuda117`.
The output is below.
Whole logs are at [CMakeOutput.log](https://gist.github.com/somisawa/cf7456db96efed3331cbd7911d75a85e) and [CMakeError.log](https://gist.github.com/somisawa/b7cd34287edab5fc049e5973cf667f65)
```bash
Building wheel torch-1.13.0a0+git71dddec
-- Building version 1.13.0a0+git71dddec
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/sota_misawa/workspace/torch-diffusion/pytorch/torch -DCMAKE_PREFIX_PATH=/home/sota_misawa/.pyenv/versions/3.9.7/lib/python3.9/site-packages;/home/sota_misawa/anaconda3/envs/torch-diffusion -DNUMPY_INCLUDE_DIR=/home/sota_misawa/.pyenv/versions/3.9.7/lib/python3.9/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/home/sota_misawa/.pyenv/versions/3.9.7/bin/python -DPYTHON_INCLUDE_DIR=/home/sota_misawa/.pyenv/versions/3.9.7/include/python3.9 -DPYTHON_LIBRARY=/home/sota_misawa/.pyenv/versions/3.9.7/lib/libpython3.9.a -DTORCH_BUILD_VERSION=1.13.0a0+git71dddec -DUSE_NUMPY=True /home/sota_misawa/workspace/torch-diffusion/pytorch
-- Could not find ccache. Consider installing ccache to speed up compilation.
-- std::exception_ptr is supported.
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Current compiler supports avx512f extension. Will build fbgemm.
-- The CUDA compiler identification is unknown
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - failed
-- Check for working CUDA compiler: /usr/bin/nvcc
-- Check for working CUDA compiler: /usr/bin/nvcc - broken
CMake Error at /home/sota_misawa/.pyenv/versions/3.9.7/lib/python3.9/site-packages/cmake/data/share/cmake-3.21/Modules/CMakeTestCUDACompiler.cmake:56 (message):
The CUDA compiler
"/usr/bin/nvcc"
is not able to compile a simple test program.
It fails with the following output:
Change Dir: /home/sota_misawa/workspace/torch-diffusion/pytorch/build/CMakeFiles/CMakeTmp
Run Build Command(s):/home/sota_misawa/anaconda3/envs/torch-diffusion/bin/ninja cmTC_5ec83 && [1/2] Building CUDA object CMakeFiles/cmTC_5ec83.dir/main.cu.o
FAILED: CMakeFiles/cmTC_5ec83.dir/main.cu.o
/usr/bin/nvcc -Xfatbin -compress-all -c /home/sota_misawa/workspace/torch-diffusion/pytorch/build/CMakeFiles/CMakeTmp/main.cu -o CMakeFiles/cmTC_5ec83.dir/main.cu.o
ptxas fatal : Value 'sm_30' is not defined for option 'gpu-name'
ninja: build stopped: subcommand failed.
CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
cmake/public/cuda.cmake:47 (enable_language)
cmake/Dependencies.cmake:43 (include)
CMakeLists.txt:725 (include)
-- Configuring incomplete, errors occurred!
See also "/home/sota_misawa/workspace/torch-diffusion/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "/home/sota_misawa/workspace/torch-diffusion/pytorch/build/CMakeFiles/CMakeError.log".
```
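For context, the log shows CMake probing `/usr/bin/nvcc`, which is usually a distro-packaged compiler rather than the CUDA 11.7 toolkit, and that mismatch is a plausible source of the `sm_30` default. A hedged sketch of forcing the build onto the intended toolchain; the `/usr/local/cuda-11.7` location and the other values here are assumptions about a default toolkit install, not a verified fix:
```bash
# Hedged sketch: make the CUDA 11.7 toolchain the one CMake discovers.
export CUDA_HOME=/usr/local/cuda-11.7        # assumed toolkit location
export PATH="$CUDA_HOME/bin:$PATH"
export CUDACXX="$CUDA_HOME/bin/nvcc"         # CMake honours CUDACXX when detecting the CUDA compiler
export TORCH_CUDA_ARCH_LIST="8.6"            # RTX 3090 is sm_86
python setup.py clean                        # drop the CMake cache that recorded /usr/bin/nvcc
python setup.py develop
```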
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.21.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Oct 30 2021, 00:56:25) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: 11.7.64
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] denoising-diffusion-pytorch==0.27.11
[pip3] ema-pytorch==0.0.10
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 py39h6c91a56_0
[conda] numpy-base 1.23.1 py39ha15fc14_0
cc @malfet @seemethere @ngimel
| 1 |
4,678 | 85,613 |
[MPS?] .to(memory_format=contiguous_format) behaves incorrectly; differently to .contiguous()
|
triaged, module: mps
|
### π Describe the bug
Found this whilst investigating training of stable-diffusion:
https://github.com/invoke-ai/InvokeAI/issues/517#issuecomment-1257235358
After `rearrange()` is used, we have a strided tensor. If we wish to copy this safely from CPU to MPS, we should move it to contiguous memory first.
`.contiguous()` works fine.
`.to(memory_format=contiguous_format)` doesn't seem to give us the same protection.
Consequently, when we transfer our PIL image from CPU to MPS, (I think) the original, unstrided pixel data is transferred, and eventually this grayscale grid is logged:

whereas if we use `.contiguous()`, all is well:

```python
from torch import randn, contiguous_format
from einops import rearrange
x = randn(1, 512, 512, 3, device='cpu')
x = rearrange(x, 'b h w c -> b c h w')
y = x.to(memory_format=contiguous_format)
print(y.to('mps').cpu().allclose(y))
# False (and bad)
z = x.contiguous()
print(z.to('mps').cpu().allclose(z))
# True
```
Very similar to this issue, which was a transfer in the other direction (MPS to CPU), but lacked any attempt to transfer to contiguous memory:
https://github.com/pytorch/pytorch/issues/79383
Affects PyTorch stable 1.12.1 and nightly 1.13.0.dev20220917.
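For completeness, a minimal sketch of the workaround in the shape it would take inside training code (this only sidesteps the bug by forcing a copy with `.contiguous()`; it does not make `.to(memory_format=contiguous_format)` safe):
```python
import torch
from einops import rearrange

x = torch.randn(1, 512, 512, 3, device='cpu')
x = rearrange(x, 'b h w c -> b c h w')   # strided view after the permutation

# Workaround: materialize contiguous memory before the CPU -> MPS transfer.
x_mps = x.contiguous().to('mps')
assert x_mps.cpu().allclose(x)
```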
### Versions
```
PyTorch version: 1.13.0.dev20220917
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.10.4 (main, Mar 31 2022, 03:37:37) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.5-arm64-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.7.5
[pip3] torch==1.13.0.dev20220917
[pip3] torch-fidelity==0.3.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.9.3
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.14.0.dev20220917
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch-lightning 1.7.5 pypi_0 pypi
[conda] torch 1.13.0.dev20220917 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchtyping 0.1.4 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220917 pypi_0 pypi
```
cc @kulinseth @albanD
| 1 |
4,679 | 85,607 |
[Distributed: RPC] Failed to initialize RPC with >18 workers
|
oncall: distributed, triaged, module: rpc
|
### π Describe the bug
Failed to initialize RPC with >18 workers. Here is a minimal script:
```python
# rpc_test.py
import os
import random
import numpy as np
import torch
import torch.distributed.rpc as rpc
def worker_init():
rank = int(os.environ['RANK'])
random.seed(rank)
np.random.seed(rank)
torch.manual_seed(rank)
print(f'Rank {rank}')
def main():
rank = int(os.environ['RANK'])
world_size = int(os.environ['WORLD_SIZE'])
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
worker_init()
# no-op
rpc.shutdown()
if __name__ == '__main__':
main()
```
Run in command line:
```console
$ torchrun --nnode 1 --nproc_per_node 18 rpc_test.py
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Rank 0
Rank 1
Rank 2
Rank 3
Rank 4
Rank 5
Rank 6
Rank 7
Rank 8
Rank 9
Rank 10
Rank 11
Rank 12
Rank 13
Rank 14
Rank 15
Rank 17
Rank 16
```
It works for me with 18 workers, but raises errors when I increase the process count above 18 (e.g. to 19).
Output:
```console
$ torchrun --nnode 1 --nproc_per_node 19 rpc_test.py
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[W tensorpipe_agent.cpp:916] RPC agent for worker10 encountered error when sending outgoing request #0 to worker0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:916] RPC agent for worker2 encountered error when sending outgoing request #0 to worker0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
[W tensorpipe_agent.cpp:916] RPC agent for worker16 encountered error when sending outgoing request #0 to worker0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
main()
File "rpc_test.py", line 24, in main
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
main()
File "rpc_test.py", line 24, in main
rpc_agent = backend_registry.init_backend(rpc_sync(rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
return backend.value.init_backend_handler(*args, **kwargs)return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
[W tensorpipe_agent.cpp:916] RPC agent for worker14 encountered error when sending outgoing request #0 to worker0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return fut.wait()
return backend.value.init_backend_handler(*args, **kwargs)return func(*args, **kwargs)
RuntimeError
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
return fut.wait()
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
RuntimeError: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
Traceback (most recent call last):
return fut.wait()
File "rpc_test.py", line 33, in <module>
RuntimeError: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
[W tensorpipe_agent.cpp:916] RPC agent for worker4 encountered error when sending outgoing request #0 to worker0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
[W tensorpipe_agent.cpp:916] RPC agent for worker8 encountered error when sending outgoing request #0 to worker0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)Traceback (most recent call last):
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
File "rpc_test.py", line 33, in <module>
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
return fut.wait()
RuntimeError: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
main()
File "rpc_test.py", line 24, in main
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
rpc_agent = backend_registry.init_backend(return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_sync(rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
return func(*args, **kwargs) File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
return fut.wait()
RuntimeError: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:530] RPC agent for worker0 encountered error when accepting incoming pipe: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker14: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker10: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker15: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker9: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker8: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker1: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker17: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker6: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker11: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker13: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker7: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker3: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker5: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker4: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker2: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker16: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker12: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker18: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:916] RPC agent for worker18 encountered error when sending outgoing request #0 to worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:916] RPC agent for worker5 encountered error when sending outgoing request #0 to worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:940] RPC agent for worker17 encountered error when reading incoming response from worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:916] RPC agent for worker3 encountered error when sending outgoing request #0 to worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:940] RPC agent for worker9 encountered error when reading incoming response from worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:916] RPC agent for worker15 encountered error when sending outgoing request #0 to worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:940] RPC agent for worker1 encountered error when reading incoming response from worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:940] RPC agent for worker13 encountered error when reading incoming response from worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:940] RPC agent for worker7 encountered error when reading incoming response from worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:916] RPC agent for worker12 encountered error when sending outgoing request #0 to worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:940] RPC agent for worker11 encountered error when reading incoming response from worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
Traceback (most recent call last):
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
File "rpc_test.py", line 33, in <module>
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
main()main()
File "rpc_test.py", line 24, in main
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
main() main()
File "rpc_test.py", line 24, in main
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options) File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options) File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
main()
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
rpc_agent = backend_registry.init_backend( File "rpc_test.py", line 24, in main
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout) File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout) File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size) return func(*args, **kwargs)return func(*args, **kwargs)
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
rpc_sync(
rpc_sync( File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
return fut.wait()
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)RuntimeError
: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
Traceback (most recent call last):
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
File "rpc_test.py", line 33, in <module>
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
[W tensorpipe_agent.cpp:940] RPC agent for worker6 encountered error when reading incoming response from worker0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return func(*args, **kwargs) return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
main()
File "rpc_test.py", line 24, in main
return fut.wait()
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
Traceback (most recent call last):
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
File "rpc_test.py", line 33, in <module>
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
main() rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
File "rpc_test.py", line 24, in main
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
Traceback (most recent call last):
File "rpc_test.py", line 33, in <module>
main()
File "rpc_test.py", line 24, in main
rpc.init_rpc(name=f'worker{rank}', rank=rank, world_size=world_size)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/backend_registry.py", line 359, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 222, in _all_gather
rpc_sync(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 83, in wrapper
return func(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/rpc/api.py", line 800, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 17244 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 17245) of binary: /home/PanXuehai/Miniconda3/envs/torchopt/bin/python
Traceback (most recent call last):
File "/home/PanXuehai/Miniconda3/envs/torchopt/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==1.12.1', 'console_scripts', 'torchrun')())
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/run.py", line 761, in main
run(args)
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
elastic_launch(
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/PanXuehai/Miniconda3/envs/torchopt/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
rpc_test.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 17246)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 17247)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 4 (local_rank: 4)
exitcode : 1 (pid: 17248)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 17249)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[5]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 17250)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[6]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 7 (local_rank: 7)
exitcode : 1 (pid: 17251)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[7]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 8 (local_rank: 8)
exitcode : 1 (pid: 17252)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[8]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 9 (local_rank: 9)
exitcode : 1 (pid: 17253)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[9]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 10 (local_rank: 10)
exitcode : 1 (pid: 17254)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[10]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 11 (local_rank: 11)
exitcode : 1 (pid: 17256)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[11]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 12 (local_rank: 12)
exitcode : 1 (pid: 17259)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[12]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 13 (local_rank: 13)
exitcode : 1 (pid: 17264)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[13]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 14 (local_rank: 14)
exitcode : 1 (pid: 17267)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[14]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 15 (local_rank: 15)
exitcode : 1 (pid: 17268)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[15]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 16 (local_rank: 16)
exitcode : 1 (pid: 17269)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[16]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 17 (local_rank: 17)
exitcode : 1 (pid: 17271)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[17]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 18 (local_rank: 18)
exitcode : 1 (pid: 17273)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-09-25_22:37:55
host : BIGAI-PanXuehai.localdomain
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 17245)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
Different machines report different maximum worker counts on my side:
- single-socket 16C32T CPU: raises the error with 19+ processes
- single-socket 6C12T CPU: raises the error with 9+ processes
- dual-socket 22C44T CPU (44C88T in total): raises the error with 9+ processes
### Versions
```
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 10.4.0-16) 10.4.0
Clang version: 10.0.1
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torchopt==0.5.1.dev66+gd738101
[pip3] torchvision==0.13.1
[pip3] torchviz==0.0.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 habf752d_9
[conda] functorch 0.2.1 pypi_0
[conda] libblas 3.9.0 12_linux64_mkl
[conda] libcblas 3.9.0 12_linux64_mkl
[conda] liblapack 3.9.0 12_linux64_mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.1 py38h6c91a56_0
[conda] numpy-base 1.23.1 py38ha15fc14_0
[conda] pytorch 1.12.1 py3.8_cuda11.6_cudnn8.3.2_0
[conda] pytorch-mutex 1.0 cuda
[conda] torchopt 0.5.1.dev66+gd738101 pypi_0
[conda] torchvision 0.13.1 py38_cu116
[conda] torchviz 0.0.2 pypi_0
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @jjlilley @mrzzd
| 4 |
4,680 | 85,606 |
Creating NumPy array with `dtype=object` of PyTorch tensors fails
|
feature, triaged, module: numpy
|
### 🐛 Describe the bug
PyTorch tensors can easily be stored in Python lists. However, lists are slow and cumbersome for some operations (e.g. more complex indexing operations).
NumPy arrays support storing any Python object by specifying `dtype=object` when creating the array.
However, when attempting to create a NumPy array with `dtype=object`, PyTorch tries to convert the tensors to NumPy arrays. This should not be done, as we're not interested in storing the tensors as arrays. We're only interested in storing references to these tensors.
To reproduce this behavior:
```python
>>> tensors = (torch.randn(2,2, device="cuda"), torch.randn(1,3, device="cpu"))
>>> list(tensors)
[tensor([[-0.0517, 0.4685],
[ 0.5149, 0.0535]], device='cuda:0'), tensor([[ 0.2241, 2.3390, -0.6848]])]
>>> np.array(tensors, dtype=object)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
The section of the code that is responsible for this behavior can be found in `_tensor.py:__array__()`.
```python
def __array__(self, dtype=None):
if has_torch_function_unary(self):
return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
if dtype is None:
return self.numpy()
else:
return self.numpy().astype(dtype, copy=False)
```
One way to fix this bug would be to add a check in the `__array__` method to handle the case when `dtype=object`. We must then create an object array and add the tensor to it.
```python
def __array__(self, dtype=None):
if has_torch_function_unary(self):
return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
if dtype is None:
return self.numpy()
elif dtype == np.dtype(object):
arr = np.empty(1, dtype=object)
arr[0] = self
return arr
else:
return self.numpy().astype(dtype, copy=False)
```
This way, we can create a NumPy array that stores references to CPU and CUDA Tensors, such as:
```python
>>> tensors = (torch.randn(2,2, device="cuda"), torch.randn(1,3,device="cpu"))
>>> np.array(tensors, dtype=object)
array([tensor([[ 1.9628, -0.2343],
[ 0.1929, -1.1450]], device='cuda:0'),
tensor([[ 0.7147, -1.3656, 0.8823]])], dtype=object)
```
The proposed solution is probably not the best. Maybe @rgommers has a better idea? I would be willing to develop a solution for this, if the maintainers are interested.
Interestingly, with the current code the conversion works if the tensor is on the CPU, but does not work if it is on the GPU, even though NumPy is only storing a reference to the tensors!
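A workaround sketch (an addition here, not part of the proposed fix): pre-allocating the object array and filling it element-wise never triggers `Tensor.__array__`, so it already works for both CPU and CUDA tensors:
```python
import numpy as np
import torch

tensors = (torch.randn(2, 2, device="cuda"), torch.randn(1, 3, device="cpu"))

# Allocate the object array first, then fill it; element assignment only
# stores references and never calls Tensor.__array__.
arr = np.empty(len(tensors), dtype=object)
for i, t in enumerate(tensors):
    arr[i] = t
print(arr)
```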
### Versions
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-memlab==0.2.4
[pip3] raisim-gym-torch==0.0.0
[pip3] torch==1.11.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchvision==0.12.0
[conda] Could not collect
cc @mruberry @rgommers
| 7 |
4,681 | 85,604 |
Multiple GPUs get "errno: 98 - Address already in use"
|
oncall: distributed, triaged, module: c10d
|
### 🐛 Describe the bug
I run train.py with
torchrun --nnodes=1 --nproc_per_node=1 train.py --local_rank 0
and it runs, but only uses one GPU. When I run
torchrun --standalone --nnodes=1 --nproc_per_node=2 train.py --local_rank 0
I get an error:
```
torchrun --standalone --nnodes=1 --nproc_per_node=2 train.py --local_rank 0
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[W socket.cpp:401] [c10d] The server socket has failed to listen on [::]:47531 (errno: 98 - Address already in use).
[W socket.cpp:401] [c10d] The server socket has failed to bind to 0.0.0.0:47531 (errno: 98 - Address already in use).
[E socket.cpp:435] [c10d] The server socket has failed to listen on any local network address.
Traceback (most recent call last):
File "train.py", line 180, in <module>
train()
File "train.py", line 47, in train
dist.init_process_group(
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 595, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 232, in _env_rendezvous_handler
store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 160, in _create_c10d_store
return TCPStore(
RuntimeError: The server socket has failed to listen on any local network address. The server socket has failed to listen on [::]:47531 (errno: 98 - Address already in use). The server socket has failed to bind to 0.0.0.0:47531 (errno: 98 - Address already in use).
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 18620 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 18621) of binary: /home/wupeng/anaconda3/envs/ldm/bin/python
Traceback (most recent call last):
File "/home/wupeng/anaconda3/envs/ldm/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==1.11.0', 'console_scripts', 'torchrun')())
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/wupeng/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-09-25_19:36:17
host : Double-1080ti
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 18621)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
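As a diagnostic sketch (an addition here, not a fix — the assumption is simply that something else already owns the port reported in the error message), one can check whether that port is free before launching:
```python
import socket

def port_is_free(port: int, host: str = "0.0.0.0") -> bool:
    # Try to bind the port ourselves; failure means another process owns it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(47531))  # port number taken from the error message above
```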
### Versions
pytorch version is: v1.11.0
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
4,682 | 85,590 |
Solve default argument induced include cycles by not using defaults / moving the defaults to inl
|
module: build, module: internals, triaged
|
### 🐛 Describe the bug
Here is an example of a particularly pernicious cause of import cycles in our C++ codebase:
```
struct Scalar {
Scalar(int i) : i_(i) {};
int i_;
};
struct Tensor {
void add(const Tensor& other, const Scalar& s = 0);
};
```
To compile this, the FULL definition of Scalar must be available at the time Tensor is being defined, so that the default parameter can be specified. This means that Scalar must NOT include any headers that transitively depend on Tensor. In the past, when people have been monkeying around with Scalar (e.g., wanting it to depend on IValue), this easily causes include cycles, and for no good reason: Tensor doesn't *really* depend (as far as struct layout is concerned) on Scalar, and although intuitively there shouldn't be a dependency between Scalar and Tensor, there seemingly is one. You cannot even solve the problem by wrapping the Scalar in a c10::optional, as c10::optional is an incomplete type without Scalar being complete.
Fortunately, there is a resolution to this problem. In fact, there are two options:
First, we can simply not use defaulted arguments. Thus, instead of a single add definition, we instead have:
```
struct Tensor {
void add(const Tensor& other);
void add(const Tensor& other, const Scalar& s);
};
```
As most methods on Tensor are code-generated, this doesn't involve driving a big refactor; just modify the codegen to emit each overload separately and generate the necessary boilerplate.
Second, we can define the default argument later. If we assume that add is going to be implemented inline in an inl header later, this is perfectly valid, and still defaults the argument:
```
struct Tensor {
void add(const Tensor& other, const Scalar& s);
};
// Later, in Tensor_inl.h where we can assume we have Scalar struct defined now
void Tensor::add(const Tensor& other, const Scalar& s = 0) {
... definition ...
};
```
Alas, you are obligated to also provide the definition at this point; if you are looking to define out-of-line, option one is probably easier.
I looked into this because I wanted to see if it would be feasible to implement Scalar using IValue. In the end I decided this refactor would take too much time to be worth it for just this. But maybe someone else will find it useful some time...
### Versions
master
cc @malfet @seemethere @bhosmer @smessmer @ljk53 @bdhirsh
| 1 |
4,683 | 85,588 |
`linalg.norm` cannot compute the grad in forward mode after script
|
oncall: jit
|
### 🐛 Describe the bug
`linalg.norm` cannot compute the grad in forward mode after script
```py
import torch
def m(input):
return input.norm(p=1)
input = torch.tensor([[1., 2., 3., 4.]], dtype=torch.float32)
from torch.autograd.functional import jacobian
input1 = input.clone().requires_grad_()
jac1 = jacobian(m, input1, vectorize=True, strategy='forward-mode')
print(jac1)
jit_m = torch.jit.script(m)
input2 = input.clone().requires_grad_()
jac2 = jacobian(jit_m, input2, vectorize=True, strategy='forward-mode')
print(jac2)
```
```
tensor([[1., 1., 1., 1.]], grad_fn=<ReshapeAliasBackward0>)
RuntimeError: Trying to set a forward gradient that has a different size than that of the original Tensor, this is not supported. Tensor is of size [] while the given forward gradient is of size [1, 1].
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220923+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
cc @ezyang @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
4,684 | 85,585 |
`as_tensor` will return a different dtype with script
|
oncall: jit
|
### 🐛 Describe the bug
`as_tensor` will return a different dtype with script
```py
import torch
def m(data):
return torch.as_tensor(data)
input = torch.tensor(2.25, dtype=torch.bfloat16)
print(m(input).dtype)
jit_m = torch.jit.script(m)
print(jit_m(input).dtype)
```
```
torch.bfloat16
torch.float32
```
### Versions
PyTorch version: 1.13.0.dev20220923+cu116
| 0 |
4,685 | 93,688 |
FSDP's FlattenParamsWrapper breaks dynamo's faketensor wrapping
|
triaged, module: fsdp, bug, oncall: pt2
|
Repro script
```
import torch
import torchdynamo
from torch import nn
from torch.distributed.fsdp.flat_param import HandleConfig, HandleShardingStrategy
from torch.distributed.fsdp.flatten_params_wrapper import FlattenParamsWrapper
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net = nn.Sequential(
nn.Linear(10, 10000),
nn.ReLU(),
)
def forward(self, x):
return self.net(x)
class Wrapper(nn.Module):
def __init__(self, wrapped):
super(Wrapper, self).__init__()
self.mod = wrapped
def forward(self, *args, **kwargs):
return self.mod(*args, **kwargs)
def flatten():
mod = ToyModel()
# FlattenParamsWrapper is a helper class used inside FSDP. It appears to not work with Dynamo
mod = FlattenParamsWrapper(
mod,
params=list(mod.parameters()),
device="cpu",
config=HandleConfig(HandleShardingStrategy.FULL_SHARD, False, None, None),
)
mod = torchdynamo.optimize("aot_eager")(mod)
inputs = torch.randn(20, 10)
outputs = mod(inputs)
print(outputs)
if __name__ == "__main__":
flatten()
"""
Currently getting this error:
Exception: Invoking operators with non-Fake Tensor inputs in FakeTensorMode is not yet supported. Please convert all Tensors to FakeTensors first. Found in aten.t.default(*(tensor([[ 0.2899, -0.1847, 0.0950, ..., 0.0304, -0.2226, -0.3076],
[-0.1424, 0.2049, -0.2806, ..., -0.0127, -0.0831, 0.0546],
[ 0.1324, -0.0840, -0.0635, ..., -0.0338, -0.1047, 0.2138],
...,
[ 0.2627, 0.1499, 0.0301, ..., -0.2043, -0.0677, 0.0513],
[ 0.0060, -0.1727, -0.1449, ..., -0.0631, -0.0106, -0.0005],
[-0.0464, -0.1969, 0.1693, ..., 0.1114, 0.0926, -0.2426]],
grad_fn=<ViewBackward0>),), **{})
"""
```
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @ezyang @soumith @msaroufim @ngimel @bdhirsh
| 18 |
4,686 | 85,573 |
`jit` could make some undifferentiable APIs differentiable
|
oncall: jit
|
### 🐛 Describe the bug
`jit` could make some undifferentiable APIs differentiable, like `numel, nelement`
```py
import torch
def m(input):
return input.nelement()
device = 'cpu'
input = torch.tensor([1], dtype=torch.float32, device=device)
jit_m = torch.jit.trace(m, (input.clone(), ))
from torch.autograd.functional import jacobian
print(jacobian(jit_m, input.clone().requires_grad_(), vectorize=True, strategy='reverse-mode'))
print(jacobian(m, input.clone().requires_grad_(), vectorize=True, strategy='reverse-mode'))
```
```
tensor([0.])
TypeError: The outputs of the user-provided function given to jacobian must be either a Tensor or a tuple of Tensors but the given outputs of the user-provided function has type <class 'int'>.
```
### Versions
1.13.0
| 0 |
4,687 | 85,570 |
`mvlgamma_` will fail when compiling with trace `jit`
|
oncall: jit
|
### 🐛 Describe the bug
`mvlgamma_` will fail with `jit` on cuda
```py
import torch
torch.random.manual_seed(7303)
def f(input):
return input.mvlgamma_(1)
device = 'cuda'
input = torch.tensor([1.2012, 1.2717, 0.5784,
2.3689, 0.2925, 2.4305], dtype=torch.float64, device=device)
print(f(input.clone()))
jit_f = torch.jit.trace(f, input.clone())
print(jit_f(input.clone()))
```
```
tensor([-0.0857, -0.1029, 0.4324, 0.1968, 1.1224, 0.2370], device='cuda:0',
dtype=torch.float64)
RuntimeError: All elements must be greater than (p-1)/2
```
### Versions
```
PyTorch version: 1.13.0.dev20220919
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 510.68.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220919
[pip3] torchaudio==0.13.0.dev20220919
[pip3] torchvision==0.14.0.dev20220919
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 py39h6c91a56_0
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20220919 py3.9_cuda11.6_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0.dev20220916+cpu pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220919 py39_cu116 pytorch-nightly
[conda] torchvision 0.14.0.dev20220916+cpu pypi_0 pypi
```
| 0 |
4,688 | 85,568 |
`detach_` behaves differently when computing the gradients in forward mode w/ `jit`
|
oncall: jit
|
### 🐛 Describe the bug
`detach_` behaves differently when computing the gradients in forward mode w/ `jit`
```py
import torch
def m(input):
return input.detach_()
device = 'cuda'
input = torch.empty([1], dtype=torch.float32, device=device)
jit_m = torch.jit.trace(m, (input.clone(), ))
from torch.autograd.functional import jacobian
try:
print(jacobian(m, input.clone().requires_grad_(), vectorize=True, strategy='forward-mode'))
except Exception as e:
print(e)
print(jacobian(jit_m, input.clone().requires_grad_(), vectorize=True, strategy='forward-mode'))
```
```
Can't detach views in-place. Use detach() instead. If you are using DistributedDataParallel (DDP) for training, and gradient_as_bucket_view is set as True, gradients are views of DDP buckets, and hence detach_() cannot be called on these gradients. To fix this error, please refer to the Optimizer.zero_grad() function in torch/optim/optimizer.py as the solution.
tensor([[0.]], device='cuda:0')
```
Interestingly, the reverse mode can give a gradient without jit
```py
import torch
def m(input):
return input.detach_()
device = 'cuda'
input = torch.empty([1], dtype=torch.float32, device=device)
jit_m = torch.jit.trace(m, (input.clone(), ))
from torch.autograd.functional import jacobian
print(jacobian(m, input.clone().requires_grad_(), vectorize=True, strategy='reverse-mode'))
# tensor([[0.]], device='cuda:0')
```
I am not sure which way is correct, but they do need to behave consistently.
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220923+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,689 | 85,558 |
torch.Tensor.transpose().contiguous() on dimension of size 1 gives wrong stride
|
module: docs, triaged
|
### 🐛 Describe the bug
See code snippet below
```python
>>> torch.empty(1, 2, 3, 4, 5).transpose(0,1).contiguous().stride()
(60, 120, 20, 5, 1) # Should be (60, 60, 20, 5, 1)
>>> torch.empty(2, 1, 3, 4, 5).contiguous().stride()
(60, 60, 20, 5, 1)
```
### Versions
Collecting environment information...
PyTorch version: 1.12.0a0+bd13bc6
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: 11.6.124
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.0a0+bee44f8
[pip3] numpy==1.22.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+bd13bc6
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.13.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h1d589f8_2 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+bd13bc6 pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.13.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
cc @svekars @holly1238
| 4 |
4,690 | 85,544 |
NvFuser single mode changes the output
|
triaged, module: nvfuser
|
### 🐛 Describe the bug
NvFuser single mode changes the output
```py
import torch
device = 'cuda'
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input, other):
fn_res = torch.add(input, other, )
return fn_res
input = torch.tensor([[[0.1733]]], dtype=torch.bfloat16, device=device)
other = torch.tensor(59.8794, dtype=torch.float64, device=device)
m = M().to(device)
torch._C._jit_set_nvfuser_single_node_mode(True)
jit_m = torch.jit.trace(m, (input.clone(), other.clone()))
print(jit_m(input.clone(), other.clone()))
torch._C._jit_set_nvfuser_single_node_mode(False)
jit_m = torch.jit.trace(m, (input.clone(), other.clone()))
print(jit_m(input.clone(), other.clone()))
```
```
tensor([[[60.]]], device='cuda:0', dtype=torch.bfloat16)
tensor([[[60.2500]]], device='cuda:0', dtype=torch.bfloat16)
```
More interestingly, it does not affect the output of the reloaded model:
```py
import torch
device = 'cuda'
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input, other):
fn_res = torch.add(input, other, )
return fn_res
input = torch.tensor([[[0.1733]]], dtype=torch.bfloat16, device=device)
other = torch.tensor(59.8794, dtype=torch.float64, device=device)
m = M().to(device)
torch._C._jit_set_nvfuser_single_node_mode(True)
jit_m = torch.jit.trace(m, (input.clone(), other.clone()))
print(jit_m(input.clone(), other.clone()))
import io
buffer = io.BytesIO()
torch.jit.save(jit_m, buffer)
buffer.seek(0)
new_m = torch.jit.load(buffer)
print(new_m(input.clone(), other.clone()))
```
```
tensor([[[60.]]], device='cuda:0', dtype=torch.bfloat16)
tensor([[[60.2500]]], device='cuda:0', dtype=torch.bfloat16)
```
### Versions
```
PyTorch version: 1.13.0.dev20220919
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 510.68.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220919
[pip3] torchaudio==0.13.0.dev20220919
[pip3] torchvision==0.14.0.dev20220919
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 py39h6c91a56_0
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20220919 py3.9_cuda11.6_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0.dev20220916+cpu pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220919 py39_cu116 pytorch-nightly
[conda] torchvision 0.14.0.dev20220916+cpu pypi_0 pypi
```
| 1 |
4,691 | 85,538 |
Iterative Global Pruning Causes GPU Memory Leak
|
high priority, module: nn, module: memory usage, triaged
|
### 🐛 Describe the bug
# Bug Description
According to the official [tutorial](https://pytorch.org/tutorials/intermediate/pruning_tutorial.html#iterative-pruning), pruning should work when applied multiple times.
This works just fine for per-module pruning, as demonstrated in the tutorial.
**However, it doesn't work for global pruning: it causes a GPU memory leak.**
Specifically, the CUDA memory footprint of the process (as observed through `torch.cuda.memory_allocated()`) will steadily increase until a `RuntimeError` of `Cuda out of memory` is thrown.
The increase in size is about the same as the total parameter size of the model. For example, an FP32 model with 37.5M parameters will leak memory in increments of `37.5M * 4B = 150MB`.
I tried `gc.collect()` and `torch.cuda.empty_cache()`, but neither helped.
# Reproducing Example
Here's a minimal example that reproduces this problem:
```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
import gc
def prune_once(model, prune_amount):
prune_params = []
for module in model.modules():
if isinstance(module, nn.Conv2d):
prune_params.append((module, "weight"))
prune_params.append((module, "bias"))
prune.global_unstructured(
prune_params,
pruning_method=prune.L1Unstructured,
amount=prune_amount,
)
model=nn.Sequential(nn.Conv2d(1024, 2048, 3), nn.Conv2d(2048, 1024, 3))
model.to(torch.device('cuda:0'))
for i in range(300):
prune_once(model, 0.07)
print("GPU Memory Used:", torch.cuda.memory_allocated() / 1e9)
# None of these help
gc.collect()
torch.cuda.empty_cache()
```
## Expected output
```
GPU Memory Used: 0.604028928
GPU Memory Used: 0.604028928
...(Repeat 300 times)
```
## Actual output
```
GPU Memory Used: 0.604028928
GPU Memory Used: 0.75503616
GPU Memory Used: 0.906043392
GPU Memory Used: 1.057050624
GPU Memory Used: 1.208057856
GPU Memory Used: 1.359065088
...(Stedily increase by about 150MB per iteration)
GPU Memory Used: 32.919576576
GPU Memory Used: 33.070583808
GPU Memory Used: 33.22159104
GPU Memory Used: 33.372598272
# Then throws an exception:
# RuntimeError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 39.59 GiB total capacity; 31.50 GiB already allocated; 5.85 GiB free; 31.93 GiB reserved in total by PyTorch)
```
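One mitigation sketch to try (an assumption based on the hooks and masks accumulating across calls, not a verified fix): make the pruning permanent after each global pass with `prune.remove`. Note this changes the schedule semantics — each pass then selects from all weights, with the already-zeroed ones picked first — so the per-step `amount` would need rescaling:
```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_once_permanent(model, prune_amount):
    prune_params = []
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune_params.append((module, "weight"))
            prune_params.append((module, "bias"))
    prune.global_unstructured(
        prune_params,
        pruning_method=prune.L1Unstructured,
        amount=prune_amount,
    )
    # Bake the mask into the parameter and drop the pruning hooks/buffers,
    # so nothing accumulates across iterations. Already-zeroed weights have
    # the smallest L1 magnitude and so stay pruned on later passes, but the
    # effective sparsity schedule differs from the hook-based iterative one.
    for module, name in prune_params:
        prune.remove(module, name)
```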
### Versions
Collecting environment information...
PyTorch version: 1.9.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.9.1+cu111
[pip3] torchaudio==0.9.1
[pip3] torchvision==0.10.1+cu111
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.1 py38h6c91a56_0
[conda] numpy-base 1.23.1 py38ha15fc14_0
[conda] tensorflow 2.8.2 mkl_py38hb41d75a_0
[conda] tensorflow-base 2.8.2 mkl_py38hf890080_0
[conda] torch 1.9.1+cu111 pypi_0 pypi
[conda] torchaudio 0.9.1 pypi_0 pypi
[conda] torchvision 0.10.1+cu111 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 5 |
4,692 | 85,533 |
[functorch] transforms like jacrev, jacfwd, grad, etc don't work with BatchNorm
|
triaged, module: functorch
|
### 🐛 Describe the bug
```python
import torch
import functorch
net = torch.nn.BatchNorm2d(10)
net.train()
t = torch.randn(3, 3, 10, 10)
functorch.jacrev(net)(t)
```
Error:
```
RuntimeError: During a grad (vjp, jvp, grad, etc) transform, the function provided attempted to call in-place operation (aten::add_.Tensor) that would mutate a captured Tensor. This is not supported; please rewrite the function being transformed to explicitly accept the mutated Tensor(s) as inputs.
```
This also leads to failures on multiple models from https://github.com/pytorch/benchmark
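A hedged workaround sketch (an assumption based on the error message, not a confirmed fix for the benchmark models): the in-place `add_` comes from BatchNorm updating its running statistics, so disabling the running-stats buffers sidesteps the captured-tensor mutation. The input shape is also adjusted here so the channel dimension matches `num_features`:
```python
import torch
import functorch

# track_running_stats=False means the layer uses batch statistics only and
# never mutates running_mean/running_var in-place during the transform.
net = torch.nn.BatchNorm2d(10, track_running_stats=False)
net.train()

t = torch.randn(3, 10, 5, 5)  # channel dim must equal num_features
jac = functorch.jacrev(net)(t)
print(jac.shape)
```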
### Versions
master
cc @zou3519 @Chillee @samdow @soumith
| 2 |
4,693 | 85,520 |
Implement `rand_like` ref and implement nvfuser_impl for `uniform` prim
|
triaged, module: nvfuser
|
### 🐛 Describe the bug
In the TorchDynamo+AOT_Autograd+PrimTorch stack, Dropout is currently implemented as a decomposition when traced by AOT_Autograd. The decomposition calls `rand_like` to provide the randomization. Currently, `rand_like` is not implemented as a PrimTorch ref, and the only randomization implemented in the prims is `uniform`, which nvFuser does not implement yet.
As a workaround, nvFuser will add an nvprim for `rand_like`. #85077
The real solution is to add a ref for `rand_like` that calls the prim `uniform`. In order for this to work, nvFuser needs to also implement `uniform`.
nvFuser has an implementation of `uniform` that is waiting to be checked into upstream. [nvFuser Dev Fork PR 1986](https://github.com/csarofeen/pytorch/pull/1986)
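For illustration only, a rough sketch of what such a `rand_like` ref built on the `uniform` prim could look like — the import path and the exact `uniform` signature below are assumptions, not the actual PrimTorch code:
```python
import torch
import torch._prims as prims  # assumed location of the uniform prim

def rand_like_ref(a: torch.Tensor, *, dtype=None, device=None) -> torch.Tensor:
    dtype = a.dtype if dtype is None else dtype
    device = a.device if device is None else device
    # Delegate the randomness to the uniform prim over [0, 1), matching
    # torch.rand_like semantics (signature here is an assumption).
    return prims.uniform(a.shape, low=0.0, high=1.0, dtype=dtype, device=device)
```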
### Versions
TOT
| 1 |
4,694 | 85,514 |
The reload `MultiLabelMarginLoss` will have different gradients on cuda
|
oncall: jit
|
### 🐛 Describe the bug
The reloaded `MultiLabelMarginLoss` will have different gradients on CUDA
```py
import torch
device = 'cuda'
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.arg_class = torch.nn.MultiLabelMarginLoss()
self.input_1 = torch.tensor([[15, 0, 0, 0]], dtype=torch.int64, device=device)
def forward(self, input_0):
arg_class = self.arg_class
input_1 = self.input_1
return arg_class(input_0, input_1)
input = torch.tensor([[-0.2898, -0.9470, -0.4042, -0.6775]], dtype=torch.float32, device=device)
m = M().to(device)
jit_m = torch.jit.trace(m, (input.clone(), ))
input1 = input.clone().requires_grad_()
jit_m(input.clone())
jit_m(input1).sum().backward()
print(input1.grad)
import io
buffer = io.BytesIO()
torch.jit.save(jit_m, buffer)
buffer.seek(0)
new_m = torch.jit.load(buffer)
input2 = input.clone().requires_grad_()
new_m(input2).sum().backward()
print(input2.grad)
```
```
tensor([[-2.2500, 1.0000, 1.0000, 1.0000]], device='cuda:0')
tensor([[-2.2500, 0.7500, 0.7500, 0.7500]], device='cuda:0')
```
This is caused by the out-of-bounds index `15`. By contrast, on CPU it simply raises an error that the target is out of bounds.
### Versions
```
PyTorch version: 1.13.0.dev20220919
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 510.68.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220919
[pip3] torchaudio==0.13.0.dev20220919
[pip3] torchvision==0.14.0.dev20220919
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 py39h6c91a56_0
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20220919 py3.9_cuda11.6_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0.dev20220916+cpu pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220919 py39_cu116 pytorch-nightly
[conda] torchvision 0.14.0.dev20220916+cpu pypi_0 pypi
```
| 0 |
4,695 | 85,513 |
Measure impact of JIT decompositions, reconsider the design
|
oncall: jit
|
We landed https://github.com/pytorch/pytorch/pull/84976. What it does is:
- We have some decompositions written in Python.
- We want to use them from C++, in subsystems like forward-mode AD.
- The design we have is to TorchScript the Python so that it is callable from C++.
Unfortunately this has been causing some problems:
- TorchScript doesn't work with Python 3.11 so this blocks the Python 3.11 binaries (https://github.com/pytorch/pytorch/pull/85509#pullrequestreview-1117733698)
- flaky onnx failures might be related? https://github.com/pytorch/pytorch/issues/85445
- We are unsure of how much this adds to the startup time of `import torch`. Distributed folks may be sensitive to that.
Alternatively, it would not be too difficult to directly call the Python (from C++) instead of relying on TorchScript to shepherd the code through. This may be a better design in the long-term, the question is, should we do anything about this issue in the short term.
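On the startup-time question, a rough way to get a number (a quick sketch, not a rigorous benchmark — comparing a build with and without the scripted decompositions would still require two environments):
```python
import subprocess
import sys

# Run in a fresh interpreter so module caching doesn't hide the cost.
code = "import time; t0 = time.time(); import torch; print(time.time() - t0)"
out = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)
print("import torch took", out.stdout.strip(), "seconds")
```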
cc @soulitzer @malfet
| 3 |
4,696 | 85,505 |
The reload model has different (and strange) forward computation from original model with `LSTMCell`
|
oncall: jit
|
### 🐛 Describe the bug
The reloaded module has a different (and strange) forward computation from the original module with `LSTMCell`
```py
import torch
device = 'cuda'
torch.random.manual_seed(64383)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
input_size = 10
hidden_size = 20
arg_class = torch.nn.LSTMCell(input_size, hidden_size, device=device)
inp_1_0 = torch.empty([2, 20], dtype=torch.float32, device=device)
inp_1_0.uniform_(-64, 63)
inp_1_1 = torch.empty([2, 20], dtype=torch.float32, device=device)
inp_1_1.uniform_(-1, 1)
self.arg_class = arg_class
self.input_1 = [inp_1_0, inp_1_1]
def forward(self, input_0):
arg_class = self.arg_class
input_1 = self.input_1
fn_res = arg_class(input_0, input_1)
return fn_res
input = torch.empty([2, 10], dtype=torch.float32, device=device)
input.uniform_(-16, 127)
m = M().to(device)
jit_m = torch.jit.trace(m, input.clone())
try:
torch.autograd.functional.jacobian(jit_m, (input.clone().requires_grad_(), ), vectorize=True, strategy='reverse-mode')
torch.autograd.functional.jacobian(jit_m, (input.clone().requires_grad_(), ), vectorize=True, strategy='forward-mode')
except Exception as e:
print(e)
import io
buffer = io.BytesIO()
torch.jit.save(jit_m, buffer)
buffer.seek(0)
new_m = torch.jit.load(buffer)
try:
torch.autograd.functional.jacobian(new_m, (input.clone().requires_grad_(), ), vectorize=True, strategy='reverse-mode')
print(torch.autograd.functional.jacobian(new_m, (input.clone().requires_grad_(), ), vectorize=True, strategy='forward-mode'))
except Exception as e:
print(e)
```
```
RuntimeError: Trying to use forward AD with _thnn_fused_lstm_cell that does not support it because it has not been implemented yet.
Please file an issue to PyTorch at https://github.com/pytorch/pytorch/issues/new?template=feature-request.yml so that we can prioritize its implementation.
Note that forward AD support for some operators require PyTorch to be built with TorchScript and for JIT to be enabled. If the environment var PYTORCH_JIT=0 is set or if the library is not built with TorchScript, some operators may no longer be used with forward AD.
[[tensor([[[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]],
[[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]], device='cuda:0')], [tensor([[[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]],
[[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]]], device='cuda:0')]]
```
The reloaded model returns a wrong gradient in forward mode after a reverse-mode computation. By contrast, the original model raises a "Not implemented" error.
More interestingly, without the preceding reverse-mode computation, the reloaded model raises the same "Not implemented" error as the original model. Thus, it seems that both reloading and the reverse-mode computation affect this issue.
### Versions
```
PyTorch version: 1.13.0.dev20220919
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 510.68.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220919
[pip3] torchaudio==0.13.0.dev20220919
[pip3] torchvision==0.14.0.dev20220919
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 py39h6c91a56_0
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20220919 py3.9_cuda11.6_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0.dev20220916+cpu pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220919 py39_cu116 pytorch-nightly
[conda] torchvision 0.14.0.dev20220916+cpu pypi_0 pypi
```
| 0 |
4,697 | 85,499 |
Execute smoke test for Better Transformer feature
|
module: ci, triaged
|
### 🚀 The feature, motivation and pitch
Run the smoke test with the RC binary (available on September 31) on A100 machines.
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 1 |
4,698 | 93,686 |
Unexpected AssertionError when exporting tensor.numel
|
triaged, bug, oncall: pt2
|
@voznesenskym Hi, I got this unexpected result when I was playing with the export API.
Repro 1:
```python
import torch
import torchdynamo
def f(x: torch.Tensor):
return x.numel()
torchdynamo.config.capture_scalar_outputs = True
print(torchdynamo.explain(f, torch.ones(1)))
torchdynamo.export(f, torch.ones(1))
```
The above code outputs the following:
```
('Dynamo produced 0 graphswith -1 graph break and 0 ops\n Break reasons: \n\nTorchDynamo compilation metrics:\nFunction Runtimes (s)\n--------------------------------------------------- --------------\nconvert_frame_assert.<locals>._convert_frame_assert 0.0057', [], [], [], [])
Traceback (most recent call last):
File "test_dynamo.py", line 8, in <module>
torchdynamo.export(f, torch.ones(1))
File "/usr/lib/python3.8/site-packages/torchdynamo-1.13.0.dev0-py3.8-linux-x86_64.egg/torchdynamo/eval_frame.py", line 508, in export
assert graph is not None, "whole graph export entails exactly one call"
AssertionError: whole graph export entails exactly one call
```
Try Repro 2:
```python
import torch
import torchdynamo
def f(x: torch.Tensor):
return x.numel() + x.sin().numel()
torchdynamo.config.capture_scalar_outputs = True
print(torchdynamo.explain(f, torch.ones(1)))
torchdynamo.export(f, torch.ones(1))
```
The above code produces the following error:
```
('Dynamo produced 1 graphswith 0 graph break and 0 ops\n Break reasons: \n\nTorchDynamo compilation metrics:\nFunction Runtimes (s)\n--------------------------------------------------- --------------\nconvert_frame_assert.<locals>._convert_frame_assert 0.0084', [{Guard(name='x', source=<GuardSource.LOCAL: 0>, create_fn=<function GuardBuilder.TENSOR_MATCH at 0x7fe53c4c8f70>, is_volatile=False, guard_types=['TENSOR_MATCH'], code_list=None, obj_weakref=<weakref at 0x7fe53d444130; dead>, guarded_class_weakref=<weakref at 0x7fe5452804a0; to 'torch._C._TensorMeta' at 0x55f9ccd80230 (Tensor)>)}], [GraphModule()], [[]], [])
flat both [tensor([1.])] flat_results_traced [2]
Traceback (most recent call last):
File "test_dynamo.py", line 8, in <module>
torchdynamo.export(f, torch.ones(1))
File "/usr/lib/python3.8/site-packages/torchdynamo-1.13.0.dev0-py3.8-linux-x86_64.egg/torchdynamo/eval_frame.py", line 517, in export
matched_output_elements_positions = produce_matching(flat_both, flat_results_traced)
File "/usr/lib/python3.8/site-packages/torchdynamo-1.13.0.dev0-py3.8-linux-x86_64.egg/torchdynamo/eval_frame.py", line 464, in produce_matching
assert (
AssertionError: Dynamo input and output is a strict subset of traced input/output
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 4 |
4,699 | 85,492 |
[AO] In sparsity schedulers, rename `last_epoch` to `steps`
|
oncall: quantization, triaged
|
It's safer to define t, dt, n and t_0 in terms of steps instead of epochs, because when the data is large (takes days to run 1 epoch), users might want to prune within one epoch.
_Originally posted by @junesg in https://github.com/pytorch/pytorch/pull/85232#discussion_r977067386_
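A minimal sketch of the idea (illustrative only; the class and attribute names are assumptions, not the actual AO scheduler API) — counting optimizer steps lets the sparsity level update several times within one long epoch:
```python
class StepBasedSparsityScheduler:
    """Updates sparsity every `dt` steps after a warm-up of `t_0` steps."""

    def __init__(self, t_0: int = 1000, dt: int = 500, n: int = 10):
        self.t_0, self.dt, self.n = t_0, dt, n
        self.steps = 0

    def step(self) -> bool:
        # Called once per optimizer step instead of once per epoch.
        self.steps += 1
        past_warmup = self.steps >= self.t_0
        on_boundary = (self.steps - self.t_0) % self.dt == 0
        return past_warmup and on_boundary  # True -> caller updates sparsity level
```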
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo
| 0 |
4,700 | 85,485 |
`max_pool2d_with_indices(self, ...)` shouldn't need to save `self` for backward
|
module: autograd, triaged
|
## Issue description
`max_pool2d_with_indices(self, ...)` saves self for backward: https://github.com/pytorch/pytorch/blob/bcf93181a0ca5db75bd038db0d5f7e4cee733db7/tools/autograd/derivatives.yaml#L2208-L2209 .
It shouldn't be necessary to save the storage of self for backward. max_pool2d is an operation that takes a big tensor and selects elements from it; the backward pass should just involve taking the grad_output and the indices and scattering it into a tensor of the input's shape.
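To make the scattering idea concrete, a small eager-mode sketch (an illustration, not the actual derivative implementation): the gradient can be rebuilt from `grad_output` and the returned indices alone, without touching the input's storage.
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8, requires_grad=True)
out, idx = F.max_pool2d(x, kernel_size=2, stride=2, return_indices=True)
grad_out = torch.ones_like(out)

# Indices are flattened over the spatial dims of the input, per (N, C) plane,
# so scatter-add over a (N, C, H*W) view reconstructs the input gradient.
n, c, h, w = x.shape
grad_in = torch.zeros(n, c, h * w)
grad_in.scatter_add_(2, idx.reshape(n, c, -1), grad_out.reshape(n, c, -1))
grad_in = grad_in.reshape(n, c, h, w)

# Cross-check against autograd.
out.backward(grad_out)
print(torch.allclose(grad_in, x.grad))
```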
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 12 |