Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
4,501 | 86,919 |
Importing torch 1.12.0 breaks subprocess module
|
needs reproduction, oncall: binaries, triaged, module: macos
|
### 🐛 Describe the bug
With torch `1.12.0` just doing `import torch` breaks the `subprocess` standard module. However this works fine with the previous version of torch (`1.11.0`).
Here is my Python test code:
```python
import subprocess
print(subprocess.run(["echo", "foo"]).returncode)
import torch
print(subprocess.run(["echo", "foo"]).returncode)
```
From a fresh environment:
```
$ pip install torch==1.11.0
Collecting torch==1.11.0
Downloading torch-1.11.0-cp39-none-macosx_10_9_x86_64.whl (129.9 MB)
|████████████████████████████████| 129.9 MB 160 kB/s
Collecting typing-extensions
Using cached typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Installing collected packages: typing-extensions, torch
Successfully installed torch-1.11.0 typing-extensions-4.4.0
$
$ python ./test.py # this works
foo
0
/Users/tobyroseman/miniconda3/lib/python3.9/site-packages/torch/_masked/__init__.py:223: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /Users/distiller/project/pytorch/torch/csrc/utils/tensor_numpy.cpp:68.)
example_input = torch.tensor([[-3, -2, -1], [0, 1, 2]])
foo
0
$
$ pip install torch==1.12.0
Collecting torch==1.12.0
Downloading torch-1.12.0-cp39-none-macosx_10_9_x86_64.whl (133.6 MB)
|████████████████████████████████| 133.6 MB 7.4 MB/s
Requirement already satisfied: typing-extensions in /Users/tobyroseman/miniconda3/lib/python3.9/site-packages (from torch==1.12.0) (4.4.0)
Installing collected packages: torch
Attempting uninstall: torch
Found existing installation: torch 1.11.0
Uninstalling torch-1.11.0:
Successfully uninstalled torch-1.11.0
Successfully installed torch-1.12.0
$
$
$ python ./test.py # this doesn't work after importing torch
foo
0
-4
```
I'm seeing this behavior on an Intel-based MacBook Pro. However, it works on an M1 MacBook Pro.
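For context, a negative `returncode` from `subprocess.run` means the child process was terminated by a signal; a minimal check like the following (not part of the original repro) maps `-4` to `SIGILL`:
```python
import signal
import subprocess

proc = subprocess.run(["echo", "foo"])
if proc.returncode < 0:
    # Per the subprocess docs, a return code of -N means the child was killed by signal N.
    print("child killed by", signal.Signals(-proc.returncode).name)  # -4 -> SIGILL
```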
### Versions
With torch `1.12.0` installed:
Collecting environment information...
Traceback (most recent call last):
File "/private/tmp/collect_env.py", line 505, in <module>
main()
File "/private/tmp/collect_env.py", line 488, in main
output = get_pretty_env_info()
File "/private/tmp/collect_env.py", line 483, in get_pretty_env_info
return pretty_str(get_env_info())
File "/private/tmp/collect_env.py", line 330, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
File "/private/tmp/collect_env.py", line 302, in get_pip_packages
out = run_with_pip(sys.executable + ' -mpip')
File "/private/tmp/collect_env.py", line 290, in run_with_pip
for line in out.splitlines()
AttributeError: 'NoneType' object has no attribute 'splitlines'
With torch `1.11.0` installed:
/Users/tobyroseman/miniconda3/lib/python3.9/site-packages/torch/_masked/__init__.py:223: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /Users/distiller/project/pytorch/torch/csrc/utils/tensor_numpy.cpp:68.)
example_input = torch.tensor([[-3, -2, -1], [0, 1, 2]])
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3 (x86_64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.3)
CMake version: version 3.12.1
Libc version: N/A
Python version: 3.9.12 (main, Apr 5 2022, 01:53:17) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] torch==1.11.0
[conda] torch 1.11.0 pypi_0 pypi
cc @ezyang @seemethere @malfet @albanD
| 2 |
4,502 | 86,918 |
torch.cat on empty tensor is bogus
|
triaged, module: edge cases
|
### 🐛 Describe the bug
torch.cat on Tensors of shape (0,) ignores the dim argument to cat:
```
x = torch.randn([0])
result = torch.cat([x, x], dim=999)
```
It should most likely error in this case.
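A minimal sketch of the kind of bounds check one would expect (an illustration of the desired behavior, not the actual ATen implementation):
```python
import torch

def check_cat_dim(tensors, dim):
    # Validate `dim` against the rank of the inputs, even when they are empty.
    ndim = tensors[0].dim()
    if not -ndim <= dim < ndim:
        raise IndexError(
            f"Dimension out of range (expected to be in range of "
            f"[{-ndim}, {ndim - 1}], but got {dim})"
        )
    return dim % ndim if ndim else 0

x = torch.randn([0])
check_cat_dim([x, x], dim=999)  # raises IndexError, unlike torch.cat today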
### Versions
master
| 3 |
4,503 | 86,910 |
[FSDP] Investigate `torch.cuda.current_stream()` usage in post-backward
|
oncall: distributed, triaged, module: fsdp
|
In FSDP's post-backward hook, `torch.cuda.current_stream()` sometimes returns the default computation stream and sometimes returns the all-gather/unshard stream, even though the execution should not be in the all-gather/unshard stream.
There is a `self._streams["post_backward"].wait_stream(torch.cuda.current_stream())` call before the gradient reduction that I cannot explain; the call originates from the initial Fairscale PR introducing FSDP. Changing it to `wait_stream(self._streams["computation"])` breaks unit tests (computation correctness), while changing it to `wait_stream(self._streams["unshard"])` seems to work (but needs to be verified thoroughly).
This is a conceptual problem and does not block any progress.
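For reference, a minimal sketch of the stream-synchronization pattern under discussion (requires a CUDA device; the stream names here are illustrative, not FSDP's actual attributes):
```python
import torch

computation = torch.cuda.current_stream()
unshard = torch.cuda.Stream()
post_backward = torch.cuda.Stream()

# wait_stream(s): work submitted to `post_backward` after this call waits for all
# work already submitted to `s`. Which stream the post-backward hook should wait
# on is exactly the open question.
post_backward.wait_stream(computation)  # what the code does today (current_stream())
# post_backward.wait_stream(unshard)    # the variant that appears to pass unit tests
```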
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
4,504 | 86,890 |
View-based advanced indexing (Integer array/LongTensor indexing) of nested_tensor
|
feature, triaged, module: advanced indexing
|
### 🚀 The feature, motivation and pitch
Integer array/LongTensor indexing is a type of advanced indexing and as such typically returns a copy of a tensor. This feature request proposes that something along these lines, or index_select, be implemented for nested_tensor and made to return a view.
This feature has been requested a few times for normal tensors, e.g. here https://discuss.pytorch.org/t/index-select-same-storage-as-the-original-tensor/4368/4 and here https://discuss.pytorch.org/t/tensor-slice-views/24694/5 -- both times it was noted that this does not fit with the normal strided tensor layout. However, it does seem that this should be possible for nested_tensor. One area where it is relevant is multi-task learning, e.g. different classification heads with different numbers of labels. In this context, multiple heads could be stored in a nested_tensor. We may receive a batch with task_ids corresponding to inputs and want to do something along the lines of `inputs ⊕ weights[task_ids]`. At a cursory glance, it appears such a view is possible with nested_tensor. In fact, it looks like this is what `_nested_view_from_buffer` in the C++ API allows.
### Alternatives
The current workaround would be to use a list comprehension on the index array and reconstruct a nested_tensor with copies of the data.
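A sketch of that copy-based workaround (the shapes and `task_ids` below are made up for illustration):
```python
import torch

# Two "heads" with different numbers of labels, stored as a nested tensor.
weights = torch.nested.nested_tensor([torch.randn(3, 8), torch.randn(5, 8)])
task_ids = torch.tensor([1, 0, 1])

# Unbind into regular tensors, index with the task ids, and re-pack (copies data).
parts = weights.unbind()
selected = torch.nested.nested_tensor([parts[i] for i in task_ids.tolist()])
```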
### Additional context
_No response_
| 1 |
4,505 | 86,888 |
Broadcasting add for nested_tensor
|
triaged, module: nestedtensor
|
### 🚀 The feature, motivation and pitch
It would be useful, when implementing multiple cutoff-based ordinal regression heads of the CORAL variety (https://github.com/Raschka-research-group/coral-pytorch), to be able to do broadcasting addition, like so:
```
>>> import torch
>>> import torch.nested
>>> torch.nested.as_nested_tensor([torch.tensor([3.4, 2.3, 4.5])]) + torch.nested.nested_tensor([torch.tensor([-2.3, 0.4, 5.3]), torch.tensor([-2.3, 0.4, 5.3])])
/home/frankier/edu/doc/bert_ordinal/nestedplay/.venv/lib/python3.10/site-packages/torch/nested/__init__.py:84: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:175.)
return torch._nested_tensor_from_tensor_list(tensor_list, dtype, None, device, None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: add does not support broadcasting when given a NestedTensor
```
### Alternatives
Copy stuff like so (current workaround):
```
>>> torch.nested.as_nested_tensor([torch.tensor([3.4, 2.3, 4.5]), torch.tensor([3.4, 2.3, 4.5])]) + torch.nested.nested_tensor([torch.tensor([-2.3, 0.4, 5.3]), torch.tensor([-2.3, 0.4, 5.3])])
nested_tensor([
tensor([1.1000, 2.7000, 9.8000]),
tensor([1.1000, 2.7000, 9.8000])
])
```
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @mikaylagawarecki
| 2 |
4,506 | 86,887 |
DISABLED test_variant_consistency_jit_linalg_lu_factor_ex_cuda_complex64 (__main__.TestJitCUDA)
|
triaged, module: flaky-tests, skipped, module: unknown
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_variant_consistency_jit_linalg_lu_factor_ex_cuda_complex64&suite=TestJitCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/8859723172).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 failure and 1 success.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_variant_consistency_jit_linalg_lu_factor_ex_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
| 8 |
4,507 | 86,877 |
compile torch from source
|
needs reproduction, module: build, triaged
|
### 🐛 Describe the bug
The CUDA version on my local machine (system env) is 11.7, but it is 11.3 in my conda env. I set **_CUDA_HOME=/usr/local/cuda_** when compiling torch, but the build failed... It only finished successfully when I downgraded the CUDA version on the local machine or upgraded the CUDA version in my conda env. In other words, I have to keep the CUDA version on the local machine consistent with the one inside the conda env, which is extremely inconvenient for me...
What can I do to solve it? THANKS
### Versions
torch 1.12.0
cc @malfet @seemethere
| 3 |
4,508 | 93,531 |
TorchInductor CPU Performance Dashboard
|
triaged, oncall: pt2, module: cpu inductor
|
Dashboard to track the performance of torchinductor on CPU.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 279 |
4,509 | 86,849 |
`torch.distributed.all_reduce` allocates excess GPU memory when using NCCL backend
|
oncall: distributed, module: nccl, has workaround
|
### 🐛 Describe the bug
### Problem
If I perform the following steps in a distributed setting (NCCL backend):
1. Create two tensors on two workers, one tensor on GPU0 and the other on GPU1.
2. Run `torch.distributed.all_reduce` on these tensors across the two processes.
Then this will result in the GPU1 worker allocating a non-trivial amount of memory (~800mb) on GPU0.
This problem seems to occur when using torch 1.12.1 with CUDA 11.3 but does not occur when I use torch 1.9.0 with CUDA 11.1.
### Minimal reproducing example
The following script will reproduce this behavior, e.g. if you run this script and watch `nvidia-smi` you will see that GPU0 allocates roughly 2x the memory of GPU1.
```python
import time
import torch
import torch.multiprocessing as mp
mp = mp.get_context("forkserver") # Also tried with "spawn"
import torch.distributed as dist
import socket
from contextlib import closing
from torch import multiprocessing as mp
def find_free_port(address: str = "127.0.0.1") -> int:
"""Helper function to find a free port for distributed tests below"""
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind((address, 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
port = s.getsockname()[1]
return port
def init_and_run_all_reduce(worker_ind: int, port: int):
print(f"Worker {worker_ind}: started")
print(f"Worker {worker_ind}: creating TCP store")
store = torch.distributed.TCPStore(
host_name="127.0.0.1", port=port, world_size=2, is_master=worker_ind == 0,
)
print(f"Worker {worker_ind}: Starting process group")
dist.init_process_group(
backend="nccl", store=store, rank=worker_ind, world_size=2,
)
device = torch.device(worker_ind)
t = torch.tensor([worker_ind], device=device)
print(f"Worker {worker_ind}: Creating tensor {t} and running all_reduce on it")
dist.all_reduce(t)
print(f"Worker {worker_ind}: Done, reduced tensor: {t}")
print(f"Worker {worker_ind} entering 10 second sleep")
time.sleep(10)
dist.barrier()
if __name__ == "__main__":
processes = []
port = find_free_port()
for i in range(2):
processes.append(
mp.Process(
target=init_and_run_all_reduce,
kwargs={"worker_ind": i, "port": port},
)
)
processes[-1].start()
time.sleep(1)
for p in processes:
p.join()
```
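A mitigation that is often suggested for this pattern (untested here, so treat it as an assumption rather than a confirmed fix) is to pin each worker to its own device before any CUDA/NCCL work, so that the communicator and any implicit barrier do not initialize a context on GPU 0:
```python
def init_and_run_all_reduce(worker_ind: int, port: int):
    # Pin this process to its own GPU *before* creating the process group.
    torch.cuda.set_device(worker_ind)
    ...  # same TCPStore / init_process_group / all_reduce code as above
    # Passing device_ids also keeps the barrier off the default device.
    dist.barrier(device_ids=[worker_ind])
```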
### Versions
```
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 495.29.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 py39h6c91a56_0
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.12.1 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.13.1 py39_cu113 pytorch
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 5 |
4,510 | 86,848 |
.view(dtype) on a quantized tensor throws SegmentationFault
|
oncall: quantization, triaged
|
### 🐛 Describe the bug
If you try to view a quantized tensor as any other data type, Python crashes with a segmentation fault. Here is a repro:
```python
x_fp = torch.arange(32, dtype=torch.int8).view(torch.float32) # This works fine
print(x_fp.view(torch.int8)) # This works fine
qx = torch.quantize_per_tensor(x_fp, 1-2, 0, torch.qint8)
print(qx.view(torch.float32)) # This dies with a Segmentation Fault
```
## Expected behavior
Either of two options:
1. Throw a meaningful error message
2. Change the view of the tensor's storage
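As a possible interim workaround (an assumption, not a documented API guarantee), the underlying integer storage can be reached through `int_repr()`, which returns a regular `int8` tensor that supports `.view(dtype)` (the scale below is illustrative):
```python
import torch

x_fp = torch.arange(32, dtype=torch.int8).view(torch.float32)
qx = torch.quantize_per_tensor(x_fp, 0.1, 0, torch.qint8)

# View the raw int8 storage instead of the quantized tensor itself.
raw = qx.int_repr()             # plain int8 tensor holding the quantized values
print(raw.view(torch.float32))  # reinterprets the bytes without a segfault
```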
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0a0+gite605e72
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.1
Libc version: glibc-2.27
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: 11.3.109
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] torch==1.13.0a0+git76148f7
[conda] magma-cuda113 2.5.2 1 pytorch
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.21.6 pypi_0 pypi
[conda] numpy-base 1.21.5 py39hb8be1f0_1
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] torch 1.13.0a0+git76148f7 dev_0 <develop>
```
| 1 |
4,511 | 86,830 |
Distributed collective ops fail in `inference_mode` for CPU-only
|
oncall: distributed, triaged, module: c10d
|
### 🐛 Describe the bug
Calling collectives inside `@torch.inference_mode()` fails for gloo but works for nccl.
Repro, call using torchrun:
```
import torch.distributed as dist
import torch
@torch.inference_mode()
def test_inference():
tensor_list = [torch.zeros(1) for _ in range(dist.get_world_size())]
tensor = torch.tensor([dist.get_rank()], dtype=torch.float32)
dist.all_gather(tensor_list, tensor)
print(tensor_list)
def main():
dist.init_process_group("gloo")
print('rank', dist.get_rank())
test_inference()
if __name__ == '__main__':
main()
```
From discussion with @mrshenli and @albanD:
> The error comes from the fact that a Tensor is written inplace without inference mode being enabled.
My first guess would be that gloo is using a worker thread but does not propagate the current TLS from the user thread to this worker thread. In particular, inference mode status is not propagated and thus this error.
> yep Gloo uses threads to send data, And yep, Gloo [does not propagate TLS](https://github.com/pytorch/pytorch/blob/86f914e9966e91b3d3e7c1504f5b1f00a9498d88/torch/csrc/distributed/c10d/ProcessGroupGloo.cpp#L772-L774)
> You can use the utilities from [https://github.com/.../aten/src/ATen/ThreadLocalState.h](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/ThreadLocalState.h) to capture the current state and restore it in the worker thread.
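Until the TLS propagation is fixed, one possible user-side workaround (a sketch, untested) is to allocate the tensors the collective writes into outside of inference mode, so gloo's worker thread never has to mutate an inference tensor:
```python
@torch.inference_mode()
def test_inference():
    with torch.inference_mode(False):
        # Ordinary (non-inference) tensors, so the gloo worker thread may write them.
        tensor_list = [torch.zeros(1) for _ in range(dist.get_world_size())]
        tensor = torch.tensor([dist.get_rank()], dtype=torch.float32)
        dist.all_gather(tensor_list, tensor)
    print(tensor_list)
```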
### Versions
Latest master
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 3 |
4,512 | 86,819 |
Could not run select_backward [vmap] [dlrm] [functorch]
|
triaged, module: functorch
|
### 🐛 Describe the bug
What I am trying to do:
1. Create an input and target tensor
2. Create a list of models for ensemble (dlrm models from torchbench)
3. Use vmap (from functorch) to create an ensemble
4. Obtain the vmap function and pass it to aot_function (aot_autograd), to obtain the forward and backward graphs
NOTE: This workflow has been tested for hf_Bert, resnet and GPT2, and works fine for them.
Repro:
```
import torch
import torch.fx as fx
from functorch import (combine_state_for_ensemble,
make_functional_with_buffers, vmap)
from functorch._src.named_members_polyfill import (_named_buffers,
_named_parameters)
from functorch.compile import aot_function, aot_module, config, print_compile
from torch._subclasses import FakeTensor, FakeTensorMode
config.use_fake_tensor = True
from torchbenchmark import load_model_by_name
def fake_compiler(fx_g: fx.GraphModule, inps):
print(fx_g.code)
output_node = [node for node in fx_g.graph.nodes if node.op == 'output'][0]
output_data = [node.meta['val'] if node is not None else node for node in output_node.args[0]]
def new_f(*args):
return output_data
return new_f
Model = load_model_by_name("dlrm")
num_models = 2
batch_size = 1000
model_list = [
Model(device="cuda", test="train", batch_size = batch_size) for _ in range(num_models)
]
b_models = [model_list[i].model for i in range(num_models)]
inp = model_list[0].example_inputs
loss_fn = model_list[0].loss_fn
targets = model_list[0].targets
print(torch.cuda.memory_allocated())
func_model, params, buffers = combine_state_for_ensemble(b_models)
for p in params:
p.requires_grad = True
def compute_loss_dlrm(params, buffers, batch, targets):
gen = func_model(params, buffers, *batch)
loss = loss_fn(gen, targets)
return loss
parallel_func = vmap(compute_loss_dlrm, in_dims=(0, 0, None, None), randomness="same")
aot_func = aot_function(parallel_func, fake_compiler)
out = aot_func(params, buffers, inp, targets)
print(out.size())
print(type(out))
print(out)
print(out.device)
out.sum().backward()
```
Error: Stack Trace
```
/scratch/sanketpurandare/work/pytorch/torch/nn/functional.py:2388: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::embedding_bag.padding_idx. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/functorch/BatchedFallback.cpp:84.)
ret, _, _, _ = torch.embedding_bag(
/scratch/sanketpurandare/work/pytorch/torch/nn/functional.py:2388: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::embedding_bag.padding_idx. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/functorch/BatchedFallback.cpp:84.)
ret, _, _, _ = torch.embedding_bag(
Traceback (most recent call last):
File "/scratch/sanketpurandare/work/pytorch/torch/_subclasses/fake_tensor.py", line 821, in __torch_dispatch__
r = func(*args, **kwargs)
File "/scratch/sanketpurandare/work/pytorch/torch/_ops.py", line 257, in __call__
return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'aten::select_backward' with arguments from the 'SparseMeta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::select_backward' is only available for these backends: [CPU, CUDA, HIP, MPS, IPU, XPU, HPU, VE, Meta, PrivateUse1, PrivateUse2, PrivateUse3, FPGA, ORT, Vulkan, Metal, QuantizedCPU, QuantizedCUDA, QuantizedHIP, QuantizedMPS, QuantizedIPU, QuantizedXPU, QuantizedHPU, QuantizedVE, QuantizedMeta, QuantizedPrivateUse1, QuantizedPrivateUse2, QuantizedPrivateUse3, CustomRNGKeyId, MkldnnCPU, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
Undefined: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
CPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
CUDA: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
HIP: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
MPS: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
IPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
XPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
HPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
VE: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
Meta: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
PrivateUse1: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
PrivateUse2: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
PrivateUse3: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
FPGA: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
ORT: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
Vulkan: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
Metal: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedCPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedCUDA: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedHIP: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedMPS: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedIPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedXPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedHPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedVE: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedMeta: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedPrivateUse1: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedPrivateUse2: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
QuantizedPrivateUse3: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
CustomRNGKeyId: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
MkldnnCPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
SparseCsrCPU: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
SparseCsrCUDA: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel]
BackendSelect: fallthrough registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:140 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:488 [backend fallback]
Functionalize: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:291 [backend fallback]
Named: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradCPU: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradCUDA: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradHIP: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradXLA: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradMPS: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradIPU: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradXPU: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradHPU: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradVE: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradLazy: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradMeta: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradPrivateUse1: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradPrivateUse2: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradPrivateUse3: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
AutogradNestedTensor: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:14589 [autograd kernel]
Tracer: registered at /scratch/sanketpurandare/work/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:15586 [kernel]
AutocastCPU: fallthrough registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/autocast_mode.cpp:460 [backend fallback]
AutocastCUDA: fallthrough registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/autocast_mode.cpp:331 [backend fallback]
FuncTorchBatched: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/functorch/BatchRulesViews.cpp:512 [kernel]
FuncTorchVmapMode: fallthrough registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/BatchingRegistrations.cpp:1068 [kernel]
VmapMode: fallthrough registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:189 [backend fallback]
PythonTLSSnapshot: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:484 [backend fallback]
PythonDispatcher: registered at /scratch/sanketpurandare/work/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/scratch/sanketpurandare/work/testing/test_patch.py", line 59, in <module>
out = aot_func(params, buffers, inp, targets)
File "/scratch/sanketpurandare/work/pytorch/functorch/_src/aot_autograd.py", line 671, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/sanketpurandare/work/pytorch/functorch/_src/aot_autograd.py", line 524, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/sanketpurandare/work/pytorch/functorch/_src/aot_autograd.py", line 367, in aot_dispatch_autograd
fx_g = make_fx(joint_forward_backward)(*joint_inputs)
File "/scratch/sanketpurandare/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 660, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/sanketpurandare/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 408, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/sanketpurandare/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/sanketpurandare/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/sanketpurandare/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in wrapped
out = f(*tensors)
File "/scratch/sanketpurandare/work/pytorch/functorch/_src/aot_autograd.py", line 166, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/sanketpurandare/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/scratch/sanketpurandare/work/pytorch/torch/utils/_python_dispatch.py", line 101, in __torch_dispatch__
return old.__torch_dispatch__(func, types, args, kwargs)
File "/scratch/sanketpurandare/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 448, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/scratch/sanketpurandare/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 473, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/scratch/sanketpurandare/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 312, in proxy_call
out = func(*args, **kwargs)
File "/scratch/sanketpurandare/work/pytorch/torch/_ops.py", line 257, in __call__
return self._op(*args, **kwargs or {})
File "/scratch/sanketpurandare/work/pytorch/torch/utils/_python_dispatch.py", line 101, in __torch_dispatch__
return old.__torch_dispatch__(func, types, args, kwargs)
File "/scratch/sanketpurandare/work/pytorch/torch/_subclasses/fake_tensor.py", line 826, in __torch_dispatch__
return run_fallback_kernel(
File "/scratch/sanketpurandare/work/pytorch/torch/_subclasses/fake_tensor.py", line 949, in run_fallback_kernel
r = func(*args, **kwargs)
File "/scratch/sanketpurandare/work/pytorch/torch/_ops.py", line 257, in __call__
return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'aten::select_backward' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::select_backward' is only available for these backends: [CPU, CUDA, HIP, MPS, IPU, XPU, HPU, VE, Meta, PrivateUse1, PrivateUse2, PrivateUse3, FPGA, ORT, Vulkan, Metal, QuantizedCPU, QuantizedCUDA, QuantizedHIP, QuantizedMPS, QuantizedIPU, QuantizedXPU, QuantizedHPU, QuantizedVE, QuantizedMeta, QuantizedPrivateUse1, QuantizedPrivateUse2, QuantizedPrivateUse3, CustomRNGKeyId, MkldnnCPU, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
Undefined: registered at /scratch/sanketpurandare/work/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:21389 [default backend kernel
```
### Versions
Collecting environment information...
PyTorch version: 1.14.0a0+git97de281
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.5
Libc version: glibc-2.27
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.14.0a0+git97de281
[pip3] torchaudio==0.12.0a0+5e96671
[pip3] torchmetrics==0.9.1
[pip3] torchrec-nightly==2022.4.26
[pip3] torchtext==0.14.0a0+8b35599
[pip3] torchvision==0.14.0a0+d1b2f4a
[pip3] torchx-nightly==2022.6.15
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] torch 1.14.0a0+git97de281 dev_0 <develop>
[conda] torchaudio 0.12.0a0+5e96671 dev_0 <develop>
[conda] torchdynamo 0.2.0 dev_0 <develop>
[conda] torchmetrics 0.9.1 pypi_0 pypi
[conda] torchrec-nightly 2022.4.26 pypi_0 pypi
[conda] torchtext 0.14.0a0+8b35599 dev_0 <develop>
[conda] torchvision 0.14.0a0+d1b2f4a dev_0 <develop>
[conda] torchx-nightly 2022.6.15 pypi_0 pyp
cc @zou3519 @Chillee @samdow @soumith
| 3 |
4,513 | 86,818 |
Forward hooks for ScriptModules
|
oncall: jit
|
Is it possible to add support for registering forward hooks for ScriptModules? <br>
**Use case:** I am trying to get activations from an intermediate layer of a pretrained model that was JIT scripted. It would be nice to have the support for forward hooks on the scripted model, in order to avoid copying the state dict from the JIT model to nn.Sequential model and run forward pass through the Sequential model.
Is there any alternative solution I can try?
| 0 |
4,514 | 86,816 |
JIT model returns different value on cpu with uniform-initialized input
|
oncall: jit
|
### 🐛 Describe the bug
JIT model returns different value on cpu with uniform-initialized input
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, inp):
inp1 = torch.div(inp, torch.tensor(-16, dtype=torch.float32, device='cpu'))
inp = torch.nn.functional.tanhshrink(inp1)
fn_res = inp.abs()
fn_res_1 = fn_res / torch.nn.functional.tanhshrink(fn_res)
return fn_res_1
fn = M().to('cpu')
torch.random.manual_seed(31195)
inp = torch.empty([10], dtype=torch.float32, memory_format=torch.contiguous_format)
inp.uniform_(-4, 7)
print(inp)
print(fn(inp.clone()))
jit_fn = torch.jit.trace(fn, (inp.clone(),))
print(jit_fn(inp.clone()))
```
```
inp:
tensor([ 2.6257, 6.6212, -0.4189, -0.8199, 2.5157, 6.4133, 5.3943, 6.7126,
-1.9639, -3.2350])
normal model output:
tensor([1.3910e+06, 6.1375e+03, inf, inf, 1.8369e+06, 7.3721e+03,
2.0092e+04, 5.6738e+03, **1.0527e+07**, 4.0147e+05])
JIT model output:
tensor([1391032.8750, 6137.4810, inf, inf, 1836885.3750,
7372.0679, 20091.6387, 5673.8164, **5263488.0000**, 415805.7188])
```
However, when I call the model with an `inp` constructed directly with the same values (without uniform initialization), the outputs are the same:
```py
inp = torch.tensor([ 2.6257, 6.6212, -0.4189, -0.8199, 2.5157, 6.4133, 5.3943, 6.7126,
-1.9639, -3.2350], dtype=torch.float32,)
print(fn(inp.clone()))
jit_fn = torch.jit.trace(fn, (inp.clone(),))
print(jit_fn(inp.clone()))
```
```
tensor([1.3911e+06, 6.1375e+03, inf, inf, 1.8368e+06, 7.3719e+03,
2.0091e+04, 5.6739e+03, 1.0527e+07, 4.0148e+05])
tensor([1.3911e+06, 6.1375e+03, inf, inf, 1.8368e+06, 7.3719e+03,
2.0091e+04, 5.6739e+03, 1.0527e+07, 4.1581e+05])
```
Besides, this issue only happens on cpu
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, inp):
inp1 = torch.div(inp, torch.tensor(-16, dtype=torch.float32, device='cuda'))
inp = torch.nn.functional.tanhshrink(inp1)
fn_res = inp.abs()
fn_res_1 = fn_res / torch.nn.functional.tanhshrink(fn_res)
return fn_res_1
fn = M().to('cuda')
torch.random.manual_seed(31195)
inp = torch.empty([10], dtype=torch.float32, memory_format=torch.contiguous_format)
inp.uniform_(-4, 7)
inp = inp.to('cuda')
print(fn(inp.clone()))
jit_fn = torch.jit.trace(fn, (inp.clone(),))
print(jit_fn(inp.clone()))
```
On cuda, the results are the same
```
tensor([1.3910e+06, 6.1375e+03, inf, inf, 1.8369e+06, 7.3721e+03,
2.0092e+04, 5.6738e+03, 1.0527e+07, 4.0147e+05], device='cuda:0')
tensor([1.3910e+06, 6.1375e+03, inf, inf, 1.8369e+06, 7.3721e+03,
2.0092e+04, 5.6738e+03, 1.0527e+07, 4.0147e+05], device='cuda:0')
```
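One way to narrow this down (a diagnostic sketch using internal, underscore-prefixed switches, not a fix) is to disable the CPU TensorExpr fuser before tracing and check whether the traced output matches eager again:
```py
# Diagnostic only: turn off CPU fusion and re-run the CPU comparison above.
torch._C._jit_override_can_fuse_on_cpu(False)
torch._C._jit_set_texpr_fuser_enabled(False)

jit_fn = torch.jit.trace(fn, (inp.clone(),))
print(jit_fn(inp.clone()))  # if this now matches fn(inp), the fuser is implicated
```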
### Versions
```
Collecting environment information...
PyTorch version: 1.14.0.dev20221012
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.3
[pip3] torch==1.14.0.dev20221012
[pip3] torchaudio==0.13.0.dev20221012
[pip3] torchvision==0.15.0.dev20221012
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.14.0.dev20221012 py3.9_cuda11.6_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] tensorflow-base 2.9.1 mkl_py39h353358b_0
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,515 | 86,814 |
Expanding the parameters of `torch.svd_lowrank`
|
triaged, module: linear algebra
|
### 🚀 The feature, motivation and pitch
Hi,
Currently, the `torch.svd_lowrank` function implements a very basic version of Halko, et al. (2009)'s algorithm. The `sklearn` version, `sklearn.utils.extmath.randomized_svd`, implements a number of parameters that help with noisy problems. In particular, the `n_oversamples` parameter increases the dimension of the random projections in the algorithm, which allows one to operate on noisier datasets without performing extra power iterations.
While the current implementation of `torch.svd_lowrank` is probably sufficient for absolutely low rank problems with discontinuous and clean spectra, more oversamples are required for many _relatively_ low rank problems with continuous spectral decay. For instance, we are working with low rank approximation of matrices of size 100,000 x 40,000. We are taking anywhere from rank 80-200 approximations of these matrices. For the purposes of our algorithm, *accurate* recovery of the singular values, especially those near the parameter `k`, is important. We find that for our data, our `n_oversamples` is typically `10*k` in order to accurately recover the true singular values near the rank of the matrix.
Implementing this would not be hard: it amounts to adding a kwarg and changing a few lines where `svd_lowrank` calls `get_approximate_basis`.
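In the meantime, the existing API can approximate the proposed behavior by enlarging `q` and truncating the result (an assumption about rough equivalence with an explicit `n_oversamples` kwarg, not an exact match to sklearn's `randomized_svd`):
```python
import torch

A = torch.randn(1000, 400)           # stand-in for a large, relatively low-rank matrix
k, n_oversamples = 80, 20

U, S, V = torch.svd_lowrank(A, q=k + n_oversamples, niter=2)
U, S, V = U[:, :k], S[:k], V[:, :k]  # keep only the leading k components
```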
### Alternatives
_No response_
### Additional context
_No response_
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 2 |
4,516 | 86,808 |
[MPS] Add support for aten::erfinv.out for MPS backend
|
good first issue, triaged, module: mps
|
### 🐛 Describe the bug
First time contributors are welcome! 🙂
Add support for [aten::erfinv.out](https://pytorch.org/docs/stable/generated/torch.erfinv.html) for MPS backend. Generic support for adding operations to MPS backend is captured here: https://github.com/pytorch/pytorch/wiki/MPS-Backend#adding-op-for-mps-backend
### Versions
N/A
cc @kulinseth @albanD @malfet @razarmehr @abhudev
| 8 |
4,517 | 86,804 |
JIT model will have a different jacobian after the first computation
|
oncall: jit
|
### 🐛 Describe the bug
JIT model will have a different jacobian after the first computation
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, _input_tensor):
_input_tensor_1 = torch.nn.functional.celu(_input_tensor)
_input_tensor_2 = torch.mul(_input_tensor_1, torch.tensor(-2, dtype=torch.float64, device='cpu'))
_input_tensor_2 = _input_tensor_1 / _input_tensor_2
_input_tensor = _input_tensor_2
fn_res = _input_tensor.logdet()
return fn_res
fn = M().to('cpu')
torch.random.manual_seed(5160)
inp = torch.empty([5, 5], dtype=torch.float64, memory_format=torch.contiguous_format)
inp.uniform_(-64, 7)
inp = inp.to('cpu')
jit_fn = torch.jit.script(fn)
from torch.autograd.functional import jacobian
# these two are the same
print(jacobian(fn, inp.clone()))
print(jacobian(jit_fn, inp.clone()))
# this will be different
print(jacobian(jit_fn, inp.clone()))
```
```
tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 6.3193e+52, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]],
dtype=torch.float64)
tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 6.3193e+52, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]],
dtype=torch.float64)
tensor([[ 4.7020e-04, 0.0000e+00, 0.0000e+00, 0.0000e+00, -1.8723e+75],
[ 0.0000e+00, 2.5916e+50, 6.3193e+52, 0.0000e+00, -5.3604e+72],
[ 0.0000e+00, -4.3133e+53, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[-6.2500e-02, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, -6.5622e+85, 1.6687e+94]],
dtype=torch.float64)
```
This happens on both cpu and cuda.
Not sure whether this is related to #85877
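A related check (again just a diagnostic sketch) is to evaluate the scripted module with graph optimizations disabled; if the repeated jacobians then stay identical, the divergence comes from the optimized/fused graph that kicks in after profiling:
```py
with torch.jit.optimized_execution(False):
    print(jacobian(jit_fn, inp.clone()))
    print(jacobian(jit_fn, inp.clone()))  # should stay identical if fusion is the culprit
```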
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.12.1 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] tensorflow-base 2.9.1 mkl_py39h353358b_0
[conda] torch 1.12.1 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,518 | 86,798 |
TF32 conv_transpose2d with groups has bad precision compared to fp32
|
module: numerical-stability, module: cuda, triaged, module: tf32
|
### 🐛 Describe the bug
## Discussion
It's unclear to me if this is a bug or expected behavior, so I'm opening this issue to discuss. https://pytorch.org/docs/stable/notes/cuda.html#tf32-on-ampere suggests that TF32's precision isn't as good as fp32, though the numbers look very different to me in this particular case
## Repro
```py
import torch
import torch.nn.functional as F
from torch.testing import make_tensor
torch.manual_seed(0)
torch.backends.cudnn.allow_tf32 = True
x = make_tensor(1, 2 * 4, 5, 5, dtype=torch.float32, device='cuda')
w = make_tensor(2 * 4, 8, 3, 3, dtype=torch.float32, device='cuda')
result = F.conv_transpose2d(x, w, groups=2).double()
expected = F.conv_transpose2d(x.double(), w.double(), groups=2)
amax = result.sub(expected).div(expected).abs().argmax()
print("% difference: ", result.sub(expected).div(expected).abs().max())
print("values: ", result.view(-1)[amax], expected.view(-1)[amax])
```
Gives the following:
```
% difference: tensor(0.2066, device='cuda:0', dtype=torch.float64)
values: tensor(0.0278, device='cuda:0', dtype=torch.float64) tensor(0.0350, device='cuda:0', dtype=torch.float64)
```
There is a 20% relative difference!
Furthermore, setting `torch.backends.cudnn.allow_tf32 = False` makes the difference go back down to 7.6725e-05. Also, using `groups=1` also makes the difference go down to something in the 1e-05 range.
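For anyone hitting this in the meantime, a scoped workaround (a sketch; whether grouped `conv_transpose2d` should simply avoid TF32 here is part of the question above) is to turn TF32 off just around the affected call:
```py
prev = torch.backends.cudnn.allow_tf32
torch.backends.cudnn.allow_tf32 = False
try:
    result_fp32 = F.conv_transpose2d(x, w, groups=2)
finally:
    torch.backends.cudnn.allow_tf32 = prev
```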
### Versions
nightly; using a A100 GPU
cc @ezyang @gchanan @ngimel @zasdfgbnm @ptrblck
| 5 |
4,519 | 86,791 |
We don't have an op for vulkan_prepack::conv2d_clamp_prepack but it isn't a special case.
|
module: convolution, oncall: mobile, module: vulkan
|
### 🐛 Describe the bug
To reproduce, go to Google Colab and run this simple snippet:
```python
from torch import nn
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
print(torch.__version__)
with torch.no_grad():
x = torch.zeros(1, 3, 640, 640)
model = torch.nn.Conv2d(3, 3, kernel_size=1)
script_model = torch.jit.trace(model, x)
optimized_traced = optimize_for_mobile(script_model, backend='vulkan')
optimized_traced._save_for_lite_interpreter("sample_data/t.ptl")
```
```
1.12.1+cu113
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-22-48d86b414602>](https://localhost:8080/#) in <module>
9 model = torch.nn.Conv2d(3, 3, kernel_size=1)
10 script_model = torch.jit.trace(model, x)
---> 11 optimized_traced = optimize_for_mobile(script_model, backend='vulkan')
12 optimized_traced._save_for_lite_interpreter("sample_data/t.ptl")
[/usr/local/lib/python3.7/dist-packages/torch/utils/mobile_optimizer.py](https://localhost:8080/#) in optimize_for_mobile(script_module, optimization_blocklist, preserved_methods, backend)
65 preserved_methods_str)
66 elif backend == 'vulkan':
---> 67 optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(script_module._c, preserved_methods_str)
68 elif backend == 'metal':
69 optimized_cpp_module = torch._C._jit_pass_metal_optimize_for_mobile(script_module._c, preserved_methods_str)
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":608, please report a bug to PyTorch. We don't have an op for vulkan_prepack::conv2d_clamp_prepack but it isn't a special case. Argument types: Tensor, Tensor, int[], int[], int[], int, NoneType, NoneType,
```
### Versions
pytorch version 1.12.1+cu113
| 4 |
4,520 | 86,782 |
Poisson sampling on GPU fails for high rates
|
module: cuda, triaged, module: random
|
### 🐛 Describe the bug
For rates > 1e10, the Poisson sampling on the GPU delivers incorrect results
```python
import torch
torch.set_default_dtype(torch.float64)
a = torch.ones((100, 100))*1e10
b_cpu = torch.poisson(a.to("cpu"))
b_cuda = torch.poisson(a.to("cuda"))
print(b_cpu.mean())
print(b_cuda.mean())
```
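A possible interim workaround (my assumption, not part of the report): for very large rates, Poisson(λ) is well approximated by a normal distribution with mean λ and standard deviation sqrt(λ), which samples correctly on the GPU:
```python
import torch

rate = torch.ones((100, 100), device="cuda") * 1e10
# Normal approximation to Poisson for large rates: mean = rate, std = sqrt(rate).
approx = torch.normal(rate, rate.sqrt()).round().clamp_min(0)
print(approx.mean())  # close to 1e10, unlike the incorrect torch.poisson output above
```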
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.10.2
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1050 Ti
GPU 1: NVIDIA GeForce RTX 2080 SUPER
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.4.0
[pip3] torch==1.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37h6c91a56_3
[conda] numpy-base 1.21.5 py37ha15fc14_3
[conda] numpydoc 1.4.0 pypi_0 pypi
[conda] torch 1.12.1+cu116 pypi_0 pypi
[conda] torchvision 0.13.1+cu116 pypi_0 pypi
cc @ngimel @pbelevich
| 0 |
4,521 | 86,770 |
DISABLED test_vmapjvpall_linalg_lu_cuda_float32 (__main__.TestOperatorsCUDA)
|
triaged, module: flaky-tests, skipped, module: functorch
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_vmapjvpall_linalg_lu_cuda_float32&suite=TestOperatorsCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/8836194077).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 failures and 1 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_vmapjvpall_linalg_lu_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
cc @zou3519 @Chillee @samdow @soumith
| 7 |
4,522 | 86,733 |
DISABLED test_vmapjvpvjp_linalg_lu_cuda_float32 (__main__.TestOperatorsCUDA)
|
triaged, module: flaky-tests, skipped, module: functorch
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_vmapjvpvjp_linalg_lu_cuda_float32&suite=TestOperatorsCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/8828597209).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_vmapjvpvjp_linalg_lu_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
cc @zou3519 @Chillee @samdow @soumith
| 11 |
4,523 | 86,718 |
Autograd doc does not mention torch.autograd.set_grad_enabled
|
module: docs, module: autograd, triaged, actionable
|
### 📚 The doc issue
[Autograd doc](https://pytorch.org/docs/stable/notes/autograd.html#locally-disable-grad-doc) does not mention `torch.autograd.set_grad_enabled`.
### Suggest a potential alternative/fix
Add reference to [`torch.autograd.set_grad_enabled`](https://pytorch.org/docs/stable/generated/torch.autograd.set_grad_enabled.html) used as function for disabling autograd in current thread.
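For reference, a minimal example of the function-style usage that the note could mention (a sketch, not proposed doc wording):
```python
import torch

x = torch.randn(3, requires_grad=True)
torch.autograd.set_grad_enabled(False)  # disables gradient recording for the current thread
y = x * 2
print(y.requires_grad)  # False
torch.autograd.set_grad_enabled(True)   # re-enable gradient recording
```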
cc @svekars @holly1238 @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 8 |
4,524 | 86,717 |
NVFuser `FusionRootMappingMultipleBroadcast_CUDA` raises exception on sm_80+
|
module: cuda, module: ci, triaged, module: nvfuser
|
### 🐛 Describe the bug
See here for an example (the only change is that the test is compiled with sm_86 and run on an A10G):
https://github.com/pytorch/pytorch/actions/runs/3222501479/jobs/5272211415
```
[ RUN ] NVFuserTest.FusionRootMappingMultipleBroadcast_CUDA
unknown file: Failure
C++ exception with description "Should not be mappable: iS0{i0} of T0_l[ iS0{i0} ] and iS1{i0} of T1_l[ iS1{i0}, bS2{1} ]
Exception raised from checkIdMapped at /home/nshulga/git/pytorch/pytorch/torch/csrc/jit/codegen/cuda/test/test_gpu.cpp:3084 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f63622479bb in /home/nshulga/git/pytorch/pytorch/build/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xbf (0x7f63622428bf in /home/nshulga/git/pytorch/pytorch/build/lib/libc10.so)
frame #2: <unknown function> + 0x621c5c (0x55d14faf4c5c in ./bin/test_jit)
frame #3: torch::jit::NVFuserTest_FusionRootMappingMultipleBroadcast_CUDA_Test::TestBody() + 0x2f3 (0x55d14faf5cc3 in ./bin/test_jit)
frame #4: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x4a (0x55d14fc118aa in ./bin/test_jit)
frame #5: <unknown function> + 0x72b85f (0x55d14fbfe85f in ./bin/test_jit)
frame #6: <unknown function> + 0x72bb42 (0x55d14fbfeb42 in ./bin/test_jit)
frame #7: <unknown function> + 0x72c3b5 (0x55d14fbff3b5 in ./bin/test_jit)
frame #8: testing::internal::UnitTestImpl::RunAllTests() + 0xe2f (0x55d14fc0a6df in ./bin/test_jit)
frame #9: testing::UnitTest::Run() + 0x98 (0x55d14fc0aad8 in ./bin/test_jit)
frame #10: main + 0xfb (0x55d14f7043cb in ./bin/test_jit)
frame #11: __libc_start_main + 0xe7 (0x7f63612c0c87 in /lib/x86_64-linux-gnu/libc.so.6)
frame #12: _start + 0x2a (0x55d14f75e79a in ./bin/test_jit)
" thrown in the test body.
[ FAILED ] NVFuserTest.FusionRootMappingMultipleBroadcast_CUDA (1 ms)
```
### Versions
CI
cc @ngimel @seemethere @pytorch/pytorch-dev-infra
| 2 |
4,525 | 86,714 |
NVFuser `FusionComputeAtMultiBCast_CUDA` and `FusionDetectSelfMappedDomains_CUDA` do not raise exceptions on sm_80+
|
module: cuda, module: ci, triaged, module: nvfuser
|
### 🐛 Describe the bug
See here for an example (the only change is that the test is compiled with sm_86 and run on an A10G):
https://github.com/pytorch/pytorch/actions/runs/3222501479/jobs/5272211415
```
[ RUN ] NVFuserTest.FusionDetectSelfMappedDomains_CUDA
/home/nshulga/git/pytorch/pytorch/torch/csrc/jit/codegen/cuda/test/test_gpu.cpp:3828: Failure
Expected: tv1->computeAt(tv4, 1) throws an exception.
Actual: it doesn't.
[ FAILED ] NVFuserTest.FusionDetectSelfMappedDomains_CUDA (0 ms)
```
### Versions
CI
cc @ngimel @seemethere @pytorch/pytorch-dev-infra
| 3 |
4,526 | 86,710 |
DISABLED test_attn_cuda (__main__.TestMin)
|
triaged, module: flaky-tests, skipped, module: functorch
|
Platforms: linux, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_attn_cuda&suite=TestMin) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/8828175188).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_attn_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
cc @zou3519 @Chillee @samdow @soumith
| 29 |
4,527 | 86,704 |
Performance test mnist_hogwild-cpu_memory: CPU memory increased by 30%
|
triaged, module: regression
|
### 🐛 Describe the bug
We are observing the following performance regression in mnist_hogwild-cpu_memory:
- 1.12.1 cu116 (MB): 307.691
- 1.13.0 cu116 (MB): 395.91
Environment: CUDA 11.6, Python 3.9, PyTorch 1.12.1 vs 1.14 master.
This is confirmed by the following test:
```
>>> import torch
>>> print(f"torch: {torch.__version__}")
torch: 1.12.1
>>> import psutil
>>> psutil.Process().memory_info().rss / (1024 * 1024)
196.52734375
```
vs
```
>>> import torch
>>> import psutil
>>> psutil.Process().memory_info().rss / (1024 * 1024)
287.75390625
>>> print(f"torch: {torch.__version__}")
torch: 1.14.0.dev20221011
```
### Versions
1.13 and 1.14 master
cc @ezyang @gchanan @zou3519
| 5 |
4,528 | 86,694 |
Feature request: Deterministic test input generation
|
feature, module: tests, triaged
|
### 🚀 The feature, motivation and pitch
Today, PyTorch tests, especially OpInfo tests used in test_ops, generate random inputs. We then send the random inputs through tests and then compare them with some expected output with some pre-specified tolerance. This leads to some potential issues:
- test flakiness. It is often the case that a different set of random inputs will cause the test to fail because the new result is just beyond the bounds of the tolerance. From anecdotal experience this happens more often than a legitimate bug being discovered.
- inability to reproduce tests. There are also cases where it is difficult to reproduce a failure seen in CI (because CI generated one random tensor as the test input, while a local machine generates a different random tensor).
Proposal: we should associate each OpInfo sample with a seed value that can be used to deterministically generate the same input (modulo floating point differences) across all (or most) platforms and devices. This will help test flakiness and test reproducibility, but may come at the cost of fewer bugs caught (because the seed will never change unless someone manually changes it).
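As a rough illustration of what per-sample seeding could look like (a sketch using a hypothetical `make_sample` helper, not an existing API):
```python
import torch

def make_sample(seed, shape, dtype=torch.float32, device="cpu"):
    # Hypothetical helper: seeding a dedicated generator per sample makes the
    # input reproducible across runs (modulo floating point differences).
    g = torch.Generator().manual_seed(seed)
    return torch.randn(shape, dtype=dtype, generator=g).to(device)

x1 = make_sample(1234, (3, 3))
x2 = make_sample(1234, (3, 3))
assert torch.equal(x1, x2)  # same seed produces the identical input
```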
### Alternatives
Instead of pre-associating each OpInfo sample with a seed, we could generate a new seed every time the test suite is run and print it out in the logs. This would make our tests reproducible, but it would not help the flakiness problem.
### Additional context
cc our testing tsars (@mruberry, @ngimel), and also @pytorch/pytorch-dev-infra (I'm curious how many flaky tests come up as a result of different random inputs being generated)
cc @mruberry
| 6 |
4,529 | 86,684 |
[ONNX] AssertionError: A mismatch between the number of arguments (5) and their descriptors (4) was found at symbolic function 'scatter'
|
needs reproduction, module: onnx, triaged
|
### 🐛 Describe the bug
Dear experts:
Thanks for your dedication to the great PyTorch project. I intended to use scatter_add_ and scatter_ with reduce='multiply' ([TORCH.TENSOR.SCATTER_](https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html)) to implement the torch.scatter_reduce mean op, which is not supported yet ([[ONNX] Support aten::scatter_reduce](https://github.com/pytorch/pytorch/issues/84260)).
However, when I used the torch.onnx.export() function to convert the .pt model to an .onnx model, I got the error: "AssertionError: A mismatch between the number of arguments (5) and their descriptors (4) was found at symbolic function 'scatter'". It turns out that scatter_(dim, index, src, reduce='multiply') is the call that fails to export.
I simplified my code as follows:
```python
import torch
import torch.nn as nn
from torch.onnx import utils as onnx_utils
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x, index, ratio):
print("x: ", x)
print("index: ", index)
print("ratio: ", ratio)
index = index.unsqueeze(-1).expand_as(x)
# not supported yet
# y_mean_with_scatter_reduce = torch.zeros_like(x).scatter_reduce(1, index, x, reduce="mean", include_self=False)
# print("y_mean with scatter_reduce and slice: ", y_mean_with_scatter_reduce[:, :index.max()+1, :])
y_sum = torch.zeros((index.shape[0], index.max() + 1, index.shape[-1]), dtype=x.dtype).scatter_add_(1, index, x)
print("y_sum: ", y_sum)
ratio = ratio.unsqueeze(-1).expand_as(y_sum)
ratio = torch.ones_like(ratio) / ratio
index_mul = torch.tensor(range(index.max() + 1)).unsqueeze(1).expand_as(y_sum)
y_mean = y_sum.to(dtype=ratio.dtype).scatter_(1, index_mul, ratio, reduce='multiply')
print("y_mean: ", y_mean)
return y_mean
model = Model()
model.eval()
x = torch.tensor(range(48)).view((2,6,4))
index = torch.tensor([[0,1,1,2,2,2], [0,1,2,3,3,4]])
ratio = torch.tensor([[1,2,3,float('inf'),float('inf')], [1,1,1,2,1]])
args = (x, index, ratio)
opset_version=16
model(x, index, ratio)
torch.onnx.export(model, args, 'model.onnx', opset_version=opset_version)
```
The model output is as I expect:
```
x: tensor([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]],
[[24, 25, 26, 27],
[28, 29, 30, 31],
[32, 33, 34, 35],
[36, 37, 38, 39],
[40, 41, 42, 43],
[44, 45, 46, 47]]])
y_mean: tensor([[[ 0., 1., 2., 3.],
[ 6., 7., 8., 9.],
[16., 17., 18., 19.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]],
[[24., 25., 26., 27.],
[28., 29., 30., 31.],
[32., 33., 34., 35.],
[38., 39., 40., 41.],
[44., 45., 46., 47.]]])
```
But torch.onnx.export() raises the following error:
```
Traceback (most recent call last):
File "onnx_export_test.py", line 45, in <module>
torch.onnx.export(model, args, 'model.onnx', opset_version=opset_version)
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/__init__.py", line 350, in export
return utils.export(
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/utils.py", line 163, in export
_export(
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/utils.py", line 1074, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/utils.py", line 731, in _model_to_graph
graph = _optimize_graph(
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/utils.py", line 308, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/__init__.py", line 416, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/utils.py", line 1406, in _run_symbolic_function
return symbolic_fn(g, *inputs, **attrs)
File "/mnt/workspace/workgroup/anaconda3/envs/torch1.12/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py", line 204, in wrapper
assert len(arg_descriptors) >= len(args), (
AssertionError: A mismatch between the number of arguments (5) and their descriptors (4) was found at symbolic function 'scatter'. If you believe this is not due to custom symbolic implementation within your code or an external library, please file an issue at https://github.com/pytorch/pytorch/issues/new?template=bug-report.yml to report this bug.
```
Additionally, I would like to ask when you plan to support the scatter_reduce op. Thanks a lot!
### Versions
torch 1.12.0+cu102
onnx 1.12.0
onnxconverter-common 1.12.2
onnxruntime 1.12.1
onnxruntime-tools 1.7.0
tf2onnx 1.12.1
| 1 |
4,530 | 86,683 |
Documentation and typing hints for RProp
|
module: docs, module: optimizer, triaged, actionable
|
It seems that Rprop's `foreach` parameter is not well documented, and the `rprop.pyi` stub does not reflect the additional `foreach` parameter available in `rprop.py`.
https://github.com/pytorch/pytorch/blob/3a2cfbb813e19c1648b23079f704829f9997425d/torch/optim/rprop.pyi#L5
It seems that `foreach` controls whether `_multi_tensor_rprop` or `_single_tensor_rprop` is used, but the difference between them isn't obvious to me. My guess is that in the "multi" case each parameter has its own $\eta_t$?
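For context, a minimal usage sketch (assuming the `foreach` keyword exposed by `rprop.py`; the comment reflects my reading, not confirmed documentation):
```python
import torch

params = [torch.randn(3, requires_grad=True)]
# foreach=True selects the batched multi-tensor path (_multi_tensor_rprop);
# foreach=False/None uses the per-tensor loop (_single_tensor_rprop).
# The update rule is meant to be the same; only the implementation differs.
opt = torch.optim.Rprop(params, lr=0.01, foreach=True)
params[0].sum().backward()
opt.step()
```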
cc @svekars @holly1238 @vincentqb @jbschlosser @albanD
| 2 |
4,531 | 86,679 |
"upsample_nearest2d_out_frame" not implemented for 'BFloat16'
|
triaged, module: bfloat16, module: interpolation
|
### 🐛 Describe the bug
Nearest upsampling with `torch.nn.functional.interpolate` does not work in `bfloat16`. Minimal code to reproduce:
```
import torch
import torch.nn.functional as F
image = torch.randn(1, 4, 32, 32).to(device="cuda", dtype=torch.bfloat16)
out = F.interpolate(image, size=(64, 64), mode="nearest")
```
This throws an error
```
File ~/.pyenv/versions/3.9.14/envs/diffusers-env/lib/python3.9/site-packages/torch/nn/functional.py:3910, in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias)
3908 return torch._C._nn.upsample_nearest1d(input, output_size, scale_factors)
3909 if input.dim() == 4 and mode == "nearest":
-> 3910 return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
3911 if input.dim() == 5 and mode == "nearest":
3912 return torch._C._nn.upsample_nearest3d(input, output_size, scale_factors)
RuntimeError: "upsample_nearest2d_out_frame" not implemented for 'BFloat16'
```
`F.interpolate` with `nearest` mode is used a lot in UNets, which are the backbone of diffusion models such as Stable Diffusion. Because of this, it is currently not possible to use Stable Diffusion with `bfloat16` without manual casting. cf https://github.com/huggingface/diffusers/pull/792
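A manual-casting workaround sketch (my assumption of what the casting looks like, not an official fix): run the unsupported kernel in float32 and cast back to bfloat16:
```python
import torch
import torch.nn.functional as F

image = torch.randn(1, 4, 32, 32).to(device="cuda", dtype=torch.bfloat16)
# Upcast only for the interpolate call, then return to bfloat16.
out = F.interpolate(image.float(), size=(64, 64), mode="nearest").to(torch.bfloat16)
```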
### Versions
```
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.13.4
Libc version: glibc-2.28
Python version: 3.9.14 (main, Sep 22 2022, 15:50:51) [GCC 8.3.0] (64-bit runtime)
Python platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.0.221
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] numpy 1.19.5 py37h3e96413_3 conda-forge
```
| 5 |
4,532 | 86,676 |
Pytorch built for Jetson errors if CUDA is not found
|
module: build, module: cuda, triaged, module: arm
|
### 🐛 Describe the bug
I've been building the torch wheel from source for ARM/Jetson using the instructions here: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
It works fine on Jetson, but it doesn't work on an ARM machine without an Nvidia GPU:
```
(py39) [ubuntu@arm-dev ~]: python
Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:32:26)
[GCC 10.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/.miniforge3/envs/py39/lib/python3.9/site-packages/torch/__init__.py", line 196, in <module>
_load_global_deps()
File "/home/ubuntu/.miniforge3/envs/py39/lib/python3.9/site-packages/torch/__init__.py", line 149, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/home/ubuntu/.miniforge3/envs/py39/lib/python3.9/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libcurand.so.10: cannot open shared object file: No such file or directory
```
For the sake of simplicity, it would be useful to have one single wheel that works seamlessly between CPU and GPU on ARM servers, like it does for x86.
@albanD suggests that it looks like our main shared library has a dependency on some CUDA stuff (which it shouldn't).
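Purely as an illustration of the kind of graceful fallback this would need (not the actual `torch/__init__.py` code), the failing `ctypes.CDLL` call from the traceback could be guarded so that a missing CUDA library does not abort the import:
```python
import ctypes

def _load_global_deps_safely(lib_path):
    # Illustrative sketch only: skip CUDA-dependent libraries when they are not
    # installed (e.g. a CPU-only ARM server) instead of failing `import torch`.
    try:
        ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
    except OSError:
        pass
```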
### Versions
Here's the env from a Jetson TX2. `import torch` works fine here but not on an ARM server without CUDA.
```
(base) [qure@orbitty ssd]: python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (aarch64)
GCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:27:34) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-4.9.140-tegra-aarch64-with-glibc2.27
Is CUDA available: N/A
CUDA runtime version: 10.2.89
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.0.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[conda] No relevant packages
```
cc @malfet @seemethere @ngimel
| 0 |
4,533 | 93,528 |
[TorchInductor] Add support for Pascal GPUs (P100, GTX 1080, etc)
|
triaged, oncall: pt2
|
According to @ptillet, supporting Pascal GPUs with Triton should be possible if we:
1) Disable float16 support (and give a good error message saying fp16 requires a newer GPU)
2) Use `aten` backend for matmul/conv
3) May need to remove an assert on the Triton side?
4) Test it to make sure things work
This might be a good first issue if someone has a Pascal GPU handy to test on.
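Regarding step 1 above, a hedged sketch of the kind of capability check and error message this could use (illustrative only; the exact wording and placement are assumptions):
```python
import torch

major, minor = torch.cuda.get_device_capability()
if (major, minor) < (7, 0):  # Pascal is sm_6x
    raise RuntimeError(
        "float16 codegen with Triton requires compute capability >= 7.0; "
        "use float32 or the aten fallback on this GPU."
    )
```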
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 26 |
4,534 | 86,667 |
Adding a linear layer leads to failure of `optimize_for_mobile`
|
triage review, oncall: jit, module: mkldnn
|
### 🐛 Describe the bug
Hi! I'm trying to optimize my model with `optimize_for_mobile`, and I find that simply adding an extra linear layer to the model causes a failure, even though a model with only a linear layer works fine (which means the linear layer by itself does not cause this issue). See the code below.
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
import torch.nn as nn
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.conv = nn.Conv2d(
2, 1, kernel_size=(1,1),
)
self.linear = nn.Linear(2, 1)
def forward(self, x):
x = torch.nn.functional.pad(
x, pad=[0, 1, 0, 1, 1, 0], mode="constant", value=0.5,
)
x = self.conv(x)
x = self.linear(x) # <-- works fine if we remove this line
return x
mod = MyModule()
x = torch.zeros((2,1,1,1), dtype=torch.float32)
out = mod(x)
print(f'eager: out = {out}') # <-- works fine
exported = torch.jit.trace(mod, [x])
exported = torch.jit.optimize_for_inference(exported) # <-- works fine
exported = optimize_for_mobile(exported) # <-- Exception here!
eout = exported(x)
print(f'JIT: eout = {eout}')
assert torch.allclose(out, eout)
```
<details><summary>Click to expand a long log!</summary>
```python
Traceback (most recent call last):
File "/home/colin/code/test/nnsmith/backends/factory.py", line 92, in checked_exec
return executable(input)
File "/home/colin/code/test/nnsmith/backends/torchjit.py", line 50, in closure
output = exported(*input_ts.values())
File "/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
NotImplementedError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/linear.py(114): forward
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/home/colin/code/test/nnsmith/materialize/torch/symbolnet.py(348): forward
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/jit/_trace.py(967): trace_module
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/jit/_trace.py(750): trace
/home/colin/code/test/nnsmith/backends/torchjit.py(40): make_backend
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/multipledispatch/dispatcher.py(435): __call__
/home/colin/code/test/nnsmith/backends/factory.py(65): checked_make_backend
/home/colin/code/test/nnsmith/backends/factory.py(69): checked_compile
/home/colin/code/test/nnsmith/backends/factory.py(112): checked_compile_and_exec
/home/colin/code/test/nnsmith/cli/model_exec.py(70): verify_testcase
/home/colin/code/test/nnsmith/cli/fuzz.py(159): validate_and_report
/home/colin/code/test/nnsmith/cli/fuzz.py(194): run
/home/colin/code/test/nnsmith/cli/fuzz.py(201): main
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/hydra/core/utils.py(186): run_job
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/hydra/_internal/hydra.py(119): run
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/hydra/_internal/utils.py(453): <lambda>
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/hydra/_internal/utils.py(213): run_and_report
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/hydra/_internal/utils.py(452): _run_app
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/hydra/_internal/utils.py(389): _run_hydra
/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/hydra/main.py(90): decorated_main
/home/colin/code/test/nnsmith/cli/fuzz.py(205): <module>
RuntimeError: Could not run 'prepacked::linear_clamp_prepack' with arguments from the 'MkldnnCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'prepacked::linear_clamp_prepack' is only available for these backends: [Dense, UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
CPU: registered at ../aten/src/ATen/native/xnnpack/RegisterOpContextClass.cpp:95 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:133 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at ../aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:51 [backend fallback]
AutogradMPS: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:59 [backend fallback]
AutogradXPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradHPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:68 [backend fallback]
AutogradLazy: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:55 [backend fallback]
Tracer: registered at ../torch/csrc/autograd/TraceTypeManual.cpp:295 [backend fallback]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:481 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:324 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1064 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:89 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:137 [backend fallback]
```
</details>
Interestingly, the code snippets below work fine.
1. Remove the linear layer in forward pass.
```python
def forward(self, x):
x = torch.nn.functional.pad(
x, pad=[0, 1, 0, 1, 1, 0], mode="constant", value=0.5,
)
x = self.conv(x)
# x = self.linear(x) # <-- works fine if we remove this line
return x
```
2. Use a single linear layer.
```python
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(2, 1)
def forward(self, x):
x = self.linear(x)
return x
mod = MyModule()
x = torch.zeros((2,1,1,2), dtype=torch.float32)
out = mod(x)
print(f'eager: out = {out}') # <-- works fine
exported = torch.jit.trace(mod, [x])
exported = torch.jit.optimize_for_inference(exported)
exported = optimize_for_mobile(exported) # <-- works fine
eout = exported(x)
print(f'JIT: eout = {eout}')
assert torch.allclose(out, eout)
```
3. Keep the original model unchanged, but remove `optimize_for_inference`.
```python
exported = torch.jit.trace(mod, [x])
# exported = torch.jit.optimize_for_inference(exported) # <-- works fine if we remove this line
exported = optimize_for_mobile(exported) # <-- works fine if we remove the above line
```
### Versions
```python
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.12.1+cu116 pypi_0 pypi
[conda] torchaudio 0.12.1+cu116 pypi_0 pypi
[conda] torchvision 0.13.1+cu116 pypi_0 pypi
```
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @VitalyFedyunin
| 1 |
4,535 | 86,662 |
libtorch throws `required keyword attribute 'profiled_view_size' has the wrong type` on Linux
|
oncall: jit
|
### 🐛 Describe the bug
On Linux, using `libtorch` 1.12.1 with the cxx11 ABI produces the following error when attempting to run inference on a TorchScript model:
```
required keyword attribute 'profiled_view_size' has the wrong type
```
Here is the stacktrace:
```
* frame #0: 0x00007ffff7d57160 libc++abi.so.1`__cxa_throw
frame #1: 0x00007fffe1f9e3c3 libtorch_cpu.so`torch::jit::VectorAttributeValue<long, (torch::jit::AttributeKind)5>::ValueType& torch::jit::Node::getAttr<torch::jit::VectorAttributeValue<long, (torch::jit::AttributeKind)5> >(c10::Symbol) const + 307
frame #2: 0x00007fffe1f9c9a3 libtorch_cpu.so`torch::jit::(anonymous namespace)::attributesEqualCSE(torch::jit::Node const*, torch::jit::Node const*) + 5715
frame #3: 0x00007fffe1f9dcb7 libtorch_cpu.so`torch::jit::EqualNode::operator()(torch::jit::Node const*, torch::jit::Node const*) const + 167
frame #4: 0x00007fffe200ecd5 libtorch_cpu.so`std::_Hashtable<torch::jit::Node*, torch::jit::Node*, std::allocator<torch::jit::Node*>, std::__detail::_Identity, torch::jit::EqualNode, torch::jit::HashNode, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, true, true> >::_M_find_before_node(unsigned long, torch::jit::Node* const&, unsigned long) const + 101
frame #5: 0x00007fffe200cb6f libtorch_cpu.so`torch::jit::(anonymous namespace)::CommonSubexpressionEliminator::run(torch::jit::Block*, std::function<torch::jit::Node* (torch::jit::Node*)>) + 6127
frame #6: 0x00007fffe200e832 libtorch_cpu.so`torch::jit::EliminateCommonSubexpression(std::shared_ptr<torch::jit::Graph> const&) + 1346
frame #7: 0x00007fffe22144b8 libtorch_cpu.so`torch::jit::runPreAutodiffPassPipeline(std::shared_ptr<torch::jit::Graph>&) + 1368
frame #8: 0x00007fffe2218308 libtorch_cpu.so`torch::jit::ProfilingGraphExecutorImpl::runProfilingOptimizations(std::shared_ptr<torch::jit::Graph>&, unsigned long) + 248
frame #9: 0x00007fffe22197d5 libtorch_cpu.so`torch::jit::ProfilingGraphExecutorImpl::getOptimizedPlanFor(std::vector<c10::IValue, std::allocator<c10::IValue> >&, c10::optional<unsigned long>) + 2133
frame #10: 0x00007fffe2219e09 libtorch_cpu.so`torch::jit::ProfilingGraphExecutorImpl::getPlanFor(std::vector<c10::IValue, std::allocator<c10::IValue> >&, c10::optional<unsigned long>) + 121
frame #11: 0x00007fffe21dab16 libtorch_cpu.so`torch::jit::GraphExecutorImplBase::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 166
frame #12: 0x00007fffe1e6e6c0 libtorch_cpu.so`torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) const + 384
frame #13: 0x00000000006be513 MyAppName`torch::jit::Module::forward(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) + 131
```
Interestingly, this does not occur when running the exact same code on macOS. On Linux it occurs with both CPU and CUDA.
Here is some of the relevant code:
```c++
/* loading model */
try {
if (use_cuda && torch::cuda::is_available()) {
PRINTF("Using CUDA\n");
this->use_cuda = true;
this->device = torch::kCUDA;
}
this->model = torch::jit::load(model_filename, this->device);
this->model.eval();
} catch (const c10::Error &e) {
printf("Error loading model: %s\n", e.what());
exit(1);
}
```
```c++
/* do inference */
torch::InferenceMode guard;
auto inputs = this->getModelInputs(refImage, refPose, impImages, imgPoses, projParams, noImpFrames);
auto outputs = this->model.forward(inputs).toGenericDict();
```
```c++
/* preprocess data */
std::vector<torch::IValue>
getModelInputs(ORUChar4TSImage *refImage, Matrix4f refPose,
ORUChar4TSImage **impImages, Matrix4f *imgPoses,
Vector4f projParams, int noImpFrames) {
std::vector<torch::IValue> inputs;
// images -> tensor
if (noImpFrames != NUM_REFERENCE_FRAMES) {
printf("Incorrect number of reference frames provided\n");
exit(1);
}
auto refImageT = normalizeAndSetTensorFromImage(refImage);
if (this->use_cuda) {
refImageT = refImageT.to(this->device);
}
auto impImagesT = torch::empty({noImpFrames, 3, IMAGE_HEIGHT, IMAGE_WIDTH},
torch::TensorOptions().device(torch::kCPU).dtype<float>());
for (int32_t k = 0; k < noImpFrames; ++k) {
impImagesT[k] = normalizeAndSetTensorFromImage(impImages[k]);
}
if (this->use_cuda) {
impImagesT = impImagesT.to(this->device);
}
// poses -> tensor
auto refPoseT = torch::empty({4, 4},
torch::TensorOptions().device(this->device).dtype<float>());
for (int32_t i = 0; i < 4; ++i) {
for (int32_t j = 0; j < 4; ++j) {
refPoseT.index_put_({i, j}, refPose(i, j));
}
}
auto impPosesT = torch::empty({noImpFrames, 4, 4},
torch::TensorOptions().device(this->device).dtype<float>());
for (int32_t k = 0; k < noImpFrames; ++k) {
for (int32_t i = 0; i < 4; ++i) {
for (int32_t j = 0; j < 4; ++j) {
impPosesT.index_put_({k, i, j}, imgPoses[k](i, j));
}
}
}
c10::Dict<std::string, torch::Tensor> refDict;
refDict.insert("image_b3hw", refImageT.unsqueeze(0));
refDict.insert("world_T_cam_b44", refPoseT.unsqueeze(0));
refDict.insert("cam_T_world_b44", refPoseT.inverse().unsqueeze(0));
c10::Dict<std::string, torch::Tensor> impDict;
impDict.insert("image_b3hw", impImagesT.unsqueeze(0));
impDict.insert("world_T_cam_b44", impPosesT.unsqueeze(0));
impDict.insert("cam_T_world_b44", impPosesT.inverse().unsqueeze(0));
// intrinsics -> tensor
for (uint32_t i = 0; i < 5; ++i) {
auto refIntrinsicsT = torch::eye(4,
torch::TensorOptions().device(this->device).dtype<float>());
refIntrinsicsT.index_put_({0, 0}, projParams.x / std::pow(2, i + 1)); // fx
refIntrinsicsT.index_put_({1, 1}, projParams.y / std::pow(2, i + 1)); // fy
refIntrinsicsT.index_put_({0, 2}, projParams.z / std::pow(2, i + 1)); // cx
refIntrinsicsT.index_put_({1, 2}, projParams.w / std::pow(2, i + 1)); // cy
auto impIntrinsicsT = refIntrinsicsT.clone().repeat({noImpFrames, 1, 1});
std::ostringstream stringStream;
stringStream << "K_s" << i << "_b44";
refDict.insert(stringStream.str(), refIntrinsicsT.unsqueeze(0));
impDict.insert(stringStream.str(), impIntrinsicsT.unsqueeze(0));
stringStream.clear();
stringStream.str(std::string());
stringStream << "invK_s" << i << "_b44";
refDict.insert(stringStream.str(), refIntrinsicsT.inverse().unsqueeze(0));
impDict.insert(stringStream.str(), impIntrinsicsT.inverse().unsqueeze(0));
}
inputs.emplace_back(std::move(refDict));
inputs.emplace_back(std::move(impDict));
inputs.emplace_back(torch::zeros({1}, torch::TensorOptions().device(this->device).dtype<bool>()));
inputs.emplace_back(torch::ones({1}, torch::TensorOptions().device(this->device).dtype<bool>()));
inputs.emplace_back(torch::zeros({1}, torch::TensorOptions().device(this->device).dtype<bool>()));
return inputs;
}
```
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1018-gcp-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] No relevant packages
[conda] No relevant packages
```
| 3 |
4,536 | 86,660 |
libtorch make failed
|
module: build, triaged
|
### 🐛 Describe the bug
When I use libtorch with this CMakeLists.txt:
```
cmake_minimum_required(VERSION 3.15)
project(torch_test)
set(APP_NAME torch_test)
set(CMAKE_PREFIX_PATH "/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/")
IF(NOT CMAKE_BUILD_TYPE)
SET(CMAKE_BUILD_TYPE Release)
ENDIF()
set(OpenCV_DIR "/usr/local/share/OpenCV")
find_package(OpenCV REQUIRED)
include_directories(
${OpenCV_INCLUDE_DIRS}
)
set(CUDA_DIR "/usr/local/cuda-11.1")
find_package(CUDA REQUIRED)
set(Torch_DIR "/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/share/cmake/Torch")
find_package(Torch REQUIRED)
include_directories(
${Torch_INCLUDE_DIRS}
)
file(GLOB SOURCE_FILES torch.cpp)
ADD_EXECUTABLE(${APP_NAME} ${SOURCE_FILES})
target_link_libraries(${APP_NAME} ${TORCH_LIBRARIES})
target_link_libraries(
${APP_NAME}
${OpenCV_LIBS}
# ${Torch_LIBRARIES}
${catkin_LIBRARIES}
)
set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 14)
```
My C++ file is as follows, and it works:
```
#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>
using namespace torch;
using namespace std;
int main()
{
torch::Tensor output = torch::randn({ 3,2 });
std::cout <<output<<endl;
std::cout <<torch::cuda::is_available()<<endl;
return 0;
}
```
However, when I use this CMakeLists.txt:
```
cmake_minimum_required(VERSION 3.15)
project(thrust)
set(APP_NAME thrust)
IF(NOT CMAKE_BUILD_TYPE)
SET(CMAKE_BUILD_TYPE Release)
ENDIF()
set(CMAKE_PREFIX_PATH "/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/")
set(Torch_DIR "/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/share/cmake/Torch")
find_package(Torch REQUIRED)
include_directories(${Torch_INCLUDE_DIRS})
set(OpenCV_DIR /usr/local/share/OpenCV)
find_package(OpenCV REQUIRED)
include_directories(
${OpenCV_INCLUDE_DIRS}
)
find_package(CUDA REQUIRED)
include_directories(${CUDA_INCLUDE_DIRS})
link_directories(${CUDA_LIBRARY_DIRS})
set(CUDA_NVCC_PLAGS ${CUDA_NVCC_PLAGS};-std=c++14;-g;-G;-gencode;arch=compute_30;code=sm_30)
file(GLOB SOURCE_FILES thrusts.cu)
cuda_add_executable(${PROJECT_NAME} ${SOURCE_FILES})
# target_link_libraries(${APP_NAME} ${TORCH_LIBRARIES})
target_link_libraries(
${APP_NAME}
${TORCH_LIBRARIES}
${OpenCV_LIBS}
${CUDA_LIBRARIES}
)
set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 14)
```
I get this error:
```
-- Caffe2: CUDA detected: 11.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda-11.1/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-11.1
-- Caffe2: Header version is: 11.1
-- Found cuDNN: v8.1.1 (include: /usr/local/cuda-11.1/include, library: /usr/local/cuda-11.1/lib64/libcudnn.so)
-- /usr/local/cuda-11.1/lib64/libnvrtc.so shorthash is 1f6b333a
-- Autodetected CUDA architecture(s): 6.1
-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
-- Configuring done
CMake Warning at /home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/share/cmake/Caffe2/Modules_CUDA_fix/upstream/FindCUDA.cmake:1915 (add_executable):
Cannot generate a safe runtime search path for target thrust because files
in some directories may conflict with libraries in implicit directories:
runtime library [libnvToolsExt.so.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
/usr/local/cuda-11.1/lib64
Some of these libraries may not be found correctly.
Call Stack (most recent call first):
CMakeLists.txt:31 (cuda_add_executable)
-- Generating done
-- Build files have been written to: /home/hy/project/scripts/cpp_test/build
[ 50%] Building NVCC (Device) object CMakeFiles/thrust.dir/thrust_generated_thrusts.cu.o
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/warpers.hpp(235): warning: overloaded virtual function "cv::detail::PlaneWarper::buildMaps" is only partially overridden in class "cv::detail::AffineWarper"
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/warpers.hpp(235): warning: overloaded virtual function "cv::detail::PlaneWarper::warp" is only partially overridden in class "cv::detail::AffineWarper"
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/blenders.hpp(100): warning: overloaded virtual function "cv::detail::Blender::prepare" is only partially overridden in class "cv::detail::FeatherBlender"
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/blenders.hpp(127): warning: overloaded virtual function "cv::detail::Blender::prepare" is only partially overridden in class "cv::detail::MultiBandBlender"
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/warpers.hpp(235): warning: overloaded virtual function "cv::detail::PlaneWarper::buildMaps" is only partially overridden in class "cv::detail::AffineWarper"
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/warpers.hpp(235): warning: overloaded virtual function "cv::detail::PlaneWarper::warp" is only partially overridden in class "cv::detail::AffineWarper"
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/blenders.hpp(100): warning: overloaded virtual function "cv::detail::Blender::prepare" is only partially overridden in class "cv::detail::FeatherBlender"
/usr/local/opencv/include/opencv4/opencv2/stitching/detail/blenders.hpp(127): warning: overloaded virtual function "cv::detail::Blender::prepare" is only partially overridden in class "cv::detail::MultiBandBlender"
Scanning dependencies of target thrust
[100%] Linking CXX executable thrust
CMakeFiles/thrust.dir/thrust_generated_thrusts.cu.o: In function 'c10::IValue::toDouble() const':
/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/include/ATen/core/ivalue.h:456: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
CMakeFiles/thrust.dir/thrust_generated_thrusts.cu.o: In function 'c10::IValue::toInt() const':
/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/include/ATen/core/ivalue.h:503: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
CMakeFiles/thrust.dir/thrust_generated_thrusts.cu.o: In function 'c10::IValue::toComplexDouble() const':
/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/include/ATen/core/ivalue_inl.h:137: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
CMakeFiles/thrust.dir/thrust_generated_thrusts.cu.o: In function 'c10::IValue::toTensor() const &':
/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/include/ATen/core/ivalue_inl.h:161: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
CMakeFiles/thrust.dir/thrust_generated_thrusts.cu.o: In function 'c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::retain_()':
/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/include/c10/util/intrusive_ptr.h:231: undefined reference to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
CMakeFiles/thrust.dir/thrust_generated_thrusts.cu.o:/home/hy/Downloads/libtorch-shared-with-deps-1.8.2+cu111/libtorch/include/c10/util/intrusive_ptr.h:231: more undefined references to 'c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)' follow
collect2: error: ld returned 1 exit status
CMakeFiles/thrust.dir/build.make:162: recipe for target 'thrust' failed
make[2]: *** [thrust] Error 1
CMakeFiles/Makefile2:75: recipe for target 'CMakeFiles/thrust.dir/all' failed
make[1]: *** [CMakeFiles/thrust.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
```
After I remove torch, my program works, so I think the problem is there. But the first example shows I can use torch correctly. I don't know how to solve it. Please help me, thank you!
### Versions
Collecting environment information...
PyTorch version: 1.10.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.15.3
Libc version: glibc-2.27
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.1.74
GPU models and configuration: GPU 0: GeForce GTX 1050 Ti
Nvidia driver version: 455.23.05
libtorch: libtorch-shared-with-deps-1.8.2+cu111
cc @malfet @seemethere
| 0 |
4,537 | 86,644 |
[NvFuser] INTERNAL ASSERT FAIL "ScalarType should be static for Tensors in fusion for amp optimization"
|
triaged, module: assert failure, module: nvfuser
|
### 🐛 Describe the bug
[NvFuser] INTERNAL ASSERT FAIL "ScalarType should be static for Tensors in fusion for amp optimization" with multiple nested if-conditions
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input):
input_1 = input
# bug cause
if input.dtype == torch.bfloat16:
input_1 = torch.nn.functional.tanhshrink(input)
if input_1.dtype == torch.int8:
if input_1.dtype == torch.complex64:
input.cos_()
input_1 = torch.mul(input, torch.tensor(-13, dtype=torch.float32, device='cuda'))
input = input_1
fn_res = torch.deg2rad(input, )
return fn_res
fn = M().to('cuda')
torch.random.manual_seed(65358)
input_tensor = torch.empty([3, 2], dtype=torch.float32, memory_format=torch.contiguous_format)
input_tensor.uniform_(-64, 1)
inp = input_tensor.to('cuda')
jit_fn = torch.jit.script(fn)
for i in range(5):
print(i)
jit_fn(inp)
```
```
0
1
RuntimeError: scalar_type.has_value() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1659484809662/work/torch/csrc/jit/codegen/cuda/graph_fuser.cpp":1938, please report a bug to PyTorch. ScalarType should be static for Tensors in fusion for amp optimization
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.12.1 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] tensorflow-base 2.9.1 mkl_py39h353358b_0
[conda] torch 1.12.1 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,538 | 86,641 |
RFC(from users): nn.Module behavior with in-place changes
|
triaged, needs design, module: functorch
|
## Background
As a part of [functorch rationalization](https://docs.google.com/document/d/1vL9mjHKeRR-uzBazt4W-N4NftqR5I1IDlszHUUanAIw/edit#heading=h.gngvuf6wr2h2) (integrating functorch into pytorch/pytorch), we're considering how to consolidate [`make_functional`](https://pytorch.org/functorch/stable/generated/functorch.make_functional.html?highlight=make_functional#functorch.make_functional) from functorch with [`stateless.functional_call`](https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html?highlight=functional_call#torch.nn.utils.stateless.functional_call). We weren't certain what the expected behavior here is and wanted to get some user feedback
## What should the following produce?
Consider that we have some model Foo that updates the number of iterations it has run whenever the forward is called
```python
class Foo(nn.Module):
def __init__(self):
self.iter = 0
self.weight = torch.randn(3)
def forward(self, x):
self.iter += 1
out = x * self.weight
return out.sum()
```
### Question 1
In this example, should the last line print out "has run 0 time(s)" or "has run 1 time(s)"?
```python
model = Foo()
model_fn, params = make_functional(model)
# note since the API might change, please ignore what this currently does in practice
# and just share what you would expect
#
# here we get the grad wrt parameters of running with a random input
grad(model_fn, argnums=0)(params, torch.randn(3))
print(f"has run {model.iter} time(s)")
```
### Question 2
Does it change anything if iter is a tensor? Should we treat ints differently?
```python
class Foo(nn.Module):
def __init__(self):
self.iter = torch.tensor([0])
self.weight = torch.randn(3)
def forward(self, x):
self.iter.add_(1)
out = x * self.weight
return out.sum()
model = Foo()
model_fn, params, buffers = make_functional_with_buffers(model)
grad(model_fn, argnums=0)(params, buffers, torch.randn(3))
print(f"has run {model.iter.item()} time(s)")
```
## Ask
As users of functorch, it would be great to know your thoughts on either or both questions (partial answers are still valuable)!
cc @zou3519 @Chillee @soumith @ain-soph (if you're still willing to give feedback on the nn API)
| 2 |
4,539 | 86,627 |
[ONNX] CSE pass in export pollutes Scope information
|
module: onnx, triaged, module: regression, bug
|
### 🐛 Describe the bug
## Description
There are two issues found with recently adding CSE pass to ONNX export.
1. The CSE pass does not update Scope information. Common subexpressions from different modules are eliminated without updating the scope information of the node that is kept. As a result, it appears that the node's 'users' from other modules are accessing internal information from the module the node belongs to.
2. Some ONNX passes are re-creating duplicated Constants after the CSE pass.
Both issues are illustrated in below example.
```python
import torch
class M(torch.nn.Module):
def __init__(self, bias):
super().__init__()
self.bias = bias
def forward(self, x):
return x + self.bias
class N(torch.nn.Module):
def __init__(self, layers: int = 3):
super().__init__()
# 'bias' is same value for all layers, hence common sub expression.
self.layers = torch.nn.ModuleList(
[M(bias=torch.tensor([1.0])) for i in range(layers)]
)
def forward(self, x):
for layer in self.layers:
x = layer(x)
return x
x = torch.randn(8192, 1, 1)
model = N()
torch.onnx.export(
model,
x,
"model.onnx",
verbose=True,
opset_version=15,
input_names=["x"],
dynamic_axes={"x": [0]},
)
```
Observe ONNX graph:
```python
Exported graph: graph(%x : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu)):
%/layers.0/Constant_output_0 : Float(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/layers.0/Constant"](), scope: __main__.N::/__main__.M::layers.0 # repro_oct_cse.py:101:0
%/layers.0/Add_output_0 : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu) = onnx::Add[onnx_name="/layers.0/Add"](%x, %/layers.0/Constant_output_0), scope: __main__.N::/__main__.M::layers.0 # repro_oct_cse.py:101:0
%/layers.0/Constant_1_output_0 : Float(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/layers.0/Constant_1"](), scope: __main__.N::/__main__.M::layers.0 # repro_oct_cse.py:101:0
%/layers.1/Add_output_0 : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu) = onnx::Add[onnx_name="/layers.1/Add"](%/layers.0/Add_output_0, %/layers.0/Constant_1_output_0), scope: __main__.N::/__main__.M::layers.1 # repro_oct_cse.py:101:0
%/layers.0/Constant_2_output_0 : Float(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/layers.0/Constant_2"](), scope: __main__.N::/__main__.M::layers.0 # repro_oct_cse.py:101:0
%6 : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu) = onnx::Add[onnx_name="/layers.2/Add"](%/layers.1/Add_output_0, %/layers.0/Constant_2_output_0), scope: __main__.N::/__main__.M::layers.2 # repro_oct_cse.py:101:0
return (%6)
```
The same constant value `onnx::Constant[value={1}]` still appears 3 times:
`%/layers.0/Constant_output_0`, `%/layers.0/Constant_1_output_0`, `%/layers.0/Constant_2_output_0`.
The Scope information is also incorrect: all 3 constants are considered to belong to `layers.0`, whereas
they should belong to `layers.0`, `layers.1`, and `layers.2` respectively.
This **breaks** node naming and local function export, as both depend on Scope information to determine a node's original module.
For comparison, this is the graph with CSE disabled during export:
```python
Exported graph: graph(%x : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu)):
%/layers.0/Constant_output_0 : Float(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/layers.0/Constant"](), scope: __main__.N::/__main__.M::layers.0 # repro_oct_cse.py:101:0
%/layers.0/Add_output_0 : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu) = onnx::Add[onnx_name="/layers.0/Add"](%x, %/layers.0/Constant_output_0), scope: __main__.N::/__main__.M::layers.0 # repro_oct_cse.py:101:0
%/layers.1/Constant_output_0 : Float(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/layers.1/Constant"](), scope: __main__.N::/__main__.M::layers.1 # repro_oct_cse.py:101:0
%/layers.1/Add_output_0 : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu) = onnx::Add[onnx_name="/layers.1/Add"](%/layers.0/Add_output_0, %/layers.1/Constant_output_0), scope: __main__.N::/__main__.M::layers.1 # repro_oct_cse.py:101:0
%/layers.2/Constant_output_0 : Float(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/layers.2/Constant"](), scope: __main__.N::/__main__.M::layers.2 # repro_oct_cse.py:101:0
%6 : Float(*, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu) = onnx::Add[onnx_name="/layers.2/Add"](%/layers.1/Add_output_0, %/layers.2/Constant_output_0), scope: __main__.N::/__main__.M::layers.2 # repro_oct_cse.py:101:0
return (%6)
```
## Proposal
Have the CSE pass update the scope to the common ancestor whenever it merges two nodes that form a common subexpression.
For the above example, the scopes of the 3 nodes are:
```
scope: __main__.N::/__main__.M::layers.0
scope: __main__.N::/__main__.M::layers.1
scope: __main__.N::/__main__.M::layers.2
```
Hence, they have the common ancestor `scope: __main__.N::`. Ideally, the node should also be moved upward in the graph to reflect this structural change.
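For illustration, here is a minimal sketch of how such a common-ancestor scope could be computed from the '/'-separated scope strings above. This is not the exporter's actual code; `common_ancestor_scope` is a hypothetical helper:
```python
def common_ancestor_scope(scopes):
    # Split each scope string into its '/'-separated segments and keep the
    # longest common prefix of segments across all scopes.
    split = [s.split("/") for s in scopes]
    ancestor = []
    for parts in zip(*split):
        if all(p == parts[0] for p in parts):
            ancestor.append(parts[0])
        else:
            break
    return "/".join(ancestor)

scopes = [
    "__main__.N::/__main__.M::layers.0",
    "__main__.N::/__main__.M::layers.1",
    "__main__.N::/__main__.M::layers.2",
]
print(common_ancestor_scope(scopes))  # __main__.N::
```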
## Alternative Solution
Have the ONNX exporter run an additional pass after CSE that updates scope information based on the scopes of each node's users.
### Versions
PyTorch master:
torch 1.13.0a0+git585452b
cc @thiagocrepaldi @AllenTiTaiWang
| 3 |
4,540 | 86,618 |
Move functorch tests from functorch/test/* to test/*; delete functorch CI configs
|
module: ci, triaged, module: functorch
|
## Issue description
We're at the tail end of moving functorch code out of the functorch/ folder and into the pytorch directory structure. One of the last remaining items is to move the functorch tests from `functorch/test/*` to `test/functorch/*`. The main motivation for this change is that pytorch testing infrastructure generally assumes that PyTorch tests are under `test/*`.
In addition to moving `functorch/test/*` to under `test/*`, we are also proposing deleting the functorch CI configurations and running functorch tests as a part of regular PyTorch CI tests. functorch is no longer a separate library from PyTorch and the tests for its features should run wherever PyTorch core tests run. Concretely, this would involve deleting the functorch shards and increasing the shard count for PyTorch configs to maintain TTS if appropriate.
I'm opening this issue to ask for feedback from the pytorch-dev-infra folks and to check whether there are any other considerations; cc @seemethere @malfet @pytorch/pytorch-dev-infra @Chillee @samdow @soumith and folks we've worked with around this (@malfet, @huydhn, @janeyx99).
| 8 |
4,541 | 86,616 |
JIT returns different values for a model on cuda and returns a strange error message on cpu
|
oncall: jit, module: correctness (silent)
|
### 🐛 Describe the bug
JIT returns different values for a model on cuda and returns a strange error message on cpu
On cuda, the return values are different with and without JIT
```py
import torch
device = torch.device('cuda')
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, inp):
inp1 = torch.add(inp, torch.tensor(-15, dtype=torch.float32, device=device))
inp.sin_()
inp2 = torch.chunk(inp1, 1, dim=0)[0]
inp = inp2
return inp.neg()
fn = M().to(device)
torch.random.manual_seed(35221)
inp = torch.empty([5], dtype=torch.float32, device=device)
inp.uniform_(-64, 31)
inp = inp.to(device)
print(fn(inp.clone()))
jit_fn = torch.jit.trace(fn, (inp.clone(), ))
print(jit_fn(inp.clone()))
```
```
tensor([47.3687, 70.6420, 70.5528, 6.1166, 78.2205], device='cuda:0')
tensor([15.8151, 14.2125, 14.1608, 14.4847, 15.3789], device='cuda:0')
```
---
On cpu, the JIT model will return a strange error message "RuntimeError: _Map_base::at"
```py
import torch
device = torch.device('cpu')
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, inp):
inp1 = torch.add(inp, torch.tensor(-15, dtype=torch.float32, device=device))
inp.sin_()
inp2 = torch.chunk(inp1, 1, dim=0)[0]
inp = inp2
return inp.neg()
fn = M().to(device)
torch.random.manual_seed(35221)
inp = torch.empty([5], dtype=torch.float32, device=device)
inp.uniform_(-64, 31)
inp = inp.to(device)
print(fn(inp.clone()))
jit_fn = torch.jit.trace(fn, (inp.clone(), ))
print(jit_fn(inp.clone()))
```
```
tensor([42.1028, 54.1574, 62.6319, -9.9247, 2.0634])
RuntimeError: _Map_base::at
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220923+cu116
[pip3] torchaudio==0.13.0.dev20220923+cu116
[pip3] torchvision==0.14.0.dev20220923+cu116
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.12.1 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] tensorflow-base 2.9.1 mkl_py39h353358b_0
[conda] torch 1.12.1 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,542 | 86,613 |
Decomposition table is ignored with use_functionalize=True in AOT Autograd
|
triaged, module: functorch, module: aotdispatch
|
### 🐛 Describe the bug
```py
import torch
from functorch.compile import aot_function
from functorch.compile import make_boxed_compiler
import functorch
functorch.compile.config.use_functionalize = True # default
def func(a):
return torch.nn.functional.silu(a)
def raise_error(*args):
raise RuntimeError("Expected error")
d = {torch.ops.aten.silu_backward.default: raise_error}
def fw_compiler(gm, args):
return gm
aot_fn = aot_function(func, fw_compiler=make_boxed_compiler(fw_compiler), decompositions=d)
a = torch.randn(3, 3, device="cuda", requires_grad=True)
try:
aot_fn(a).backward(torch.ones_like(a))
print("No error!")
except RuntimeError as e:
print(e)
```
```
No error!
```
Setting `functorch.compile.config.use_functionalize = False` works as expected (specified table is used).
### Versions
Latest master.
cc @zou3519 @Chillee @samdow @soumith
| 0 |
4,543 | 86,612 |
Nonoptimal trace of silu_backward with AOT Autograd
|
triaged, module: functorch, module: aotdispatch
|
### 🐛 Describe the bug
The backward pass of `silu` creates an unnecessary tensor full of ones to do subtraction:
```py
import torch
from functorch.compile import aot_function
from functorch.compile import make_boxed_compiler
from functorch.compile import get_aot_graph_name
def func(a):
return torch.nn.functional.silu(a)
def fw_compiler(gm, args):
print(get_aot_graph_name())
gm.graph.print_tabular()
return gm
aot_fn = aot_function(func, fw_compiler=make_boxed_compiler(fw_compiler))
a = torch.randn(3, 3, device="cuda", requires_grad=True)
aot_fn(a).backward(torch.ones_like(a))
```
Output:
```py
model_forward_0
opcode name target args kwargs
------------- --------- ----------------- -------------------- --------
placeholder primals_1 primals_1 () {}
call_function silu aten.silu.default (primals_1,) {}
output output output ([silu, primals_1],) {}
model_backward_0
opcode name target args kwargs
------------- ---------- ----------------------- ------------------- ----------------------------------------
placeholder primals_1 primals_1 () {}
placeholder tangents_1 tangents_1 () {}
call_function sigmoid aten.sigmoid.default (primals_1,) {}
call_function empty_like aten.empty_like.default (sigmoid,) {'memory_format': torch.preserve_format}
call_function fill aten.fill.Scalar (empty_like, 1) {}
call_function sub aten.sub.Tensor (fill, sigmoid) {}
call_function mul aten.mul.Tensor (primals_1, sub) {}
call_function add aten.add.Scalar (mul, 1) {}
call_function mul_1 aten.mul.Tensor (sigmoid, add) {}
call_function mul_2 aten.mul.Tensor (tangents_1, mul_1) {}
output output output ([mul_2],) {}
```
Here's the C++ code:
https://github.com/pytorch/pytorch/blob/4a5fdc56ec692fe5e39b8f5d2da6be16434c5a02/aten/src/ATen/native/Activation.cpp#L487
`1 - input_sigmoid` is recorded as a sequence of empty_like, fill, and sub.
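For reference, here is a minimal sketch of the silu backward formula computed directly (following the C++ reference above), which avoids materializing a tensor of ones. This only illustrates the math; it is not what AOT Autograd currently emits:
```python
import torch

def silu_backward_ref(grad_output, x):
    # d/dx silu(x) = sigmoid(x) * (1 + x * (1 - sigmoid(x)))
    s = torch.sigmoid(x)
    return grad_output * s * (1 + x * (1 - s))

x = torch.randn(3, 3, requires_grad=True)
y = torch.nn.functional.silu(x)
g = torch.ones_like(x)
(autograd_grad,) = torch.autograd.grad(y, x, g)
print(torch.allclose(autograd_grad, silu_backward_ref(g, x)))  # True
```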
### Versions
Latest master.
cc @zou3519 @Chillee @samdow @soumith @ezyang
| 2 |
4,544 | 86,601 |
NVFuser batch norm with prims: internal assert failure from test suite
|
module: tests, triaged, module: nvfuser, module: primTorch
|
I noticed that when I run the following test locally:
```
python test/test_ops.py TestCommonCUDA.test_python_ref_executor_ops_nvprims_native_batch_norm_executor_nvfuser_cuda_float32
```
I get this error:
```
File "/raid/hirsheybar/pytorch/torch/_prims/nvfuser_executor.py", line 205, in nvfuser_execute
fusion, unflatten_spec = make_nvfuser_fusion(gm, *nv_template_args) # type: ignore[misc]
File "/raid/hirsheybar/pytorch/torch/_prims/nvfuser_executor.py", line 178, in make_nvfuser_fusion
fd.add_output(o)
RuntimeError: downcast_ptr != nullptr INTERNAL ASSERT FAILED at "../torch/csrc/jit/codegen/cuda/utils.h":136, please report a bug to PyTorch.
```
I'm not sure why that test isn't failing in CI - one possibility is that I have a debug build locally, and the failure only manifests in a debug build.
cc @mruberry @ezyang @ngimel @Lezcano @fdrocha
| 0 |
4,545 | 86,598 |
`squeeze_` fails with JIT but succeeds without it
|
oncall: jit
|
### 🐛 Describe the bug
`squeeze_` fails with JIT but succeeds without it
```py
import torch
def fn(input):
return input.squeeze_(dim=3)
input = torch.empty([5, 1, 5, 1], dtype=torch.float64, device='cpu')
input.uniform_(-64, 127)
print(fn(input.clone()))
jit_fn = torch.jit.trace(fn, input.clone())
```
```
tensor([[[ 57.4385, -4.1074, -42.2959, -59.3506, -37.6112]],
[[ -1.8995, 122.6732, -19.9084, 105.3027, 46.4563]],
[[ 39.0169, -47.2261, -43.9141, -56.1353, 71.8304]],
[[106.2245, -1.1742, 27.4605, 41.4658, 82.8431]],
[[119.8366, 66.7670, 5.1189, 17.5239, -60.9487]]],
dtype=torch.float64)
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,546 | 86,597 |
JIT returns different values for `cos + frac` on cpu
|
oncall: jit, module: correctness (silent)
|
### 🐛 Describe the bug
JIT returns different values for `cos + frac` on cpu
```py
import torch
def fn(input):
input = torch.cos(input)
return torch.frac(input,)
input = torch.tensor(26.9859, device='cpu')
print(fn(input))
jit_frac = torch.jit.trace(fn, input)
print(jit_frac(input))
```
```
tensor(-0.2786)
tensor(0.7214)
```
By contrast, it has the same value on cuda
```py
import torch
def fn(input):
input = torch.cos(input)
return torch.frac(input,)
input = torch.tensor(26.9859, device='cuda')
print(fn(input))
jit_frac = torch.jit.trace(fn, input)
print(jit_frac(input))
```
```
tensor(-0.2786)
tensor(-0.2786)
```
More interestingly, if we split the operations, it will also return the correct value
```py
import torch
def fn(input):
return torch.frac(input,)
input = torch.tensor(26.9859, device='cpu')
input = torch.cos(input)
print(fn(input))
jit_frac = torch.jit.trace(fn, input)
print(jit_frac(input))
```
```
tensor(-0.2786)
tensor(-0.2786)
```
---
Besides, JIT will also return different values for `sinh + cos`
```py
import torch
def fn(input):
res = torch.sinh(input, )
return torch.cos(res)
torch.random.manual_seed(58454)
input= torch.empty([4], dtype=torch.float32)
input.uniform_(-16, 7)
print(fn(input.clone()))
jit_fn = torch.jit.trace(fn, input)
print(jit_fn(input.clone()))
```
```
tensor([-0.1261, 0.9493, 0.3920, 0.9994])
tensor([-0.1261, 0.9493, 0.5037, 0.9994])
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,547 | 86,596 |
`CTCLoss` returns a different value with JIT on cuda
|
oncall: jit, module: cuda, module: correctness (silent)
|
### 🐛 Describe the bug
`CTCLoss` returns a different value with JIT on cuda
```py
import torch
torch.random.manual_seed(3564)
def get_fn():
arg_class = torch.nn.CTCLoss()
inp1 = torch.randint(-16, 1, [16, 30], dtype=torch.int64, device='cuda')
inp2 = torch.randint(-4, 2, [16], dtype=torch.int64, device='cuda')
inp3 = torch.randint(-1, 1, [16], dtype=torch.int64, device='cuda')
def fn(inp):
fn_res = arg_class(inp, inp1, inp2, inp3)
return fn_res
return fn
fn = get_fn()
inp = torch.empty([1, 16, 20], dtype=torch.float32)
inp.uniform_(-32, 15)
inp_a = inp.clone().to('cuda')
inp_b = inp.clone().to('cuda')
inp_c = inp.clone().to('cuda')
res = fn(inp_a)
jit_fn = torch.jit.trace(fn, (inp_b, ))
jit_res = jit_fn(inp_c)
print(res) # tensor(12.9426, device='cuda:0')
print(jit_res) # tensor(14.2359, device='cuda:0')
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
cc @ngimel
| 0 |
4,548 | 86,595 |
JIT model with `relu+div+sgn` will crash when computing the gradient
|
oncall: jit, module: crash, module: functorch
|
### 🐛 Describe the bug
A JIT model with `relu+div+sgn` will crash (segmentation fault, core dumped) when computing the gradient with `torch.autograd.functional.jacobian` after the model has already been run.
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input):
input = torch.nn.functional.relu(input)
input = torch.div(input, torch.tensor(2, dtype=torch.float32, device='cpu'))
fn_res = torch.sgn(input, )
return fn_res
fn = M().to('cpu')
torch.random.manual_seed(47202)
input = torch.rand([5, 5, 5], dtype=torch.float64)
jit_fn = torch.jit.trace(fn, input)
from torch.autograd.functional import jacobian
jit_fn(input.clone().requires_grad_())
jacobian(jit_fn, input.clone().requires_grad_())
```
```
segmentation fault (core dumped)
```
Interestingly, it will not crash if we call `jacobian` directly without first running the model.
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
cc @zou3519 @Chillee @samdow @soumith
| 0 |
4,549 | 86,594 |
JIT model with mean will crash when computing the gradients on cuda
|
triage review, oncall: jit, module: crash
|
### 🐛 Describe the bug
The JIT model with mean will crash when computing the gradients on cuda.
It seems that it frees an invalid pointer:
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.dim = -1
self.keepdim = False
def forward(self, input):
dim = self.dim
keepdim = self.keepdim
input = torch.sub(input, torch.tensor(9, dtype=torch.float32, device='cuda'))
fn_res = torch.mean(input, dim, keepdim=keepdim, )
fn_res = torch.sub(fn_res, torch.tensor(-3, dtype=torch.float32, device='cuda'))
return fn_res
fn = M().to('cuda')
torch.random.manual_seed(56288)
inp = torch.empty([1, 64], dtype=torch.float32, memory_format=torch.contiguous_format)
inp.uniform_(-128, 63)
def clone(tensor):
return tensor.clone().to('cuda').requires_grad_()
jit_fn = torch.jit.trace(fn, clone(inp))
from torch.autograd.functional import jacobian
jit_fn(clone(inp))
jacobian(jit_fn, (clone(inp), ))
jacobian(jit_fn, (clone(inp), ), vectorize=True, strategy='forward-mode')
```
```
free(): invalid pointer
[1] 3996840 IOT instruction (core dumped) python mean.py
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
cc @ezyang @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @Chillee @samdow @soumith
| 0 |
4,550 | 86,590 |
Easy way to "freeze" BatchNorm running_mean/running_var
|
module: nn, triaged, needs research
|
### 🚀 The feature, motivation and pitch
In most transfer learning applications, it is often useful to freeze some layers of the CNN (e.g. resnet50) up to the last convolutional layers in order to train only the last layers. Usually, I simply set requires_grad=False to all parameters in a simple for loop:

    for param in net.parameters():
        param.requires_grad = False
This way, I can alternate freely between model.train() and model.eval(), knowing which weights will get updated. However, since the batch norm running statistics (running_mean/running_var) are buffers that are not updated by backprop, the requires_grad attribute set on them is effectively ignored, unlike the layer's training attribute, which is used to compute bn_training. As a result, after training I end up with a model where all of the batch norm running statistics have changed.
Here's my request: could bn_training somehow also depend on the requires_grad attribute of running_mean/running_var, i.e. something like:

    if self.training and self.running_mean.requires_grad and self.running_var.requires_grad:
        bn_training = True
    else:
        bn_training = (self.running_mean is None) and (self.running_var is None)
Note that I've seen a lot of code where people don't realize that their feature extractor is not entirely frozen, which can lead to incorrect or non-reproducible results, for example in research papers.
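For reference, a minimal sketch of a workaround that is possible today (not the requested feature): in addition to setting requires_grad=False, switch only the BatchNorm layers to eval mode so that bn_training is False and the running statistics stay frozen. The toy model is a hypothetical stand-in for a real feature extractor.
```python
import torch
import torch.nn as nn

# Toy model standing in for a real feature extractor.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)

for param in net.parameters():
    param.requires_grad = False

net.train()  # the rest of the model stays in train mode
for m in net.modules():
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
        m.eval()  # bn_training becomes False -> running stats are not updated

before = net[1].running_mean.clone()
net(torch.randn(4, 3, 16, 16))
assert torch.equal(before, net[1].running_mean)  # statistics unchanged
```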
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
4,551 | 86,572 |
Instructions for Selective Build for Mobile Linux Platform
|
oncall: mobile
|
### 📚 The doc issue
1. The docs. at https://pytorch.org/tutorials/prototype/tracing_based_selective_build.html include instructions for iOS/Android, but not for Linux. Please could you add instructions for Linux?
2. The instructions on that page don't include a call to `optimize_for_mobile()`. Is it a required step or can selective build work without `optimize_for_mobile()`? (See the sketch after this list.)
3. I also want to know a quick way to be able to check the approximate size change for my PyTorch build for the set of models I want to work with - is there any way to do that w/o doing a complete selective build?
The page above says:
> The custom build is still under development, and we will continue improving its size in the future. Note, however, that the APIs are subject to change in future versions.
4. What other size improvements can be expected in the future?
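For context, a minimal sketch of how `optimize_for_mobile()` is typically invoked on a scripted module before saving for the lite interpreter; the model here is a hypothetical stand-in, and whether this step is required for selective build is exactly question 2 above:
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical toy model standing in for the real one.
model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU()).eval()
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)  # applies mobile-oriented graph passes
optimized._save_for_lite_interpreter("model.ptl")
```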
### Suggest a potential alternative/fix
1. Include steps for a Linux-based build
2. Clarify whether `optimize_for_mobile()` is a required step and what it does - I can't find documentation about this
3. ??
4. ??
| 1 |
4,552 | 86,563 |
[functorch] colab links on functorch 0.2.0 website should be linked to a permalinked version of the colabs
|
module: docs, triaged, module: functorch
|
### 🐛 Describe the bug
Visit the AOT Autograd tutorial at: https://pytorch.org/functorch/stable/notebooks/aot_autograd_optimizations.html
Click the Colab icon so you can run the tutorial: it links to a colab directory that no longer exists in the repo, so you'll
receive:
~~~
Notebook not found
There was an error loading this notebook. Ensure that the file is accessible and try again.
Ensure that you have permission to view this notebook in GitHub and authorize Colaboratory to use the GitHub API.
https://github.com/pytorch/functorch/blob/main/notebooks/colab/aot_autograd_optimizations.ipynb
Fetch for https://api.github.com/repos/pytorch/functorch/contents/notebooks/colab?per_page=100&ref=main failed: {
"message": "Not Found",
"documentation_url": "https://docs.github.com/rest/reference/repos#get-repository-content"
}
CustomError: Fetch for https://api.github.com/repos/pytorch/functorch/contents/notebooks/colab?per_page=100&ref=main failed: {
"message": "Not Found",
"documentation_url": "https://docs.github.com/rest/reference/repos#get-repository-content"
}
at new mK (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20221006-060055-RC01_479328028:2421:77)
at wa.program_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20221006-060055-RC01_479328028:2405:427)
at ya (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20221006-060055-RC01_479328028:20:336)
at wa.next_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20221006-060055-RC01_479328028:18:479)
at za.next (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20221006-060055-RC01_479328028:21:206)
at b (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20221006-060055-RC01_479328028:21:468)
~~~
The notebook can be loaded into Colab by going to the Jupyter notebook on GitHub and then clicking the Colab button there (albeit with an odd 'run in Colab' icon still showing even in Colab), but a typical user will not want to figure out the correct steps and path to run:
https://github.com/pytorch/pytorch/blob/master/functorch/notebooks/aot_autograd_optimizations.ipynb
### Versions
Collecting environment information...
PyTorch version: 1.14.0.dev20221009+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.2.152
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.14.0.dev20221009+cu116
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.0.dev20221009+cu116
cc @svekars @holly1238 @zou3519 @Chillee @samdow @soumith
| 3 |
4,553 | 86,558 |
Data conversion ops ignore `memory_format=torch.contiguous_format`
|
triaged, module: primTorch
|
### 🐛 Describe the bug
**Bug**: a _contiguous_ format is requested for a _non-contiguous_ input, but a _non-contiguous view_ is returned.
Likely this is the same for all data conversion ops:
`bool`, `bfloat16`, `byte`, `char`, `double`, `float`, `half`, `int`, `long`, `short`.
Probably only happens when input dtype matches the conversion op (like `torch.bool` and `.bool()`).
**Expected**: a _contiguous copy_ is returned instead. Note: in primTorch, `to` returns a _contiguous copy_ in this case.
```py
#!/usr/bin/env python3
import torch
input = torch.tensor([[True, False, False], [False, False, False]])
args = ()
kwargs = {'memory_format': torch.contiguous_format}
print("is contiguous?", input.is_contiguous())
input = input.t()
print("is contiguous after transpose?", input.is_contiguous())
print("is view?", input._is_view())
op=lambda x, *args, **kwargs: x.bool(*args, **kwargs)
torch_result = op(input, *args, **kwargs)
print()
print("torch is view?", torch_result._is_view())
print("torch is contiguous?", torch_result.is_contiguous())
```
Output:
```
is contiguous? True
is contiguous after transpose? False
is view? True
torch is view? True
torch is contiguous? False
```
### Versions
master (67434c70df5df353944f6ba876d9dd06b669bacd)
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
| 2 |
4,554 | 86,554 |
[NvFuser] would change the output for some inaccurate dtype
|
triaged, module: nvfuser
|
### 🐛 Describe the bug
NvFuser changes the output for some low-precision dtypes.
```py
import torch
torch._C._jit_set_nvfuser_single_node_mode(True)
def f(input, other):
return torch.fmod(input, other)
input = torch.tensor([-64.1, -21.9], dtype=torch.bfloat16, device='cuda')
other = torch.tensor(0.1999, dtype=torch.float64, device='cuda')
jit_f = torch.jit.trace(f, (input, other))
print(f(input, other))
print(jit_f(input, other))
# tensor([-0.1377, -0.0537], device='cuda:0', dtype=torch.bfloat16)
# tensor([-0.0320, -0.0859], device='cuda:0', dtype=torch.bfloat16)
```
The results differ significantly for `fmod`.
The root cause could be that NvFuser changes the computation logic for bfloat16, which is a low-precision dtype and can behave quite differently depending on how the computation is executed.
---
Another example is `add`/`sub`
```py
import torch
device = 'cuda'
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input, other):
fn_res = torch.add(input, other, )
return fn_res
input = torch.tensor([[[0.1733]]], dtype=torch.bfloat16, device=device)
other = torch.tensor(59.8794, dtype=torch.float64, device=device)
m = M().to(device)
torch._C._jit_set_nvfuser_single_node_mode(True)
jit_m = torch.jit.trace(m, (input.clone(), other.clone()))
print(jit_m(input.clone(), other.clone()))
torch._C._jit_set_nvfuser_single_node_mode(False)
jit_m = torch.jit.trace(m, (input.clone(), other.clone()))
print(jit_m(input.clone(), other.clone()))
```
```
tensor([[[60.]]], device='cuda:0', dtype=torch.bfloat16)
tensor([[[60.2500]]], device='cuda:0', dtype=torch.bfloat16)
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,555 | 86,553 |
`topk` will return the wrong value and could read out-of-bound value after jit
|
oncall: jit
|
### 🐛 Describe the bug
`topk` will return the wrong value and could read an out-of-bounds value after JIT
```py
import torch
input = torch.tensor(5.0, dtype=torch.float64)
device = 'cuda'
m = lambda inp: inp.topk(0)
print(m(input.clone().to(device)))
jit_m = torch.jit.trace(m, input.clone().to(device))
print(m(input.clone().to(device)))
print(jit_m(input.clone().to(device)))
print(m(input.clone().to(device)))
```
```
torch.return_types.topk(
values=tensor(0., device='cuda:0', dtype=torch.float64),
indices=tensor(0, device='cuda:0'))
torch.return_types.topk(
values=tensor(1., device='cuda:0', dtype=torch.float64),
indices=tensor(4532020583610935537, device='cuda:0'))
(tensor(1., device='cuda:0', dtype=torch.float64), tensor(4532020583610935537, device='cuda:0'))
torch.return_types.topk(
values=tensor(1., device='cuda:0', dtype=torch.float64),
indices=tensor(4532020583610935537, device='cuda:0'))
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,556 | 86,552 |
`max_unpool` and `max_pool` will trigger INTERNAL ASSERT FAIL in JIT
|
oncall: jit
|
### 🐛 Describe the bug
`max_unpool` will trigger an INTERNAL ASSERT FAIL in JIT but works normally without JIT
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.indices = torch.randint(0, 2, [1, 1, 4], dtype=torch.int64)
self.kernel_size = [1]
self.output_size = [1, 1, 2]
self.padding = [1]
# invalid inputs
self.stride = [True,]
def forward(self, input):
indices = self.indices
kernel_size = self.kernel_size
stride = self.stride
padding = self.padding
output_size = self.output_size
return torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=stride, padding=padding, output_size=output_size, )
input = torch.rand([1, 1, 4], dtype=torch.float32)
fn = M().to('cpu')
print(fn(input))
jit_fn = torch.jit.script(fn)
print(jit_fn(input))
```
```
tensor([[[0.8325, 0.0918]]])
RuntimeError: isIntList() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1664608354311/work/aten/src/ATen/core/ivalue_inl.h":1833, please report a bug to PyTorch. Expected IntList but got Int
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 1 |
4,557 | 86,551 |
`MultiLabelMarginLoss` will return incorrect values in JIT after the first run on cuda
|
oncall: jit, module: cuda, triaged, module: nvfuser
|
### 🐛 Describe the bug
`MultiLabelMarginLoss` will return incorrect values in JIT after the first run in some cases
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
loss = torch.nn.MultiLabelMarginLoss()
inp1 = torch.empty([1, 4], dtype=torch.int64)
inp1 = torch.randint_like(inp1, -32, 128).to('cuda')
self.loss = loss
self.inp1 = inp1
def forward(self, inp):
inp = torch.nn.functional.relu(inp)
inp = torch.nn.functional.relu(inp)
inp = torch.mul(inp, torch.tensor(13, dtype=torch.float32, device='cuda'))
inp = torch.nn.functional.relu(inp)
fn_res = self.loss(inp, self.inp1)
fn_res = torch.sin(fn_res)
return fn_res
torch.random.manual_seed(5100)
fn = M().to('cuda')
torch.random.manual_seed(5100)
inp = torch.empty([1, 4], dtype=torch.float32, memory_format=torch.contiguous_format)
inp.uniform_(-8, 15)
jit_fn = torch.jit.script(fn)
print('normal function')
for _ in range(3):
print(fn(inp.clone().to('cuda'), ))
print('jitted function')
for _ in range(3):
print(jit_fn(inp.clone().to('cuda'), ))
```
```
normal function
tensor(0.1331, device='cuda:0')
tensor(0.1331, device='cuda:0')
tensor(0.1331, device='cuda:0')
jitted function
tensor(0.1331, device='cuda:0')
tensor(-0.7621, device='cuda:0')
tensor(-0.7621, device='cuda:0')
```
---
Besides, `resize_as_` also suffers from this issue
```py
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, _input_tensor, tensor):
tensor = torch.nn.functional.tanhshrink(tensor)
fn_res = _input_tensor.resize_as_(tensor, )
fn_res = torch.nn.functional.relu(fn_res)
fn_res = torch.mul(fn_res, torch.tensor(-2, dtype=torch.float32, device='cuda'))
return fn_res
fn = M().to('cuda')
torch.random.manual_seed(31029)
inp = torch.empty([], dtype=torch.float64, memory_format=torch.contiguous_format)
inp.uniform_(-2, 1)
t = torch.empty([5], dtype=torch.float64, memory_format=torch.contiguous_format)
t.uniform_(-8, 31)
jit_fn = torch.jit.script(fn)
print('normal function')
for _ in range(3):
print(fn(inp.clone().to('cuda'), t.clone().to('cuda')))
print('jitted function')
for _ in range(3):
print(jit_fn(inp.clone().to('cuda'), t.clone().to('cuda')))
```
```
normal function
tensor([-0.9099, -2.0000, -2.0000, -2.0000, -2.0000], device='cuda:0',
dtype=torch.float64)
tensor([-0.9099, -2.0000, -2.0000, -2.0000, -2.0000], device='cuda:0',
dtype=torch.float64)
tensor([-0.9099, -2.0000, -2.0000, -2.0000, -2.0000], device='cuda:0',
dtype=torch.float64)
jitted function
tensor([-0.9099, -2.0000, -2.0000, -2.0000, -2.0000], device='cuda:0',
dtype=torch.float64)
tensor([-0.9099, -0.0000, -0.0000, -0.0000, -0.0000], device='cuda:0',
dtype=torch.float64)
tensor([-0.9099, -0.0000, -0.0000, -0.0000, -0.0000], device='cuda:0',
dtype=torch.float64)
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
cc @ngimel
| 0 |
4,558 | 86,548 |
About autocast
|
triaged, module: amp (automated mixed precision)
|
Can I use autocast for model inference? Can it accelerate inference time?
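For what it's worth, autocast can be used at inference time; below is a minimal sketch, assuming a CUDA device and an arbitrary float32 model (whether it actually speeds up inference depends on the hardware and the ops involved):
```python
import torch

model = torch.nn.Linear(128, 64).cuda().eval()
x = torch.randn(32, 128, device="cuda")

with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    out = model(x)  # eligible ops (e.g. linear/matmul) run in float16

print(out.dtype)  # torch.float16
```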
cc @mcarilli @ptrblck
| 4 |
4,559 | 86,547 |
Segmentation fault (core dumped) in RTX3090
|
needs reproduction, module: cuda, triaged
|
### 🐛 Describe the bug
When I train my neural network on an RTX 3090 with the latest drivers (520.61.05), CUDA 11.8, and an AMD Ryzen 9 5950X 16-core processor, I get the error `Segmentation fault (core dumped)` and the training stops. It is frustrating.
I have installed the latest pytorch version with
`pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116`
FYI: to make sure the issue is not due to my code, I ran the same training script several times on an A100 GCP instance and it works fine.
Thanks
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torch-tensorrt==1.1.0
[pip3] torchvision==0.13.1+cu116
[conda] No relevant packages
cc @ngimel
| 3 |
4,560 | 86,544 |
Compile failed at allreduce without gloo
|
module: build, triaged
|
### 🐛 Describe the bug
```
(venv) home@daniel-tablet1:~/PycharmProjects/pytorch$ USE_GLOO=0 USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 CFLAGS='-Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict' CXXFLAGS='-Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict' python setup.py install
Building wheel torch-1.13.0a0+git3ee863c
-- Building version 1.13.0a0+git3ee863c
cmake --build . --target install --config Release
[5/225] Building CXX object third_party/kineto/libkineto/CMakeFiles/kineto_base.dir/src/AbstractConfig.cpp.o
In file included from /usr/include/c++/12/bits/stl_tree.h:63,
from /usr/include/c++/12/map:60,
from /home/home/PycharmProjects/pytorch/third_party/kineto/libkineto/include/AbstractConfig.h:6,
from /home/home/PycharmProjects/pytorch/third_party/kineto/libkineto/src/AbstractConfig.cpp:3:
In static member function ‘static _Tp* std::__copy_move<_IsMove, true, std::random_access_iterator_tag>::__copy_m(const _Tp*, const _Tp*, _Tp*) [with _Tp = char; bool _IsMove = false]’,
inlined from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false; _II = const char*; _OI = char*]’ at /usr/include/c++/12/bits/stl_algobase.h:495:30,
inlined from ‘_OI std::__copy_move_a1(_II, _II, _OI) [with bool _IsMove = false; _II = const char*; _OI = char*]’ at /usr/include/c++/12/bits/stl_algobase.h:522:42,
inlined from ‘_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false; _II = const char*; _OI = char*]’ at /usr/include/c++/12/bits/stl_algobase.h:529:31,
inlined from ‘_OI std::copy(_II, _II, _OI) [with _II = const char*; _OI = char*]’ at /usr/include/c++/12/bits/stl_algobase.h:620:7,
inlined from ‘OutputIt fmt::v7::detail::copy_str(InputIt, InputIt, OutputIt) [with OutChar = char; InputIt = const char*; OutputIt = char*; typename std::enable_if<(! std::integral_constant<bool, (std::is_same<typename std::iterator_traits<_II>::value_type, char>::value && std::is_same<OutChar, char8_type>::value)>::value), int>::type <anonymous> = 0]’ at /home/home/PycharmProjects/pytorch/third_party/fmt/include/fmt/format.h:548:19,
inlined from ‘It fmt::v7::detail::float_writer<Char>::prettify(It) const [with It = char*; Char = char]’ at /home/home/PycharmProjects/pytorch/third_party/fmt/include/fmt/format.h:1128:26,
inlined from ‘It fmt::v7::detail::float_writer<Char>::operator()(It) const [with It = char*; Char = char]’ at /home/home/PycharmProjects/pytorch/third_party/fmt/include/fmt/format.h:1213:20,
inlined from ‘OutputIt fmt::v7::detail::write(OutputIt, T) [with Char = char; OutputIt = std::back_insert_iterator<buffer<char> >; T = double; typename std::enable_if<std::is_floating_point<T>::value, int>::type <anonymous> = 0]’ at /home/home/PycharmProjects/pytorch/third_party/fmt/include/fmt/format.h:1702:23:
/usr/include/c++/12/bits/stl_algobase.h:431:30: warning: ‘void* __builtin_memmove(void*, const void*, long unsigned int)’ specified bound between 18446744071562067967 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
431 | __builtin_memmove(__result, __first, sizeof(_Tp) * _Num);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[12/225] Building CXX object third_party/kineto/libkineto/CMakeFiles/kineto_base.dir/src/Logger.cpp.o
/home/home/PycharmProjects/pytorch/third_party/kineto/libkineto/src/Logger.cpp:28:32: warning: unknown option after ‘#pragma GCC diagnostic’ kind [-Wpragmas]
28 | #pragma GCC diagnostic ignored "-Wglobal-constructors"
| ^~~~~~~~~~~~~~~~~~~~~~~
/home/home/PycharmProjects/pytorch/third_party/kineto/libkineto/src/Logger.cpp:28:32: note: did you mean ‘-felide-constructors’?
[16/225] Building CXX object test_cpp_c10d/CMakeFiles/example_allreduce.dir/example/allreduce.cpp.o
FAILED: test_cpp_c10d/CMakeFiles/example_allreduce.dir/example/allreduce.cpp.o
/usr/bin/ccache /usr/bin/c++ -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -I/home/home/PycharmProjects/pytorch/build/aten/src -I/home/home/PycharmProjects/pytorch/aten/src -I/home/home/PycharmProjects/pytorch/build -I/home/home/PycharmProjects/pytorch -I/home/home/PycharmProjects/pytorch/cmake/../third_party/benchmark/include -I/home/home/PycharmProjects/pytorch/third_party/onnx -I/home/home/PycharmProjects/pytorch/build/third_party/onnx -I/home/home/PycharmProjects/pytorch/third_party/foxi -I/home/home/PycharmProjects/pytorch/build/third_party/foxi -I/home/home/PycharmProjects/pytorch/torch/csrc/distributed -I/home/home/PycharmProjects/pytorch/torch/csrc/api -I/home/home/PycharmProjects/pytorch/torch/csrc/api/include -I/home/home/PycharmProjects/pytorch/c10/.. -isystem /home/home/PycharmProjects/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/home/PycharmProjects/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/home/PycharmProjects/pytorch/third_party/protobuf/src -isystem /home/home/PycharmProjects/pytorch/third_party/gemmlowp -isystem /home/home/PycharmProjects/pytorch/third_party/neon2sse -isystem /home/home/PycharmProjects/pytorch/third_party/XNNPACK/include -isystem /home/home/PycharmProjects/pytorch/cmake/../third_party/eigen -isystem /home/home/PycharmProjects/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/include -isystem /home/home/PycharmProjects/pytorch/third_party/ideep/include -isystem /home/home/PycharmProjects/pytorch/third_party/ideep/mkl-dnn/include -Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN -DUSE_VULKAN_API -DUSE_VULKAN_SHADERC_RUNTIME -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIE -DTH_HAVE_THREAD -std=gnu++14 -MD -MT test_cpp_c10d/CMakeFiles/example_allreduce.dir/example/allreduce.cpp.o -MF test_cpp_c10d/CMakeFiles/example_allreduce.dir/example/allreduce.cpp.o.d -o test_cpp_c10d/CMakeFiles/example_allreduce.dir/example/allreduce.cpp.o -c /home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp: In function ‘int main(int, char**)’:
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:11:3: error: ‘ProcessGroupGloo’ was not declared in this scope
11 | ProcessGroupGloo pg(store, rank, size);
| ^~~~~~~~~~~~~~~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:18:13: error: ‘ones’ is not a member of ‘at’
18 | at::ones({1000, 16 * (i + 1)}, at::TensorOptions(at::CPU(at::kFloat)));
| ^~~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:18:62: error: ‘CPU’ is not a member of ‘at’
18 | at::ones({1000, 16 * (i + 1)}, at::TensorOptions(at::CPU(at::kFloat)));
| ^~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:18:62: note: suggested alternatives:
In file included from /home/home/PycharmProjects/pytorch/c10/core/Layout.h:3,
from /home/home/PycharmProjects/pytorch/build/aten/src/ATen/core/TensorBody.h:12,
from /home/home/PycharmProjects/pytorch/aten/src/ATen/core/jit_type.h:5,
from /home/home/PycharmProjects/pytorch/aten/src/ATen/core/function_schema.h:6,
from /home/home/PycharmProjects/pytorch/aten/src/ATen/core/function.h:3,
from /home/home/PycharmProjects/pytorch/aten/src/ATen/core/builtin_function.h:3,
from /home/home/PycharmProjects/pytorch/torch/custom_class.h:3,
from /home/home/PycharmProjects/pytorch/torch/csrc/distributed/c10d/Store.hpp:10,
from /home/home/PycharmProjects/pytorch/torch/csrc/distributed/c10d/FileStore.hpp:8,
from /home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:2:
/home/home/PycharmProjects/pytorch/c10/core/Backend.h:30:3: note: ‘c10::Backend::CPU’
30 | CPU,
| ^~~
In file included from /home/home/PycharmProjects/pytorch/c10/core/Backend.h:4:
/home/home/PycharmProjects/pytorch/c10/core/DispatchKey.h:401:3: note: ‘c10::DispatchKey::CPU’
401 | CPU, // registered at build/aten/src/ATen/RegisterCPU.cpp
| ^~~
In file included from /home/home/PycharmProjects/pytorch/c10/core/Device.h:3,
from /home/home/PycharmProjects/pytorch/build/aten/src/ATen/core/TensorBody.h:11:
/home/home/PycharmProjects/pytorch/c10/core/DeviceType.h:16:3: note: ‘c10::DeviceType::CPU’
16 | CPU = 0,
| ^~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:23:34: error: ‘ProcessGroup’ was not declared in this scope
23 | std::vector<c10::intrusive_ptr<ProcessGroup::Work>> pending;
| ^~~~~~~~~~~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:23:52: error: template argument 1 is invalid
23 | std::vector<c10::intrusive_ptr<ProcessGroup::Work>> pending;
| ^~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:23:52: error: template argument 2 is invalid
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:23:55: error: template argument 1 is invalid
23 | std::vector<c10::intrusive_ptr<ProcessGroup::Work>> pending;
| ^~~~~~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:23:55: error: template argument 2 is invalid
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:26:5: error: ‘pending’ was not declared in this scope
26 | pending.push_back(pg.allreduce(tmp));
| ^~~~~~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:26:23: error: ‘pg’ was not declared in this scope
26 | pending.push_back(pg.allreduce(tmp));
| ^~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:30:21: error: ‘pending’ was not declared in this scope
30 | for (auto& work : pending) {
| ^~~~~~~
/home/home/PycharmProjects/pytorch/test/cpp/c10d/example/allreduce.cpp:8:7: warning: unused variable ‘rank’ [-Wunused-variable]
8 | int rank = atoi(getenv("RANK"));
| ^~~~
[25/225] Building CXX object test_api/CMakeFiles/test_api.dir/autograd.cpp.o
ninja: build stopped: subcommand failed.
```
```
(venv) home@daniel-tablet1:~/PycharmProjects/pytorch$ git rev-parse HEAD
3ee863cb7c07699b94e36715a82a25744560f2d9
(venv) home@daniel-tablet1:~/PycharmProjects/pytorch$ git submodule status --recursive
7e1e1fe3858c63c251c637ae41a20de425dde96f android/libs/fbjni (v0.1.0-12-g7e1e1fe)
4dfe081cf6bcd15db339cf2680b9281b8451eeb3 third_party/FP16 (4dfe081)
b408327ac2a15ec3e43352421954f5b1967701d1 third_party/FXdiv (b408327)
c07e3a0400713d546e0dea2d5466dd22ea389c73 third_party/NNPACK (heads/master)
7d2a4e9931a82adc3814275b6219a03e24e36b4c third_party/QNNPACK (heads/master)
ae108ef49aa5623b896fc93d4298c49d1750d9ba third_party/XNNPACK (remotes/origin/test_294865623-2079-gae108ef49)
e991355c02b93fe17713efe04cbc2e278e00fdbd third_party/benchmark (v1.5.5)
5916273f79a21551890fd3d56fc5375a78d1598d third_party/cpuinfo (5916273)
d106ddb991a56c3df1b6d51b2409e36ba8181ce4 third_party/cub (1.9.10)
43709ab96c47e26eebcdac72f93f946d44ceffa8 third_party/cudnn_frontend (v0.5-4-g43709ab)
3147391d946bb4b6c68edd901f2add6ac1f31f8c third_party/eigen (3.4.0)
2e9be65810107a9595da717f95d21924b73be833 third_party/fbgemm (v0.2.0~102)
8b35b4cffb62ecb58a903bf91cb7537d7a672211 third_party/fbgemm/third_party/asmjit (8b35b4c)
ed8b86a253800bafdb7b25c5c399f91bff9cb1f3 third_party/fbgemm/third_party/cpuinfo (ed8b86a)
cbf019de22c8dd37b2108da35b2748fd702d1796 third_party/fbgemm/third_party/googletest (release-1.8.0-2102-gcbf019de)
d0cede9c90c5257537c293517a21376408b549fa third_party/flatbuffers (v2.0.5)
cd4af11efc9c622896a3e4cb599fa28668ca3d05 third_party/fmt (7.0.3)
c278588e34e535f0bb8f00df3880d26928038cad third_party/foxi (heads/master)
3fb5c176c17c765a3492cd2f0321b0dab712f350 third_party/gemmlowp/gemmlowp (remotes/origin/revert-87-master-135-g3fb5c17)
c22a5cfba94edf8ea4f53a174d38aa0c629d070f third_party/gloo (c22a5cf)
e2239ee6043f73722e7aa812a459f54a28552929 third_party/googletest (release-1.8.0-2745-ge2239ee6)
02b17c5748c9349dcc586c359af800c684d9b1ab third_party/ideep (pytorch-rls-v2.6.0)
888a87a954e4fddb4d81fd10858eb834f2441b46 third_party/ideep/mkl-dnn (graph-v0.5)
52b5f107dd9cf10910aaa19cb47f3abf9b349815 third_party/ideep/mkl-dnn/third_party/oneDNN (v0.1-rc-8734-g52b5f107d)
8abaed637d56f1337d6e1d2c4026e25c1eade724 third_party/ios-cmake (heads/master)
f708eb0cb8b97c54ef631ba48c8e2ca76ff6c62e third_party/kineto (f708eb0)
2591ab91c3898c9f6544fff04660276537d32ffd third_party/kineto/libkineto/third_party/fmt (7.0.3-165-g2591ab91)
7aca84427f224eeed3144123d5230d5871e93347 third_party/kineto/libkineto/third_party/googletest (release-1.8.0-2484-g7aca8442)
7e515921295adaab72adf56ea71a0fafb0ecb5f3 third_party/nccl/nccl (v2.10.3-1)
97a126f08ce318023be604d03f88bf0820a9464a third_party/neon2sse (97a126f)
96046b8ccfb8e6fa82f6b2b34b3d56add2e8849c third_party/onnx (v1.11.0)
e776aa0275e293707b6a0901e0e8d8a8a3679508 third_party/onnx/third_party/benchmark (v1.4.0-19-ge776aa0)
59a2ac2745d8a57ac94c6accced73620d59fb844 third_party/onnx/third_party/pybind11 (v2.6.0)
c153211418a7c57ce071d9ce2a41f8d1c85a878f third_party/onnx-tensorrt (release/6.0-2-gc153211)
765f5ee823a67a866f4bd28a9860e81f3c811ce8 third_party/onnx-tensorrt/third_party/onnx (v1.3.0-142-g765f5ee8)
e776aa0275e293707b6a0901e0e8d8a8a3679508 third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark (v1.4.0-19-ge776aa0)
a1041190c8b8ff0cd9e2f0752248ad5e3789ea0c third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11 (v2.0.0-353-ga1041190)
6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5 third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang (6a00cbc)
ea778e37710c07723435b1be58235996d1d43a5a third_party/pocketfft (release_for_eigen)
d1eca4e4b421cd2997495c4b4e65cea6be4e9b8a third_party/protobuf (v3.7.0-rc.2-1279-gd1eca4e4b)
5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8 third_party/protobuf/third_party/benchmark (v1.2.0)
5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081 third_party/protobuf/third_party/googletest (release-1.8.0-1696-g5ec7f0c4)
072586a71b55b7f8c584153d223e95687148a900 third_party/psimd (heads/master)
a134dd5d4cee80cce15db81a72e7f929d71dd413 third_party/pthreadpool (0.1-128-ga134dd5)
8de7772cc72daca8e947b79b83fea46214931604 third_party/pybind11 (v2.6.2)
4cfedc426c4e2fc52e3f5c2b4297e15ed8d6b8c7 third_party/python-enum (heads/master)
f45429b087dd7d5bc78bb40dc7cf06425c252d67 third_party/python-peachpy (remotes/origin/pre-generated)
15e31431af97e5e64b80af0a3f598d382bcdd49a third_party/python-six (1.11.0)
e0a003ee838b75d11763aa9c3ef17bf71a725bff third_party/sleef (3.5.1-30-ge0a003e)
a51a90bc609bb73db8ea13841b5cf7aa4344d4a9 third_party/tbb (2018_U6)
52791a2fd214b2a9dc5759d36725909c1daa7f2e third_party/tensorpipe (remotes/origin/gh/beauby/37/base)
aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e third_party/tensorpipe/third_party/googletest (release-1.8.0-2420-gaee0f9d9)
910b55815be16109f04f4180e9adee14fb4ce281 third_party/tensorpipe/third_party/libnop (910b558)
1dff88e5161cba5c59276d2070d2e304e4dcb242 third_party/tensorpipe/third_party/libuv (v1.41.0)
a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef third_party/tensorpipe/third_party/pybind11 (v2.2.4-1-ga23996fc)
6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5 third_party/tensorpipe/third_party/pybind11/tools/clang (6a00cbc)
aec56a52fbab207fc639a1937d1e708a282edca8 third_party/zstd (fuzz-corpora-170-gaec56a52)
(venv) home@daniel-tablet1:~/PycharmProjects/pytorch$ pip freeze
astunparse==1.6.3
certifi==2022.9.24
cffi==1.15.1
charset-normalizer==2.1.1
cmake==3.24.1.1
dataclasses==0.6
ffmpeg-python==0.2.0
filelock==3.8.0
future==0.18.2
huggingface-hub==0.10.0
idna==3.4
intel-openmp==2022.2.0
mkl==2022.2.0
mkl-include==2022.2.0
more-itertools==8.14.0
ninja==1.10.2.4
numpy==1.23.3
packaging==21.3
pycparser==2.21
pyparsing==3.0.9
PyYAML==6.0
regex==2022.9.13
requests==2.28.1
six==1.16.0
tbb==2021.7.0
tokenizers==0.12.1
torch==1.12.1
tqdm==4.64.1
transformers==4.22.2
typing_extensions==4.4.0
urllib3==1.26.12
-e git+ssh://git@github.com/openai/whisper.git@9e653bd0ea0f1e9493cb4939733e9de249493cfb#egg=whisper
```
### Versions
```
(venv) home@daniel-tablet1:~/PycharmProjects$ python3 ./collect_env.py
Collecting environment information...
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu Kinetic Kudu (development branch) (x86_64)
GCC version: (Ubuntu 12.2.0-3ubuntu1) 12.2.0
Clang version: 14.0.6-2
CMake version: version 3.24.1
Libc version: glibc-2.36
Python version: 3.10.7 (main, Sep 8 2022, 14:34:29) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.19.13-surface-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[conda] Could not collect
```
I get a different error replacing `USE_GLOO=0` with `USE_DISTRIBUTED=0`:
```
(venv) home@daniel-tablet1:~/PycharmProjects/pytorch$ USE_DISTRIBUTED=0 USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 CFLAGS='-Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict' CXXFLAGS='-Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict' python setup.py install
Building wheel torch-1.13.0a0+git3ee863c
-- Building version 1.13.0a0+git3ee863c
cmake --build . --target install --config Release
[28/156] Building CXX object test_api/CMakeFiles/test_api.dir/dataloader.cpp.o
FAILED: test_api/CMakeFiles/test_api.dir/dataloader.cpp.o
/usr/bin/ccache /usr/bin/c++ -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -I/home/home/PycharmProjects/pytorch/build/aten/src -I/home/home/PycharmProjects/pytorch/aten/src -I/home/home/PycharmProjects/pytorch/build -I/home/home/PycharmProjects/pytorch -I/home/home/PycharmProjects/pytorch/cmake/../third_party/benchmark/include -I/home/home/PycharmProjects/pytorch/third_party/onnx -I/home/home/PycharmProjects/pytorch/build/third_party/onnx -I/home/home/PycharmProjects/pytorch/third_party/foxi -I/home/home/PycharmProjects/pytorch/build/third_party/foxi -I/home/home/PycharmProjects/pytorch/build/caffe2/../aten/src -I/home/home/PycharmProjects/pytorch/torch/csrc/api -I/home/home/PycharmProjects/pytorch/torch/csrc/api/include -I/home/home/PycharmProjects/pytorch/c10/.. -isystem /home/home/PycharmProjects/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/home/PycharmProjects/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/home/PycharmProjects/pytorch/third_party/protobuf/src -isystem /home/home/PycharmProjects/pytorch/third_party/gemmlowp -isystem /home/home/PycharmProjects/pytorch/third_party/neon2sse -isystem /home/home/PycharmProjects/pytorch/third_party/XNNPACK/include -isystem /home/home/PycharmProjects/pytorch/cmake/../third_party/eigen -isystem /home/home/PycharmProjects/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/include -isystem /home/home/PycharmProjects/pytorch/third_party/ideep/include -isystem /home/home/PycharmProjects/pytorch/third_party/ideep/mkl-dnn/include -isystem /home/home/PycharmProjects/pytorch/third_party/googletest/googletest/include -isystem /home/home/PycharmProjects/pytorch/third_party/googletest/googletest -Wno-error=maybe-uninitialized -Wno-error=uninitialized -Wno-error=restrict -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN -DUSE_VULKAN_API -DUSE_VULKAN_SHADERC_RUNTIME -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIE -DTH_HAVE_THREAD -Wno-unused-variable -Wno-maybe-uninitialized -Wno-unused-but-set-parameter -std=gnu++14 -MD -MT test_api/CMakeFiles/test_api.dir/dataloader.cpp.o -MF test_api/CMakeFiles/test_api.dir/dataloader.cpp.o.d -o test_api/CMakeFiles/test_api.dir/dataloader.cpp.o -c /home/home/PycharmProjects/pytorch/test/cpp/api/dataloader.cpp
In file included from /usr/include/c++/12/memory:63,
from /home/home/PycharmProjects/pytorch/third_party/googletest/googletest/include/gtest/gtest.h:57,
from /home/home/PycharmProjects/pytorch/test/cpp/api/dataloader.cpp:1:
In static member function ‘static _Tp* std::__copy_move<_IsMove, true, std::random_access_iterator_tag>::__copy_m(const _Tp*, const _Tp*, _Tp*) [with _Tp = long unsigned int; bool _IsMove = false]’,
inlined from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]’ at /usr/include/c++/12/bits/stl_algobase.h:495:30,
inlined from ‘_OI std::__copy_move_a1(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]’ at /usr/include/c++/12/bits/stl_algobase.h:522:42,
inlined from ‘_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false; _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<long unsigned int*, vector<long unsigned int> >]’ at /usr/include/c++/12/bits/stl_algobase.h:529:31,
inlined from ‘_OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<long unsigned int*, vector<long unsigned int> >]’ at /usr/include/c++/12/bits/stl_algobase.h:620:7,
inlined from ‘std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = long unsigned int; _Alloc = std::allocator<long unsigned int>]’ at /usr/include/c++/12/bits/vector.tcc:244:21:
/usr/include/c++/12/bits/stl_algobase.h:431:30: error: argument 1 null where non-null expected [-Werror=nonnull]
431 | __builtin_memmove(__result, __first, sizeof(_Tp) * _Num);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/12/bits/stl_algobase.h:431:30: note: in a call to built-in function ‘void* __builtin_memmove(void*, const void*, long unsigned int)’
cc1plus: some warnings being treated as errors
[37/156] Building CXX object test_lazy/CMakeFiles/test_lazy.dir/test_lazy_ops.cpp.o
/home/home/PycharmProjects/pytorch/test/cpp/lazy/test_lazy_ops.cpp: In lambda function:
/home/home/PycharmProjects/pytorch/test/cpp/lazy/test_lazy_ops.cpp:967:25: warning: ‘bool c10::isIntegralType(ScalarType)’ is deprecated: isIntegralType is deprecated. Please use the overload with 'includeBool' parameter instead. [-Wdeprecated-declarations]
967 | isIntegralType(type) ? torch::Scalar(1) : torch::Scalar(1.0);
| ~~~~~~~~~~~~~~^~~~~~
In file included from /home/home/PycharmProjects/pytorch/c10/core/Scalar.h:11,
from /home/home/PycharmProjects/pytorch/build/aten/src/ATen/core/TensorBody.h:16,
from /home/home/PycharmProjects/pytorch/aten/src/ATen/core/Tensor.h:3,
from /home/home/PycharmProjects/pytorch/aten/src/ATen/Tensor.h:3,
from /home/home/PycharmProjects/pytorch/torch/csrc/lazy/backend/backend_device.h:7,
from /home/home/PycharmProjects/pytorch/test/cpp/lazy/test_lazy_ops_util.h:4,
from /home/home/PycharmProjects/pytorch/test/cpp/lazy/test_lazy_ops.cpp:6:
/home/home/PycharmProjects/pytorch/c10/core/ScalarType.h:239:20: note: declared here
239 | static inline bool isIntegralType(ScalarType t) {
| ^~~~~~~~~~~~~~
ninja: build stopped: subcommand failed.
```
To avoid spamming the tracker, I pasted the error above instead of opening another issue; I will try `BUILD_TEST=0` next.
cc @malfet @seemethere
| 2 |
4,561 | 86,539 |
cuda.list_gpu_processes() uses the 'wrong' device order (PCI_BUS_ID)
|
module: cuda, triaged
|
### 🐛 Describe the bug
As far as I can tell, all other torch.cuda functions use the same device order (by default the most capable GPU is enumerated first, rather than by its position on the PCI bus). However, `list_gpu_processes` uses the PCI bus order.
This is easy to test if you have two GPUs, with the least capable one listed first in nvidia-smi. For me:
```python
# nvidia-smi order
# 0. GTX 745
# 1. GTX 1080 Ti

import torch

# demonstrate the PyTorch cuda order
print(torch.cuda.get_device_name(0))  # prints: NVIDIA GeForce GTX 1080 Ti
print(torch.cuda.get_device_name(1))  # prints: NVIDIA GeForce GTX 745

# specify the 0th device
dev = torch.device("cuda", 0)

# verify no processes
print(torch.cuda.list_gpu_processes(0))  # prints: GPU:0 no processes are running
print(torch.cuda.list_gpu_processes(1))  # prints: GPU:1 no processes are running

# add a variable to the device
tempvar = torch.zeros([10, 10]).to(dev)

# verify wrong order processes
print(torch.cuda.list_gpu_processes(0))  # prints: GPU:0 no processes are running
print(torch.cuda.list_gpu_processes(1))  # prints: GPU:1 process. 100492 uses 437 MB GPU memory
```
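As a side note (my own suggestion, not part of the original report): a possible workaround is to set `CUDA_DEVICE_ORDER=PCI_BUS_ID` before CUDA is initialized, which makes PyTorch enumerate devices in the same PCI bus order that nvidia-smi (and apparently `list_gpu_processes`) uses, so all indices line up:
```python
import os

# Must be set before CUDA is initialized (ideally before `import torch`).
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

import torch

# With PCI bus ordering, device 0 is now the GTX 745 and device 1 the GTX 1080 Ti,
# matching both nvidia-smi and torch.cuda.list_gpu_processes.
print(torch.cuda.get_device_name(0))
print(torch.cuda.list_gpu_processes(0))
```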
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, Oct 7 2022, 20:19:58) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 745
GPU 1: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 510.85.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.1 py310h1794996_0
[conda] numpy-base 1.23.1 py310hcba007f_0
[conda] pytorch 1.12.1 py3.10_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.13.1 py310_cu113 pytorch
cc @ngimel
| 3 |
4,562 | 93,524 |
Test for multiple instances inference
|
triaged, oncall: pt2
|
We need to add a test for multi-instance inference.
One scenario is cache contention; please see [#1400](https://github.com/pytorch/torchdynamo/pull/1400).
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
4,563 | 93,523 |
[functorch] [vmap] [SymInt][fake tensor]
|
triaged, module: functorch
|
Goal:
1. Create an input and target tensor
2. Create a list of models for ensemble
3. Use vmap to create an ensemble
4. Obtain the vmap function and pass it to aot_function, to obtain the forward and backward graphs
5. Although the weights and parameters fit into memory, the activations might not, hence I set `config.use_fake_tensor = True`
Repro:
```
from typing import List

import torch
from functorch.compile import aot_function, print_compile, config, aot_module
from functorch import make_functional_with_buffers, vmap, combine_state_for_ensemble
from functorch._src.named_members_polyfill import _named_parameters, _named_buffers
from torchvision.models import resnet18

config.use_fake_tensor = True


def fake_compiler(fx_g, inps):
    print(fx_g.code)
    output_node = [node for node in fx_g.graph.nodes if node.op == 'output'][0]
    output_data = [node.meta['val'] for node in output_node.args[0]]

    def new_f(*args):
        return output_data

    return new_f


inp = torch.randn(32, 3, 224, 224, dtype=torch.float32).cuda()
targets = torch.zeros(32, dtype=int).cuda()
b_models: List[torch.nn.Module] = [resnet18().cuda() for _ in range(5)]
func_model, params, buffers = combine_state_for_ensemble(b_models)
for p in params:
    p.requires_grad = True


def compute_loss(weights, buffers, batch, targets):
    output = func_model(weights, buffers, batch)
    loss = torch.nn.functional.nll_loss(output, targets)
    return loss


parallel_func = vmap(compute_loss, in_dims=(0, 0, None, None))
aot_func = aot_function(parallel_func, fake_compiler)
out = aot_func(params, buffers, inp, targets)
out.mean().backward()
```
Error Stack Trace:
```
Traceback (most recent call last):
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/testing/test_patch.py", line 36, in <module>
out = aot_func(params, buffers, inp, targets)
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/pytorch/functorch/_src/aot_autograd.py", line 670, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/torchdynamo/torchdynamo/utils.py", line 76, in time_wrapper
r = func(*args, **kwargs)
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/pytorch/functorch/_src/aot_autograd.py", line 523, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/pytorch/functorch/_src/aot_autograd.py", line 350, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/pytorch/functorch/_src/aot_autograd.py", line 650, in flat_fn
tree_out = fn(*args, **kwargs)
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/pytorch/functorch/_src/vmap.py", line 362, in wrapped
return _flat_vmap(
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/pytorch/functorch/_src/vmap.py", line 35, in fn
return f(*args, **kwargs)
File "/net/rcstorenfs02/ifs/rc_labs/idreos_lab/users/spurandare/develop/pytorch/functorch/_src/vmap.py", line 486, in _flat_vmap
vmap_level = _vmap_increment_nesting(batch_size, randomness)
TypeError: _vmap_increment_nesting(): incompatible function arguments. The following argument types are supported:
1. (arg0: int, arg1: str) -> int
Invoked with: t0.size(0), 'error'
```
Debugging to check the values passed to `_vmap_increment_nesting()`
Debug Trace:
```
-> vmap_level = _vmap_increment_nesting(batch_size, randomness)
(Pdb) print(batch_size)
t0.size(0)
(Pdb) print(randomness)
error
(Pdb) print(type(batch_size))
<class 'torch.SymIntNode'>
(Pdb) exit()
```
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 3 |
4,564 | 86,537 |
Running JIT trace for many times leads to OOM
|
oncall: jit, module: memory usage, triaged
|
### 🐛 Describe the bug
Hi! I find that running JIT trace for many different models in one process can lead to OOM. I think there is a memory leak here: my guess is that torch currently does not delete a compiled graph even after its life cycle has ended.
I think this issue is worth fixing because people may do exactly this in NAS (neural architecture search).
Code to reproduce this issue:
```python
import random

import torch
import torch.nn as nn


class MyModule(nn.Module):
    def __init__(self, num_layers: int, input_dim: int) -> None:
        super().__init__()
        layer = nn.Sequential(
            nn.Linear(input_dim, input_dim),
            nn.LeakyReLU(),
        )
        self.layers = nn.Sequential(*[
            layer for _ in range(num_layers)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)


if __name__ == '__main__':
    import os, psutil
    process = psutil.Process(os.getpid())

    times = 0
    while True:
        num_layers = random.randint(1, 15)
        input_dim = random.randint(1, 20)
        m = MyModule(num_layers, input_dim)
        x = torch.randn((1, input_dim))
        exported = torch.jit.trace(m, x)
        o = exported(x)

        times += 1
        if times % 100 == 0:
            mem_mb = process.memory_info().rss / (1024 ** 2)
            print(f'{times} times, mem usage: {mem_mb} MB')
```
Logs:
```python
100 times, mem usage: 368.4140625 MB
200 times, mem usage: 512.015625 MB
300 times, mem usage: 648.3984375 MB
400 times, mem usage: 812.625 MB
500 times, mem usage: 959.3203125 MB
600 times, mem usage: 1099.828125 MB
700 times, mem usage: 1243.9453125 MB
800 times, mem usage: 1380.0703125 MB
900 times, mem usage: 1519.7734375 MB
1000 times, mem usage: 1662.6015625 MB
1100 times, mem usage: 1790.734375 MB
1200 times, mem usage: 1940.265625 MB
1300 times, mem usage: 2081.95703125 MB
1400 times, mem usage: 2219.88671875 MB
1500 times, mem usage: 2365.80859375 MB
1600 times, mem usage: 2509.328125 MB
1700 times, mem usage: 2657.62109375 MB
1800 times, mem usage: 2821.58984375 MB
1900 times, mem usage: 2967.51171875 MB
2000 times, mem usage: 3113.43359375 MB
2100 times, mem usage: 3249.04296875 MB
2200 times, mem usage: 3383.87890625 MB
2300 times, mem usage: 3520.26171875 MB
2400 times, mem usage: 3664.63671875 MB
2500 times, mem usage: 3799.21484375 MB
2600 times, mem usage: 3947.01953125 MB
2700 times, mem usage: 4081.85546875 MB
2800 times, mem usage: 4227.51953125 MB
...
```
If we use the code below, the issue will disappear.
```python
while True:
    num_layers = random.randint(1, 15)
    input_dim = random.randint(1, 20)
    m = MyModule(num_layers, input_dim)
    x = torch.randn((1, input_dim))
    o = m(x)  # do not compile
```
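Until the leak is fixed, one possible mitigation (my own sketch, not from the original report; it reuses the `MyModule` class from the repro above and trades speed for bounded memory) is to run each trace in a short-lived subprocess, so anything cached by the JIT is released when that process exits:
```python
import random

import torch
import torch.multiprocessing as mp


def trace_once(num_layers: int, input_dim: int) -> None:
    # Everything allocated here, including graphs cached by the JIT,
    # is reclaimed by the OS when this worker process exits.
    m = MyModule(num_layers, input_dim)
    x = torch.randn((1, input_dim))
    exported = torch.jit.trace(m, x)
    exported(x)


if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    for _ in range(1000):
        p = ctx.Process(target=trace_once,
                        args=(random.randint(1, 15), random.randint(1, 20)))
        p.start()
        p.join()
```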
### Versions
```python
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.12.1+cu116 pypi_0 pypi
[conda] torchaudio 0.12.1+cu116 pypi_0 pypi
[conda] torchvision 0.13.1+cu116 pypi_0 pypi
```
| 6 |
4,565 | 86,532 |
Conv2d will crash by using `jit.trace`
|
oncall: jit, module: crash
|
### 🐛 Describe the bug
Conv2d crashes when run through `jit.trace`, but works normally without it, when the `weight` tensor is first processed by some operation (here `torch.sin`):
```py
import torch


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.stride = [1, 1]
        self.padding = [1, 1]
        self.dilation = [1, 1]
        self.groups = 1

    def forward(self, input, weight, bias):
        stride = self.stride
        padding = self.padding
        dilation = self.dilation
        groups = self.groups
        weight = torch.sin(weight)
        return torch.nn.functional.conv2d(input, weight, bias=bias, stride=stride,
                                          padding=padding, dilation=dilation, groups=groups)


fn = M().to('cpu')
torch.random.manual_seed(57461)
input = torch.empty([1, 1, 1, 32], dtype=torch.float32, memory_format=torch.contiguous_format)
input.uniform_(-2, 3)
weight = torch.empty([1, 1, 3, 3], dtype=torch.float32, memory_format=torch.contiguous_format)
weight.uniform_(-2, 31)
bias = torch.empty([1], dtype=torch.float32, memory_format=torch.contiguous_format)
bias.uniform_(-2, 1)
inputs = (input, weight, bias)

print('normal')
print(fn(*inputs))

jit_fn = torch.jit.trace(fn, inputs)
print('jitted')
print(jit_fn(*inputs))
```
```
normal
tensor([[[[-0.1752, -2.4198, -0.3692, 0.7270, 1.5662, 2.2545, -1.0823,
2.6698, 1.4883, -1.6568, 0.8448, 1.9900, 0.7963, 1.2628,
0.2481, -1.7248, -0.0482, -0.3879, -0.5180, 2.2316, -0.7775,
2.9247, 0.9076, -0.7647, 2.5395, -2.3155, 1.9133, 0.4259,
2.3740, 1.0395, -2.2315, 2.5118]]]])
jitted
[1] 3965638 segmentation fault (core dumped) python ../buglists/conv2d.py
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,566 | 86,529 |
[NvFuser] JIT model with `mul+atan+sgn` will access illegal memory on cuda when computing gradient
|
triaged, module: nvfuser
|
### 🐛 Describe the bug
A JIT-scripted model with `mul+atan+sgn` triggers an illegal memory access on CUDA when computing the gradient:
```py
import torch


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        input = torch.mul(input, torch.tensor(13, dtype=torch.float32, device='cuda'))
        input = torch.atan(input)
        fn_res = torch.sgn(input)
        return fn_res


fn = M().to('cuda')
torch.random.manual_seed(18986)
inp = torch.empty([5, 5, 5], dtype=torch.float64)
inp.uniform_(-1, 127)
inp = inp.to('cuda')

from torch.autograd.functional import jacobian

jit_fn = torch.jit.script(fn)
jit_fn(inp.clone().requires_grad_())
print(jacobian(jit_fn, inp.clone().requires_grad_()))
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 8 |
4,567 | 93,522 |
Support cpp wrapper code
|
triaged, oncall: pt2
|
Currently wrapper.py generates Python code that invokes the generated kernels and external kernels. This incurs Python overhead, which could be avoided if the wrapper instead generated C++ code that invokes these kernels directly.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 5 |
4,568 | 86,525 |
[Distributed: RPC] Sending `nn.Parameter` as RPC argument automatically detaches from the computation graph
|
oncall: distributed
|
### 🐛 Describe the bug
When using RPC calls to train a model distributedly, if the remote model's `parameters()` are `nn.Parameter`s, they are automatically detached from the original computation graph when calling `model_rref.to_here()`. This results in wrong gradients in the distributed backward pass.
The serialization policies for `torch.Tensor` and `torch.nn.Parameter` differ and are undocumented. The behavior is not intuitive and is **invisible** to the user.
### 1. Send `nn.Module` that contains `nn.Parameter` as argument to RPC call:
```python
import os
import random
from threading import Lock
import atexit

import numpy as np
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
import torch.nn as nn

WORLD_RANK = RANK = int(os.getenv('RANK'))
WORLD_SIZE = int(os.getenv('WORLD_SIZE'))
AUTOGRAD_LOCK = Lock()

fmt = '{name} ({tensor.__class__.__module__}.{tensor.__class__.__qualname__}): param={tensor.data}, grad={tensor.grad}, grad_fn={tensor.grad_fn}'.format


def get_model():
    model = nn.Linear(1, 1)
    nn.init.ones_(model.weight)
    nn.init.zeros_(model.bias)
    return model


def worker_init():
    random.seed(RANK)
    np.random.seed(RANK)
    torch.manual_seed(RANK)
    torch.backends.cudnn.enabled = False
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    print(f'Worker init => Rank {RANK}')


def main():
    rpc.init_rpc(name=f'worker{RANK}', rank=RANK, world_size=WORLD_SIZE)
    atexit.register(rpc.shutdown, graceful=True)

    rpc.api._barrier(worker_names={f'worker{i}' for i in range(WORLD_SIZE)})
    worker_init()
    rpc.api._barrier(worker_names={f'worker{i}' for i in range(WORLD_SIZE)})

    if RANK == 0:
        model = get_model()
        train(model)
    else:
        pass  # listening for RPC calls


def compute_loss(model_rref, x):
    if isinstance(model_rref, nn.Module):
        model = model_rref
    else:
        model = model_rref.to_here()

    print()
    print('Before RPC forward:')
    for name, param in model.named_parameters():
        print(f' worker{RANK} => {fmt(name=name, tensor=param)}')

    out = model(x)
    return out.mean()


def convert_param_type(model):
    for param in model.parameters():
        param.__class__ = torch.Tensor


def train(model):
    x = torch.tensor([[1.0], [2.0], [3.0], [4.0]])

    print()
    print('Before forward:')
    for name, param in model.named_parameters():
        print(f' {fmt(name=name, tensor=param)}')

    # convert_param_type(model)
    model_rref = rpc.RRef(model)

    with dist_autograd.context() as ctx:
        loss0 = rpc.rpc_sync('worker0', compute_loss, args=(model_rref, x[: len(x) // 2]))
        loss1 = rpc.rpc_sync('worker1', compute_loss, args=(model_rref, x[len(x) // 2 :]))
        loss = torch.mean(torch.stack([loss0, loss1]))
        print()
        print(f'Loss: {loss}')

        dist_autograd.backward(ctx, [loss])
        with AUTOGRAD_LOCK:
            all_local_grads = dist_autograd.get_gradients(ctx)
            for p, g in all_local_grads.items():
                if p.grad is not None:
                    p.grad = p.grad.add(g)
                else:
                    p.grad = g

    print()
    print('After backward:')
    for name, param in model.named_parameters():
        print(f' {fmt(name=name, tensor=param)}')


if __name__ == '__main__':
    main()
```
Result: only the gradient contribution from the local worker (worker0) is collected, because `model_rref.to_here()` on the remote worker (worker1) automatically detaches the `nn.Parameter`s.
As we can see, the type of `param` is `nn.Parameter` and `param.grad_fn` is `None` on worker1:
```console
$ torchrun --nnode 1 --nproc_per_node 2 rpc_test.py
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Worker init => Rank 1
Worker init => Rank 0
Before forward:
weight (torch.nn.parameter.Parameter): param=tensor([[1.]]), grad=None, grad_fn=None
bias (torch.nn.parameter.Parameter): param=tensor([0.]), grad=None, grad_fn=None
Before RPC forward:
worker0 => weight (torch.nn.parameter.Parameter): param=tensor([[1.]]), grad=None, grad_fn=None
worker0 => bias (torch.nn.parameter.Parameter): param=tensor([0.]), grad=None, grad_fn=None
Before RPC forward:
worker1 => weight (torch.nn.parameter.Parameter): param=tensor([[1.]]), grad=None, grad_fn=None
worker1 => bias (torch.nn.parameter.Parameter): param=tensor([0.]), grad=None, grad_fn=None
Loss: 2.5
After backward:
weight (torch.nn.parameter.Parameter): param=tensor([[1.]]), grad=tensor([[0.7500]]), grad_fn=None
bias (torch.nn.parameter.Parameter): param=tensor([0.]), grad=tensor([0.5000]), grad_fn=None
```
We get the wrong gradients:
```python
weight.grad = tensor([[0.7500]])
bias.grad = tensor([0.5000])
```
### 2. Convert all `nn.Parameter`s to `torch.Tensor`s before RPC calls:
The script is the same as the one above, but with `convert_param_type(model)` uncommented:
```python
convert_param_type(model)  # change __class__ of all `nn.Parameter`s
model_rref = rpc.RRef(model)

with dist_autograd.context() as ctx:
    loss0 = rpc.rpc_sync('worker0', compute_loss, args=(model_rref, x[: len(x) // 2]))
    loss1 = rpc.rpc_sync('worker1', compute_loss, args=(model_rref, x[len(x) // 2 :]))
    loss = torch.mean(torch.stack([loss0, loss1]))
    print()
    print(f'Loss: {loss}')

    dist_autograd.backward(ctx, [loss])
    with AUTOGRAD_LOCK:
        all_local_grads = dist_autograd.get_gradients(ctx)
        for p, g in all_local_grads.items():
            if p.grad is not None:
                p.grad = p.grad.add(g)
            else:
                p.grad = g
```
Result:
As we can see, the type of `param` is `torch.Tensor` and `param.grad_fn` is not `None` on worker1; it points back to the original module on worker0 and can be traced by distributed autograd.
```console
torchrun --nnode 1 --nproc_per_node 2 rpc_test.py
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Worker init => Rank 1
Worker init => Rank 0
Before forward:
weight (torch.nn.parameter.Parameter): param=tensor([[1.]]), grad=None, grad_fn=None
bias (torch.nn.parameter.Parameter): param=tensor([0.]), grad=None, grad_fn=None
Before RPC forward:
worker0 => weight (torch.Tensor): param=tensor([[1.]]), grad=None, grad_fn=None
worker0 => bias (torch.Tensor): param=tensor([0.]), grad=None, grad_fn=None
Before RPC forward:
worker1 => weight (torch.Tensor): param=tensor([[1.]]), grad=None, grad_fn=<CppFunction object at 0x7fbc145feca0>
worker1 => bias (torch.Tensor): param=tensor([0.]), grad=None, grad_fn=<CppFunction object at 0x7fbc145feca0>
Loss: 2.5
After backward:
weight (torch.Tensor): param=tensor([[1.]]), grad=tensor([[2.5000]]), grad_fn=None
bias (torch.Tensor): param=tensor([0.]), grad=tensor([1.]), grad_fn=None
```
We get the correct gradients:
```python
weight.grad = tensor([[2.5000]])
bias.grad = tensor([1.])
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 10.4.0-16) 10.4.0
Clang version: 10.0.1
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] mypy==0.990+dev.589ad1c17eeb220a41ba41425b61b8593f8bc42d
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torchopt==0.5.1.dev66+gd738101
[pip3] torchvision==0.13.1
[pip3] torchviz==0.0.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 habf752d_9
[conda] functorch 0.2.1 pypi_0
[conda] libblas 3.9.0 12_linux64_mkl
[conda] libcblas 3.9.0 12_linux64_mkl
[conda] liblapack 3.9.0 12_linux64_mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.1 py38h6c91a56_0
[conda] numpy-base 1.23.1 py38ha15fc14_0
[conda] pytorch 1.12.1 py3.8_cuda11.6_cudnn8.3.2_0
[conda] pytorch-mutex 1.0 cuda
[conda] torchopt 0.5.1.dev66+gd738101 pypi_0
[conda] torchvision 0.13.1 py38_cu116
[conda] torchviz 0.0.2 pypi_0
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
4,569 | 93,520 |
Tooling Issue Tracking
|
triaged, oncall: pt2
|
### Minifier
- [x] [Accuracy minifier](https://github.com/pytorch/pytorch/issues/93715)
- [x] Ensure FX graphs dumped by hooks and `TORCHINDUCTOR_TRACE=1` are runnable in the minifier
- [ ] https://github.com/pytorch/pytorch/issues/93696
- [ ] https://github.com/pytorch/pytorch/issues/93694
- [x] https://github.com/pytorch/pytorch/issues/93673 @williamwen42
- [x] Cleanup minifier print outs, currently a lot of exceptions are printed @mlazos https://github.com/pytorch/pytorch/pull/87537
- [ ] Ensure minifier runs are sensible in torchbench (check that the output isn't hidden somehwere)
### General UX
- [x] Single debug folder @mlazos https://github.com/pytorch/pytorch/pull/87438
- [ ] Enable debug dump of ir/log/minifier into single folder with one flag when there is an issue @mlazos
Automatically run minifier with correct mode setting (aot autograd or dynamo) on exception
- [x] https://github.com/pytorch/torchdynamo/issues/1383 @mlazos
- [x] https://github.com/pytorch/torchdynamo/issues/1376 @mlazos
- [x] Indicator that dynamo is still running @msaroufim
- [x] Always have next step for user in error messages
- [ ] Single config for all components (`torch.compile.config`)
### Documentation
- [ ] Stride check example in debug runbook
- [ ] Accuracy minifier example in debug runbook
- [x] FAQ once people have started to try these tools out and we have early feedback @msaroufim
### Compilation Statistics
- [ ] https://github.com/pytorch/pytorch/issues/93675
### PyTorch Profiler Integration ([main issue](https://github.com/pytorch/pytorch/issues/93717))
- [ ] Fix links missing between CPU and GPU traces (figure out which metadata is required for this)
- [x] Add fused kernel naming to make it clearer which kernels are the results of fusion @mlazos (https://github.com/pytorch/pytorch/pull/88624)
- [ ] Add support in TorchDynamo to handle profiler context (currently they are ignored)
Currently disabled because most backends don't support it
- [ ] Identify additional requirements for integrating with the profiler infra
### Exceptions
- [ ] Fix handling of RAISE_VARARGS to show a nicer error message than Unsupported
### Logging
- [x] [Log steps of compilation](https://github.com/pytorch/pytorch/issues/93674)
- [x] Hooks for logging FX graph to external source
- [x] https://github.com/pytorch/torchdynamo/issues/1462 @mlazos
- [x] https://github.com/pytorch/torchdynamo/issues/1167 @williamwen42
- [x] Add logging to AOT and make it interoperate with TorchInductor trace @mlazos
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,570 | 86,518 |
[ONNX] Memory leak
|
module: onnx, triaged
|
Exporting 170,000 single-op models used 11 GB of RAM. There may be a leak somewhere.
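A minimal sketch of how the growth could be measured (my own assumption of the setup; the original report does not include code): export a tiny single-op model in a loop and watch the process RSS:
```python
import io
import os

import psutil
import torch


class SingleOp(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)


process = psutil.Process(os.getpid())
x = torch.randn(2, 3)
for i in range(1, 10_001):
    torch.onnx.export(SingleOp(), x, io.BytesIO())  # export to memory, discard the result
    if i % 1_000 == 0:
        print(f'{i} exports, RSS: {process.memory_info().rss / 2**20:.1f} MB')
```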
| 1 |
4,571 | 93,519 |
[DDPOptimizer] Compiled subgraphs sometimes return lists
|
triaged, oncall: pt2
|
```python
def compile_submod(self, submod, args, kwargs):
    """
    Compile the submodule,
    using a wrapper to make sure its output is always a tuple,
    which is required by AotAutograd based compilers
    """
    assert len(kwargs) == 0, "We assume only args for these modules"

    class WrapperModule(torch.nn.Module):
        def __init__(self, compiled_submod, unwrap_singleton_tuple):
            super().__init__()
            self.compiled_submod = compiled_submod
            self.unwrap_singleton_tuple = unwrap_singleton_tuple

        def forward(self, *args):
            x = self.compiled_submod(*args)
            # TODO(whc)
            # for some reason the isinstance check is necessary if I split one node per submod
            # - even though I supposedly wrapped the output in a tuple in those cases, the real
            # compiled module was still returning a tensor
            if self.unwrap_singleton_tuple and (isinstance(x, tuple) or isinstance(x, list)):  # <- HERE
                return x[0]
            return x

    unwrap_singleton_tuple = False
    for sn in submod.graph.nodes:
        if sn.op == "output":
            if not isinstance(sn.args[0], tuple):
                unwrap_singleton_tuple = True
                sn.args = (sn.args,)  # <- Make it a tuple

    submod.recompile()

    wrapper = WrapperModule(
        self.compiler(submod, args),
        unwrap_singleton_tuple,
    )
    return wrapper
```
See the line marked `HERE`. We need to add an option for `isinstance(x, list)` for reasons I don't understand. We can see on the line marked "Make it a tuple" that we are already wrapping the output as a tuple, so why is the output sometimes appearing as a list?
This appears to be an inductor-only issue; I can't repro it with aot_eager. aot_nvfuser fails too, but that appears to be due to unrelated jit scripting issues...
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
4,572 | 93,718 |
Inductor doesn't fuse outer dimension softmax into a single kernel.
|
triaged, oncall: pt2
|
http://ix.io/4cog
We generate two kernels:

The issue seems to lie in the memory dependencies not matching up.
cc: @jansel @ngimel
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
4,573 | 86,506 |
[ONNX] Create an input adapter for suppling torch module input to onnxruntime
|
module: onnx, triaged, needs design, onnx-triaged, onnx-needs-info
|
ONNX Runtime does not take inputs the same way PyTorch does. Currently there is a step in the verifier that massages the inputs into what ONNX Runtime expects. It would be helpful to extract this into a standalone adapter.
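A rough sketch of what such an adapter could look like (purely illustrative; the function name and behavior here are my assumptions, not the proposed design): flatten the torch-style positional inputs, convert tensors to numpy, and key them by the session's input names:
```python
from typing import Any, Dict, Sequence

import numpy as np
import onnxruntime
import torch


def adapt_torch_inputs(session: onnxruntime.InferenceSession,
                       args: Sequence[Any]) -> Dict[str, np.ndarray]:
    """Map positional torch-module inputs onto ONNX Runtime's named-input dict."""
    flat = []
    for a in args:
        if isinstance(a, (tuple, list)):  # expand nested containers the way a module call would
            flat.extend(a)
        else:
            flat.append(a)
    names = [i.name for i in session.get_inputs()]
    return {
        name: v.detach().cpu().numpy() if isinstance(v, torch.Tensor) else np.asarray(v)
        for name, v in zip(names, flat)
    }


# usage sketch:
# sess = onnxruntime.InferenceSession("model.onnx")
# outputs = sess.run(None, adapt_torch_inputs(sess, (x, y)))
```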
| 1 |
4,574 | 86,494 |
Automatic broadcasting for batch addition for sparse tensors
|
module: sparse, triaged, enhancement
|
### 🚀 The feature, motivation and pitch
```python
import torch
#torch.utils.backcompat.broadcast_warning.enabled=True
m1 = torch.ones(3,4)
m2 = torch.ones(4)
torch.add(m1, m2)
print("This is working.")
m1 = torch.rand((6,10),
out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)
m1 = m1.to_sparse()
m2 = torch.rand((1,10),
out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)
m2 = m2.to_sparse()
print("This will not work")
torch.add(m1,m2)
#m1 + m2
```
It would be great to have the same dimension broadcasting for sparse tensors as we have for dense tensors.
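In the meantime, a possible workaround (my own sketch, not part of this request, and only viable when the operands are small enough to densify) is to fall back to dense broadcasting:
```python
import torch


def sparse_broadcast_add(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Densify, use dense broadcasting, then convert the result back to sparse.
    return torch.add(a.to_dense(), b.to_dense()).to_sparse()


print(sparse_broadcast_add(m1, m2))  # m1, m2 from the snippet above
```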
### Alternatives
_No response_
### Additional context
_No response_
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 1 |
4,575 | 86,493 |
`mem_get_info` reserves memory and can not be destroyed / deallocated.
|
module: cuda, triaged
|
### 🐛 Describe the bug
When running `mem_get_info`, memory is reserved and cannot be freed afterwards. Especially when running multi-processing, this memory is allocated in every process. It is probably related to the NVIDIA C function `cudaMemGetInfo`. It has been suggested to run `cudaDeviceReset` afterwards to deallocate the memory, but no such binding exists in PyTorch's C binding library.
https://stackoverflow.com/questions/64854862/free-memory-occupied-by-cudamemgetinfo
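For completeness (my own suggestion, not from the original report): free/total memory can also be queried through NVML, e.g. via the `pynvml` package, which does not create a CUDA context in the calling process:
```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f'free={info.free / 2**20:.0f} MB, total={info.total / 2**20:.0f} MB')
pynvml.nvmlShutdown()
```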
### Minimal working example:
```python
import torch
print(torch.cuda.list_gpu_processes())
torch.cuda.mem_get_info(0)
print(torch.cuda.list_gpu_processes())
print(torch.cuda.empty_cache())
print(torch.cuda.list_gpu_processes())
```
### Output:
```
GPU:0
no processes are running
GPU:0
process 120408 uses 843.000 MB GPU memory
GPU:0
process 120408 uses 843.000 MB GPU memory
```
### Expected Output:
```
GPU:0
no processes are running
GPU:0
no processes are running
GPU:0
no processes are running
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1018-gcp-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 495.46
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 py39h6c91a56_0
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.12.1 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchaudio 0.12.1 py39_cu113 pytorch
[conda] torchvision 0.13.1 py39_cu113 pytorch
cc @ngimel
| 2 |
4,576 | 86,481 |
onnx.export make size operations return Tensor instead of int
|
module: onnx, triaged
|
### 🐛 Describe the bug
`Tensor.size` [is documented](https://pytorch.org/docs/stable/generated/torch.Tensor.size.html#torch.Tensor.size) to return an `int` if you pass the `dim` argument. Similarly, indexing into `Tensor.shape` returns an `int` most of the time. However, when tracing via `torch.onnx.export`, these do not work as advertised and instead return a scalar `Tensor`.
Given:
```python
import torch
import torch.nn as nn


class ExampleIdentity(nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        print('Indexing:', type(x.shape[-1]))
        print('Using size:', type(x.size(-1)))
        return x


x = torch.rand((1, 3, 224, 244))
ei = ExampleIdentity()

print('\n\nRunning normally:')
ei(x)

print('\n\nVia onnx.export:')
with open('/tmp/output.onnx', 'wb') as outf:
    torch.onnx.export(ei, x, outf)
```
And running that:
```bash
$ python3.10 /tmp/example.py
Running normally:
Indexing: <class 'int'>
Using size: <class 'int'>
Via onnx.export:
Indexing: <class 'torch.Tensor'>
Using size: <class 'torch.Tensor'>
```
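As a side note (my own observation, not from the report): during tracing the shape values are captured as graph values so the exported model can stay shape-dynamic; wrapping the call in `int()` does give a plain Python `int`, but it bakes the dimension into the trace as a constant (PyTorch warns about this with a TracerWarning):
```python
import torch
import torch.nn as nn


class ForcedInt(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = int(x.size(-1))  # plain Python int, but recorded as a constant in the trace
        print('Forced int:', type(d))
        return x
```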
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.10.6 (main, Sep 20 2022, 13:16:58) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.10.0-18-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: Quadro P520
Nvidia driver version: 470.141.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] Could not collect
```
| 6 |
4,577 | 86,479 |
FSDP support to load DDP optim checkpoints
|
oncall: distributed, triaged, better-engineering, module: fsdp
|
### 🚀 The feature, motivation and pitch
FSDP optimizer checkpoint loading expects params to be keyed by FQN, but DDP saves checkpoints with param IDs.
FSDP does provide `rekey_optim_state_dict` to change the key type appropriately, but the user has to call it manually when loading such a checkpoint. This makes integration with higher-level trainer libraries a bit more challenging.
I propose that FSDP just inspect the state_dict and rekey it appropriately if needed.
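For reference, the manual path today looks roughly like this (a sketch based on my understanding of the current API; argument details may differ across versions):
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.fully_sharded_data_parallel import OptimStateKeyType

# `model` is the FSDP-wrapped module; the checkpoint was saved from a DDP run,
# so its optimizer state is keyed by param IDs rather than FQNs.
osd = torch.load('ddp_optim_ckpt.pt')
osd = FSDP.rekey_optim_state_dict(osd, OptimStateKeyType.PARAM_NAME, model)
```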
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
4,578 | 86,467 |
torch.tensor obj automatically moved to shared memory upon Process launch
|
module: multiprocessing, triaged
|
### 🐛 Describe the bug
It appears that when spawning a new `multiprocessing.Process`, a `torch.tensor` object that is part of the Process's args is automatically moved to shared memory. Is this the intended behaviour? In contrast, a native int, a list, a numpy array, etc. remain in non-shared memory.
Here is a code reproducing the issue:
```
import os

import numpy as np
import torch
import torch.multiprocessing as mp


def setup(rank: int, world_size: int):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    torch.distributed.init_process_group("gloo", rank=rank, world_size=world_size)


def worker_fn(rank, num_procs, counter):
    setup(rank, num_procs)
    counter[0] += 1
    if rank == 0:
        if torch.is_tensor(counter):
            print(f'\tinside multiproc, rank_{rank}:\tis counter shared:', counter.is_shared())
        print(f'\t{"inside proc after push:":<35} {"counter":>10}: {counter[0]}')


def run(counter, num_procs):
    print(f'\t{"start:":<35} {"counter":>10}: {counter[0]}')
    processes = [mp.Process(target=worker_fn, args=(rank, num_procs, counter)) for rank in range(num_procs)]
    for proc in processes:
        proc.start()
    for proc in processes:
        proc.join()
    print(f'\t{"outside proc:":<35} {"counter":>10}: {counter[0]}')
    counter[0] += 1
    print(f'\t{"outside proc after one more push:":<35} {"counter":>10}: {counter[0]}')


if __name__ == "__main__":
    mp.set_start_method('spawn', force=True)
    tests = (
        ([0, ], 'native list of int based counter'),
        (np.array([0, ], dtype=int), 'np.array of int based counter'),
        (torch.tensor([0, ], dtype=torch.int), 'torch.tensor based counter'),
    )
    num_procs = 2
    num_iter = 2
    print(f'\n\nrunning on {num_procs} processors for {num_iter} iterations')
    for counter, name in tests:
        print(f'\n\ncase: {name}')
        if torch.is_tensor(counter):
            print('\n\tbefore launching multiprocs:\tis counter shared:', counter.is_shared())
        for iter_ in range(num_iter):
            print(f'\niter {iter_}')
            run(counter, num_procs)
```
And here is the output when run on my machine.
```
case: native list of int based counter
iter 0
start: counter: 0
inside proc after push: counter: 1
outside proc: counter: 0
outside proc after one more push: counter: 1
iter 1
start: counter: 1
inside proc after push: counter: 2
outside proc: counter: 1
outside proc after one more push: counter: 2
case: np.array of int based counter
iter 0
start: counter: 0
inside proc after push: counter: 1
outside proc: counter: 0
outside proc after one more push: counter: 1
iter 1
start: counter: 1
inside proc after push: counter: 2
outside proc: counter: 1
outside proc after one more push: counter: 2
case: torch.tensor based counter
before launching multiprocs: is counter shared: False
iter 0
start: counter: 0
inside multiproc, rank_0: is counter shared: True
inside proc after push: counter: 1
outside proc: counter: 2
outside proc after one more push: counter: 3
iter 1
start: counter: 3
inside multiproc, rank_0: is counter shared: True
inside proc after push: counter: 4
outside proc: counter: 5
outside proc after one more push: counter: 6
```
In the first and second cases (list of int and numpy array of int), the counter update inside a launched Process affects neither the counter value in the other launched Processes nor the counter value in the launching process. In the last case (torch.tensor of int), updating the counter in one Process has side effects everywhere. Our explicit check also makes it clear that somewhere during the Process launch the torch.tensor is moved to shared memory.
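The observation boils down to the following minimal sketch (distilled from the repro above, not new behavior):
```python
# Pickling a tensor for a spawned child (via torch.multiprocessing's reducers)
# moves its storage into shared memory as a side effect, visible in the parent.
import torch
import torch.multiprocessing as mp


def child(t):
    print("child sees is_shared:", t.is_shared())    # True


if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    t = torch.zeros(1)
    print("parent before spawn:", t.is_shared())     # False
    p = mp.Process(target=child, args=(t,))
    p.start()
    p.join()
    print("parent after join:", t.is_shared())       # True
```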
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.14 (main, Sep 10 2022, 07:55:53) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-12.6-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[pip3] xitorch==0.3.0
[conda] Could not collect
cc @VitalyFedyunin
| 0 |
4,579 | 86,465 |
Wrong results with torch.linalg.inv on batched matrices when using cuda
|
module: cuda, triaged, module: linear algebra, module: correctness (silent), module: magma
|
### 🐛 Describe the bug
When trying to invert multiple matrices at once with an initial batch size dimension, the result is completely off if the tensor to invert is on a cuda device.
Steps to reproduce:
```python
amatrix = torch.rand(5,3,3)
torch.linalg.inv(amatrix[0])
torch.linalg.inv(amatrix[0].cuda())
torch.linalg.inv(amatrix)
torch.linalg.inv(amatrix.cuda())
```
```python
>>> amatrix = torch.rand(5,3,3)
>>> torch.linalg.inv(amatrix[0])
tensor([[ 4.8674, -4.3110, -0.8659],
[-2.0176, 0.9240, 3.4274],
[-3.9164, 4.6243, 0.3675]])
>>> torch.linalg.inv(amatrix[0].cuda())
tensor([[ 4.8674, -4.3110, -0.8659],
[-2.0176, 0.9240, 3.4274],
[-3.9164, 4.6243, 0.3675]], device='cuda:0')
>>> torch.linalg.inv(amatrix)
tensor([[[ 4.8674, -4.3110, -0.8659],
[-2.0176, 0.9240, 3.4274],
[-3.9164, 4.6243, 0.3675]],
[[ 2.7000, -4.1113, 1.5560],
[-0.1757, 2.7767, -1.4721],
[-0.5517, -1.0877, 1.8265]],
[[ 1.5498, -0.1105, -0.2822],
[ 1.4577, 1.6909, -1.7640],
[-2.4122, -0.5478, 2.2266]],
[[-0.7832, -0.1020, 3.3803],
[ 5.7228, -1.4989, -7.8944],
[-7.5880, 3.6522, 9.4335]],
[[-4.9874, 5.5891, 1.3973],
[ 2.9066, -1.1185, -2.3003],
[ 0.6056, -0.9546, 1.0999]]])
>>> torch.linalg.inv(amatrix.cuda())
tensor([[[ -2.2865, 2.0979, 2.0907],
[ 22.3898, -5.2104, -35.2302],
[ -1.3390, -0.2901, 3.7796]],
[[ -2.5799, 2.3801, 0.6247],
[ 1.7680, -0.1084, -0.6976],
[ -0.7683, -0.1366, 1.0912]],
[[ -0.1224, 1.3583, -0.1425],
[ 1.5276, 0.5215, -0.9947],
[ -0.4010, -1.0566, 1.1865]],
[[ -1.5202, 0.4430, 2.8945],
[ 2.5907, 3.1441, -8.2405],
[ -2.0677, -4.8118, 11.4331]],
[[ -3.8695, 5.0462, -0.7982],
[ 1.9668, -0.5551, -1.2615],
[ -0.1290, -0.1451, 1.0779]]], device='cuda:0')
```
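One way to confirm which result is wrong (a small check, not part of the original report) is to multiply each inverse back against the input and compare with the identity:
```python
import torch

A = torch.rand(5, 3, 3)
for dev in ("cpu", "cuda"):
    Ainv = torch.linalg.inv(A.to(dev))
    eye = torch.eye(3, device=dev).expand(5, 3, 3)
    # Should print True on both devices; per this report, CUDA prints False.
    print(dev, torch.allclose(A.to(dev) @ Ainv, eye, atol=1e-4))
```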
### Versions
```bash
[trougnouf@d]: ~/tmp>$ python collect_env.py
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.2.0
Clang version: 14.0.6
CMake version: version 3.24.2
Libc version: glibc-2.36
Python version: 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.70-1-lts-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1070
Nvidia driver version: 515.76
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.5.0
/usr/lib/libcudnn_adv_infer.so.8.5.0
/usr/lib/libcudnn_adv_train.so.8.5.0
/usr/lib/libcudnn_cnn_infer.so.8.5.0
/usr/lib/libcudnn_cnn_train.so.8.5.0
/usr/lib/libcudnn_ops_infer.so.8.5.0
/usr/lib/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.971
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] numpydoc==1.4.0
[pip3] pytorch-msssim==0.2.1
[pip3] torch==1.12.1
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.1a0+b97f035
[conda] Could not collect
```
cc @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 9 |
4,580 | 86,456 |
`SyncBatchNorm` doesn't work with subclass of `torch.Tensor`
|
oncall: distributed, tensor subclass
|
### 🐛 Describe the bug
Multi-gpu data parallel pipeline with `SyncBatchNorm` doesn't seem to work with subclasses of `torch.Tensor`.
To reproduce, save the following script as `test.py` and run it with a basic ddp: `torchrun --nnodes=1 --nproc_per_node=2 test.py` with 2 gpus on 1 node:
```python
import torch
import torch.distributed as dist
from torchvision import models


class SubTensor(torch.Tensor):
    pass


def run():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    x = SubTensor(torch.zeros(1, 3, 128, 128)).to(rank)
    # x = torch.zeros(1, 3, 128, 128).to(rank)  # regular tensor works fine
    mod = models.resnet50(weights=None).to(rank)
    mod = torch.nn.SyncBatchNorm.convert_sync_batchnorm(mod)
    mod = torch.nn.parallel.DistributedDataParallel(mod, device_ids=[rank], output_device=rank)
    out = mod(x)


if __name__ == "__main__":
    run()
```
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
File "test.py", line 22, in <module>
run()
File "test.py", line 18, in run
out = mod(x)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1009, in forward
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1009, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 970, in _run_ddp_forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 970, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
return module_to_run(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torchvision/models/resnet.py", line 285, in forward
return self._forward_impl(x)
File "/opt/conda/lib/python3.8/site-packages/torchvision/models/resnet.py", line 270, in _forward_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torchvision/models/resnet.py", line 285, in forward
x = self.relu(x)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
return self._forward_impl(x)
File "/opt/conda/lib/python3.8/site-packages/torchvision/models/resnet.py", line 270, in _forward_impl
x = self.relu(x)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 102, in forward
return F.relu(input, inplace=self.inplace)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 1453, in relu
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 102, in forward
return F.relu(input, inplace=self.inplace)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 1453, in relu
return handle_torch_function(relu, (input,), input, inplace=inplace)
File "/opt/conda/lib/python3.8/site-packages/torch/overrides.py", line 1528, in handle_torch_function
return handle_torch_function(relu, (input,), input, inplace=inplace)
File "/opt/conda/lib/python3.8/site-packages/torch/overrides.py", line 1528, in handle_torch_function
result = torch_func_method(public_api, types, args, kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/_tensor.py", line 1089, in __torch_function__
result = torch_func_method(public_api, types, args, kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/_tensor.py", line 1089, in __torch_function__
ret = func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 1455, in relu
ret = func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 1455, in relu
result = torch.relu_(input)
RuntimeError: Output 0 of SyncBatchNormBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
result = torch.relu_(input)
RuntimeError: Output 0 of SyncBatchNormBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 431) of binary: /opt/conda/bin/python
Traceback (most recent call last):
File "/opt/conda/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 761, in main
run(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
elastic_launch(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```
### Versions
PyTorch version: 1.13.0a0+d321be6
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.23.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration:
GPU 0: NVIDIA TITAN Xp COLLECTORS EDITION
GPU 1: NVIDIA TITAN Xp COLLECTORS EDITION
Nvidia driver version: 510.85.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] pytorch-ignite==0.4.10
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.0a0+d321be6
[pip3] torch-tensorrt==1.2.0a0
[pip3] torchtext==0.11.0a0
[pip3] torchvision==0.14.0a0
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2020.4 h726a3e6_304 conda-forge
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch-ignite 0.4.10 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.13.0a0+d321be6 pypi_0 pypi
[conda] torch-tensorrt 1.2.0a0 pypi_0 pypi
[conda] torchtext 0.11.0a0 pypi_0 pypi
[conda] torchvision 0.14.0a0 pypi_0 pypi
see also https://github.com/Project-MONAI/MONAI/issues/5283
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @ezyang
| 4 |
4,581 | 86,455 |
(JIT) x: Optional[T] is not refined to T in the right-hand side of `if x is None or x.shape[0] == 1`
|
oncall: jit
|
### 🐛 Describe the bug
(JIT) A value of type `Optional[T]` is not refined to `T` in the right-hand side of `if x is None or x.shape[0] == 1`.
```py
RuntimeError:
'Optional[Tensor]' object has no attribute or method 'shape'.:
File "/home/ainl/library/sharkshark-4k/upscale/model/bsvd/model.py", line 95
self.fold = self.c//fold_div
# Case1: In the start or end stage, the memory is empty
if self.center is None or self.center.shape[-1] == 1:
~~~~~~~~~~~~~~~~~ <--- HERE
```
`self.center` is `Optional[Tensor]`.
Because `or` short-circuits, the right-hand operand only runs when `self.center` is not `None`, so accessing `.shape` there should be safe.
However, TorchScript's type refinement does not track this.
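A common workaround (a hedged sketch, not verified against this exact model) is to hoist the attribute into a local variable and split the check into explicit branches, which TorchScript's refinement does understand:
```python
import torch
from typing import Optional


@torch.jit.script
def memory_is_empty(center: Optional[torch.Tensor]) -> bool:
    if center is None:
        return True
    else:
        # `center` is refined to Tensor in this branch, so .shape is allowed.
        return center.shape[-1] == 1
```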
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.4.48
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.12.1+cu113
[pip3] torch-cluster==1.5.9
[pip3] torch-geometric==1.7.2
[pip3] torch-scatter==2.0.7
[pip3] torch-spline-conv==1.2.1
[pip3] torch-tb-profiler==0.2.0
[pip3] torch-tensorrt==1.2.0
[pip3] torchaudio==0.12.1+cu113
[pip3] torchdata==0.3.0
[pip3] torchfile==0.1.0
[pip3] torchtext==0.12.0
[pip3] torchvision==0.13.1+cu113
[conda] _pytorch_select 0.1 cpu_0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] cudatoolkit-dev 11.4.0 h5e8e339_5 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libmklml 2019.0.5 h06a4308_0
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.3.0 py38h54f3939_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.12.1+cu113 pypi_0 pypi
[conda] torch-cluster 1.5.9 pypi_0 pypi
[conda] torch-geometric 1.7.2 pypi_0 pypi
[conda] torch-scatter 2.0.7 pypi_0 pypi
[conda] torch-spline-conv 1.2.1 pypi_0 pypi
[conda] torch-tb-profiler 0.2.0 pypi_0 pypi
[conda] torch-tensorrt 1.2.0 pypi_0 pypi
[conda] torchaudio 0.12.1+cu113 pypi_0 pypi
[conda] torchdata 0.3.0 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchtext 0.12.0 pypi_0 pypi
[conda] torchvision 0.13.1+cu113 pypi_0 pypi
| 0 |
4,582 | 93,716 |
End-to-End AMP training with GradScaler
|
triaged, oncall: pt2
|
https://github.com/pytorch/torchdynamo/blob/main/benchmarks/training_loss.py performs end-to-end training with TorchDynamo and TorchInductor with AMP. However, we are not testing AMP training in a standard manner.
AMP training has [GradScaler](https://pytorch.org/docs/stable/amp.html#gradient-scaling), which scales the loss, monitors gradients, and skips the optimizer step for a batch whose gradients are inf/nan. More importantly, GradScaler is stateful. All of this warrants end-to-end training tests with the TorchDynamo and TorchInductor stack.
There are many good articles on AMP; this is the first one I stumbled upon - https://spell.ml/blog/mixed-precision-training-with-pytorch-Xuk7YBEAACAASJam
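For reference, the canonical GradScaler step that such a test would need to exercise looks roughly like this (a generic sketch, not the benchmark harness referenced above):
```python
import torch

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    inp = torch.randn(8, 16, device="cuda")
    tgt = torch.randn(8, 16, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.mse_loss(model(inp), tgt)
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)         # unscales grads; skips the step if any are inf/nan
    scaler.update()                # adjusts the scale factor, which is the stateful part
```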
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @yanboliang
cc @xuzhao9 if torchbench already has AMP setup for end to end training
cc @msaroufim if you are interested in this
| 0 |
4,583 | 86,449 |
torch.cuda.empty_cache() is not working
|
module: cuda, triaged
|
### 🐛 Describe the bug
Hi, I want to see how much GPU memory a PyTorch model uses, but when I try to test more than two models, `torch.cuda.empty_cache()` does not seem to work.
The code is below: the first `logger` call shows 0, but the second shows more than 0.
```
model = transformers.T5ForConditionalGeneration.from_pretrained('t5-base')
torch.cuda.reset_peak_memory_stats()
torch.cuda.empty_cache()
torch.cuda.synchronize()
now_mem = torch.cuda.max_memory_allocated()
logger.info(f'Load original model, mem:{now_mem}')
train(model,dataloader,...)
model = transformers.T5ForConditionalGeneration.from_pretrained('t5-base')
torch.cuda.reset_peak_memory_stats()
torch.cuda.empty_cache()
torch.cuda.synchronize()
now_mem = torch.cuda.max_memory_allocated()
logger.info(f'Load original model, mem:{now_mem}')
train(model,dataloader,...)
```
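When debugging this, it helps to separate the two memory counters; `empty_cache()` can only shrink the cached pool, it cannot free memory that live tensors still reference (a small sketch, not from the original report):
```python
import torch

print(torch.cuda.memory_allocated())  # bytes occupied by live tensors
print(torch.cuda.memory_reserved())   # bytes held by the caching allocator (what empty_cache() trims)
```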
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-48-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.12.1 pypi_0 pypi
```
cc @ngimel
| 2 |
4,584 | 86,446 |
Improve FX naming for getitem calls
|
triaged, module: fx, module: aotdispatch
|
### 🐛 Describe the bug
When I have a tuple destructure, I end up with
```
getitem_0 = operator.getitem(add_0, 0)
getitem_1 = operator.getitem(add_0, 1)
getitem_2 = operator.getitem(add_0, 2)
getitem_3 = operator.getitem(add_0, 3)
```
names in FX IR. The getitem variable names are useless; they tell me nothing about where the value came from. A better naming scheme would see past the `operator.getitem` and derive a name from the input variable of the getitem call, e.g.,
```
add_0_0 = operator.getitem(add_0, 0)
add_0_1 = operator.getitem(add_0, 1)
add_0_2 = operator.getitem(add_0, 2)
add_0_3 = operator.getitem(add_0, 3)
```
### Versions
master
cc @SherlockNoMad @soumith
| 1 |
4,585 | 86,444 |
Dedicated function for shallow_copy_and_detach
|
module: autograd, triaged, tensor subclass
|
### 🐛 Describe the bug
Today, when a tensor is internally shallow-copied and detached, it is traced as a detach. But a detach has a particular semantic meaning: it says we intend to detach ALL levels of autograd. That's not really the intention here; we are only breaking the autograd history at the current level, and you should still be able to run the code differentiably later. By tracing it as a detach, we lose this semantic information. So emit these as something else; maybe even just an alias.
Related https://github.com/pytorch/pytorch/issues/71725
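A tiny illustration of the semantic gap (`aten::alias` here stands in for whatever op we would emit instead):
```python
# detach() severs autograd history at every level; an alias is just another
# view of the same tensor and stays differentiable.
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2
print(y.detach().requires_grad)               # False: all history dropped
print(torch.ops.aten.alias(y).requires_grad)  # True: still differentiable
```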
cc @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 we talked about doing it this way, but no one did it, writing it down so we don't forget
### Versions
master
| 0 |
4,586 | 86,443 |
Stack trace preservation should work on plain use of make_fx / AOTAutograd
|
triaged, module: aotdispatch
|
### 🐛 Describe the bug
The current stack preservation logic from #83558 only works for Dynamo. It would be nice if it worked for bare use of `make_fx` and `AOTAutograd`. I hacked this up to work:
1. I think the anomaly mode context manager should be pushed into `create_joint_forward_backward`, where we actually run the forwards. This guarantees it gets run by all the codepaths. Right now it is applied only on `aot_module_simplified`, which means you don't get it for the other AOTAutograd codepaths
2. make_fx has no way of turning on `record_stack_traces` in its public API. Actually, we should just do the @zdevito trick and make this cheap enough that we can turn it on unconditionally
3. You want this diff
```
diff --git a/torch/fx/traceback.py b/torch/fx/traceback.py
index a07b36b997..cee7626e5c 100644
--- a/torch/fx/traceback.py
+++ b/torch/fx/traceback.py
@@ -54,7 +54,7 @@ def format_stack() -> List[str]:
return current_stack.copy()
else:
# fallback to traceback.format_stack()
- return traceback.format_stack()
+ return traceback.format_list(traceback.extract_stack()[:-1])
@compatibility(is_backward_compatible=False)
```
cc @SherlockNoMad @Chillee @albanD
### Versions
master
| 0 |
4,587 | 86,433 |
DISABLED test_rmsprop (optim.test_optim.TestOptim)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_rmsprop&suite=TestOptim) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/8754116789).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 failures and 1 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_rmsprop`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
cc @vincentqb @jbschlosser @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
| 18 |
4,588 | 93,715 |
[minifier] Accuracy minification
|
triaged, oncall: pt2
|
- [ ] For integer tensors, rand_strided currently falls back to torch.zeros (https://github.com/pytorch/torchdynamo/blob/main/torchdynamo/testing.py#L254). This is fine for reproducing exceptions, but not really useful for accuracy debugging. We need to record the maximum value of the original tensor and then replace the zeros with randint(max_value).
* `git revert 35590bb9ae5781d93e483540e9492e1409347800`
* `TORCHDYNAMO_REPRO_AFTER="dynamo" TORCHDYNAMO_REPRO_LEVEL=4 benchmarks/huggingface.py -d cuda --inductor --training --float32 --accuracy --only=BertForQuestionAnswering` shows that accuracy divergence was observed
* But `python /tmp/minifier_anijain/minifier_launcher.py` does not minify
* Changing torch.zeros to torch.randint(10, ..) as mentioned above would fix it. So, the goal is to get the max_value plumbed through.
- [ ] We currently do fwd_output.sum().backward(). This can hide indexing issues, because the incoming gradient is the same for every element. We should instead backprop torch.rand_like() gradients (minimal sketch below). More context here - https://github.com/pytorch/torchdynamo/pull/1414#issuecomment-1262801213
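A minimal sketch of the proposed change (generic code, not the minifier itself):
```python
# Backprop a random cotangent instead of the uniform all-ones gradient implied
# by out.sum().backward(); a uniform gradient can mask indexing bugs.
import torch

model = torch.nn.Linear(4, 4)
out = model(torch.randn(2, 4))
out.backward(torch.rand_like(out))
```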
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @williamwen42 @yanboliang
| 1 |
4,589 | 86,427 |
[functorch] [aot_autograd]
|
triaged, module: vmap, module: functionalization, module: aotdispatch
|
This is what I am trying to do:
1. Create an input and target tensor
2. Create a list of models for ensemble
3. Use vmap to create an ensemble
4. Obtain the vmap function and pass it to aot_function, to obtain the forward and backward graphs
Repro:
```python
from typing import List

import torch
from torch._subclasses import FakeTensorMode, FakeTensor
from functorch.compile import aot_function, print_compile, config, aot_module
from functorch import make_functional_with_buffers, vmap, combine_state_for_ensemble
from functorch._src.named_members_polyfill import _named_parameters, _named_buffers
from torchvision.models import resnet18

g = {}


def fake_wrapper(gtype):
    def fake_compiler(fx_g, inps):
        print(fx_g.code)
        nonlocal gtype
        g[gtype] = fx_g
        return fx_g
    return fake_compiler


inp = torch.randn(32, 3, 224, 224, dtype=torch.float32).cuda()
targets = torch.zeros(32, dtype=int).cuda()

b_models: List[torch.nn.Module] = [resnet18().cuda() for _ in range(5)]
func_model, params, buffers = combine_state_for_ensemble(b_models)
for p in params:
    p.requires_grad = True


def compute_loss(weights, buffers, batch, targets):
    output = func_model(weights, buffers, batch)
    loss = torch.nn.functional.nll_loss(output, targets)
    return loss


parallel_func = vmap(compute_loss, in_dims=(0, 0, None, None))
aot_func = aot_function(parallel_func, fake_wrapper("forward"), fake_wrapper("backward"))
out = aot_func(params, buffers, inp, targets)
out.mean().backward()
```
Error:
```
RuntimeError: !self.requires_grad() || self.is_contiguous() INTERNAL ASSERT FAILED at "/scratch/anijain/work/pytorch/aten/src/ATen/native/TensorShape.cpp":3609, please report a bug to PyTorch. as_strided_scatter is currently only supported for contiguous inputs
While executing %new_empty_strided_1 : [#users=1] = call_function[target=torch.ops.aten.new_empty_strided.default](args = (%copy__1, [5, 512, 32, 7, 7], [25088, 49, 125440, 7, 1]), kwargs = {})
Original traceback:
Gradient addition node due to multiple use of tensor around:
```
cc @zou3519 @bdhirsh @ezyang @soumith
| 15 |
4,590 | 93,712 |
Use opinfo segfaulting list to protect inductor run internally
|
triaged, oncall: pt2
|
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,591 | 86,381 |
OpInfo Tests To Validate that All Operators Are Being Tested With Strided Tensors
|
module: tests, triaged
|
### 🚀 The feature, motivation and pitch
Many of the OpInfos such as [Unary OpInfos](https://github.com/pytorch/pytorch/pull/85976) do not currently test with strided inputs. We should add a test that validates all OpInfos are being tested with strided inputs.
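Concretely, such a test would need to feed each op at least one non-contiguous sample, along the lines of (a hedged sketch, not the eventual OpInfo machinery):
```python
# Run an op on a strided (non-contiguous) view and compare with the contiguous result.
import torch

x = torch.randn(4, 8)[:, ::2]
assert not x.is_contiguous()
torch.testing.assert_close(torch.abs(x), torch.abs(x.contiguous()))
```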
cc @mruberry
| 2 |
4,592 | 86,356 |
`conv_transpose` is not similar to `nn.grad.conv_input` when `output_padding` is passed with non-default values.
|
module: nn, triaged, actionable
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
device = 'cuda'
t = torch.rand(2, 2, 4, 4, device=device)
w = torch.rand(2, 2, 4, 5, device=device)
b = None
stride = (3, 2)
padding = (1, 2)
dilation = (4, 4)
output_padding = (2, 3) # Errors with `RuntimeError: grad_output[2:] shape[4, 4] must be equal to output size 4 5`
# output_padding = 0 # Works if we don't specify a non-default output_padding
groups = 2
actual = torch.conv_transpose2d(t, w, b, groups=groups, stride=stride, padding=padding, dilation=dilation, output_padding=output_padding)
expected = nn.grad.conv2d_input(actual.shape, w, t, stride, padding,dilation, groups)
torch.testing.assert_close(actual, expected)
```
We try to use `grad.conv2d_input` as a reference for complex support in conv_transpose, but it fails for samples where `output_padding` is passed with a non-default value. `nn.grad.conv2d_input` doesn't take that argument (probably doesn't require it).
### Versions
master
cc @albanD @mruberry @jbschlosser @walterddr @saketh-are
| 3 |
4,593 | 86,351 |
Jetson JIT: Memory Leak on inference after optimize_for_inference
|
oncall: jit
|
### 🐛 Describe the bug
After investigating a memory leak that happens only on one hardware platform, I found that torch::jit::optimize_for_inference induces a change in the torchscript that causes a memory leak on Jetson devices.
Running on the same libtorch compiled from source on Windows, Mac and Linux with the same optimisation applied does not cause any memory leak.
To reproduce, you can run a small model such as Retinaface that is traced, with and without torch::jit::optimize_for_inference.
I have attached two torchscript code files below. bad causes the memory leak and good does not.
[good___torch_mangle_205.txt](https://github.com/pytorch/pytorch/files/9721370/good___torch_mangle_205.txt)
-> torch::jit::optimize_for_inference ->
[bad___torch_mangle_205.txt](https://github.com/pytorch/pytorch/files/9721373/bad___torch_mangle_205.txt)
The only obvious change I can see is conv + ReLU fusion.
The only significant difference to the other working platforms I can find is: Jetson uses CUDA 10.2 / CUDNN 8.0, Jetson is ARM64, Jetson has shared GPU memory.
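For reference, the comparison described above boils down to the following loop (a hedged Python-API sketch; the original report used the C++ `torch::jit` API and the model path here is a placeholder):
```python
# Run a traced model repeatedly with and without optimize_for_inference while
# watching process/GPU memory; per the report, only the optimized module leaks on Jetson.
import torch

model = torch.jit.load("retinaface_traced.pt").cuda().eval()  # placeholder path
optimized = torch.jit.optimize_for_inference(model)
x = torch.randn(1, 3, 640, 640, device="cuda")
with torch.no_grad():
    for _ in range(1000):
        optimized(x)
```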
### Versions
1.12.1 and 1.13.0-dev2022.08.13
| 0 |
4,594 | 86,340 |
Add complex support for SparseAdam and LBFGS optimizers
|
module: optimizer, triaged, module: complex, actionable
|
As per title
These optimizers are less high priority than the other in https://github.com/pytorch/pytorch/issues/65711 and so are not being worked on by the core team right now.
Feel free to pick this up if you're interested.
cc @vincentqb @jbschlosser @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 3 |
4,595 | 86,339 |
Add `maximize` support to LBFGS optimizer
|
module: optimizer, triaged, actionable
|
As per title.
This is lower priority than the other optimizers in https://github.com/pytorch/pytorch/issues/68052 and so is not being worked on right now by the core team.
cc @vincentqb @jbschlosser
| 0 |
4,596 | 86,326 |
`torch.special.round` doesn't support the same dtypes as `torch.round`
|
triaged, module: special
|
### 🐛 Describe the bug
`torch.special.round` is declared as an alias of `torch.round` in the `OpInfo`. But they behave differently (due to a bug?):
```py
In [2]: torch.special.round(torch.tensor([2], dtype=torch.short))
RuntimeError: "round_vml_cpu" not implemented for 'Short'
In [3]: torch.round(torch.tensor([2], dtype=torch.short))
Out[3]: tensor([2], dtype=torch.int16)
In [4]: torch.tensor([2], dtype=torch.short).round()
Out[4]: tensor([2], dtype=torch.int16)
```
The error message is from `AT_DISPATCH_SWITCH`. I haven't looked into why it happens.
Aliases are tested in `test_variant_consistency_eager`, but that test only checks against `torch.float` and `torch.cfloat` (see `_variant_ops`). So we didn't notice. If I allow testing for all dtypes there, I get the same error:
```py
# old:
# _variant_ops = partial(
# ops, dtypes=OpDTypes.supported, allowed_dtypes=(torch.float, torch.cfloat)
# )
# new:
_variant_ops = partial(
ops, dtypes=OpDTypes.supported
)
```
```bash
# run after allowing all types in _variant_ops
python -m pytest test/test_ops.py -k test_variant_consistency_eager_round -vv --capture=no
```
output:
```
FAILED test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_round_cpu_int16 - RuntimeError: "round_vml_cpu" not implemented for 'Short'
FAILED test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_round_cpu_int32 - RuntimeError: "round_vml_cpu" not implemented for 'Int'
FAILED test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_round_cpu_int64 - RuntimeError: "round_vml_cpu" not implemented for 'Long'
FAILED test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_round_cpu_int8 - RuntimeError: "round_vml_cpu" not implemented for 'Char'
FAILED test/test_ops.py::TestCommonCPU::test_variant_consistency_eager_round_cpu_uint8 - RuntimeError: "round_vml_cpu" not implemented for 'Byte'
FAILED test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_round_cuda_int16 - RuntimeError: "round_cuda" not implemented for 'Short'
FAILED test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_round_cuda_int32 - RuntimeError: "round_cuda" not implemented for 'Int'
FAILED test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_round_cuda_int64 - RuntimeError: "round_cuda" not implemented for 'Long'
FAILED test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_round_cuda_int8 - RuntimeError: "round_cuda" not implemented for 'Char'
FAILED test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_round_cuda_uint8 - RuntimeError: "round_cuda" not implemented for 'Byte'
```
### Versions
master (82d9592f1ba)
cc @mruberry @kshitij12345
| 3 |
4,597 | 86,315 |
Feature request: Tests for `int` should be tests for `numbers.Integral`
|
feature, triaged, module: numpy, module: python frontend
|
### 🐛 Describe the bug
Some code, such as that in `BatchSampler`, tests whether a value such as `batch_size` is an integer with `isinstance(batch_size, int)`.
This fails in a case like this:
```py
for batch_size in np.logspace(1, 4, 7, dtype=int):
# train a model with this batch size
```
because even though `batch_size` behaves just like an int, it's not an _instance_ of int.
The specific error in this case highlights that some logic is not quite right:
```
ValueError: batch_size should be a positive integer value, but got batch_size=10
```
Note that the issue isn't related to `np.logspace` specifically, just any int taken from any NumPy array:
```py
assert isinstance(np.array([1])[0], int) == False
assert isinstance(np.array([1])[0], numbers.Integral) == True
```
It would be better to replace `isinstance(batch_size, int)` with `isinstance(batch_size, numbers.Integral)`.
Unless there's something I don't understand, this is probably a good general rule for the whole codebase. Typically these numbers are then passed to `range()` or `slice()` which have the requirement of `Integral`, _not_ the narrower requirement of `int`.
### Versions
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.7 (main, Sep 7 2022, 15:22:19) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.971
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] Could not collect
cc @mruberry @rgommers @albanD
| 1 |
4,598 | 86,298 |
AOT Autograd Device Partitioning
|
triaged, module: functorch
|
### 🚀 The feature, motivation and pitch
Pytorch Eager will parallelize the backward based on devices. We should do something similar in AOT Autograd, perhaps with subprocesses.
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @soumith
| 0 |
4,599 | 86,287 |
JIT `lgamma` will return `inf` only with dual input in forward mode
|
oncall: jit
|
### 🐛 Describe the bug
A JIT-scripted model that calls `lgamma` returns `inf` only when given a dual input in forward-mode AD.
For any other input it returns the correct value, so the issue appears to be in the interaction between the JIT and forward-mode dual inputs.
```py
import torch


def fn(input):
    input = torch.mul(input, torch.tensor(5, dtype=torch.float32, device='cuda'))
    input = torch.mul(input, torch.tensor(-16, dtype=torch.float32, device='cuda'))
    input = torch.div(input, torch.tensor(9, dtype=torch.float32, device='cuda'))
    fn_res = torch.lgamma(input)
    return torch.add(fn_res, torch.tensor(2, dtype=torch.float32, device='cuda'))


torch.random.manual_seed(40277)
inp = torch.empty([3], dtype=torch.float16, memory_format=torch.contiguous_format)
inp.uniform_(-32, 63)
inp = inp.to('cuda')


def clone(inp):
    return inp.clone().requires_grad_()


print(fn(clone(inp)))

jit_fn = torch.jit.script(fn)
print(jit_fn(clone(inp)))

with torch.autograd.forward_ad.dual_level():
    print(jit_fn(clone(inp)))
    tan = torch.rand_like(inp)
    dual_inp = torch.autograd.forward_ad.make_dual(clone(inp), tan)
    print(jit_fn(dual_inp))
```
```
tensor([-2156., -738., 621.], device='cuda:0', dtype=torch.float16,
grad_fn=<AddBackward0>)
tensor([-2156., -738., 621.], device='cuda:0', dtype=torch.float16,
grad_fn=<AddBackward0>)
tensor([-2156., -738., 621.], device='cuda:0', dtype=torch.float16,
grad_fn=<AddBackward0>)
tensor([ inf, inf, 621.], device='cuda:0', dtype=torch.float16,
grad_fn=<AddBackward0>)
```
### Versions
```
PyTorch version: 1.13.0.dev20221001
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20221001
[pip3] torchaudio==0.13.0.dev20221001
[pip3] torchvision==0.14.0.dev20221001
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.13.0.dev20221001 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,600 | 86,279 |
`torch.multinomial` on MPS crashes with `Error: total bytes of NDArray > 2**32'`
|
triaged, module: regression, module: mps
|
### 🐛 Describe the bug
After https://github.com/pytorch/pytorch/pull/80760 added an MPS version of the multinomial op, operations with replacement fail for arrays of more than 32K elements with a non-recoverable error:
```
% python -c "import torch;print(torch.multinomial(torch.ones(1, 32768, device='mps'), 2, replacement=True))"
/AppleInternal/Library/BuildRoots/4883e71d-37bd-11ed-b0ef-b25c5e9b9057/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:724: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
zsh: abort python -c
```
### Versions
Nightly
cc @kulinseth @albanD @DenisVieriu97 @razarmehr @abhudev
| 0 |