Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
4,701 | 85,475 |
Issue with converting Comet model to ONNX. Split-node error.
|
module: onnx, triaged
|
I'm trying to convert the [Comet](https://unbabel.github.io/COMET/html/index.html) model to ONNX. In particular, I'm working with the referenceless version of the metric ("wmt21-comet-qe-mqm"). It is based on BERT and contains a regression head.
The export to ONNX succeeds, and the resulting model passes the checks. However, performing inference fails with the error "Cannot split using values in 'split' attribute". I provide details of the error below.
I wonder whether ONNX supports such a configuration. I have also prepared a short script that can be used to reproduce this issue.
```python
import os
from transformers import AutoTokenizer
import torch
from comet import download_model, load_from_checkpoint
import onnxruntime as ort
import onnx
model = load_from_checkpoint(download_model("wmt21-comet-qe-mqm"))
CONVERTED_COMET_DIR = 'CONVERTED_COMET'
CONVERTED_COMET_MODEL_PATH = os.path.join(CONVERTED_COMET_DIR, 'model.onnx')
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
MODEL_MAX_LENGTH = tokenizer.model_max_length
try:
os.makedirs(CONVERTED_COMET_DIR)
except FileExistsError:
pass
input_names = ["src_input_ids", "src_attention_mask", "mt_input_ids", "mt_attention_mask"]
inputs = {key: torch.ones(1, MODEL_MAX_LENGTH, dtype=torch.int64) for key in input_names}
symbolic_names = {0: "batch_size"}
torch.onnx.export(
model,
(*[inputs[key] for key in input_names],),
CONVERTED_COMET_MODEL_PATH,
opset_version=15,
do_constant_folding=True,
input_names=input_names,
output_names=["score"],
dynamic_axes={key: symbolic_names for key in input_names},
)
onnx.checker.check_model(CONVERTED_COMET_MODEL_PATH)
ort_sess = ort.InferenceSession(CONVERTED_COMET_MODEL_PATH)
input_src = tokenizer("Algo está mal aquí...", return_tensors="np", max_length=MODEL_MAX_LENGTH, padding='max_length')
input_mt = tokenizer("Coś tu nie gra...", return_tensors="np", max_length=MODEL_MAX_LENGTH, padding='max_length')
inp = {"src_input_ids": input_src['input_ids'],
"src_attention_mask": input_src['attention_mask'],
"mt_input_ids": input_mt['input_ids'],
"mt_attention_mask": input_mt['attention_mask']}
outputs = ort_sess.run(None, inp)
```
The error details:
```
2022-09-22 12:34:13.957212579 [E:onnxruntime:, sequential_executor.cc:368 Execute] Non-zero status code returned while running Split node. Name:'Split_5573' Status Message: Cannot split using values in 'split' attribute. Axis=0 Input shape={1,512} NumOutputs=1 Num entries in 'split' (must equal number of outputs) was 1 Sum of sizes in 'split' (must equal size of selected axis) was 8
---------------------------------------------------------------------------
Fail Traceback (most recent call last)
/tmp/ipykernel_7700/1124372680.py in <module>
60 inp = prepare_pair("Algo está mal aquí...", "Coś tu nie gra...")
61
---> 62 outputs = ort_sess.run(None, inp)
/opt/conda/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
198 output_names = [output.name for output in self._outputs_meta]
199 try:
--> 200 return self._sess.run(output_names, input_feed, run_options)
201 except C.EPFail as err:
202 if self._enable_fallback:
Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Split node. Name:'Split_5573' Status Message: Cannot split using values in 'split' attribute. Axis=0 Input shape={1,512} NumOutputs=1 Num entries in 'split' (must equal number of outputs) was 1 Sum of sizes in 'split' (must equal size of selected axis) was 8
```
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.13.4
Libc version: glibc-2.10
Python version: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12
Is CUDA available: False
CUDA runtime version: 11.3.109
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] pytorch-lightning==1.6.4
[pip3] torch==1.11.0
[pip3] torch-xla==1.11
[pip3] torchmetrics==0.8.2
[pip3] torchvision==0.12.0+cu113
[conda] blas 2.115 mkl conda-forge
[conda] blas-devel 3.9.0 15_linux64_mkl conda-forge
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] dlenv-pytorch-1-11-gpu 1.0.20220630 py37hc1c1d6d_0 file:///tmp/conda-pkgs
[conda] libblas 3.9.0 15_linux64_mkl conda-forge
[conda] libcblas 3.9.0 15_linux64_mkl conda-forge
[conda] liblapack 3.9.0 15_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 15_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorch 1.11.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-lightning 1.6.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchvision 0.12.0+cu113 pypi_0 pypi
Additional libraries:
unbabel-comet==1.1.2
transformers==4.17.0
| 5 |
4,702 | 93,684 |
Can we rewrite numpy operators to pytorch operators?
|
good first issue, triaged, module: numpy
|
Currently, numpy operations cause graph breaks, even though they often appear in training / inference code and many of them have the same semantics as torch operators. For example, in [YOLOP](https://github.com/hustvl/YOLOP/blob/084d0bb19fa2db4ba40a6e1d6b120f0153d2f335/lib/core/evaluate.py#L230), we would get more than a 5x speedup during inference if we manually replaced every possible `numpy` call with `torch`, so they can run on the GPU.
So can we dynamically rewrite this numpy code to torch code using TorchDynamo? The rewrite pattern could look like:
`x = tensor.numpy(); b = np.a(x)` to `x = torch.a(tensor); b = x.numpy()`
or rewrite every supported numpy operator `y = numpy.a(x)` to `x = torch.from_numpy(x); y = torch.a(x); y = y.numpy()` and then erase all the redundant `y = y.numpy(); y = torch.from_numpy(y)` pairs. A hand-written example of the first pattern is shown below.
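A minimal sketch of the rewrite applied by hand (not TorchDynamo output); the `np.clip` / `torch.clamp` pair is just an assumed example of operators with matching semantics:
```python
import numpy as np
import torch

def before(t: torch.Tensor) -> np.ndarray:
    x = t.numpy()                 # graph break: leaves torch, runs on CPU
    return np.clip(x, 0.0, 1.0)

def after(t: torch.Tensor) -> np.ndarray:
    y = torch.clamp(t, 0.0, 1.0)  # stays in torch, can run on the GPU
    return y.numpy()              # single conversion at the boundary

t = torch.rand(4)
assert np.allclose(before(t), after(t))
```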
cc @mruberry @rgommers
| 12 |
4,703 | 85,450 |
Cannot index into a tensor using indices from another device - regression from 1.12
|
triaged, module: regression, module: advanced indexing
|
### 🐛 Describe the bug
This code works on MPS in PyTorch stable `1.12.1`, and I presume also on CUDA (it comes from the [stable-diffusion model](https://github.com/CompVis/stable-diffusion/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/ldm/models/diffusion/ddpm.py#L1030), which was originally trained on CUDA).
_Note: you will need to launch Python with `PYTORCH_ENABLE_MPS_FALLBACK=1` to get a CPU fallback for `aten::index.Tensor`; that might also explain why it works in stable._
```python
from torch import tensor
tensor([1.], device='cpu')[tensor([0], device='mps')]
# result: tensor([1.])
```
fails in nightly `1.13.0.dev20220917` with:
```
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
```
I was able to [change the model code to fix it](https://github.com/Birch-san/stable-diffusion/commit/46f793b5ecd5bdd70ee8c9da9c6f379aa566afc5), but since this worked on CUDA and it's a pattern people are using in the wild, other models may be affected.
### Versions
```
PyTorch version: 1.13.0.dev20220917
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.10.4 (main, Mar 31 2022, 03:37:37) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.5-arm64-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.7.5
[pip3] torch==1.13.0.dev20220917
[pip3] torch-fidelity==0.3.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.9.3
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.14.0.dev20220917
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch-lightning 1.7.5 pypi_0 pypi
[conda] torch 1.13.0.dev20220917 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchtyping 0.1.4 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220917 pypi_0 pypi
```
cc @kulinseth @albanD
| 4 |
4,704 | 85,439 |
`aminmax` will trigger INTERNAL ASSERT if input is empty on cuda
|
good first issue, triaged, actionable, module: edge cases
|
### 🐛 Describe the bug
`aminmax` will trigger INTERNAL ASSERT if input is empty on cuda
```py
import torch
def fn(input):
return input.aminmax(dim=0)
device = "cuda"
input = torch.empty([36, 0], dtype=torch.float32)
input.uniform_(-16, 3)
fn(input.to(device))
# RuntimeError: iter.numel() > 0 && iter.ntensors() - iter.noutputs() == 1 && iter.noutputs() >= 1 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1663571524876/work/aten/src/ATen/native/cuda/Reduce.cuh":1134, please report a bug to PyTorch.
```
On CPU, it just returns empty tensors.
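For comparison, a minimal sketch of the CPU path described above, using the same empty `[36, 0]` input:
```python
import torch

input = torch.empty([36, 0], dtype=torch.float32)
amin, amax = input.aminmax(dim=0)   # reducing dim 0 leaves zero output elements
print(amin.shape, amax.shape)       # torch.Size([0]) torch.Size([0])
```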
### Versions
torch: '1.13.0.dev20220919'
cc @pbelevich
| 23 |
4,705 | 85,438 |
Prim Output Spec is Not Always Consistent With Eager
|
triaged, module: primTorch
|
### 🐛 Describe the bug
For instance, [chunk prim](https://github.com/pytorch/pytorch/blob/master/torch/_refs/__init__.py#L2577 ) returns a tuple while [chunk aten](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L1269) returns a list.
To test this, you could check the specs [here](https://github.com/pytorch/pytorch/pull/85417#discussion_r976977602), among other places; a minimal check is sketched below.
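A minimal sketch of such a check (assuming `torch._refs` is importable on a recent master build); it only prints the Python container types of the two outputs so the spec difference can be observed directly:
```python
import torch
import torch._refs

x = torch.arange(12).reshape(3, 4)
eager_out = torch.chunk(x, 2)        # eager / aten path
ref_out = torch._refs.chunk(x, 2)    # reference (prim) path
print(type(eager_out), type(ref_out))  # compare the returned container types
```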
### Versions
master
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
| 0 |
4,706 | 85,425 |
Feature Request: Deterministic Algorithm for MaxPool3d
|
feature, module: nn, good first issue, triaged, module: determinism, module: pooling
|
### 🚀 The feature, motivation and pitch
I am working on an encoder that uses [torch.nn.MaxPool3d](https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html#torch.nn.MaxPool3d).
I was wondering whether a deterministic version could be provided.
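For reference, a minimal sketch of how the missing deterministic path shows up (my assumption of the current behaviour on a CUDA build): with `torch.use_deterministic_algorithms(True)`, the backward pass of `MaxPool3d` is expected to raise a `RuntimeError` about the op having no deterministic implementation.
```python
import torch
import torch.nn as nn

torch.use_deterministic_algorithms(True)

pool = nn.MaxPool3d(kernel_size=2).cuda()
x = torch.randn(1, 1, 8, 8, 8, device="cuda", requires_grad=True)
out = pool(x)
out.sum().backward()  # expected: RuntimeError about a missing deterministic implementation
```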
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @kurtamohler @kshitij12345 @saketh-are
| 6 |
4,707 | 85,397 |
torch.nn.utils.prune.remove reorders the parameters of a module unexpectedly
|
triaged, module: pruning
|
### 🐛 Describe the bug
PyTorch has a pruning tool called `torch.nn.utils.prune`. It includes a function `torch.nn.utils.prune.remove`, which should restore the module's parameter structure back to normal while keeping the effect of the pruning.
Since `nn.Module.state_dict()` returns an **Ordered**Dict, I would expect that there is some internal ordering of the parameters of a module, and this ordering should NOT be changed by pruning (as long as I call `torch.nn.utils.prune.remove` to remove the reparameterization).
However, during some testing, I found that pruning followed by `torch.nn.utils.prune.remove` **does** change the order of the parameters.
This means pruning would break code that depends on this ordering. (I'm trying to prune [this model](https://github.com/openai/guided-diffusion), which depends on the ordering to save the checkpoint during training.)
For example, in the following code, the `weight` and `bias` of this Conv2d module are swapped unexpectedly:
```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
module=nn.Conv2d(1, 6, 3)
print(list(module.state_dict().keys())) #output: ['weight', 'bias']
prune.random_unstructured(module, name="weight", amount=0.3)
prune.remove(module, name="weight")
print(list(module.state_dict().keys())) #output: ['bias', 'weight']
```
Expected output:
```
['weight', 'bias']
['weight', 'bias']
```
Actual output:
```
['weight', 'bias']
['bias', 'weight']
```
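A possible workaround (a sketch, not an official fix): capture the key order before pruning and rebuild the state dict by name instead of relying on iteration order.
```python
import torch.nn as nn
import torch.nn.utils.prune as prune

module = nn.Conv2d(1, 6, 3)
reference_keys = list(module.state_dict().keys())    # ['weight', 'bias'] before pruning

prune.random_unstructured(module, name="weight", amount=0.3)
prune.remove(module, name="weight")

state = module.state_dict()                          # order may now be ['bias', 'weight']
restored = {key: state[key] for key in reference_keys}
print(list(restored.keys()))                         # ['weight', 'bias']
```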
### Versions
```
Collecting environment information...
PyTorch version: 1.9.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.9.1+cu111
[pip3] torchaudio==0.9.1
[pip3] torchvision==0.10.1+cu111
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.1 py38h6c91a56_0
[conda] numpy-base 1.23.1 py38ha15fc14_0
[conda] tensorflow 2.8.2 mkl_py38hb41d75a_0
[conda] tensorflow-base 2.8.2 mkl_py38hf890080_0
[conda] torch 1.9.1+cu111 pypi_0 pypi
[conda] torchaudio 0.9.1 pypi_0 pypi
[conda] torchvision 0.10.1+cu111 pypi_0 pypi
```
| 0 |
4,708 | 85,392 |
please report a bug to PyTorch. Expected Object but got PyObject
|
triaged, module: torchbind
|
### 🐛 Describe the bug
There is an extension in NVIDIA's FasterTransformer whose code looks like this:
https://github.dev/NVIDIA/FasterTransformer/blob/9e26bcd948e5fd59246e84e84225cef0c399c7d4/src/fastertransformer/models/vit/ViT.h#L19
but calling it from Python produces this error:
```
python3.10/site-packages/torch/include/ATen/core/ivalue_inl.h":123, please report a bug to PyTorch. Expected Object but got PyObject
Traceback (most recent call last):
File "/home/infer_vit_op.py", line 92, in run_vitnv_op
vit = torch.classes.VisionTransformer.Class(vit_weights.weights,
RuntimeError: isObject()INTERNAL ASSERT FAILED at "/home/.local/lib/python3.10/site-packages/torch/include/ATen/core/ivalue_inl.h":123, please report a bug to PyTorch. Expected Object but got PyObject
```
https://github.com/NVIDIA/FasterTransformer/issues/321
### Versions
1.13
| 1 |
4,709 | 85,390 |
Please put back missing rocm builds of Torch Vision.
|
oncall: binaries, module: rocm, triaged
|
### 📚 The doc issue
https://pytorch.org/get-started/previous-versions/
Several of the commands in this doc are broken because of missing rocm builds. I have an RX 580 and I can't use the newer versions. I have been trying all kinds of ridiculous workarounds to get it working, but if someone would put the old versions back, I might be able to get it working with supported builds.
### Suggest a potential alternative/fix
Replace the missing builds so the commands will work again.
cc @ezyang @seemethere @malfet @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
| 0 |
4,710 | 85,387 |
very strange speed of torch.bmm with specific tensor shape
|
module: performance, module: cuda, triaged, module: linear algebra
|
### 🐛 Describe the bug
I tested the code below on an A100 GPU.
It outputs:
avg_time: 1.97
avg_time: 0.115
avg_time: 2.09
The speed is much faster when the row number is 256.
I know CUDA performance depends on tensor shape, but the gap should not be this large.
```python
import torch
import time
torch.backends.cudnn.benchmark = True
num_iter = 1000
def run(a, b, half=True):
a = a.cuda()
b = b.cuda()
if half:
a = a.half()
b = b.half()
# cudnn warmup
for _ in range(50):
_ = torch.bmm(a, b)
torch.cuda.synchronize()
t0 = time.time()
for _ in range(num_iter):
_ = torch.bmm(a, b)
torch.cuda.synchronize()
t1 = time.time()
avg_time = (t1 - t0) / num_iter * 1000
print(f'avg_time: {avg_time:.3g}')
return avg_time
if __name__ == '__main__':
print('Pytorch version\t:', torch.__version__)
print('CUDA version\t:', torch.version.cuda)
bs = 128
is_half = True
n = 255
inp0 = torch.randn([bs, n, 1024])
inp1 = torch.randn([bs, 1024, n])
run(inp0, inp1, is_half)
n = 256
inp0 = torch.randn([bs, n, 1024])
inp1 = torch.randn([bs, 1024, n])
run(inp0, inp1, is_half)
n = 257
inp0 = torch.randn([bs, n, 1024])
inp1 = torch.randn([bs, 1024, n])
run(inp0, inp1, is_half)
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 7.3.0
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.2.152
GPU models and configuration:
GPU 0: A100-SXM-80GB
GPU 1: A100-SXM-80GB
GPU 2: A100-SXM-80GB
GPU 3: A100-SXM-80GB
GPU 4: A100-SXM-80GB
GPU 5: A100-SXM-80GB
GPU 6: A100-SXM-80GB
GPU 7: A100-SXM-80GB
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.12.1
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py310h7f8727e_0 defaults
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0 defaults
[conda] mkl_random 1.2.2 py310h00e6091_0 defaults
[conda] numpy 1.23.1 py310h1794996_0 defaults
[conda] numpy-base 1.23.1 py310hcba007f_0 defaults
[conda] pytorch 1.12.1 py3.10_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.12.1 py310_cu113 pytorch
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.13.1 py310_cu113 pytorch
cc @VitalyFedyunin @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 0 |
4,711 | 85,386 |
[FSDP] MixedPrecision, CPUOffload, BackwardPrefetch etc should be documented
|
oncall: distributed, triaged, better-engineering, module: fsdp
|
### 📚 The doc issue
These auxiliary data structures do not show up in the master documentation; this should be fixed before the 1.13 release: https://pytorch.org/docs/master/fsdp.html?highlight=fullyshardeddataparallel#torch.distributed.fsdp.FullyShardedDataParallel
### Suggest a potential alternative/fix
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 2 |
4,712 | 85,377 |
CI fails for test_compare_cpu_nn_functional_embedding_cuda_float32 which is not reproducible locally
|
module: cuda, module: tests, triaged, module: embedding
|
### 🐛 Describe the bug
The PR https://github.com/pytorch/pytorch/pull/85011 adds the test `test_compare_cpu`, which compares GPU and CPU results.
In the CI workflow, `test_compare_cpu` fails for the op `nn.functional.embedding` with this error message:
```
======================================================================
FAIL [0.009s]: test_compare_cpu_nn_functional_embedding_cuda_float32 (__main__.TestCommonCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1950, in wrapper
method(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 378, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 815, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 975, in only_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1283, in wrapper
fn(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_ops.py", line 187, in test_compare_cpu
self.assertEqual(cuda_results, cpu_results)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2412, in assertEqual
assert_equal(
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1093, in assert_equal
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 15 / 45 (33.3%)
Greatest absolute difference: 0.5051435921341181 at index (0, 0, 3) (up to 1e-05 allowed)
Greatest relative difference: 0.9674892692698572 at index (0, 0, 2) (up to 1.3e-06 allowed)
```
However, when I run this on `us.quansight.dev`, I cannot reproduce the error and instead get an unexpected success:
```
================================================== 434 passed, 501 skipped, 47841 deselected, 26 xfailed, 11 warnings in 34.78s ==================================================
(pytorch2) srossross@qgpu1:~/pytorch$ py.test test/test_ops.py -v -k test_compare_cpu_nn_functional_embedding_cuda_float32
============================================================================== test session starts ===============================================================================
platform linux -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0 -- /home/srossross/.conda/envs/pytorch2/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/srossross/pytorch/.hypothesis/examples')
rootdir: /home/srossross/pytorch, configfile: pytest.ini
plugins: hypothesis-6.54.4
collected 48802 items / 48801 deselected / 1 selected
test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_embedding_cuda_float32 FAILED [100%]
==================================================================================== FAILURES ====================================================================================
______________________________________________________ TestCommonCUDA.test_compare_cpu_nn_functional_embedding_cuda_float32 ______________________________________________________
Unexpected success
============================================================================ short test summary info =============================================================================
FAILED test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_embedding_cuda_float32
=============================================================== 1 failed, 48801 deselected, 11 warnings in 15.09s ================================================================
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27
Python version: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-147-generic-x86_64-with-glibc2.27
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Quadro RTX 8000
GPU 1: Quadro RTX 8000
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[conda] numpy 1.21.6 pypi_0 pypi
cc @ngimel @mruberry
| 8 |
4,713 | 85,375 |
Inconsistency between geometric distributions
|
module: distributions, module: molly-guard, triaged
|
### 🐛 Describe the bug
`torch.distributions.Geometric` and `torch.Tensor.geometric_` are inconsistent in whether they start from 0 or 1. The former will most commonly return 0 when sampled from, in effect giving you the number of failures before your first success when running repeated Bernoulli trials. The latter tells you on which Bernoulli trial you got your first success. So the latter is the former plus 1. This could be cause for confusion when working with geometric distributions, but changing it would obviously be a breaking change.
```python
In [119]: import torch
In [120]: d = torch.distributions.Geometric(0.3)
In [121]: d.sample()
Out[121]: tensor(0.)
In [122]: [d.sample().item() for _ in range(10)] # starts at 0
Out[122]: [1.0, 3.0, 2.0, 0.0, 1.0, 0.0, 5.0, 8.0, 1.0, 2.0]
In [125]: torch.zeros(10).geometric_(0.9) # starts at 1, will never give 0
Out[125]: tensor([1., 1., 1., 1., 1., 1., 1., 2., 1., 1.])
```
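A small sketch of the relationship implied above: shifting the `Geometric` samples by one should match `Tensor.geometric_` in distribution (their means are (1-p)/p and 1/p, respectively, so they differ by exactly 1).
```python
import torch

p = 0.3
n = 100_000
failures_before_success = torch.distributions.Geometric(p).sample((n,))   # support {0, 1, ...}
trial_of_first_success = torch.empty(n).geometric_(p)                     # support {1, 2, ...}
print(failures_before_success.mean() + 1, trial_of_first_success.mean())  # both ≈ 1/p ≈ 3.33
```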
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220813
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.9 (default, Apr 13 2022, 08:48:06) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-12.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220813
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.13.0.dev20220813
[pip3] torchinfo==1.7.0
[pip3] torchvision==0.14.0.dev20220813
[conda] Could not collect
```
cc @fritzo @neerajprad @alicanb @nikitaved
| 3 |
4,714 | 85,366 |
More windows for filtering and spectral analysis
|
triaged, module: scipy compatibility
|
### 🚀 The feature, motivation and pitch
#### Introduction
This feature request looks to add more windows for filtering and spectral analysis. I've noticed that we're supporting only the following windows:
- `bartlett_window`
- `blackman_window`
- `hamming_window`
- `hann_window`
- `kaiser_window`
But there are many more windows that are pretty useful, for instance the Dolph-Chebyshev window, which achieves the minimum main-lobe width for a given side-lobe specification.
So, I'm proposing we add the following windows:
- [ ] `chebyshev_window`
- [ ] `taylor_window`
- [x] `cosine_window`
- [x] `gaussian_window`
- [x] `exponential_window`
- [ ] `kbd_window`
- [x] `nuttall_window`
We could add more to the list.
#### Motivation
One of the things I came across is filtering signals when trying to create custom datasets. The window function that I tried to use was Dolph-Chebyshev, but PyTorch does not support it at the moment. Other functions, such as the Kaiser-Bessel derived window, are broadly used in audio signal processing.
### Alternatives
The solutions are quite straightforward.
From the list of proposed windows, the most complex one is the Dolph-Chebyshev window which requires an inverse FFT. But this is easy to apply as the IFFT implementation exists in PyTorch already. This window would require creating CUDA and CPU kernels. I already have an implementation that I could share in a draft PR if this feature is accepted.
The other window functions are quite simple and easy to implement, namely, the cosine window, exponential window, and gaussian window. I also have implementations for these.
A sneak peek of the CPU `chebyshev_window_kernel`:
This calculates the window coefficients in the frequency domain. The IFFT would be applied after the call to the kernel's finished.
```cpp
static void chebyshev_window_kernel(TensorIteratorBase& iter, int64_t window_length, double attenuation) {
AT_DISPATCH_FLOATING_TYPES_AND(kBFloat16, iter.dtype(), "chebyshev_window_cpu", [&](){
const int64_t n = window_length - 1;
const scalar_t beta = static_cast<scalar_t>(std::cosh(1.0 / n * std::acosh(std::pow(10, attenuation / 20.0))));
cpu_kernel(iter, [=](scalar_t a){
auto x = beta * static_cast<scalar_t>(std::cos(c10::pi<double> * a / window_length));
return static_cast<scalar_t>(chebyshev_polynomial_t_forward(x, n) / std::pow(10, attenuation / 20.0));
});
});
}
```
And for the `cosine_window` function, which I believe does not need a dedicated kernel:
```cpp
Tensor cosine_window(
int64_t window_length,
bool periodic,
c10::optional<ScalarType> dtype_opt,
c10::optional<Layout> layout,
c10::optional<Device> device,
c10::optional<bool> pin_memory) {
// See [Note: hacky wrapper removal for TensorOptions]
ScalarType dtype = c10::dtype_or_default(dtype_opt);
TensorOptions options = TensorOptions().dtype(dtype).layout(layout).device(device).pinned_memory(pin_memory);
window_function_checks("cosine_window", options, window_length);
if (window_length == 0) {
return at::empty({0}, options);
}
if (window_length == 1) {
return native::ones({1}, dtype, layout, device, pin_memory);
}
if (periodic) {
window_length += 1;
}
auto window = native::arange(window_length, dtype, layout, device, pin_memory)
.add(0.5).mul_(c10::pi<double>).mul_(static_cast<double>(1.0 / window_length)).sin_();
return periodic ? window.narrow(0, 0, window_length - 1) : window;
}
```
The other window functions would follow the same or a similar approach; a Python-level sketch of one of them is shown below.
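For illustration, a Python-level sketch (not the proposed C++ kernel) of one of the simpler windows, the symmetric Gaussian window, assuming the usual definition w[k] = exp(-0.5 * ((k - (N-1)/2) / std)^2):
```python
import torch

def gaussian_window(window_length: int, std: float = 1.0, dtype=torch.float32) -> torch.Tensor:
    if window_length == 0:
        return torch.empty(0, dtype=dtype)
    # Center the sample indices around (N-1)/2 so the window is symmetric.
    k = torch.arange(window_length, dtype=dtype) - (window_length - 1) / 2.0
    return torch.exp(-0.5 * (k / std) ** 2)

print(gaussian_window(7, std=2.0))  # symmetric, peak of 1.0 at the center
```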
### Additional context
#### References
- Window functions:
- https://sites.google.com/site/stevedtran/course/intro-to-digital-signal-processing/notes2/windowing/type-of-windowing
- Kaiser-Bessel Derived Window:
- Bosi, Marina, and Richard E. Goldberg. Introduction to Digital Audio Coding and Standards. Dordrecht: Kluwer, 2003.
- Dolph-Chebyshev Window:
- https://ccrma.stanford.edu/~jos/sasp/Dolph_Chebyshev_Window.html
- Taylor Window:
- Armin Doerry, "Catalog of Window Taper Functions for
Sidelobe Control", 2017 https://www.researchgate.net/profile/Armin_Doerry/publication/316281181_Catalog_of_Window_Taper_Functions_for_Sidelobe_Control/links/58f92cb2a6fdccb121c9d54d/Catalog-of-Window-Taper-Functions-for-Sidelobe-Control.pdf
- Nuttall Window:
- A. Nuttall, “Some windows with very good sidelobe behavior,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 1, pp. 84-91, Feb 1981. [DOI:10.1109/TASSP.1981.1163506](https://doi.org/10.1109/TASSP.1981.1163506)
| 12 |
4,715 | 85,351 |
Point community docs to master
|
triaged, module: doc infra
|
### 📚 The doc issue
We need to make sure that the docs under [community](https://github.com/pytorch/pytorch/tree/master/docs/source/community) in the left nav or other places point to master.
### Suggest a potential alternative/fix
_No response_
cc @ezyang @zou3519 @holly1238
| 0 |
4,716 | 85,342 |
JIT fuser issues with {ceil,floor,round,trunc}(int8)
|
oncall: jit, NNC
|
### 🐛 Describe the bug
I had to remove the testing for these functions in https://github.com/pytorch/pytorch/pull/85144 as they did not seem to work after those changes.
### Versions
master
| 0 |
4,717 | 85,335 |
functorch aten::scatter_add_ not implemented
|
triaged, module: functorch
|
### 🐛 Describe the bug
Hi 👋🏼,
Thank you for maintaining this amazing project.
When attempting to use `functorch` with `torch-geometric` I encounter the following error. Please let me know if there is any additional information I can provide or anything I can do to help.
Thank you,
Matt
```python
from functorch import combine_state_for_ensemble, vmap
from torch import nn
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data
import torch
NUM_MODELS = 10
INPUT_SIZE = 8
NUM_NODES, NUM_EDGES = 4, 8
# create a model
class Model(nn.Module):
def __init__(self, input_size: int) -> None:
super().__init__()
self.conv1 = GCNConv(input_size, 2, add_self_loops=False).jittable()
def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
return self.conv1(x, edge_index)
# create the data
xs = torch.randn(NUM_MODELS, NUM_NODES, INPUT_SIZE, dtype=torch.float)
edge_indices = torch.randint(0, 3, (NUM_MODELS, 2, NUM_EDGES), dtype=torch.long)
# create functional models
models = [Model(INPUT_SIZE) for _ in range(NUM_MODELS)]
fmodel, params, buffers = combine_state_for_ensemble(models)
# complete a forward pass with the data
res = vmap(fmodel)(params, buffers, xs, edge_indices)
```
Produces the error...
```
(.venv) matthewlemay@Matthews-MacBook-Air ~/G/torch-func [0|1]> python3 run.py
/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/torch_scatter/scatter.py:21: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::scatter_add_. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /Users/runner/work/functorch/functorch/functorch/csrc/BatchedFallback.cpp:85.)
return out.scatter_add_(dim, index, src)
Traceback (most recent call last):
File "/Users/matthewlemay/Github/torch-func/run.py", line 30, in <module>
res = vmap(fmodel)(params, buffers, xs, edge_indices)
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/functorch/_src/vmap.py", line 365, in wrapped
batched_outputs = func(*batched_inputs, **kwargs)
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/functorch/_src/make_functional.py", line 282, in forward
return self.stateless_model(*args, **kwargs)
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/matthewlemay/Github/torch-func/run.py", line 19, in forward
return self.conv1(x, edge_index)
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/var/folders/hh/vh54hqf544n7qf9vxn1lt8_00000gn/T/matthewlemay_pyg/tmp97j0p1uv.py", line 219, in forward
edge_index, edge_weight = gcn_norm( # yapf: disable
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/torch_geometric/nn/conv/gcn_conv.py", line 67, in gcn_norm
deg = scatter_add(edge_weight, idx, dim=0, dim_size=num_nodes)
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/torch_scatter/scatter.py", line 29, in scatter_add
return scatter_sum(src, index, dim, out, dim_size)
File "/Users/matthewlemay/Github/torch-func/.venv/lib/python3.10/site-packages/torch_scatter/scatter.py", line 21, in scatter_sum
return out.scatter_add_(dim, index, src)
RuntimeError: vmap: aten::scatter_add_(self, *extra_args) is not possible because there exists a Tensor `other` in extra_args that has more elements than `self`. This happened due to `other` being vmapped over but `self` not being vmapped over at level 1. Please try to use out-of-place operators instead of aten::scatter_add_. If said operator is being called inside the PyTorch framework, please file a bug report instead.
```
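A reduced sketch of the failure mode and of the workaround hinted at in the error message (my assumption, on functorch 0.2.x): the out-of-place `scatter_add` goes through the vmap fallback and works (possibly with a performance warning), while the in-place `scatter_add_` on an unbatched `self` raises the error above.
```python
import torch
from functorch import vmap

index = torch.zeros(5, dtype=torch.long)

def out_of_place(src):
    out = torch.zeros(5)
    return out.scatter_add(0, index, src)   # out-of-place version is vmap-friendly

def in_place(src):
    out = torch.zeros(5)
    return out.scatter_add_(0, index, src)  # fails under vmap: `out` is not batched

src = torch.randn(10, 5)                    # batch of 10 per-sample (5,) tensors
print(vmap(out_of_place)(src).shape)        # torch.Size([10, 5])
# vmap(in_place)(src)                       # raises the RuntimeError shown above
```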
### Versions
```
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.24.2
Libc version: N/A
Python version: 3.10.6 (main, Aug 30 2022, 05:12:36) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-12.6-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[pip3] torch-cluster==1.6.0
[pip3] torch-geometric==2.1.0.post1
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.15
[pip3] torch-spline-conv==1.2.1
[conda] Could not collect
```
cc @zou3519 @Chillee @samdow @soumith
| 4 |
4,718 | 85,329 |
Crash in `torch.package.PackageExporter`
|
module: crash, triaged, oncall: package/deploy, imported, module: edge cases
|
### 🐛 Describe the bug
Passing invalid input to `f` or `importer` in `torch.package.PackageExporter` can cause a crash with Aborted (core dumped).
## code
```python
import torch
torch.package.PackageExporter(f=1, importer=1)
```
## output
```bash
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): AttributeError: 'int' object has no attribute 'write'
Aborted (core dumped)
```
### Versions
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] torch==1.12.1
[conda] Could not collect
| 1 |
4,719 | 93,680 |
AOT Autograd traces have instability in defining the same Graph
|
triaged, oncall: pt2, module: aotdispatch
|
This might not be an actionable repro yet, as nvFuser still has failures during the TorchBench benchmark `hf_Bert` and the nvprims knob has not yet been added to TorchDynamo to enable prim execution. However, I am presenting enough info to reproduce if you would like to proceed.
You will need Ivan's fork of TorchDynamo: https://github.com/IvanYashchuk/torchdynamo
You can cherry-pick the commit: `c5de8c501f6146955b42e21b6a1ad5f614eca01e`
The command I used:
```
PYTORCH_NVFUSER_DUMP=python_definition python benchmarks/torchbench.py --training -d cuda --fast --nvprims-nvfuser --skip-accuracy-check --only attention_is_all_you_need_pytorch --float32
```
Here are two definitions that are identical except that their inputs are re-ordered. The fusion being captured is an `add` + `layer_norm`.
```
def nvfuser_fusion_id7(fd : FusionDefinition) -> None :
T0 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[True, True, True], dtype=DataType.Float)
T1 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[True, True, True], dtype=DataType.Float)
T2 = fd.define_tensor(symbolic_sizes=[-1], contiguous=[True], dtype=DataType.Float)
T3 = fd.define_tensor(symbolic_sizes=[-1], contiguous=[True], dtype=DataType.Float)
T4 = fd.ops.add(T0, T1)
T5 = fd.ops.broadcast_in_dim(T2, output_shape=[4, 512, 768], broadcast_dims=[2])
T6 = fd.ops.broadcast_in_dim(T3, output_shape=[4, 512, 768], broadcast_dims=[2])
T7, T8 = fd.ops.var_mean(T4, axes=[2], correction=0, keepdim=False)
T9 = fd.ops.broadcast_in_dim(T7, output_shape=[4, 512, 1], broadcast_dims=[0, 1])
T10 = fd.ops.broadcast_in_dim(T8, output_shape=[4, 512, 1], broadcast_dims=[0, 1])
S11 = fd.define_constant(1.00000e-12)
T12 = fd.ops.add(T9, S11)
T13 = fd.ops.broadcast_in_dim(T10, output_shape=[4, 512, 768], broadcast_dims=[0, 1, 2])
T14 = fd.ops.rsqrt(T12)
T15 = fd.ops.sub(T4, T13)
T16 = fd.ops.broadcast_in_dim(T14, output_shape=[4, 512, 768], broadcast_dims=[0, 1, 2])
T17 = fd.ops.mul(T15, T16)
T18 = fd.ops.mul(T17, T5)
T19 = fd.ops.add(T18, T6)
T20 = fd.ops.cast(T19, dtype=DataType.Float)
fd.add_output(T4)
fd.add_output(T10)
fd.add_output(T14)
fd.add_output(T20)
```
```
def nvfuser_fusion_id10(fd : FusionDefinition) -> None :
T0 = fd.define_tensor(symbolic_sizes=[-1], contiguous=[True], dtype=DataType.Float)
T1 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[True, True, True], dtype=DataType.Float)
T2 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[True, True, True], dtype=DataType.Float)
T3 = fd.define_tensor(symbolic_sizes=[-1], contiguous=[True], dtype=DataType.Float)
T4 = fd.ops.broadcast_in_dim(T0, output_shape=[4, 512, 768], broadcast_dims=[2])
T5 = fd.ops.add(T1, T2)
T6 = fd.ops.broadcast_in_dim(T3, output_shape=[4, 512, 768], broadcast_dims=[2])
T7, T8 = fd.ops.var_mean(T5, axes=[2], correction=0, keepdim=False)
T9 = fd.ops.broadcast_in_dim(T7, output_shape=[4, 512, 1], broadcast_dims=[0, 1])
T10 = fd.ops.broadcast_in_dim(T8, output_shape=[4, 512, 1], broadcast_dims=[0, 1])
S11 = fd.define_constant(1.00000e-12)
T12 = fd.ops.add(T9, S11)
T13 = fd.ops.broadcast_in_dim(T10, output_shape=[4, 512, 768], broadcast_dims=[0, 1, 2])
T14 = fd.ops.rsqrt(T12)
T15 = fd.ops.sub(T5, T13)
T16 = fd.ops.broadcast_in_dim(T14, output_shape=[4, 512, 768], broadcast_dims=[0, 1, 2])
T17 = fd.ops.mul(T15, T16)
T18 = fd.ops.mul(T17, T6)
T19 = fd.ops.add(T18, T4)
T20 = fd.ops.cast(T19, dtype=DataType.Float)
fd.add_output(T5)
fd.add_output(T10)
fd.add_output(T14)
fd.add_output(T20)
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,720 | 85,302 |
Remove `TypedStorage` and use only `UntypedStorage`
|
triaged, module: python frontend
|
Now that typed storages have been removed from the C++ side and `torch.UntypedStorage` is in place, we can remove `torch.TypedStorage` and all of its subclasses and just use `UntypedStorage` instead. First, we should add deprecation warnings to all of the methods of `TypedStorage` and its subclasses for at least one full release.
| 0 |
4,721 | 85,300 |
torchrun substitutes host names for IP addresses
|
oncall: distributed, triaged, module: elastic
|
### π Describe the bug
# Observed Behavior
The `torchrun` command takes host arguments. If I specify those as IP addresses, `torchrun` appears to do a reverse lookup and then use the host names it finds for actually making connections. This fails in some environments.
The following demonstrates the reverse lookup. I created a fake host entry for 192.168.1.99 in /etc/hosts and specified the addresses numerically. Nevertheless, `torchrun` looks up the name and uses it for error reporting (and presumably connecting as well).
```
$ ve
(venv) $ sh -x bug
+ torchrun --master_addr 192.168.1.99 --rdzv_endpoint 192.168.1.99:1234 --no_python /bin/env
[W socket.cpp:558] [c10d] The client socket has failed to connect to [fake-host]:1234 (errno: 113 - No route to host).
[W socket.cpp:558] [c10d] The client socket has failed to connect to fake-host:1234 (errno: 113 - No route to host).
[E socket.cpp:610] [c10d] The client socket has failed to connect to any network address of (192.168.1.99, 1234).
```
The upshot is that `torchrun` fails in an environment where `python -m torch.distributed.launcher` works fine.
# Desired Behavior
If the user specifies hosts as IP addresses, `torchrun` should consistently use those IP addresses for both reporting and making connections.
### Versions
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.4 (main, Jun 29 2022, 12:14:53) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-47-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchmore==0.1.0
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 10 |
4,722 | 85,296 |
Have NVIDIA driver and other related dependencies as part of the Linux AMI
|
module: ci, triaged
|
### 🚀 The feature, motivation and pitch
At the moment, the NVIDIA driver and some other related dependencies like nvidia-docker2 are installed in CI because they are not part of the base Amazon Linux AMI that we are using, [ami-096198a0bccc6bad4](https://github.com/fairinternal/pytorch-gha-infra/blob/main/runners/main.tf#L59). Specifically, the step is here:
* https://github.com/pytorch/pytorch/blob/master/.github/workflows/_linux-test.yml#L76-L85
* https://github.com/pytorch/pytorch/blob/master/.github/workflows/_binary-test-linux.yml#L173-L184
AFAIK, there is an ongoing task to host these dependencies on S3 to improve the CI reliability https://github.com/pytorch/pytorch/issues/75703. Nevertheless, it makes sense to eventually have https://github.com/pytorch/pytorch/blob/master/.github/scripts/install_nvidia_utils_linux.sh as part of the Linux AMI we are using for CUDA testing. As an example, Windows AMI already has them.
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra @atalman
| 1 |
4,723 | 85,258 |
nvFuser support for {ceil,floor,round,trunc}(int)
|
triaged, module: nvfuser
|
### 🐛 Describe the bug
In https://github.com/pytorch/pytorch/pull/85144, we added integer support for these functions; in that case they simply return a copy of the input.
For now, I'm just xfailing those tests, but we should add support for this in `nvfuser`.
### Versions
master
| 0 |
4,724 | 85,254 |
FX Graph Mode Quantization - Generate static quantization network
|
oncall: quantization, triaged, fx
|
### 🐛 Describe the bug
## Network code
e4 = self.elu(self.bn4(self.conv4(e3)[:, :, :-1, :].contiguous()))
e5 = self.elu(self.bn5(self.conv5(e4)[:, :, :-1, :].contiguous()))
## Network automatically generated by torch.fx
contiguous_3 = quantize_per_tensor_18.contiguous(); quantize_per_tensor_18 = None
bn4 = self.bn4(contiguous_3); contiguous_3 = None
elu_3 = self.elu(bn4); bn4 = None
conv5 = self.conv5(elu_3)
dequantize_22 = conv5.dequantize(); conv5 = None
getitem_4 = dequantize_22[(slice(None, None, None), slice(None, None, None), slice(None, -1, None), slice(None, None, None))]; dequantize_22 = None
_scale_9 = self._scale_9
_zero_point_9 = self._zero_point_9
quantize_per_tensor_23 = torch.quantize_per_tensor(getitem_4, _scale_9, _zero_point_9, torch.quint8); getitem_4 = _scale_9 = _zero_point_9 = None
contiguous_4 = quantize_per_tensor_23.contiguous(); quantize_per_tensor_23 = None
bn5 = self.bn5(contiguous_4); contiguous_4 = None
dequantize_25 = bn5.dequantize(); bn5 = None
elu_4 = self.elu(dequantize_25); dequantize_25 = None
## Reported error
Traceback (most recent call last):
File "/mnt/raid/userspace/fanhaipeng/anaconda3/envs/torch13e/lib/python3.8/site-packages/torch/fx/graph_module.py", line 277, in __call__
raise e
File "/mnt/raid/userspace/fanhaipeng/anaconda3/envs/torch13e/lib/python3.8/site-packages/torch/fx/graph_module.py", line 267, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/mnt/raid/userspace/fanhaipeng/anaconda3/envs/torch13e/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "<eval_with_key>.7", line 54, in forward
File "/mnt/raid/userspace/fanhaipeng/anaconda3/envs/torch13e/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/raid/userspace/fanhaipeng/anaconda3/envs/torch13e/lib/python3.8/site-packages/torch/ao/nn/quantized/modules/activation.py", line 83, in forward
return torch.ao.nn.quantized.functional.elu(
File "/mnt/raid/userspace/fanhaipeng/anaconda3/envs/torch13e/lib/python3.8/site-packages/torch/ao/nn/quantized/functional.py", line 494, in elu
raise ValueError("Input to 'quantized.elu' must be quantized!")
ValueError: Input to 'quantized.elu' must be quantized!
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220912+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 16.04.6 LTS (x86_64)
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Clang version: 3.8.0-2ubuntu4 (tags/RELEASE_380/final)
CMake version: Could not collect
Libc version: glibc-2.23
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 7.5.17
GPU models and configuration:
GPU 0: TITAN Xp
GPU 1: TITAN Xp
GPU 2: TITAN Xp
GPU 3: TITAN Xp
Nvidia driver version: 455.23.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.5.1.10
/usr/lib/x86_64-linux-gnu/libcudnn.so.6.0.21
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220912+cu113
[pip3] torchaudio==0.13.0.dev20220912+cu113
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.14.0.dev20220912+cu113
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220912+cu113 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220912+cu113 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220912+cu113 pypi_0 pypi
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv @soumith
| 2 |
4,725 | 85,235 |
Add `persistent` option to `nn.Module.buffers`.
|
module: nn, triaged, actionable
|
### 🚀 The feature, motivation and pitch
For convenience, it would be nice if [`nn.Module.buffers`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.buffers) could distinguish between persistent and non-persistent buffers.
```python
def buffers(self, recurse: bool = True, persistent: Optional[bool] = None) -> Iterator[Tensor]:
```
This would yield an iterator over only the persistent buffers (`True`), only the non-persistent buffers (`False`), or both (`None`).
### Alternatives
_No response_
### Additional context
Currently, one either has to make use of the private attribute `nn.Module._non_persistent_buffers_set`, or filter `self.buffers()` via the entries of the `state_dict`. A sketch of such a workaround is shown below.
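A sketch of the workaround (it relies on the private `_non_persistent_buffers_set`, so it may break across releases):
```python
import torch
from torch import nn

def buffers_by_persistence(module: nn.Module, persistent: bool = True):
    # Walk all submodules and filter their buffers by persistence.
    for submodule in module.modules():
        for name, buf in submodule._buffers.items():
            if buf is None:
                continue
            if (name not in submodule._non_persistent_buffers_set) == persistent:
                yield buf

m = nn.BatchNorm1d(4)                                   # running stats are persistent buffers
m.register_buffer("scratch", torch.zeros(1), persistent=False)
print(len(list(buffers_by_persistence(m, True))),       # 3
      len(list(buffers_by_persistence(m, False))))      # 1
```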
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
4,726 | 85,234 |
c10d all_gather aborts with Signal 8 (SIGFPE) when tensor.numel() == 0
|
oncall: distributed, module: c10d
|
### 🐛 Describe the bug
Repro:
Create script "all_gather_test.py" containing:
```
import torch
torch.distributed.init_process_group('gloo')
# works
x = torch.ones((1, 1, 2))
y = [torch.zeros_like(x) for _ in range(torch.distributed.get_world_size())]
torch.distributed.all_gather(y, x)
print(y)
# aborts with
# exitcode : -8 (pid: 98824)
# error_file: <N/A>
# traceback : Signal 8 (SIGFPE) received by PID 98824
x = torch.ones((1, 0, 2))
y = [torch.zeros_like(x) for _ in range(torch.distributed.get_world_size())]
torch.distributed.all_gather(y, x)
print(y)
```
Run with :
`torchrun --nproc_per_node 4 all_gather_test.py`
I would expect the result to be a list of empty tensors; instead, I get process aborts with signal 8 on all ranks.
### Versions
PyTorch nightly, Sep 15, 2022
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
4,727 | 85,231 |
[asan] ctc_loss fails test_make_fx_fake_exhaustive with ASAN
|
module: nn, triaged, module: sanitizers, oncall: pt2, module: fakeTensor, module: ProxyTensor
|
### 🐛 Describe the bug
```
==729==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61f00041847c at pc 0x7f01905b4988 bp 0x7ffc03665d90 sp 0x7ffc03665d88
READ of size 4 at 0x61f00041847c thread T0
#0 0x7f01905b4987 in std::tuple<at::Tensor, at::Tensor> at::native::(anonymous namespace)::ctc_loss_cpu_template<float, (c10::ScalarType)4>(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, long)::'lambda'(long, long)::operator()(long, long) const (/opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so+0xdc2a987)
#1 0x7f018fb23557 in std::function<void (long, long)>::operator()(long, long) const (/opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so+0xd199557)
```
The test fails on CI for PR https://github.com/pytorch/pytorch/pull/84752
Ref: https://github.com/pytorch/pytorch/pull/84752#issuecomment-1243847936
### Versions
[PR ](https://github.com/pytorch/pytorch/pull/84752)
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @saketh-are @SherlockNoMad @soumith
| 2 |
4,728 | 85,230 |
[MPS] load checkpoints gives zero weights when map_location is mps
|
triaged, has workaround, module: correctness (silent), module: mps
|
### 🐛 Describe the bug
When loading a checkpoint of a larger model directly to MPS, the weights come back as zeros; loading to CPU works. I tested it also with a tiny model and there it seems to work, so to replicate you need to download our weights (see code). It may be related to #79384 and #78551. As a workaround, it works to move the model to CPU, set the weights there, and then move everything to MPS (see the sketch after the example). Here is an example to replicate:
```
import torch
import requests
# Download checkpoint file
ckpt='FastSurferVINN_training_state_coronal.pkl'
fileurl='https://b2share.fz-juelich.de/api/files/0114331a-f788-48d2-9d09-f85d7494ed48/FastSurferVINN_training_state_coronal.pkl'
response = requests.get(fileurl, verify=False)
with open(ckpt, 'wb') as f:
f.write(response.content)
# CPU load works:
model_state = torch.load(ckpt, map_location="cpu")
print(model_state["model_state"]["inp_block.bn0.weight"])
# ouput: tensor([2.0432, 1.2577, 4.1133, 7.4062, 3.9921, 1.8011, 2.0956])
# MPS load gives zeros:
model_state = torch.load(ckpt, map_location="mps")
print(model_state["model_state"]["inp_block.bn0.weight"])
#output tensor([0., 0., 0., 0., 0., 0., 0.], device='mps:0')
```
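A sketch of the workaround mentioned above: deserialize the checkpoint on CPU and only then move the tensors to MPS.
```python
# Workaround sketch: load on CPU, then move each state-dict tensor to MPS afterwards.
model_state = torch.load(ckpt, map_location="cpu")
state_on_mps = {k: v.to("mps") for k, v in model_state["model_state"].items()}
print(state_on_mps["inp_block.bn0.weight"])
# expected (same values as the CPU load above, now on mps:0):
# tensor([2.0432, 1.2577, 4.1133, 7.4062, 3.9921, 1.8011, 2.0956], device='mps:0')
```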
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220917
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.0 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Aug 5 2022, 15:21:02) [Clang 14.0.0 (clang-1400.0.29.102)] (64-bit runtime)
Python platform: macOS-13.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==1.13.0.dev20220917
[pip3] torchio==0.18.83
[pip3] torchvision==0.14.0.dev20220916
[conda] Could not collect
cc @kulinseth @albanD
| 4 |
4,729 | 85,229 |
TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function.
|
oncall: jit, triaged
|
### π Describe the bug
Model: https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
My code:
```
# importing module
import sys
sys.path.append('./')
import torch
import torchvision
from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean
model_path = "GFPGANv1.3.pth"
model = GFPGANv1Clean(
out_size=512,
num_style_feat=512,
channel_multiplier=2,
decoder_load_path=None,
fix_decoder=False,
num_mlp=8,
input_is_latent=True,
different_w=True,
narrow=1,
sft_half=True)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
loadnet = torch.load(model_path)
if 'params_ema' in loadnet:
keyname = 'params_ema'
else:
keyname = 'params'
model.load_state_dict(loadnet[keyname], strict=True)
# Switch the model to eval model
model.eval()
model = model.to(device)
example1 = torch.rand(1, 3, 512, 512)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, (example1))
# Save the TorchScript model
traced_script_module.save("gfpganv1_clean_model.pt")
```
The trace check warns that the traced output does not match the corresponding output of the Python function. Detailed error:
Tensor-likes are not close!
Mismatched elements: 784625 / 786432 (99.8%)
Greatest absolute difference: 0.1886063814163208 at index (0, 2, 375, 303) (up to 1e-05 allowed)
Greatest relative difference: 3520.956626506024 at index (0, 1, 238, 193) (up to 1e-05 allowed)
_check_trace(
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 7 Professional
GCC version: (x86_64-posix-seh-rev3, Built by MinGW-W64 project) 12.1.0
Clang version: 14.0.4
CMake version: version 3.24.0-rc1
Libc version: N/A
Python version: 3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:52:53) [MSC v.1927 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-7-6.1.7601-SP1
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorch2caffe==0.1.0
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[pip3] torchviz==0.0.2
[conda] Could not collect
| 1 |
4,730 | 85,227 |
topk returns different results with the same input in cuda and cpu
|
module: cuda, triaged, module: sorting and selection
|
### π Describe the bug
torch.topk() returns different results for the same input on CUDA and CPU. To reproduce the error, my Python script and data are at https://github.com/Tengxu-Sun/Pytorch-Topk-Bug.
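A minimal sketch of the kind of CPU-vs-CUDA comparison involved (random input here; the reported mismatch apparently requires the specific data linked above):
```python
import torch

torch.manual_seed(0)
x = torch.randn(1000, 1000)

# Same input, two devices
vals_cpu, idx_cpu = torch.topk(x, k=10, dim=1)
vals_gpu, idx_gpu = torch.topk(x.cuda(), k=10, dim=1)

print("max |value diff|:", (vals_cpu - vals_gpu.cpu()).abs().max().item())
print("index mismatches:", (idx_cpu != idx_gpu.cpu()).sum().item())
```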
### Versions
pytorch: 1.10.2
cudakit: 11.3
GPU: 2080 Ti
CPU: Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz
According to my tests, PyTorch 1.9, 1.8, 1.7, and 1.6 also show this error.
### Expected behavior
CUDA and CPU return the same result and diff should be 0.
cc @ngimel
| 4 |
4,731 | 85,217 |
Segmentation fault in native_batch_norm
|
module: crash, module: error checking, triaged, module: norms and normalization
|
### π Describe the bug
Segmentation fault in `native_batch_norm`.
### Example to reproduce
```python
import torch
input = torch.full((5, 5,), 1, dtype=torch.float64, requires_grad=False)
weight = torch.full((14, 9, 12, 0, 6, 0, 15, 0, 0, 0,), -1.5e+300, dtype=torch.float64, requires_grad=False)
bias = torch.full((5,), 1, dtype=torch.float64, requires_grad=False)
running_mean = torch.full((0,), 1, dtype=torch.float64, requires_grad=False)
running_var = torch.full((0,), 1, dtype=torch.float64, requires_grad=False)
training = True
momentum = 0
eps = 0
torch.native_batch_norm(input, weight, bias, running_mean, running_var, training, momentum, eps)
```
### Result
```segmentation fault```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using fuzz testing.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-16-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0a0+gitbc2c6ed
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.22.3 py39h7a5d4dd_0
[conda] numpy-base 1.22.3 py39hb8be1f0_0
[conda] torch 1.11.0a0+gitbc2c6ed pypi_0 pypi
| 0 |
4,732 | 85,215 |
Floating point exception in gather gradient computation.
|
module: crash, module: error checking, triaged, module: edge cases
|
### π Describe the bug
Floating point exception in `gather` gradient computation.
### Example to reproduce
```python
import torch
grad = torch.full((2, 0, 0, 6, 6,), 0, dtype=torch.float64, requires_grad=True)
self = torch.full((2, 0, 0, 6, 6,), 0, dtype=torch.float64, requires_grad=True)
dim = 0
index = torch.full((2, 0, 0, 6, 6,), 0, dtype=torch.float64, requires_grad=True)
sparse_grad = True
res = torch.gather(self, dim, index, sparse_grad=sparse_grad)
grad_out = torch.zeros_like(res)
torch.autograd.backward(res, grad_tensors=grad_out)
```
### Result
```floating point exception```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using fuzz testing.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-16-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0a0+gitbc2c6ed
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.22.3 py39h7a5d4dd_0
[conda] numpy-base 1.22.3 py39hb8be1f0_0
[conda] torch 1.11.0a0+gitbc2c6ed pypi_0 pypi
| 0 |
4,733 | 85,214 |
Segmentation fault in mkldnn_reorder_conv2d_weight and mkldnn_reorder_conv3d_weight
|
module: crash, triaged, module: mkldnn, module: edge cases
|
### π Describe the bug
Segmentation fault in `mkldnn_reorder_conv2d_weight` and `mkldnn_reorder_conv3d_weight`.
### Example to reproduce
```python
import torch
self = torch.full((3, 3, 1, 1,), 1, dtype=torch.float32, requires_grad=False).to_mkldnn()
padding = []
stride = [65534, 65534]
dilation = [65534, 65534]
groups = 0
torch._C._nn.mkldnn_reorder_conv2d_weight(self, padding, stride, dilation, groups)
import torch
self = torch.full((32, 3, 3, 3, 3,), 1, dtype=torch.float32, requires_grad=False).to_mkldnn()
padding = []
stride = [1250999896764, 1250999896764, 1250999896764]
dilation = [1250999896764, 1250999896764, 1250999896764]
groups = 0
torch._C._nn.mkldnn_reorder_conv3d_weight(self, padding, stride, dilation, groups)
```
### Result
```segmentation fault```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using fuzz testing.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-16-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0a0+gitbc2c6ed
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.22.3 py39h7a5d4dd_0
[conda] numpy-base 1.22.3 py39hb8be1f0_0
[conda] torch 1.11.0a0+gitbc2c6ed pypi_0 pypi
cc @ezyang @gchanan @zou3519 @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @VitalyFedyunin
| 0 |
4,734 | 85,211 |
OSError libstdc++.so.6 at import
|
needs reproduction, oncall: binaries
|
### π Describe the bug
I have a fresh virtual environment with pytorch installed from `pip install -r requirements.txt` which contain
```
tensorly
numpy
torch
torchvision
torchtyping
scipy
opt_einsum
matplotlib
tqdm
prettytable
antares
numba
-e .
```
I then run a simple `import torch` which gives the following output:
```
python3
Python 3.10.6 (main, Aug 1 2022, 20:38:21) [GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sebwood/workspace/MemSE/.env/lib/python3.10/site-packages/torch/__init__.py", line 201, in <module>
_load_global_deps()
File "/home/sebwood/workspace/MemSE/.env/lib/python3.10/site-packages/torch/__init__.py", line 154, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/home/linuxbrew/.linuxbrew/opt/python@3.10/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libstdc++.so.6: cannot open shared object file: No such file or directory
```
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, Aug 1 2022, 20:38:21) [GCC 5.4.0 20160609] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 SUPER
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.13.1
[conda] Could not collect
```
cc @ezyang @seemethere @malfet
| 2 |
4,735 | 85,201 |
performance between manually created graph and CUDAGraph.replay
|
triaged, module: cuda graphs
|
### π Describe the bug
I manually designed a simple neural network and made some experiments as follows:

The class **_ManuallyCreatedGraph_** is one I wrote in the source code; it takes a captured graph, calls the CUDA runtime API **_cudaGraphAddChildGraphNode_** to construct a new CUDA graph, and launches it.
I found that the duration with **_torch.cuda.CUDAGraph.replay_** is noticeably shorter than that of **_ManuallyCreatedGraph_**, which confuses me.
Could you give some suggestions? Thanks in advance!
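For reference, a sketch of the `torch.cuda.CUDAGraph` capture + replay path being compared against (the `ManuallyCreatedGraph` class above is custom code, so only the `replay()` side is shown; the tiny model here is just a placeholder):
```python
import time
import torch

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU()).cuda()
x = torch.randn(64, 512, device="cuda")

# Warm-up on a side stream before capture, as the CUDA graphs docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        y = model(x)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    y = model(x)

torch.cuda.synchronize()
start = time.time()
for _ in range(100):
    g.replay()
torch.cuda.synchronize()
print("avg replay time:", (time.time() - start) / 100)
```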
### Versions
pytorch version : 1.12.0a0+git818477c
cuda version : 11.7
cc @mcarilli @ezyang
| 8 |
4,736 | 93,677 |
pytorch core test failure: RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
|
triaged, oncall: pt2, module: aotdispatch
|
Repro:
cherry-pick https://github.com/pytorch/pytorch/pull/85183
```
PYTORCH_TEST_WITH_INDUCTOR=1 pytest test/distributions/test_distributions.py -k test_pareto
```
Error:
```
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/test/distributions/test_distributions.py", line 2556, in test_pareto
self._check_log_prob(Pareto(scale, alpha), ref_log_prob)
File "/fsx/users/binbao/pytorch/test/distributions/test_distributions.py", line 837, in _check_log_prob
asset_fn(i, val.squeeze(), log_prob)
File "/fsx/users/binbao/pytorch/test/distributions/test_distributions.py", line 2550, in ref_log_prob
def ref_log_prob(idx, x, log_prob):
File "/fsx/users/binbao/conda/envs/release/lib/python3.9/site-packages/scipy/stats/_distn_infrastructure.py", line 1884, in logpdf
def logpdf(self, x, *args, **kwds):
File "/fsx/users/binbao/conda/envs/release/lib/python3.9/site-packages/scipy/stats/_distn_infrastructure.py", line 1907, in <graph break in logpdf>
args, loc, scale = self._parse_args(*args, **kwds)
File "/fsx/users/binbao/conda/envs/release/lib/python3.9/site-packages/scipy/stats/_distn_infrastructure.py", line 1908, in <graph break in logpdf>
x, loc, scale = map(asarray, (x, loc, scale))
File "/fsx/users/binbao/pytorch/torch/_tensor.py", line 945, in __array__
return self.numpy()
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
----------------------------------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------------------------------
ERROR torchdynamo.convert_frame:convert_frame.py:357 WON'T CONVERT test_pareto /fsx/users/binbao/pytorch/test/distributions/test_distributions.py line 2535
due to:
Traceback (most recent call last):
File "/fsx/users/binbao/torchdynamo-tip/torchdynamo/variables/tensor.py", line 266, in create
assert (
AssertionError: torch.* op returned non-Tensor dtype call_function <built-in function get_default_dtype>
from user code:
File "/fsx/users/binbao/pytorch/test/distributions/test_distributions.py", line 2541, in test_pareto
self.assertEqual(Pareto(scale_1d, 0.5).mean, inf)
Set torchdynamo.config.verbose=True for more information
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
4,737 | 85,157 |
NestedTensor 2.0 issue tracking
|
triaged, module: nestedtensor
|
# Summary
This is used as a top level doc to track all 1.13 issues for nestedtensors
# Issues:
- [x] Complete #83775 (move`torch.nested_tensor`)
- [x] Improve documentation in torch.nested, document op coverage https://github.com/pytorch/pytorch/pull/85186
- [x] Improve perf of NT matmul #85064 https://github.com/pytorch/pytorch/pull/85311
- [x] Improve perf of NT bmm ~https://github.com/pytorch/pytorch/pull/85441~ https://github.com/pytorch/pytorch/pull/85894
- [x] Add fastpath for matmul using bmm https://github.com/pytorch/pytorch/pull/85857
- [x] ~Improve perf of NT softmax https://github.com/pytorch/pytorch/pull/85441~ approach seemed to regress for bmm
- [x] https://github.com/pytorch/pytorch/issues/85376 https://github.com/pytorch/pytorch/pull/85593
- [x] Only allow one "-1" in reshape https://github.com/pytorch/pytorch/pull/85691
- [x] ~[Tutorial](https://pytorch.org/tutorials/prototype/nestedtensor) display issues~
- [x] The [NestedTensor Tutorial](https://github.com/pytorch/tutorials/blob/main/prototype_source/nestedtensor.py) has been updated to have more reasonable baseline to compare against and utilizes _sdpa under the hood through nn.mha.
- [ ] #91508
# Non Blocking issues:
- [ ] #91471
# Push
- [ ] Do we want to expose `from_padded_tensor` or potentially `torch._nested_from_padded_and_nested_example`?
cc @cpuhrsch @jbschlosser @bhosmer @mikaylagawarecki
| 4 |
4,738 | 85,149 |
Pytorch on iOS (iPhone X & XR) throwing can't allocate memory exception. Ref Logs:
|
oncall: mobile, module: ios
|
### π Describe the bug
The following operation failed in the TorchScript interpreter.
```
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/ai360_net.py", line 34, in forward
    confidences = []
    locations = []
    x0 = (self.base_net).forward(x, )
         ~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _1 = self.extras
    _2 = getattr(_1, "0")
  File "code/__torch__/torch/nn/modules/container/___torch_mangle_189.py", line 47, in forward
    input0 = (_0).forward(input, )
    input1 = (_1).forward(input0, )
    input2 = (_2).forward(input1, )
             ~~~~~~~~~~~ <--- HERE
    input3 = (_3).forward(input2, )
    input4 = (_4).forward(input3, )
  File "code/__torch__/ai360_net/___torch_mangle_16.py", line 13, in forward
      _0 = torch.add(x, (self.conv).forward(x, ), alpha=1)
    else:
      _0 = (self.conv).forward(x, )
           ~~~~~~~~~~~~~~~~~~ <--- HERE
    return _0
  File "code/__torch__/torch/nn/modules/container/___torch_mangle_15.py", line 23, in forward
    _6 = getattr(self, "6")
    _7 = getattr(self, "7")
    input0 = (_0).forward(input, )
             ~~~~~~~~~~~ <--- HERE
    input1 = (_1).forward(input0, )
    input2 = (_2).forward(input1, )
  File "code/__torch__/torch/nn/modules/conv/___torch_mangle_10.py", line 9, in forward
  def forward(self: __torch__.torch.nn.modules.conv.___torch_mangle_10.Conv2d,
      input: Tensor) -> Tensor:
    _0 = (self).conv2d_forward(input, self.weight, )
         ~~~~~~~~~~~~~~~~~~~~ <--- HERE
    return _0
  def conv2d_forward(self: __torch__.torch.nn.modules.conv.___torch_mangle_10.Conv2d,
  File "code/__torch__/torch/nn/modules/conv/___torch_mangle_10.py", line 29, in conv2d_forward
      _11 = _4
    else:
      _11 = torch.conv2d(input, weight, None, [1, 1], [0, 0], [1, 1], 1)
            ~~~~~~~~~~~~ <--- HERE
    return _11

Traceback of TorchScript, original code (most recent call last):
  File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 345, in forward
    def forward(self, input):
        return self.conv2d_forward(input, self.weight)
               ~~~~~~~~~~~~~~~~~~~ <--- HERE
  File "/home/tony/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 341, in conv2d_forward
                        weight, self.bias, self.stride,
                        _pair(0), self.dilation, self.groups)
        return F.conv2d(input, weight, self.bias, self.stride,
               ~~~~~~~~ <--- HERE
                        self.padding, self.dilation, self.groups)
```
**RuntimeError: [enforce fail at CPUAllocator.cpp:68] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 4860032 bytes. Error code 12 (Cannot allocate memory)**
### Versions
Pytorch 3.0
| 1 |
4,739 | 85,147 |
torch::quantile performance?
|
module: performance, triaged
|
### π Describe the bug
Computing quantiles is ~10X slower than expected
```python
import time
import torch
import numpy as np
t = torch.rand(1_000_000)
a = t.numpy()
start = time.time()
t.median()
end = time.time()
print("torch median", end - start)
start = time.time()
t.quantile(torch.tensor([0.25, 0.75]))
end = time.time()
print("torch quant ", end - start)
start = time.time()
np.median(a)
end = time.time()
print("numpy median", end - start)
start = time.time()
np.quantile(a, [0.25, 0.75])
end = time.time()
print("numpy quant ", end - start)
```
```
torch median 0.013309478759765625
torch quant 0.10049819946289062
numpy median 0.012769222259521484
numpy quant 0.014006376266479492
```
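For what it's worth, a more robust way to time this kind of comparison is `torch.utils.benchmark`, which handles warm-up and repeated runs; a minimal sketch:
```python
import torch
import torch.utils.benchmark as benchmark

t = torch.rand(1_000_000)
q = torch.tensor([0.25, 0.75])

# blocked_autorange() runs the statement enough times to get a stable estimate.
timer = benchmark.Timer("t.quantile(q)", globals={"t": t, "q": q})
print(timer.blocked_autorange())
```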
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
Nvidia driver version: 495.46
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.12.1+cu113
[conda] Could not collect
```
cc @VitalyFedyunin @ngimel
| 0 |
4,740 | 85,127 |
[ONNX] Using values from a different tensor to index a tensor returns a tensor with incorrect shape in exported ONNX model
|
oncall: jit, module: onnx, triaged, bug
|
### π Describe the bug
# Bug Description
**Note: The complete code for reproducing the issue is at the very bottom.**
**PyTorch Version: 1.12.1 (installed using the following command)**
```
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```
When I use the values of a different tensor to index a tensor, the returned result has the incorrect shape. For example, for the following two tensors:
```
input = torch.tensor([[[0, 4, 2],
[3, 0, 7]],
[[0, 4, 6],
[3, 7, 5]]], dtype=torch.int32) # shape: (2, 2, 3)
indices = torch.tensor([[2, 0, 1]], dtype=torch.int64) # shape: (1, 3)
```
the PyTorch expression `input[:, :, indices[0, 0]]` returns a tensor with the shape of (2, 2). However, when I export the TorchScript of this expression into ONNX model, the ONNX model's inference output has the shape of (2, 2, 1). The `SimpleFailureModel` module in the code at the very bottom is a minimal example that is used to reproduced this issue.
This issue may seem trivial and may seem to have limited impact, but when it's used in conjunction with other functions or operators, it could trigger consequential runtime issues during ONNX inference. For example, if I were to use `torch.cat()` on `input[:, :, indices[0, 0]]`, `input[:, :, indices[0, 1]]`, `input[:, :, indices[0, 2]]` to concatenate the three tensors along `dim=-1` (full expression being `torch.cat((input[:, :, indices[0, 0]], input[:, :, indices[0, 1]], input[:, :, indices[0, 2]]), dim=-1)`), the correct shape of the returned tensor in PyTorch should be (2, 6), and ONNX Runtime, on the other hand, would return a tensor with the incorrect shape of (2, 2, 3). And now if I were to access the element at indices `[0, 4]` in the resulting concatenated tensor, PyTorch would return perfectly valid results while ONNX Runtime would return an "indices element out of data bounds" runtime error during inference because in ONNX the concatenated result **only has size of 2 in axis-1** and we are trying to access the element at index 0 in axis-0 and **index 4 in axis-1**. This is the direct result of the incorrect shape issues mentioned earlier when we try to obtain a new tensor by using the values of a different to index a tensor. More specifically, the error message in this example case is the following:
```
[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'Gather_19' Status Message: indices element out of data bounds, idx=4 must be within the inclusive range [-2,1]
```
The `RuntimeFailureModel` module in the code at the very bottom is an example that is used to reproduced this runtime error and illustrate how the indexing issue mentioned in earlier can have consequential impact.
# Expected Behavior
The inference output tensor of the exported ONNX model from TorchScript should have the same shape as the one in PyTorch when we use values from a different tensor to index a tensor.
# Interesting Observations On Previous PyTorch Releases
**For experiments on previous PyTorch versions, I used the wheel files from [https://download.pytorch.org/whl/torch/](https://download.pytorch.org/whl/torch/)**
I also tried running the same export and inference code on PyTorch `1.10.0`, `1.10.1`, `1.10.2`, `1.11.0`, `1.12.0`, `1.12.1`, and they all exhibit the same erroneous behavior. However, there's a difference between versions before and after `1.12`. In the `SimpleFailureModel` example, ONNX models exported from TorchScript using PyTorch versions before `1.12` will have the returned shape of `int32[Gatheroutput_dim_0,Gatheroutput_dim_1,Gatheroutput_dim_2]` (visualized in Netron); however, starting from `1.12`, the exported ONNX model will have the returned shape of `int32[input_dynamic_axes_1,input_dynamic_axes_2,1]`. Even though all the versions listed above exhibit the incorrect behavior, the problematic effects of the ones from versions before `1.12` tend to be "hidden" when it's used in conjunction with a large number of other operators and functions. But still, the runtime error in `RuntimeFailureModel` is encountered and can be reproduced in all versions listed above. For your convenience, I have also attached the exported ONNX files (both `SimpleFailureModel` and `RuntimeFailureModel`) from TorchScript using all the listed versions above.
# Complete Code
```python
import torch
import torch.nn as nn
import numpy as np
import onnx
import onnxruntime as ort
from typing import List
# A simple and minimal model for reproducing the issue
class SimpleFailureModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, indices):
return x[:, :, indices[0, 0]]
# A more complex model demonstrating how the errorneous export (from torch.jit.script to ONNX) issue
# shown in "SimpleFailureModel" above may trigger issues (e.g. "indices element out of data bounds")
# during inference.
class RuntimeFailureModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, indices):
a = x[:, :, indices[0, 0]]
b = x[:, :, indices[0, 1]]
c = x[:, :, indices[0, 2]]
cat = torch.cat((a, b, c), dim=-1)
return cat[0, 4]
TORCH_VERSION = "torch_1.12.1"
SIMPLE_FAILURE_ONNX_PATH = "SimpleFailure_" + TORCH_VERSION + ".onnx"
RUNTIME_FAILURE_ONNX_PATH = "RuntimeFailure_" + TORCH_VERSION + ".onnx"
input = torch.tensor([[[0, 4, 2],
[3, 0, 7]],
[[0, 4, 6],
[3, 7, 5]]], dtype=torch.int32) # shape: (2, 2, 3)
print("input.size():", input.size())
indices = torch.tensor([[2, 0, 1]], dtype=torch.int64) # shape: (1, 3)
print("indices.size():", indices.size())
simple_failure_model = SimpleFailureModel()
simple_failure_model_script = torch.jit.script(simple_failure_model)
torch.onnx.export(simple_failure_model_script, # the model's script module
(input, indices), # model input (or a tuple for multiple inputs)
SIMPLE_FAILURE_ONNX_PATH, # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=14, # the ONNX version to export the model to
input_names = ["input", "indices"], # the model's input names
output_names = ["output"], # the model's output names
dynamic_axes={"input": [0, 1, 2],
"indices": [0, 1]},
do_constant_folding=True)
runtime_failure_model = RuntimeFailureModel()
runtime_failure_model_script = torch.jit.script(runtime_failure_model)
torch.onnx.export(runtime_failure_model_script, # the model's script module
(input, indices), # model input (or a tuple for multiple inputs)
RUNTIME_FAILURE_ONNX_PATH, # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=14, # the ONNX version to export the model to
input_names = ["input", "indices"], # the model's input names
output_names = ["output"], # the model's output names
dynamic_axes={"input": [0, 1, 2],
"indices": [0, 1]},
do_constant_folding=True)
EP_LIST = ["CPUExecutionProvider"] # A list of ONNX execution providers
# SimpleFailureModel Inference
onnx.checker.check_model(onnx.load(SIMPLE_FAILURE_ONNX_PATH))
simple_failure_model_ort_session = ort.InferenceSession(SIMPLE_FAILURE_ONNX_PATH, providers=EP_LIST)
output_exp = simple_failure_model(input, indices)
output_act = simple_failure_model_ort_session.run(None, {"input": input.numpy(), "indices": indices.numpy()})[0]
print("---------- Expected v.s. Actual (SimpleFailureModel) ----------")
print("output_exp.size():", output_exp.size())
print("output_exp.dtype:", output_exp.dtype)
# print("output_exp:", output_exp)
print("output_act.shape:", output_act.shape)
print("output_act.dtype:", output_act.dtype)
# print("output_act:", output_act)
# RuntimeFailureModel Inference
onnx.checker.check_model(onnx.load(RUNTIME_FAILURE_ONNX_PATH))
runtime_failure_model_ort_session = ort.InferenceSession(RUNTIME_FAILURE_ONNX_PATH, providers=EP_LIST)
output_exp = runtime_failure_model(input, indices)
output_act = runtime_failure_model_ort_session.run(None, {"input": input.numpy(), "indices": indices.numpy()})[0]
print("---------- Expected v.s. Actual (RuntimeFailureModel) ----------")
print("output_exp.size():", output_exp.size())
print("output_exp.dtype:", output_exp.dtype)
# print("output_exp:", output_exp)
print("output_act.shape:", output_act.shape)
print("output_act.dtype:", output_act.dtype)
# print("output_act:", output_act)
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.9.12 (tags/v3.9.12:b28265d, Mar 23 2022, 23:52:46) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 516.59
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.961
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] Could not collect
| 16 |
4,741 | 85,122 |
PT Dispatcher confusing error message "There were no tensor arguments to this function"
|
triaged, module: dispatch
|
### π Describe the bug
I am utilizing the dispatcher to call an operator that I implemented
```c++
c10::intrusive_ptr<Work> allgather(
const c10::intrusive_ptr<ProcessGroup>& process_group,
const std::vector<std::vector<at::Tensor>>& output_tensors,
const std::vector<at::Tensor>& input_tensors,
const AllgatherOptions& opts) {
static auto op = c10::Dispatcher::singleton()
.findSchemaOrThrow("c10d::allgather_", "")
.typed<std::tuple<
std::vector<std::vector<at::Tensor>>,
c10::intrusive_ptr<Work>>(
const std::vector<std::vector<at::Tensor>>&,
const std::vector<at::Tensor>&,
const c10::intrusive_ptr<::c10d::ProcessGroup>&,
int64_t)>();
return std::get<1>(op.call(
output_tensors, input_tensors, process_group, opts.timeout.count()));
}
```
and hitting an error at `op.call()`
```
NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema c10d::allgather_. This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function. Available functions are [CPU, CUDA, BackendSelect, Python, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, Tracer, AutocastCPU, AutocastCUDA, Batched, VmapMode, PythonTLSSnapshot].
```
I'm not sure how to reproduce this issue on current operators. Here is a small PR that you can check out and run to reproduce the error: https://github.com/pytorch/pytorch/pull/85115
### Versions
trunk
| 0 |
4,742 | 85,121 |
make_traced() doesn't respect setting the seed
|
module: tests, triaged, module: primTorch
|
One immediate impact of this is that `test_python_ref_executor` won't work for any ops which use `wrapper_set_seed()`.
cc @ezyang @ngimel @Lezcano @fdrocha
| 0 |
4,743 | 93,675 |
Need operator fallback stats
|
triaged, oncall: pt2
|
In our internal benchmark enablement work, we had to set `.debug=True` and eyeball the logs to know whether inductor or nvfuser is running. If we had stats on which ops fall back to eager vs. which ops run with inductor (or nvfuser), it would be much easier to tell whether the desired backend is being used, which would be super helpful.
Format-wise it could be (just one idea):
| op_name   | fallback to eager |
| --------- | ----------------- |
| aten::mm  | no                |
| aten::cat | yes               |
Best if we can show the source line of the op (where it is in the original module file), but without it is fine too.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 5 |
4,744 | 85,116 |
[ONNX][bug] `nn.Transformer` contains unsupported tensor scalar type
|
module: onnx, triaged, onnx-triaged, bug
|
### π Describe the bug
PyTorch fails to export a model containing an `nn.Transformer` module. It fails with `RuntimeError: unexpected tensor scalar type`. Here's a minimal repro script:
```python
import torch
import torch.nn as nn
class M(nn.Module):
def __init__(self):
super().__init__()
self.transformer = nn.Transformer(d_model=128)
def forward(self, x, y):
return self.transformer(x, y)
module = M()
module = torch.jit.script(module)
x = torch.randn([10, 1, 128])
y = torch.randn([10, 1, 128])
dummy_input = (x, y)
torch.onnx.export(module, dummy_input, 'test.onnx', verbose=True, opset_version=14)
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.19.6
Libc version: glibc-2.27
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.3.58
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 465.19.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.942
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] pytest-mypy==0.9.1
[pip3] torch==1.12.1+cu113
[pip3] torch-tb-profiler==0.3.1
[pip3] torch-tensorrt==1.1.0
[pip3] torchtext==0.11.0a0+d697db5
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] magma-cuda112 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h8d4b97c_729 conda-forge
[conda] mkl-include 2021.3.0 h06a4308_520
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.12.1+cu113 pypi_0 pypi
[conda] torch-tb-profiler 0.3.1 pypi_0 pypi
[conda] torch-tensorrt 1.1.0 pypi_0 pypi
[conda] torchtext 0.11.0a0+d697db5 pypi_0 pypi
```
| 11 |
4,745 | 93,674 |
Need easier way to tell which step of the optimized path fails (dynamo + inductor)
|
triaged, oncall: pt2, module: dynamo
|
During internal benchmark enablement, there have been multiple times where Dynamo silently stops generating any graph and never calls Inductor, yet the workflow doesn't throw any error because Dynamo is (rightfully) "optional" with graceful eager fallback.
I learned from Michael Lazos that he plans to add operator stats showing which ops fall back to eager vs. use Inductor. But I believe, more importantly, we need an easier way to tell which step of the compilation workflow has the issue that causes the fallback, so that on the performance-engineering side we can go in and figure out why the fallback happens.
Specifically it would be great to have:
1. A mode where we always error on fallback (maybe some ops only have a fallback? In that case we could have a whitelist for those ops)
2. We want a way to know which step failed. The logs are currently a bit long, and I don't know exactly what they mean in terms of what fails. If we could have "Step 1: XYZ, Step 2: XYZ, Step 3: XYZ, failed!" printed in the log, it would be easier to know that it's Step 3 that failed.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 4 |
4,746 | 85,113 |
[ONNX] Produce error message for incorrect number of dummy inputs instead of Internal assert failure
|
needs reproduction, module: onnx, triaged, onnx-triaged
|
### π Describe the bug
I've included a minimal repro script below. The key point to observe is that the module expects two arguments, `x` and `y`, whereas we provide only `x` as the dummy input to the ONNX export call.
Instead of an internal assertion failure, I expect to see an error message stating that I provided an incorrect number of dummy inputs to `torch.onnx.export`.
```python
import torch
import torch.nn as nn
class M(nn.Module):
def __init__(self):
super().__init__()
self.transformer = nn.Transformer(
d_model=128,
nhead=8,
num_encoder_layers=6,
num_decoder_layers=6,
dim_feedforward=4*128,
dropout=0.1)
def forward(self, x, y):
output = self.transformer(
x, y,
self.transformer.generate_square_subsequent_mask(x.shape[0]),
self.transformer.generate_square_subsequent_mask(y.shape[0]),
None, # memory mask
None, # src padding mask
None, # dest padding mask
None) # memory padding mask
return output
module = M()
module = torch.jit.script(module)
x = torch.randn([1, 10, 128])
dummy_input = x
torch.onnx.export(module, dummy_input, 'test.onnx', verbose=True)
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.19.6
Libc version: glibc-2.27
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.3.58
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 465.19.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.942
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] pytest-mypy==0.9.1
[pip3] torch==1.12.1+cu113
[pip3] torch-tb-profiler==0.3.1
[pip3] torch-tensorrt==1.1.0
[pip3] torchtext==0.11.0a0+d697db5
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] magma-cuda112 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h8d4b97c_729 conda-forge
[conda] mkl-include 2021.3.0 h06a4308_520
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.12.1+cu113 pypi_0 pypi
[conda] torch-tb-profiler 0.3.1 pypi_0 pypi
[conda] torch-tensorrt 1.1.0 pypi_0 pypi
[conda] torchtext 0.11.0a0+d697db5 pypi_0 pypi
```
| 2 |
4,747 | 93,673 |
Minifier improvements/consistency
|
triaged, oncall: pt2, module: minifier
|
- [x] Add tests
- [x] TORCHDYNAMO_REPRO_AFTER="dynamo" should also write to `minifier_launcher.py` similar to "aot" counterpart.
- [x] Extending accuracy testing for REPRO_AFTER="aot"
- [x] Consistent usage of REPRO_LEVEL between "dynamo" and "aot"
- [x] Better code sharing wherever possible
- [x] Enable seg fault repro as well for "Dynamo". It already exists for "aot"
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
4,748 | 85,105 |
Python dispatch for PyOps needs to respect tensor subclasses
|
triaged, module: dispatch, module: __torch_dispatch__
|
### π Describe the bug
.
### Versions
.
cc @Chillee @ezyang @zou3519 @albanD @samdow
| 0 |
4,749 | 85,098 |
install libtorch cxx11 ABI as default in PyTorch pip installation
|
oncall: binaries, triaged
|
### π The feature, motivation and pitch
Existing ["pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu"](https://github.com/awslabs/multi-model-server/blob/c58e29b29faf8e45abcb73088b011f86b40845ef/docs/batch_inference_with_mms.md#L1) installs libtorch Pre-cxx11 ABI on linux.
TorchServe cpp uses folly, whose build requires CXX11_ABI on Linux. So TorchServe is not able to use the libtorch pre-cxx11 ABI from the pip installation on Linux. It has to [install libtorch again](https://github.com/pytorch/serve/blob/cpp_backend_lxning/cpp/build.sh#L160) on Linux; otherwise, the build gets undefined reference/unresolved symbol errors.
However, there is no problem on Mac. For example, TorchServe [uses libtorch](https://github.com/pytorch/serve/blob/cpp_backend_lxning/cpp/build.sh#L238) from "pip3 install torch torchvision torchaudio".
Is it possible to install libtorch cxx11 ABI as default in PyTorch pip installation on linux?
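For reference, a quick way to check which ABI a given wheel was built with:
```python
import torch

# True  -> libtorch built with the cxx11 ABI
# False -> pre-cxx11 ABI (what the Linux pip wheels above currently ship)
print(torch.compiled_with_cxx11_abi())
```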
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @osalpekar @atalman
| 1 |
4,750 | 93,672 |
[ddp] must set `static_graph=False` when running with dynamo
|
triaged, module: ddp, oncall: pt2
|
[static_graph](https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel) docs from the pytorch docs:
> When set to True, DDP knows the trained graph is static. Static graph means 1) The set of used and unused parameters will not change during the whole training loop; in this case, it does not matter whether users set find_unused_parameters = True or not. 2) How the graph is trained will not change during the whole training loop (meaning there is no control flow depending on iterations). When static_graph is set to be True, DDP will support cases that can not be supported in the past: 1) Reentrant backwards. 2) Activation checkpointing multiple times. 3) Activation checkpointing when model has unused parameters. 4) There are model parameters that are outside of forward function. 5) Potentially improve performance when there are unused parameters, as DDP will not search graph in each iteraton to detect unused parameters when static_graph is set to be True. To check whether you can set static_graph to be True, one way is to check ddp logging data at the end of your previous model training, if ddp_logging_data.get("can_set_static_graph") == True, mostly you can set static_graph = True as well.
When training resnet50 with eager + DDP on torchbench, we can set `static_graph=True`.
But when training with torchdynamo + inductor + DDP, we need to set `static_graph=False`, otherwise we get this error:
```
Traceback (most recent call last):
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/submitit/core/_submit.py", line 11, in <module>
submitit_main()
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/submitit/core/submission.py", line 72, in submitit_main
process_job(args.folder)
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/submitit/core/submission.py", line 65, in process_job
raise error
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/submitit/core/submission.py", line 54, in process_job
result = delayed.result()
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/submitit/core/utils.py", line 133, in result
self._result = self.function(*self.args, **self.kwargs)
File "ddp_experiments.py", line 151, in __call__
return trainer_class(self.args, model_class, model_args=self.model_args).measure()
File "/fsx/users/dberard/scratch-local/bench-fast/benchmark/torchbenchmark/util/distributed/core_model/trainer.py", line 79, in measure
self.benchmark.invoke()
File "/fsx/users/dberard/scratch-local/bench-fast/benchmark/torchbenchmark/util/model.py", line 190, in invoke
self.train()
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/torchdynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "/fsx/users/dberard/scratch-local/bench-fast/benchmark/torchbenchmark/util/framework/vision/model_factory.py", line 75, in train
def train(self):
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/functorch/_src/monkey_patching.py", line 77, in _backward
return _old_backward(*args, **kwargs)
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/torch/_tensor.py", line 484, in backward
torch.autograd.backward(
File "/data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/torch/autograd/__init__.py", line 191, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Your training graph has changed in this iteration, e.g., one parameter is unused in first iteration, but then got used in the second iteration. this is not compatible with static_graph set to True.
Exception raised from autograd_hook at /scratch/dberard/bench-fast/pytorch/torch/csrc/distributed/c10d/reducer.cpp:668 (most recent call first):
frame #0: <unknown function> + 0x104595 (0x7fc383030595 in /data/home/dberard/miniconda/envs/bench-fast/lib/python3.8/site-packages/torch/lib/libc10.so)
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
4,751 | 85,096 |
Update `use_deterministic_algorithms` documentation and tests to include `nn.functional` counterparts for all `nn` modules
|
module: docs, module: tests, triaged, module: determinism
|
### π The doc issue
The `nn.functional` counterparts for all of the `nn` modules mentioned in the [`use_deterministic_algorithms` documentation](https://pytorch.org/docs/1.12/generated/torch.use_deterministic_algorithms.html?highlight=use_deterministic#torch.use_deterministic_algorithms) share the same determinism behavior, because the modules just call the functional variant. The documentation doesn't make that clear.
Also, the nondeterministic alert tests only cover the modules, not the functional variant.
### Suggest a potential alternative/fix
The documentation should be updated to include the functional counterparts, and the tests for these modules should be updated to include the functional variant as well.
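A sketch of the equivalence in question (assuming a CUDA build; `ReflectionPad2d` is used here only as an example of a listed module):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.use_deterministic_algorithms(True)

# The module is a thin wrapper around the functional op, so both calls run
# the same kernel and share the same determinism behavior.
x = torch.randn(1, 1, 4, 4, device="cuda", requires_grad=True)
out_module = nn.ReflectionPad2d(1)(x)
out_functional = F.pad(x, (1, 1, 1, 1), mode="reflect")
# Any nondeterministic alert raised for the module (e.g. during backward)
# would be raised identically for the functional call.
```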
cc @svekars @holly1238 @mruberry
| 0 |
4,752 | 85,093 |
Memoizing AOT Autograd Input Conversion Breaks Training with Tied Parameters
|
triaged, module: functorch
|
### π Describe the bug
AOT Autograd converts inputs to a new set of TensorImpls, either by converting them to fake tensor or by detaching the tensors and then adding back in the gradient `x.detach().requires_grad_(x.requires_grad)`.
When that conversion is memoized, so that two tied parameters are converted to the same FakeTensor/Detached Tensor, models with tied parameters fail. You can replicate this failure:
```
diff --git a/functorch/_src/aot_autograd.py b/functorch/_src/aot_autograd.py
index 6bb4401784..74c2577c73 100644
--- a/functorch/_src/aot_autograd.py
+++ b/functorch/_src/aot_autograd.py
@@ -435,8 +435,14 @@ def create_aot_dispatcher_function(
# (resnet50_quantized_qat and mobilenet_v2_quantized_qat)
# because they use a "running-mean" style op that requires
# resizing the running counter buffers stored on the module.
+ seen = {}
+
def make_input(x):
- return x.detach().requires_grad_(x.requires_grad)
+ if id(x) in seen:
+ return seen[id(x)]
+ out = x.detach().requires_grad_(x.requires_grad)
+ seen[id(x)] = out
+ return out
```
and by running in torchdynamo
```python benchmarks/huggingface.py --ci -d cuda --float32 --backend=aot_nop --training --only=RobertaForCausalLM```
We would like to memoize the conversion and reflect the shared identities of Parameters so that in-place metadata mutations are reflected on tied parameters. We need to fix the tied-parameter gradient issue to do so.
### Versions
master
cc @zou3519 @Chillee @samdow @soumith
| 1 |
4,753 | 85,088 |
reentrant torch.utils.checkpoint does not work with NamedTuple outputs
|
high priority, triage review, oncall: distributed, module: checkpoint
|
### π Describe the bug
The reentrant (default) version of torch.utils.checkpoint does not work with NamedTuple outputs; it instead returns a regular tuple, so the outputs cannot be accessed via their original names. For a repro, see:
```
import torch
from torch.utils.checkpoint import checkpoint
from collections import namedtuple
Tup = namedtuple("Tup", "a b c")
tup = Tup(torch.ones(10, requires_grad=True), torch.ones(10, requires_grad=True), torch.ones(10, requires_grad=True))
def foo(tup):
return Tup(tup.a + tup.b, tup.b, tup.a + tup.b + tup.c)
import pdb ; pdb.set_trace()
out = checkpoint(foo, tup, use_reentrant=False)
```
### Versions
main
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 3 |
4,754 | 85,082 |
[NNC] loop vectorization fails, `Ramp` and `Broadcast` undefined
|
triaged, NNC
|
### π Describe the bug
Loop vectorization introduces `Ramp` and `Broadcast`, which are both undefined.
As an aside, is it expected that the loops remain explicit in the generated code, and could that be why the code (without vectorization) is two orders of magnitude slower than plain PyTorch?
```python
import torch
import torch.utils.benchmark as benchmark
import torch._C._te as te
def construct_gather(n: int, backend, dtype=torch.float32):
A = te.BufHandle("A", [n, n], dtype)
INDEX = te.BufHandle("INDEX", [n, n], torch.long)
B = te.BufHandle("B", [n, n], dtype)
i = te.VarHandle("i", torch.int)
j = te.VarHandle("j", torch.int)
store = te.Store.make(A, [i, j], B.load([INDEX.load([j + i*n]), j])) # flatten index manually for now
for_j = te.For.make(j, te.ExprHandle.int(0), te.ExprHandle.int(n), store)
for_i = te.For.make(i, te.ExprHandle.int(0), te.ExprHandle.int(n), for_j)
loopnest = te.LoopNest(te.Block([for_i]), [A, B, INDEX])
loopnest.vectorize_inner_loops() #this line causes the issue
loopnest.prepare_for_codegen()
stmt = te.simplify(loopnest.root_stmt())
return te.construct_codegen(backend, stmt, [A, B, INDEX])
if __name__ == "__main__":
n = 2000
torch.manual_seed(42)
d = torch.randn((n, n), device=0)
index = (torch.rand((n, n), device=0) * (n-2)).long()
to = torch.zeros((n, n), device=0)
print("Pytorch GPU")
t1 = benchmark.Timer("torch.gather(d, 0, index, out=to)", globals={"d": d, "index": index, "to": to})
print(t1.timeit(2))
print("NNC/CUDA GPU")
gather_cuda = construct_gather(n, "cuda")
t0 = benchmark.Timer("gather.call([to, d, index])", globals={"d": d, "index": index, "to": to, "gather":gather_cuda})
print(t0.timeit(2))
```
Gives the following error message:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-12-d7d2ee163d59>](https://localhost:8080/#) in <module>
35
36 print("NNC/CUDA GPU")
---> 37 gather_cuda = construct_gather(n, "cuda")
38 t0 = benchmark.Timer("gather.call([to, d, index])", globals={"d": d, "index": index, "to": to, "gather":gather_cuda})
39 print(t0.timeit(2))
[<ipython-input-12-d7d2ee163d59>](https://localhost:8080/#) in construct_gather(n, backend, dtype)
21 stmt = te.simplify(loopnest.root_stmt())
22
---> 23 return te.construct_codegen(backend, stmt, [A, B, INDEX])
24
25 if __name__ == "__main__":
RuntimeError: default_program(22): error: identifier "Ramp" is undefined
default_program(23): error: identifier "Broadcast" is undefined
2 errors detected in the compilation of "default_program".
nvrtc compilation failed:
#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)
template<typename T>
__device__ T maximum(T a, T b) {
return isnan(a) ? a : (a > b ? a : b);
}
template<typename T>
__device__ T minimum(T a, T b) {
return isnan(a) ? a : (a < b ? a : b);
}
extern "C" __global__
void func(float* A, float* B, long long* INDEX) {
{
for (int i = 0; i < 2000; i++) {
for (int j_outer = 0; j_outer < 250; j_outer++) {
long long v = __ldg(INDEX + Ramp(8 * (j_outer + 250 * i), 1, 8));
float v_1 = __ldg(B + ((Broadcast(0ll, 8)) + (Broadcast(1ll, 8)) * (long long)(Ramp(8 * j_outer, 1, 8))) + (Broadcast(2000ll, 8)) * v);
A[Ramp(8 * (j_outer + 250 * i), 1, 8)] = v_1;
} }}
}
```
### Versions
I ran this on Google colab:
```
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
```
| 0 |
4,755 | 85,081 |
primTorch/nvfuser: have a way to check that refs are added to __all__
|
feature, triaged, module: nvfuser, module: primTorch
|
### π Describe the bug
In primTorch, with regular PyTorch refs, it's easy to check them based on the `_refs` prefix and look them up in `__all__`. For nvfuser-specific ones, it's not yet clear what to do. There's no `__all__` and the prefix is different: `ops.nvprims`.
See https://github.com/pytorch/pytorch/pull/84792 for more context.
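For comparison, a rough sketch of the existing-style check for regular refs (a hypothetical test body; the filtering would likely need tuning to skip re-exported helpers). The open question is what the analogous check looks like for `ops.nvprims`, which has no `__all__` to compare against:
```python
import torch._refs as refs

public_callables = {
    name for name, obj in vars(refs).items()
    if callable(obj) and not name.startswith("_")
}
not_exported = public_callables - set(refs.__all__)
print(sorted(not_exported))  # ideally empty; anything listed is a ref missing from __all__
```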
### Versions
master
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
| 1 |
4,756 | 85,078 |
libtorch create a tensor is very slow, who can tell me why
|
module: performance, module: cpp, triaged
|
I use libtorch to create a tensor, but it takes much longer than I expect.
Why does creating a tensor take this long?
```cpp
#include <iostream>
#include <chrono>
#include <torch/torch.h>

int main() {
    auto start = std::chrono::system_clock::now();
    auto ccc = torch::zeros({ 100, 100, 1000 });
    auto end = std::chrono::system_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    std::cout << "using time: " << duration.count() << " ms" << std::endl;
    return 0;
}
```
Output: `using time: 23 ms`
Thanks
cc @VitalyFedyunin @ngimel @jbschlosser
| 0 |
4,757 | 85,072 |
Segmentation fault in `torch.jit.wait`
|
oncall: jit
|
### 🐛 Describe the bug
Passing `None` to `torch.jit.wait` can cause a Segmentation fault.
## code
```python
import torch
torch.jit.wait(None)
```
## output
```bash
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] torch==1.12.1
[conda] Could not collect
| 0 |
4,758 | 85,058 |
Selectively sync internal Meta discussions / posts to dev-discuss.pytorch.org
|
triaged
|
### 🚀 The feature, motivation and pitch
We have an internal developer discussion group at Meta, and it would benefit open collaboration to have posts in PyTorch Dev Discussion (not Q&A) selectively shared by a bot to dev-discuss.pytorch.org. As a stretch goal, having dev-discuss.pytorch.org posts synced internally by a bot would also be valuable.
### Alternatives
1. Keep conversations and discussion separate as it is right now
2. Continue to actively push Meta internal folks to leverage dev-discuss.pytorch.org, but keep it manual
3. Use Workplace or other systems to match Meta internals
### Additional context
_No response_
| 2 |
4,759 | 85,036 |
Add an opaque epilogue in AOTAutograd for aliasing/mutations
|
triaged, module: functionalization, module: functorch
|
Today when using `aot_function` or `aot_module` to trace an arbitrary pytorch program, functionalization will run by default, creating a program graph with no mutations that gets shipped off to the compiler (whatever compiler you passed into `aot_function`).
However, when the entire *program* being compiled contain external aliasing/mutations (e.g. the outputs alias the inputs, or the inputs get mutated), then we are still forced to reflect that in the traced program. When we send the traced program to a compiler however, we want the program that the compiler sees to be 100% functional - so we'd like to "hide" parts of the graph from the compiler, forcing them to run in an epilogue.
(1) input mutations
When functionalization notices that the program it was passed includes mutations to inputs, it needs to preserve those input mutations. It does that by adding extra `input.copy_(new_input_value)` calls to the end of the graph.
A more subtle issue to deal with is input metadata mutations, like `input.transpose_()`.
Keeping the `copy_()` calls around in the graph and sending it off to various compiler passes is dangerous, because passes that assume a functional graph might silently do the wrong thing when they see mutating operators (for example, moving around the `copy_()` call or removing it completely).
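For concreteness, a hand-written sketch (not actual AOTAutograd/functionalization output) of what the write-back for an input mutation conceptually looks like, and why we would like to keep it out of the graph the compiler sees:
```python
import torch

def user_fn(x):
    x.add_(1)                # input mutation the traced program must preserve
    return x * 2

def functionalized_fn(x):
    x_updated = x.add(1)     # functional form of the mutation
    out = x_updated * 2
    x.copy_(x_updated)       # write-back; the call we'd like to move to an opaque epilogue
    return out

a, b = torch.ones(3), torch.ones(3)
print(user_fn(a), functionalized_fn(b))  # same outputs, same final values of a and b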
(2) outputs aliasing inputs
Given a function like:
```
def f(x):
    y = x.view(-1)
    z = y + y
    return (y, z)
```
One of the outputs (`y`) aliases an input (`x`). See the context here: nvfuser currently doesn't make guarantees around preserving aliasing when its performs fusions, and it might effectively turn the `x.view(-1)` into an `x.view_copy(-1)`.
To ensure that output-input aliasing is preserved properly, we can detect when an output aliases an input, regenerate the alias (probably with an `as_strided()` call, and run that logic to re-generate the output in an opaque epilogue.
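A sketch of the "regenerate the alias in an epilogue" idea for the example above (a hand-written illustration, not the planned implementation):
```python
import torch

def compiled_body(x):
    # what a compiler that does not preserve aliasing might effectively run
    y = x.view(-1).clone()   # the view has become a copy
    z = y + y
    return y, z

def epilogue(x, z):
    # re-derive the aliasing output from the input's metadata so y still aliases x
    y = x.as_strided((x.numel(),), (1,))
    return y, z

x = torch.ones(2, 2)
_, z = compiled_body(x)
y, z = epilogue(x, z)
print(y._base is x)  # True: output-input aliasing is restored outside the compiled graph
```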
**Status**
There's a prototype that implements some of this logic in [this PR](https://github.com/pytorch/pytorch/pull/82602#discussion_r946062470) (specifically, it adds an epilogue where input mutations are stored, but not output aliasing).
One subtlety that came up is the fact that `aot_autograd` currently stores the output of the compiled graph in an `autograd.Function`. `autograd.Function` only allows mutations to program inputs under certain [conditions](https://github.com/pytorch/pytorch/pull/82602#discussion_r938902183), so one open question is where this epilogue should actually be stored and run.
Some options:
(1) Store (and run) the epilogue inside of the `autograd.Function` object. At mentioned above, this isn't allowed in certain cases
(2) Update `aot_function` to return something different, like `tuple(autograd.function, epilogue)`. This would be BC breaking, and require the caller (e.g. dynamo) to explicitly run the epilogue
(3) Update `aot_function` to return something different, e.g. a `torch.nn.Module` that first runs the `autograd.Function` and then runs the epilogue. This is also technically BC-breaking, but less so because running the epilogue could be done transparently in the nn module.
Open to other ideas!
cc @ezyang @soumith @zou3519 @Chillee @samdow
| 2 |
4,760 | 85,027 |
Custom autograd functions are not inlined when export mode is ONNX_ATEN_FALLBACK
|
module: onnx, triaged
|
### 🐛 Describe the bug
I am working with a partner who is using operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK to export a greater number of models without error, with the trade-off of running inference with the PyTorch ATen library included.
They are trying to add support for models that contain custom autograd functions, and have been testing with PyTorch nightly (since this PR went in: https://github.com/pytorch/pytorch/pull/74765).
What we would like to see is:
When operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK and
1. All functionality is supported by ONNX
then any custom autograd functions are inlined and the graph is exported with all ONNX ops
2. When there are aten ops in the custom autograd function
then any custom autograd functions are inlined and the graph is exported with the aten ops
What we observe currently:
1. operator_export_type=OperatorExportTypes.ONNX
If the functionality in the custom autograd function is supported by ONNX ops then the graph is exported to ONNX correctly.
Export fails if the custom autograd has an aten op with the error message "Could not export operator aten:XXX"
2. operator_export_type=OperatorExportTypes.ONNX_FALLTHROUGH
The custom autograd functions is exported as a PythonOp node whether or not the functionality of the custom autograd functions is supported by ONNX.
3. operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK
The export fails with the error "Couldn't export Python operator <CustomAutogradFunctionName>"
Small sample here demonstrating the above findings: https://github.com/natke/custom_autograd/blob/main/model.py.
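For reference, a minimal sketch in the same spirit as the linked sample (hypothetical module and function names):
```python
import torch
from torch.onnx import OperatorExportTypes

class MyRelu(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x > 0)

class M(torch.nn.Module):
    def forward(self, x):
        return MyRelu.apply(x)

torch.onnx.export(
    M(), torch.randn(2, 3), "custom_fn.onnx",
    operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
# desired: MyRelu is inlined into ONNX ops
# observed (per the findings above): "Couldn't export Python operator MyRelu"
```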
### Versions
```
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220908
[pip3] torchaudio==0.13.0.dev20220908
[pip3] torchvision==0.14.0.dev20220908
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-service 2.4.0 py39h9ed2024_0
[conda] mkl_fft 1.3.1 py39h4ab4a9b_0
[conda] mkl_random 1.2.2 py39hb2f4e1b_0
[conda] numpy 1.23.1 py39h2e5f0a9_0
[conda] numpy-base 1.23.1 py39h3b1a694_0
[conda] pytorch 1.13.0.dev20220908 py3.9_0 pytorch-nightly
[conda] torchaudio 0.13.0.dev20220908 py39_cpu pytorch-nightly
[conda] torchvision 0.14.0.dev20220908 py39_cpu pytorch-nightly
```
| 0 |
4,761 | 85,025 |
[CheckpointWrapper] Revamp API design
|
oncall: distributed, better-engineering, module: fsdp
|
### 🚀 The feature, motivation and pitch
CheckpointWrapper today is still in a private / prototype stage but nonetheless has gotten quite hacky with several different arguments, some of which don't work with each other, see this comment for details: https://github.com/pytorch/pytorch/pull/84908#discussion_r970933952.
In particular the following is confusing:
1. checkpoint_impl ignored if checkpoint_fn is specified
2. Both of the above are ignored if offload_to_cpu is specified
3. checkpoint_fn_args, checkpoint_fn_kwargs only used when checkpoint_fn is specified,
and adding additional options will likely only exacerbate the issue. We should revamp the design here to make it more user-friendly.
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
4,762 | 84,994 |
Cuda tensor is zero when passed through multiprocessing queue
|
module: multiprocessing, triaged
|
### 🐛 Describe the bug
I encounter an issue when passing cuda tensors through the multiprocessing queue. The first tensor sent is always 0 when using the following code snippet.
```python
import time
import torch
import torch.multiprocessing as mp

def process(q, N):
    for _ in range(N):
        q.put(torch.tensor(1).to("cuda"))
    while True:
        time.sleep(1)

if __name__ == '__main__':
    q = mp.Queue()
    N = 5
    p = mp.Process(target=process, args=(q, N))
    p.start()
    for _ in range(N):
        print(q.get())
```
When running the above code snippet I get:
```
tensor(0, device='cuda:0')
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
```
This is incorrect: every received value should be `tensor(1, device='cuda:0')`.
I am able to reproduce this error consistently on multiple versions of pytorch and cuda.
Sometimes the second tensor sent is also corrupted.
Strangely enough, the error does not occur if you create the tensor directly on the cuda device with `torch.ones(1, device="cuda")`.
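One way to probe whether the asynchronous H2D copy is the culprit (a diagnostic sketch, not a confirmed fix) is to synchronize in the producer before sharing the tensor; this is a drop-in replacement for `process` in the repro above:
```python
import time
import torch

def process_synced(q, N):
    for _ in range(N):
        t = torch.tensor(1).to("cuda")
        torch.cuda.synchronize()  # wait for the copy to finish before q.put
        q.put(t)
    while True:
        time.sleep(1)
```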
### Versions
PyTorch version: 1.13.0.dev20220913+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.20.21032501-MSVC_2
Libc version: N/A
Python version: 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 516.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220913+cu117
[pip3] torchaudio==0.13.0.dev20220913+cu117
[pip3] torchvision==0.14.0.dev20220913+cu117
[pip3] torchviz==0.0.2
[conda] Could not collect
cc @VitalyFedyunin
| 2 |
4,763 | 84,990 |
Segmentation fault in `torch.futures.collect_all`
|
module: crash, module: error checking, triaged
|
### 🐛 Describe the bug
Passing invalid input as the `futures` argument of `torch.futures.collect_all` causes a segmentation fault.
## code
```python
import torch
input = (None,)
torch.futures.collect_all(futures=input)
```
## output
```bash
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] torch==1.12.1
[conda] Could not collect
| 0 |
4,764 | 84,972 |
Add unit tests for test decorators
|
feature, module: tests, triaged
|
If a test decorator is not working properly, then the test it wraps can become invalid. For instance, in the case of #84874, tests wrapped with `expectedFailureMeta` were never getting run, even though the printout from running these tests falsely indicates that the tests were run. There was nothing obvious to indicate that it wasn't working properly until I accidentally stumbled upon it.
It seems like it would be a good idea to write unit tests for some of the decorators, if they don't already have a test, to make sure they're working as expected and to give us more assurance that our tests are valid. Who knows, maybe there are other decorators with similar problems right now.
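A sketch of the kind of meta-test this is asking for, built around a stand-in decorator (the real tests would target decorators like `expectedFailureMeta` from `torch.testing._internal.common_device_type`):
```python
import unittest

def my_expected_failure(fn):              # stand-in for a real test decorator
    def wrapper(*args, **kwargs):
        try:
            fn(*args, **kwargs)
        except Exception:
            return                        # failure was expected
        raise AssertionError("unexpected success")
    return wrapper

class TestDecorator(unittest.TestCase):
    def test_wrapped_test_actually_runs(self):
        calls = []

        @my_expected_failure
        def fake_test():
            calls.append(1)
            raise RuntimeError("intentional failure")

        fake_test()
        self.assertEqual(calls, [1])      # catches a decorator that silently skips the test

if __name__ == "__main__":
    unittest.main()
```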
cc @mruberry
| 1 |
4,765 | 84,944 |
test_warp_softmax_64bit_indexing_cuda_float16 takes ~147GB of CPU memory and is very slow
|
module: memory usage, triaged, module: testing
|
### 🐛 Describe the bug
test_warp_softmax_64bit_indexing_cuda_float16 takes ~147GB of CPU memory and is very slow
Reproduce:
```
/usr/bin/time -v python test/test_nn.py -v -k test_warp_softmax_64bit_indexing_cuda_float16
```
Output:
```
test_warp_softmax_64bit_indexing_cuda_float16 (__main__.TestNNDeviceTypeCUDA) ... ok
----------------------------------------------------------------------
Ran 1 test in 105.592s
OK
Command being timed: "python test/test_nn.py -v -k test_warp_softmax_64bit_indexing_cuda_float16"
User time (seconds): 776.69
System time (seconds): 2031.00
Percent of CPU this job got: 2576%
Elapsed (wall clock) time (h:mm:ss or m:ss): 1:48.96
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 147604400
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 184895836
Voluntary context switches: 14118
Involuntary context switches: 2191694
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
```
The test was taking ~84 GB of RAM before (https://github.com/pytorch/pytorch/pull/67922#issuecomment-963899593), but it takes ~147 GB now, so something could have gone wrong. Also, the test name was changed in https://github.com/pytorch/pytorch/pull/84182.
### Versions
torch_commit: https://github.com/pytorch/pytorch/commit/7a9ab5c232f54430704456d18a22f99838489817
V100 32 GB
cc @ptrblck @ngimel @malfet @eqy
| 8 |
4,766 | 84,937 |
DISABLED test_random_seed (__main__.TestDataLoaderUtils)
|
module: dataloader, triaged, skipped
|
Platforms: asan
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_random_seed%2CTestDataLoaderUtils))
Examples: (look for "The action has timed out"):
* https://hud.pytorch.org/pytorch/pytorch/commit/dbd38f63f5731f8403edfdf9d5956ca872453dd3
* https://hud.pytorch.org/pytorch/pytorch/commit/d6733327829fa02295c239ad96a26bef8afa6da4
* https://hud.pytorch.org/pytorch/pytorch/commit/31cc03cc132020244f6985aefdcd1c05babc2e17
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 3 |
4,767 | 84,936 |
CPU and MPS floating point math is different (in a significant way)
|
module: numerical-stability, triaged, module: mps
|
### 🐛 Describe the bug
I am getting radically different results running the [CURL](https://github.com/sjmoran/CURL) model on the cpu versus the mps device (on pytorch 1.12.1 and nightly). I stepped through the debugger to find the underlying difference in calculation, and, in short, it appears that cpu floating point math works differently from mps floating point math. This is not a bug with CURL.
Here's a minimal example that doesn't work the way I expect:
```python
import torch
mps_device = torch.device("mps")
x = 0.1
cpu_tensor = torch.exp(torch.tensor(x))
mps_tensor = torch.exp(torch.tensor(x, device=mps_device))
print(cpu_tensor - cpu_tensor) # prints 0
print(mps_tensor - mps_tensor) # prints 0
print(cpu_tensor - mps_tensor) # prints 1.1921e-07
print(cpu_tensor - mps_tensor.cpu()) # prints 1.1921e-07
print(cpu_tensor.to(mps_device) - mps_tensor) # prints 1.1921e-07
```
As an experienced Metal programmer, I have a sneaky suspicion this is due to the Metal shader "fast math" being enabled for the MPS Graph shaders. It's [enabled by default](https://developer.apple.com/documentation/metal/mtlcompileoptions/1515914-fastmathenabled) when compiling custom Metal kernel libraries. I don't know MPS Graph well enough to test this hypothesis. If this is the underlying cause, I'm hoping there's a way to disable this behavior.
### Versions
Here's my pytorch 1.12.1 environment info:
```
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.0 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.23.2
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:01:00) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-13.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] adabelief-pytorch==0.2.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==1.12.1
[pip3] torch-tb-profiler==0.4.0
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.1
[pip3] torchviz==0.0.2
[conda] adabelief-pytorch 0.2.1 pypi_0 pypi
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.12.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.13.1 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
```
And here's my pytorch nightly env:
```
PyTorch version: 1.13.0.dev20220913
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.0 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.23.2
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:01:00) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-13.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] adabelief-pytorch==0.2.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==1.13.0.dev20220913
[pip3] torch-tb-profiler==0.4.0
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.1
[pip3] torchviz==0.0.2
[conda] adabelief-pytorch 0.2.1 pypi_0 pypi
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220913 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.13.1 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
```
cc @kulinseth @albanD
| 17 |
4,768 | 84,935 |
[FSDP] Test optimizer state dict with CPU offload
|
triaged, module: fsdp
|
The current [FSDP optimizer state dict tests](https://github.com/pytorch/pytorch/blob/master/test/distributed/fsdp/test_fsdp_optim_state.py) do not test with CPU offload enabled. This misses bugs like the one fixed in https://github.com/pytorch/pytorch/pull/84708. We should add some tests with CPU offload enabled to exercise the case where optimizer states are on CPU instead of GPU.
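A rough sketch of the extra coverage (assumes `torch.distributed` is already initialized with a CUDA backend, as in the existing FSDP tests; names and the input shape are illustrative):
```python
import torch
from torch.distributed.fsdp import CPUOffload, FullyShardedDataParallel as FSDP

def _exercise_optim_state_dict_with_cpu_offload(model):
    fsdp_model = FSDP(model.cuda(), cpu_offload=CPUOffload(offload_params=True))
    optim = torch.optim.Adam(fsdp_model.parameters(), lr=1e-2)
    fsdp_model(torch.randn(2, 8, device="cuda")).sum().backward()  # illustrative input shape
    optim.step()
    # optimizer states now live on CPU; this is the path the current tests miss
    return FSDP.full_optim_state_dict(fsdp_model, optim)
```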
cc @zhaojuanmao @mrshenli @rohan-varma
| 0 |
4,769 | 84,934 |
RuntimeError: input_shape.size() > 0 || reshape.size() > 0INTERNAL ASSERT FAILED
|
needs reproduction, module: onnx, triaged, onnx-needs-info
|
### 🐛 Describe the bug
I am getting this error when I try to convert a PyTorch Model to ONNX.
```
/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/__init__.py:676: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert condition, message
/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/timm/models/vision_transformer.py:201: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
/home/ashishpapanai/Desktop/MAVI_Text/parseq/strhub/models/parseq/system.py:129: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if testing and (tgt_in == self.eos_id).any(dim=-1).all():
/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/symbolic_helper.py:716: UserWarning: allowzero=0 by default. In order to honor zero value in shape use allowzero=1
warnings.warn("allowzero=0 by default. In order to honor zero value in shape use allowzero=1")
/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/symbolic_helper.py:325: UserWarning: Type cannot be inferred, which might cause exported graph to produce incorrect results.
warnings.warn("Type cannot be inferred, which might cause exported graph to produce incorrect results.")
Traceback (most recent call last):
File "demo_read.py", line 96, in <module>
main()
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "demo_read.py", line 71, in main
verbose = True)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/__init__.py", line 320, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/utils.py", line 111, in export
custom_opsets=custom_opsets, use_external_data_format=use_external_data_format)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/utils.py", line 729, in _export
dynamic_axes=dynamic_axes)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/utils.py", line 501, in _model_to_graph
module=module)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/utils.py", line 216, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/__init__.py", line 373, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/utils.py", line 1032, in _run_symbolic_function
return symbolic_fn(g, *inputs, **attrs)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/symbolic_opset9.py", line 568, in view
return reshape(g, self, size)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/symbolic_opset9.py", line 76, in reshape
return sym_help._reshape_helper(g, self, shape)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/symbolic_helper.py", line 717, in _reshape_helper
return g.op("Reshape", input, shape, allowzero_i=allowzero)
File "/home/ashishpapanai/miniconda3/envs/parseq/lib/python3.7/site-packages/torch/onnx/utils.py", line 928, in _graph_op
torch._C._jit_pass_onnx_node_shape_type_inference(n, _params_dict, opset_version)
RuntimeError: input_shape.size() > 0 || reshape.size() > 0INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/onnx/shape_type_inference.cpp":448, please report a bug to PyTorch. Reshape node should have at least one input size > 0 when constant folding.
```
## Converting: ParSeq (https://github.com/baudm/parseq)
### Code used for conversion
```python
model1 = load_from_checkpoint(args.checkpoint, **kwargs).eval().to(args.device)
img_transform1 = SceneTextDataModule.get_transform(model1.hparams.img_size)
# model2 = load_from_checkpoint(args.hi_checkpoint, **kwargs).eval().to(args.device)
# img_transform2 = SceneTextDataModule.get_transform(model2.hparams.img_size)
img_dir = args.image_dir[0]
folder_list = os.listdir(img_dir)
# dummy_input = torch.rand(1, 3, 32, 128)
for folder in folder_list:
    img_list = os.listdir(os.path.join(img_dir, folder))
    ptr = open('read_output/' + folder + '.txt', 'w')
    # print(folder)
    for fname in img_list:
        # Load image and prepare for input
        image = Image.open(os.path.join(img_dir, folder, fname)).convert('RGB')
        image1 = img_transform1(image).unsqueeze(0).to(args.device)
        # model1.to_onnx('parseq.onnx', input, opset_version=14)
        # return "Done"
        # english
        torch.onnx.export(model1,
                          image1,
                          'ParSeq.onnx',
                          export_params=True,
                          opset_version=14,
                          do_constant_folding=True,
                          input_names=['input'],
                          output_names=['output'],
                          verbose=True)
```
### Versions
```
Collecting environment information...
PyTorch version: 1.10.2+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.23.2
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce MX250
Nvidia driver version: 470.129.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] botorch==0.6.4
[pip3] gpytorch==1.8.1
[pip3] numpy==1.21.6
[pip3] pytorch-lightning==1.6.5
[pip3] torch==1.10.2
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.11.3
[conda] botorch 0.6.4 pypi_0 pypi
[conda] gpytorch 1.8.1 pypi_0 pypi
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] torch 1.10.2 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.11.3 pypi_0 pypi
```
| 1 |
4,770 | 84,932 |
Separate doc and binaries build
|
triaged, module: doc infra
|
### 🚀 The feature, motivation and pitch
At the moment, if we wanted to update a stable version of the docs after the release, we'd have to build new binaries and add a new tag. We need to be able to update just the stable versions of the docs without building everything else.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @zou3519 @holly1238 @malfet
| 0 |
4,771 | 84,925 |
`is_pinned()` support in PrimTorch and FakeTensor.
|
triaged, module: primTorch, module: fakeTensor
|
### 🚀 The feature, motivation and pitch
I'm trying to add `Tensor.to` to PrimTorch in #84802. The ATen implementation supports pin_memory, which is unfortunately not supported by either FakeTensor or PrimTorch.
See the comment here: https://github.com/pytorch/pytorch/pull/84802#issuecomment-1244155840
Without `is_pinned()` support in FakeTensor or PrimTorch, we can't safely implement or fall back `Tensor.to` in Prim. Looking at this comment (https://github.com/pytorch/pytorch/blob/7a9ab5c232f54430704456d18a22f99838489817/torch/_subclasses/fake_tensor.py#L849-L850), it seems like this is something already planned for FakeTensor. It would be nice if we could prioritize it.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
| 1 |
4,772 | 84,922 |
Functorch functionalization causes increased memory usage
|
triaged, module: functionalization, module: functorch
|
### 🐛 Describe the bug
Functorch functionalization causes increased memory usage. It was turned on in PR #84435.
See the script below. It's a TIMM resnet50 model that was working fine with batch size 128 on a 16 GB V100, with or without functorch fusion.
However, after that PR, applying `memory_efficient_fusion` to that model causes an OOM.
```python
import torch
import timm
from functorch.compile import memory_efficient_fusion
x = torch.randn(128, 3, 224, 224, dtype=torch.float, device='cuda')
model = timm.create_model('resnet50')
model = model.cuda()
model = model.train()
# model = memory_efficient_fusion(model)
out = model(x)
out.sum().backward()
torch.cuda.synchronize()
```
### Versions
pytorch https://github.com/pytorch/pytorch/commit/e68df8e4a14ce1fbedf6b20e132b11ec7b151f8a
timm https://github.com/rwightman/pytorch-image-models/commit/fa8c84eede55b36861460cc8ee6ac201c068df4d
torchvision https://github.com/pytorch/vision/commit/84dcf695d64c15f8a0be845ac65901bdde845429
torchdynamo https://github.com/pytorch/torchdynamo/commit/fe3173f7e6c804e6330ac187ea8e4101f45ff9a2
V100 16 GB
cc @bdhirsh @ezyang @soumith @zou3519 @Chillee @samdow @ngimel @ptrblck @csarofeen @kevinstephano
| 2 |
4,773 | 84,870 |
Re-Running PR Sanity Check after Adding `skip-pr-sanity-checks` Label Still Fails
|
module: ci, triaged
|
The PR https://github.com/pytorch/pytorch/pull/83665 exceeds the 2k LOC limit for the PR sanity check. After adding the label `skip-pr-sanity-checks` and re-running the failing check, the check continues to fail, whereas the expected behavior is for it to now pass.
(I have since rebased the PR because one of the preceding PRs had some flaky build failures.)
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 4 |
4,774 | 84,864 |
torch.utils.checkpoint (with use_reentrant=False) doesn't work with all PyTorch features that set TLS
|
module: checkpoint, triaged, module: __torch_dispatch__, tensor subclass
|
## Issue description
torch.utils.checkpoint (with use_reentrant=False) returns silently incorrect results with some PyTorch features that set TLS. This includes: functorch (see https://github.com/pytorch/functorch/issues/993), torch_dispatch Mode, and possibly also more.
The root of the issue is that:
- checkpoint needs to re-run the function passed to it to compute intermediates
- This function must be re-run with the same global state as the original function was run
- checkpoint hard-codes some of the state it needs: (https://github.com/pytorch/pytorch/blob/2698f99dc7a2efe6d60fffa43beb545901a57c9b/torch/utils/checkpoint.py#L398-L408)
## Code example
Repro for torch_dispatch Mode:
```py
import torch
from torch.utils._python_dispatch import TorchDispatchMode
import torch.utils.checkpoint

class NoRandomnessMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        if func == torch.ops.aten.rand.default:
            return torch.ones(*args, **kwargs) / 2
        rs = func(*args, **kwargs)
        return rs

def f(x):
    y = torch.rand(x.shape)
    return x.clone() * y

def g(x):
    return torch.utils.checkpoint.checkpoint(f, x, use_reentrant=False)

x = torch.ones([], requires_grad=True)

with NoRandomnessMode():
    y = g(x)

y.backward()

# assertion failure
assert torch.allclose(x.grad, y)
```
## Resolution
The implication for torch_dispatch mode is that checkpointing needs to somehow save the mode stack at the time of the forward pass and then restore it when it re-runs the forward pass.
Robustness: this problem sounds similar to [Aten/ThreadLocalState.h](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/ThreadLocalState.h) -- in ATen, there is one place that encapsulates all TLS. If we had something like that for Python, then if anyone comes along to add some TLS in the future, checkpoint would automatically work with it.
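A conceptual sketch of that direction, using grad mode as a stand-in for "some piece of Python-visible TLS" (the real fix would snapshot everything in one place, including the torch_dispatch mode stack):
```python
import contextlib
import torch

def capture_state():
    # checkpoint already snapshots RNG/autocast state by hand; the idea is to
    # capture *all* relevant TLS here instead of hard-coding a few pieces
    return {"grad_enabled": torch.is_grad_enabled()}

@contextlib.contextmanager
def restored_state(state):
    with torch.set_grad_enabled(state["grad_enabled"]):
        yield

state = capture_state()        # taken during the original forward
# ... later, when recomputing the forward during backward:
with restored_state(state):
    pass                       # re-run the checkpointed function here
```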
cc @Chillee @ezyang @albanD @samdow
| 1 |
4,775 | 84,863 |
View consistency for PrimTorch+nvFuser tests
|
triaged, module: nvfuser, module: primTorch
|
### 🚀 The feature, motivation and pitch
`test/test_ops.py:_ref_test_helper` checks whether the output of a reference implementation returns the same `._is_view()` as eager.
The nvFuser execution function always returns a fresh copy currently so `tensor._is_view()` is False. These tests are currently skipped.
How important is view consistency? It's not yet clear whether it's feasible to implement for nvFuser execution and how difficult it is.
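A small illustration of the check in question (hand-written, not the actual test helper): an executor that returns a fresh copy loses the view-ness that eager reports, so the comparison fails.
```python
import torch

a = torch.arange(6.0)
eager_out = a.reshape(2, 3)          # a view of `a` in eager mode
executor_out = eager_out.clone()     # stand-in for an nvFuser execution returning a copy
print(eager_out._is_view(), executor_out._is_view())  # True False -> consistency check fails
```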
### Alternatives
_No response_
### Additional context
Latest master.
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
| 12 |
4,776 | 84,860 |
Feature Request: deterministic adaptive_avg_pool2d_backward_cuda
|
feature, module: cuda, triaged, module: determinism
|
### 🐛 Describe the bug
```python
def __init__(self, inp, oup, reduction=32):
    super(CoordAtt, self).__init__()
    self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
    self.pool_w = nn.AdaptiveAvgPool2d((1, None))
    mip = max(8, inp // reduction)
```
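A minimal sketch of how the missing deterministic path surfaces today (assumes a CUDA device is available):
```python
import torch

torch.use_deterministic_algorithms(True)
pool = torch.nn.AdaptiveAvgPool2d((None, 1)).cuda()
x = torch.randn(2, 8, 16, 16, device="cuda", requires_grad=True)
pool(x).sum().backward()
# RuntimeError: adaptive_avg_pool2d_backward_cuda does not have a deterministic implementation
```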
### Versions
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @ngimel @mruberry @kurtamohler
| 0 |
4,777 | 93,667 |
14k github models on PyTorch 2.0 pass rates dashboard
|
triaged, oncall: pt2, module: dynamo
|
We run dynamo-eager, dynamo-eager-fullgraph, export, and inductor weekly on 14k ```nn.Modules``` crawled from 1.4k GitHub projects to measure coverage and to find and fix bugs. This dashboard page is used to track the pass rates of the different backends.
How to repro:
* Checkout https://github.com/jansel/pytorch-jit-paritybench
* Batch evaluation with different backends:
* dynamo-eager: ```python main.py --backend eager```
* dynamo-eager-fullgraph: ```python main.py --backend eager --fullgraph```
* export: ```python main.py --compile_mode export```
* inductor: ```python main.py```
* Adhoc evaluation:
* ```pytest ./generated/{filename}.py -k test_{n}``` (e.g, ```pytest ./generated/test_KunpengLi1994_VSRN.py -k test_002```)
* ```python -e ./generated/{filename}.py --backend eager```
Bugs umbrella task(#92670)
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @davidberard98
| 43 |
4,778 | 84,847 |
ONNX exporter error
|
needs reproduction, module: onnx, triaged, onnx-needs-info
|
### 🐛 Describe the bug
When I try to convert yolov5 weights from torch to ONNX, I get this error: "**_RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper___slow_conv2d_forward)_**". I have checked, and the input tensor and the model are definitely loaded on the same device (cuda). The error is raised from line 216 of rpn.py in torchvision (`r.append(top_n_idx + offset)`). Can someone help me? Thank you for your help.
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.0
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.2.152
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0
[pip3] torch-tensorrt==1.1.0
[pip3] torch2trt==0.4.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.12.0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torch-tensorrt 1.1.0 pypi_0 pypi
[conda] torch2trt 0.4.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
| 1 |
4,779 | 84,843 |
Support different NSE in batches of CSR and CSC tensors
|
module: sparse, open source, cla signed, release notes: sparse, no-stale
|
This PR enables batched CSR/CSC tensors that batches may have different NSE counts.
For instance, with the current master we have
```python
>>> a = torch.tensor([[[1, 2], [3, 4]], [[0, 12], [21, 0]]])
>>> a.to_sparse_csr()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expect the same number of specified elements per batch.
```
because the NSE of the first and second batches are different, 4 and 2, respectively.
This PR implements a strided-to-sparse-CSR/CSC conversion algorithm that supports CSR/CSC batches with different NSE counts. For instance:
```python
>>> a = torch.tensor([[[1, 2], [3, 4]], [[0, 12], [21, 0]]])
>>> b = a.to_sparse_csr()
>>> b
tensor(crow_indices=tensor([[0, 2, 4],
[0, 1, 2]]),
col_indices=tensor([[0, 1, 0, 1],
[1, 0, 0, 0]]),
values=tensor([[ 1, 2, 3, 4],
[12, 21, 0, 0]]), size=(2, 2, 2), nnz=4,
layout=torch.sparse_csr)
>>> b[0]
tensor(crow_indices=tensor([0, 2, 4]),
col_indices=tensor([0, 1, 0, 1]),
values=tensor([1, 2, 3, 4]), size=(2, 2), nnz=4,
layout=torch.sparse_csr)
>>> b[1]
tensor(crow_indices=tensor([0, 1, 2]),
col_indices=tensor([1, 0]),
values=tensor([12, 21]), size=(2, 2), nnz=2, layout=torch.sparse_csr)
```
that is, if the NSE of a batch is smaller than the maximum NSE over all batches, the corresponding rows in `col_indices`/`values` are padded with zeros as placeholders. Algorithms on batched CSR/CSC tensors must not access the padded parts of these tensors, that is, the algorithms should use the last element of the corresponding `crow_indices` row as the NSE value rather than the value of `.values().shape[0]` that holds the maximum NSE over all batches.
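A small sketch of how downstream code should read the per-batch NSE under this scheme, i.e. the last entry of each `crow_indices` row rather than `values.shape[0]`:
```python
import torch

crow_indices = torch.tensor([[0, 2, 4],
                             [0, 1, 2]])   # batched crow_indices from the example above
nse_per_batch = crow_indices[:, -1]
print(nse_per_batch)                       # tensor([4, 2])
```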
Performance-wise, the strided-to-sparse-CSR/CSC conversion algorithms in master and in this PR, are roughly equivalent:
```python
# master branch:
In [2]: a = torch.rand(10, 10, 1000, 1000)
In [3]: a = torch.where(a==0, 0.1, a) # required for master, optional for the PR
In [4]: %timeit a.to_sparse_csr()
2.25 s Β± 9.84 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [5]: a_cuda = a.cuda()
In [6]: %timeit a_cuda.to_sparse_csr()
55.2 ms Β± 6.95 Β΅s per loop (mean Β± std. dev. of 7 runs, 10 loops each)
```
```python
# this PR
In [2]: a = torch.rand(10, 10, 1000, 1000)
In [3]: a = torch.where(a==0, 0.1, a) # required for master, optional for the PR
In [4]: %timeit a.to_sparse_csr()
2.12 s Β± 2.13 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [5]: a_cuda = a.cuda()
In [6]: %timeit a_cuda.to_sparse_csr(); torch.cuda.synchronize()
47.2 ms Β± 10.4 Β΅s per loop (mean Β± std. dev. of 7 runs, 10 loops each)
```
The performance of `to_sparse_csr()` on CUDA tensors increased by 15% with this PR.
A strided-to-sparse-BSR/BSC conversion with variable NSE support will be implemented as a follow-up.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #84843
* #85272
* #85271
* #85270
* #85269
* #85268
cc @nikitaved @cpuhrsch @amjames @bhosmer
| 4 |
4,780 | 84,840 |
TypeError: finfo(): argument 'type' (position 1) must be torch.dtype, not HFProxy
|
triaged, module: fx
|
### 🐛 Describe the bug
I suspect that https://github.com/pytorch/pytorch/pull/84011 broke some Hugging Face models tracing this way:
```
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/Users/pbelevich/PycharmProjects/tau/pippy/utils.py", line 134, in run_worker
run_master(pp_ranks_per_dp_group[rank], args, *extra_args)
File "/Users/pbelevich/PycharmProjects/tau/test/local_test_forward_hf_gpt2.py", line 59, in run_master
gpt2_pipe = Pipe.from_tracing(gpt2, MULTI_USE_PARAM_CONFIG, tracer=PiPPyHFTracer(), concrete_args=concrete_args)
File "/Users/pbelevich/PycharmProjects/tau/pippy/IR.py", line 775, in from_tracing
graph = _pipeline_tracer.trace(mod, **kwargs)
File "/Users/pbelevich/PycharmProjects/tau/pippy/hf/utils.py", line 259, in trace
graph = super().trace(*args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/utils/fx.py", line 994, in trace
self.graph = super().trace(root, concrete_args=concrete_args)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 738, in trace
(self.create_arg(fn(*args)),),
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 906, in forward
outputs = block(
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 716, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/utils/fx.py", line 901, in call_module
return super().call_module(m, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 433, in call_module
return forward(*args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 709, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/pbelevich/PycharmProjects/tau/pippy/IR.py", line 822, in forward
return self.mod(*args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 716, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/utils/fx.py", line 901, in call_module
return super().call_module(m, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 433, in call_module
return forward(*args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 709, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 406, in forward
attn_outputs = self.attn(
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 716, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/utils/fx.py", line 901, in call_module
return super().call_module(m, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 433, in call_module
return forward(*args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 709, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 347, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "/Users/pbelevich/miniconda3/envs/tau/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 209, in _attn
mask_value = torch.finfo(attn_weights.dtype).min
TypeError: finfo(): argument 'type' (position 1) must be torch.dtype, not HFProxy
```
cc @ezyang @SherlockNoMad @soumith @kwen2501
### Versions
the bug started occurring after 1.13.0dev20220825, I blame commit https://github.com/pytorch/pytorch/commit/8b8942b11464bbe042b731bc332d65297161353a
| 0 |
4,781 | 84,839 |
Profiler Hangs on Non-Blocking H2D Transfer in Non-Default Stream
|
oncall: profiler
|
### 🐛 Describe the bug
Running the following script hangs at:
```
STAGE:2022-09-11 17:08:41 59397:59397 ActivityProfilerController.cpp:294] Completed Stage: Warm Up
```
```
import torch
from torch.profiler import profile, ProfilerActivity
from torch.testing._internal.common_utils import get_cycles_per_ms
stream = torch.cuda.Stream()
sleep_duration_ms = 25
sleep_duration_cycles = int(sleep_duration_ms * get_cycles_per_ms())
tensor_numel = 1 * 1024 * 1024
NON_BLOCKING = True
USE_SEPARATE_STREAM = True
cpu_tensor = torch.ones((tensor_numel,))
cpu_tensor = cpu_tensor.pin_memory()
stream_context = (
    torch.cuda.stream(stream) if USE_SEPARATE_STREAM
    else torch.cuda.stream(torch.cuda.current_stream())
)
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with stream_context:
        torch.cuda._sleep(sleep_duration_cycles)
        gpu_tensor = cpu_tensor.to(torch.cuda.current_device(), non_blocking=NON_BLOCKING)
print(f"non-blocking={NON_BLOCKING}")
print(f"use separate stream={USE_SEPARATE_STREAM}")
prof.export_chrome_trace(f"trace_{NON_BLOCKING}_{USE_SEPARATE_STREAM}.json")
```
- If we use `USE_SEPARATE_STREAM = False` instead, then the profiler does not hang.
- If we do not include the H2D transfer (`gpu_tensor = ...`), then the profiler does not hang.
- If we do not include the `torch.cuda._sleep(sleep_duration_cycles)`, then the profiler does not hang.
- If we run the `torch.cuda._sleep(sleep_duration_cycles)` in the default stream but the H2D copy in the separate stream, then the profiler does hang.
- If we use `non_blocking=False`, then the profiler does not hang.
- If we do not use the profiler, then the code does not hang.
### Versions
```
PyTorch version: 1.13.0a0+gita879a8a
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.21.0
Libc version: glibc-2.27
Python version: 3.8.10 (default, May 19 2021, 18:05:58) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.950
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] torch==1.10.0a0+gitaa378da
[pip3] torchaudio==0.9.0
[pip3] torchdistx==0.3.0.dev0+cu111
[pip3] torchmultimodal==0.1.0a0
[pip3] torchtext==0.10.0a0+27695bb
[pip3] torchvision==0.11.0a0+31d5336
[conda] blas 1.0 mkl
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] magma-cuda111 2.5.2 1 pytorch
[conda] mkl 2021.3.0 h06a4308_520
[conda] mkl-include 2021.3.0 h06a4308_520
[conda] mkl-service 2.4.0 pypi_0 pypi
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] numpy 1.19.0 pypi_0 pypi
[conda] numpy-base 1.20.3 py38h74d4b33_0
[conda] pytorch-sphinx-theme 0.0.24 pypi_0 pypi
[conda] torch 1.9.0 pypi_0 pypi
[conda] torchaudio 0.9.0 pypi_0 pypi
[conda] torchdistx 0.3.0.dev0+cu111 pypi_0 pypi
[conda] torchmultimodal 0.1.0a0 dev_0 <develop>
[conda] torchtext 0.10.0a0+27695bb pypi_0 pypi
[conda] torchvision 0.11.0a0+31d5336 pypi_0 pypi
```
My PyTorch build is from source with some unrelated commits on top. I think it can be effectively treated as `master` or `viable/strict`.
cc @robieta @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb
| 0 |
4,782 | 84,835 |
Batch multiplication for torch.sparse matrix multiplication
|
module: sparse, triaged, module: linear algebra
|
### 🚀 The feature, motivation and pitch
Just as there is a batchwise multiplication for ```torch.mm``` named ```torch.bmm```, I would appreciate it if you would add a **batchwise multiplication** for ```torch.sparse.mm``` and ```torch.sparse.addmm``` that creates a sparse gradient.
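For reference, the per-matrix fallback this request would replace (a naive sketch that does not batch the kernel launch):
```python
import torch

sparse_mats = [torch.rand(4, 4).to_sparse() for _ in range(3)]
dense_batch = torch.rand(3, 4, 5)
out = torch.stack([torch.sparse.mm(s, d) for s, d in zip(sparse_mats, dense_batch)])
print(out.shape)  # torch.Size([3, 4, 5])
```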
### Alternatives
_No response_
### Additional context
The sparse functions can be found [here](https://pytorch.org/docs/stable/sparse.html).
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jianyuh @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 2 |
4,783 | 84,831 |
INTERNAL ASSERT FAILED for _jit_pass_vulkan_optimize_for_mobile (Google Colab)
|
oncall: jit, oncall: mobile, module: vulkan
|
### 🐛 Describe the bug
Running into an issue with the mobile_optimizer when using Vulkan as the backend.
```
elif backend == 'vulkan':
---> 67 optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(script_module._c, preserved_methods_str)
```
Error Message:
```
RuntimeError: 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1662793958506/work/torch/csrc/jit/ir/alias_analysis.cpp":608, please report a bug to PyTorch. We don't have an op for vulkan_prepack::create_conv2d_context but it isn't a special case. Argument types: Tensor, Tensor, int[], int[], int[], int, NoneType, NoneType,
```
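A minimal sketch of the call path that hits this assert (assumes a scripted conv module; it mirrors the `_jit_pass_vulkan_optimize_for_mobile` call in the traceback):
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

scripted = torch.jit.script(torch.nn.Conv2d(3, 8, kernel_size=3))
vulkan_module = optimize_for_mobile(scripted, backend="vulkan")  # raises the INTERNAL ASSERT above
```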
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220910
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] pytorch-lightning==1.6.0.dev0
[pip3] torch==1.13.0.dev20220910
[pip3] torchmetrics==0.9.3
[pip3] torchtext==0.14.0.dev20220910
[pip3] torchvision==0.14.0.dev20220910
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37he7a7128_2
[conda] numpy-base 1.21.5 py37hf524024_2
[conda] pytorch 1.13.0.dev20220910 py3.7_cuda11.3_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-lightning 1.6.0.dev0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchtext 0.14.0.dev20220910 py37 pytorch-nightly
[conda] torchvision 0.14.0.dev20220910 py37_cu113 pytorch-nightly
| 3 |
4,784 | 84,813 |
MPS: allow selecting specific MTLDevice by registryID via environment variable
|
triaged, enhancement, module: mps
|
### 🚀 The feature, motivation and pitch
Hello :wave: _very_ new to pytorch and running it on both M1 macs and on a mac with a couple Radeons.
I would like to be able to use more than one of the Radeons at times (in different processes), or to select one specifically so I can use the other GPU for other tasks while I work on other things, etc.
I think this should (hopefully?) be pretty trivial -- I'm not familiar enough with the build system and intricacies of conda/pip etc to have run a build just yet, but I have a draft PR on my fork here:
https://github.com/jaeschliman/pytorch/pull/1
Open to any changes, pointers, etc.
Thanks!
### Alternatives
At first I considered selecting by peerGroup and peerIndex, but Apple's docs state that registryID is stable across processes, so it looks like the correct choice.
### Additional context
https://developer.apple.com/documentation/metal/mtldevice/2915737-registryid?language=objc
cc @kulinseth @albanD
| 2 |
4,785 | 84,805 |
compiling failed from source
|
module: sparse, module: build, module: cuda, triaged
|
### π Describe the bug
I want to install PyTorch from source. When I began to compile, the build process stopped as follows:

### Versions
in conda:
cudatoolkit 11.2.72
magma-cuda112 2.5.2
system:
cuda 11.2
driver 460.32.03
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer @malfet @seemethere @ngimel
| 2 |
4,786 | 84,782 |
macOS Pyinstaller: libc++abi: terminating with uncaught exception of type c10::Error: Type c10::intrusive_ptr<ConvPackedParamsBase<2>> could not be converted to any of the known types
|
module: cpp-extensions, triaged, module: third_party, needs research, module: m1
|
### π Describe the bug
I'm trying to package a python app using pyinstaller.
Torch: 1.12.1
Pyinstaller: 5.3
OS: macOS 12.5 (M1)
[UPDATE] This appears to be a problem with `torchvision`. If I remove the dependency, no crash.
When I try to run my UNIX executable, I get the following:
```
[41110] WARNING: file already exists but should not: /var/folders/9l/0lk2nvvs7j39q893l1k0qvcc0000gn/T/_MEI9VPQ38/pyarrow/lib.cpython-310-darwin.so
[41110] WARNING: file already exists but should not: /var/folders/9l/0lk2nvvs7j39q893l1k0qvcc0000gn/T/_MEI9VPQ38/torch/_C.cpython-310-darwin.so
[41110] WARNING: file already exists but should not: /var/folders/9l/0lk2nvvs7j39q893l1k0qvcc0000gn/T/_MEI9VPQ38/torch/_C_flatbuffer.cpython-310-darwin.so
[41110] WARNING: file already exists but should not: /var/folders/9l/0lk2nvvs7j39q893l1k0qvcc0000gn/T/_MEI9VPQ38/torch/_dl.cpython-310-darwin.so
libc++abi: terminating with uncaught exception of type c10::Error: Type c10::intrusive_ptr<ConvPackedParamsBase<2>> could not be converted to any of the known types.
Exception raised from operator() at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/jit_type.h:1735 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 81 (0x18a356951 in libc10.dylib)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 98 (0x18a355022 in libc10.dylib)
frame #2: c10::detail::getTypePtr_<c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > >::call()::'lambda'()::operator()() const + 276 (0x1d894caa4 in libtorch_cpu.dylib)
frame #3: c10::Type::SingletonOrSharedTypePtr<c10::Type> c10::getTypePtrCopy<c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > >() + 27 (0x1d894c82b in libtorch_cpu.dylib)
frame #4: c10::detail::infer_schema::(anonymous namespace)::createArgumentVector(c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 210 (0x1d82fb0a2 in libtorch_cpu.dylib)
frame #5: c10::detail::infer_schema::make_function_schema(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 86 (0x1d82fae56 in libtorch_cpu.dylib)
frame #6: c10::detail::infer_schema::make_function_schema(c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 67 (0x1d82fb893 in libtorch_cpu.dylib)
frame #7: std::__1::unique_ptr<c10::FunctionSchema, std::__1::default_delete<c10::FunctionSchema> > c10::detail::inferFunctionSchemaFromFunctor<at::Tensor (*)(at::Tensor, c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&, double, long long)>() + 85 (0x1d89e5945 in libtorch_cpu.dylib)
frame #8: torch::CppFunction::CppFunction<at::Tensor (at::Tensor, c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&, double, long long)>(at::Tensor (*)(at::Tensor, c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&, double, long long), std::__1::enable_if<c10::guts::is_function_type<at::Tensor (at::Tensor, c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&, double, long long)>::value, std::nullptr_t>::type) + 122 (0x1d89e586a in libtorch_cpu.dylib)
frame #9: at::native::(anonymous namespace)::TORCH_LIBRARY_IMPL_init_quantized_QuantizedCPU_4(torch::Library&) + 35 (0x1d89e3b93 in libtorch_cpu.dylib)
frame #10: torch::detail::TorchLibraryInit::TorchLibraryInit(torch::Library::Kind, void (*)(torch::Library&), char const*, c10::optional<c10::DispatchKey>, char const*, unsigned int) + 203 (0x1d81a762b in libtorch_cpu.dylib)
frame #11: _GLOBAL__sub_I_qconv.cpp + 178 (0x1d89e8cb2 in libtorch_cpu.dylib)
frame #12: invocation function for block in dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 182 (0x220f2ee4f in dyld)
frame #13: invocation function for block in dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 129 (0x220f55911 in dyld)
frame #14: invocation function for block in dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 557 (0x220f4ce26 in dyld)
frame #15: dyld3::MachOFile::forEachLoadCommand(Diagnostics&, void (load_command const*, bool&) block_pointer) const + 129 (0x220f1bdb3 in dyld)
frame #16: dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 179 (0x220f4cbb7 in dyld)
frame #17: dyld3::MachOAnalyzer::forEachInitializerPointerSection(Diagnostics&, void (unsigned int, unsigned int, unsigned char const*, bool&) block_pointer) const + 118 (0x220f55342 in dyld)
frame #18: dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 386 (0x220f555b4 in dyld)
frame #19: dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 144 (0x220f2ed82 in dyld)
frame #20: dyld4::Loader::runInitializersBottomUp(dyld4::RuntimeState&, dyld3::Array<dyld4::Loader const*>&) const + 178 (0x220f2ef0e in dyld)
frame #21: dyld4::Loader::runInitializersBottomUp(dyld4::RuntimeState&, dyld3::Array<dyld4::Loader const*>&) const + 149 (0x220f2eef1 in dyld)
frame #22: dyld4::Loader::runInitializersBottomUp(dyld4::RuntimeState&, dyld3::Array<dyld4::Loader const*>&) const + 149 (0x220f2eef1 in dyld)
frame #23: dyld4::Loader::runInitializersBottomUpPlusUpwardLinks(dyld4::RuntimeState&) const + 108 (0x220f2efb2 in dyld)
frame #24: dyld4::APIs::dlopen_from(char const*, int, void*) + 592 (0x220f3de00 in dyld)
frame #25: py_dl_open + 135 (0x1457cef77 in _ctypes.cpython-310-darwin.so)
<omitting python frames>
```
### Versions
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] pytorch-lightning==1.4.2
[pip3] torch==1.12.1
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==0.12.1
[pip3] torchmetrics==0.6.0
[pip3] torchvision==0.13.1
[conda] numpy 1.23.1 pypi_0 pypi
[conda] pytorch-lightning 1.4.2 pypi_0 pypi
[conda] torch 1.13.0.dev20220907 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchmetrics 0.6.0 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220907 pypi_0 pypi
cc @malfet @zou3519
| 6 |
4,787 | 84,751 |
Allow passing dict (as opposed to OrderedDict) to nn.Sequential
|
module: nn, triaged, needs research
|
Feature request: `torch.nn.Sequential` now accepts `collection.OrderedDict` as a way of naming the modules it contains. However, since vanilla `dict` is ordered in python>=3.7 and pytorch currently targets python>=3.7, passing a `dict` should be accepted (ideally, any map str->nn.Module). Currently this throws an error (nn.Sequential insists on collections.OrderedDict).
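A minimal sketch of the request (not from the original report): the OrderedDict form works today, while the plain-dict form currently raises an error.
```python
import torch.nn as nn
from collections import OrderedDict

# Works today: OrderedDict keys become the submodule names.
named = nn.Sequential(OrderedDict([
    ("conv", nn.Conv2d(3, 16, 3)),
    ("relu", nn.ReLU()),
]))

# Requested: since plain dicts preserve insertion order on Python >= 3.7,
# this should be accepted too, but it currently raises a TypeError.
plain = nn.Sequential({
    "conv": nn.Conv2d(3, 16, 3),
    "relu": nn.ReLU(),
})
```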
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 15 |
4,788 | 93,666 |
Use graph partitioner to remove ops that can be captured with cudagraphs in TorchInductor
|
triaged, oncall: pt2
|
Currently, in TorchInductor cudagraphs usage is all-or-nothing. This test decides whether we use it:
https://github.com/pytorch/torchdynamo/blob/116ae7f868fc46c197cb08094f3117525d1b9de8/torchinductor/compile_fx.py#L63-L68
This could lead to some performance cliffs where adding a single bad op to your graph flips off cudagraphs and hurts performance for unrelated ops.
We should try partitioning the graph to allow using cudagraphs on fragments where it is supported. Many of the huggingface models use CPU tensors, which forces cudagraphs off. It is possible we could see some speedups there.
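A rough sketch of the idea, assuming a hypothetical `is_cudagraph_safe(node)` predicate; this is not TorchInductor's actual code, only an illustration of grouping an FX graph into cudagraph-safe and unsafe segments:
```python
def split_cudagraph_segments(gm, is_cudagraph_safe):
    """Group an FX graph's nodes into runs of cudagraph-safe / unsafe nodes,
    so only the unsafe fragments fall back instead of disabling cudagraphs
    for the whole graph."""
    segments = []
    current, current_safe = [], None
    for node in gm.graph.nodes:
        safe = is_cudagraph_safe(node)
        if current_safe is None or safe == current_safe:
            current.append(node)
        else:
            segments.append((current_safe, current))
            current = [node]
        current_safe = safe
    if current:
        segments.append((current_safe, current))
    return segments
```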
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,789 | 84,721 |
PyTorch-DirectML RFC
|
feature, triaged
|
### π The feature, motivation and pitch
We propose to add a DirectML backend as an out-of-tree plugin extension for PyTorch. The DirectML backend will provide hardware acceleration across all DirectX12 compatible GPUs for Windows and the Windows Subsystem for Linux. The RFC PR can be found here: https://github.com/pytorch/rfcs/pull/46.
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
4,790 | 84,717 |
Test aten decompositions match their alias information
|
triaged, better-engineering, module: primTorch
|
Came up in discussion with @samdow and @Chillee.
Given an aten operator that has some schema, its decomposition should match the schema's alias information. For example, an out-of-place operation like at::sin must not mutate its input or return a Tensor whose storage aliases the input's.
We have decompositions of ATen operators somewhere in the codebase. We also have decomposition OpInfo tests. So this issue is just to add additional checks that an operator's decomposition is faithful to its schema.
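One possible shape of such a check, sketched here with a crude storage-pointer alias test rather than the real OpInfo machinery; `check_decomp_respects_alias_info` is a hypothetical helper:
```python
import torch

def check_decomp_respects_alias_info(decomp, *args):
    """Run a decomposition on cloned inputs and verify it neither mutated them
    nor returned a tensor sharing storage with them (an approximation of the
    schema's no-alias / no-mutation annotation for out-of-place ops)."""
    clones = [a.clone() if isinstance(a, torch.Tensor) else a for a in args]
    out = decomp(*clones)
    for orig, clone in zip(args, clones):
        if isinstance(orig, torch.Tensor):
            assert torch.equal(orig, clone), "decomposition mutated an input"
    outs = out if isinstance(out, (tuple, list)) else (out,)
    for o in outs:
        if isinstance(o, torch.Tensor):
            assert not any(
                isinstance(c, torch.Tensor)
                and o.storage().data_ptr() == c.storage().data_ptr()
                for c in clones
            ), "decomposition returned a tensor aliasing an input"

# e.g. check_decomp_respects_alias_info(torch.sin, torch.randn(3))
```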
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
| 4 |
4,791 | 84,716 |
[ONNX] Speed up unit tests
|
module: onnx, triaged, better-engineering
|
Reduce runtime for tests in onnx.
## Profile
```
Program: pytest test/onnx/test_pytorch_onnx_onnxruntime.py -k version_17
789.755 <module> <string>:1
[2773 frames hidden] <string>, runpy, pytest, _pytest, plu...
778.248 __call__ unittest/case.py:650
ββ 778.248 run torch/testing/_internal/common_utils.py:2034
ββ 778.242 _run_with_retry torch/testing/_internal/common_utils.py:1961
ββ 778.242 run unittest/case.py:558
[57 frames hidden] unittest, contextlib, <built-in>, _py...
776.002 _callTestMethod unittest/case.py:549
ββ 303.927 instantiated_test torch/testing/_internal/common_utils.py:247
β ββ 295.628 wrapper pytorch_test_common.py:112
β ββ 295.627 wrapper pytorch_test_common.py:51
β ββ 295.627 test_rnn test_pytorch_onnx_onnxruntime.py:12500
β ββ 295.627 _dispatch_rnn_test test_pytorch_onnx_onnxruntime.py:9173
β ββ 109.771 _lstm_test test_pytorch_onnx_onnxruntime.py:9282
β β ββ 109.450 run_test onnx_test_common.py:82
β β ββ 109.450 _run_test onnx_test_common.py:99
β β ββ 109.450 run_model_test onnx_test_common.py:36
β β ββ 108.890 verify torch/onnx/verification.py:605
β β ββ 103.088 _export torch/onnx/utils.py:1380
β β ββ 102.224 _model_to_graph torch/onnx/utils.py:1040
β β ββ 76.864 _optimize_graph torch/onnx/utils.py:541
β β β ββ 38.422 _run_symbolic_function torch/onnx/utils.py:1730
β β β β ββ 34.782 lstm torch/onnx/symbolic_opset9.py:4466
β β β β ββ 34.776 wrapper torch/onnx/symbolic_helper.py:301
β β β β ββ 23.189 _lstm_packed torch/onnx/symbolic_opset9.py:4434
β β β β β ββ 23.126 _generic_rnn torch/onnx/symbolic_opset9.py:4171
β β β β β ββ 20.277 transform_weights torch/onnx/symbolic_opset9.py:4286
β β β β β ββ 17.787 <genexpr> torch/onnx/symbolic_opset9.py:4292
β β β β β ββ 17.779 reform_weights torch/onnx/symbolic_opset9.py:4265
β β β β β ββ 17.590 <listcomp> torch/onnx/symbolic_opset9.py:4267
β β β β β ββ 17.533 _slice_helper torch/onnx/symbolic_helper.py:704
β β β β β ββ 17.297 _slice torch/onnx/symbolic_opset10.py:336
β β β β β ββ 14.634 _graph_op torch/onnx/_patch_torch.py:19
β β β β β ββ 10.123 PyCapsule._jit_pass_onnx_node_shape_type_inference <built-in>:0
β β β β β [2 frames hidden] <built-in>
β β β β ββ 11.568 _lstm_full torch/onnx/symbolic_opset9.py:4402
β β β β ββ 11.548 _generic_rnn torch/onnx/symbolic_opset9.py:4171
β β β β ββ 10.117 transform_weights torch/onnx/symbolic_opset9.py:4286
β β β β ββ 8.870 <genexpr> torch/onnx/symbolic_opset9.py:4292
β β β β ββ 8.867 reform_weights torch/onnx/symbolic_opset9.py:4265
β β β β ββ 8.769 <listcomp> torch/onnx/symbolic_opset9.py:4267
β β β β ββ 8.742 _slice_helper torch/onnx/symbolic_helper.py:704
β β β β ββ 8.633 _slice torch/onnx/symbolic_opset10.py:336
β β β ββ 26.808 PyCapsule._jit_pass_onnx_graph_shape_type_inference <built-in>:0
β β β [2 frames hidden] <built-in>
β β ββ 18.103 _create_jit_graph torch/onnx/utils.py:916
β β ββ 18.047 _trace_and_get_graph_from_model torch/onnx/utils.py:848
β β ββ 18.015 _get_trace_graph torch/jit/_trace.py:1138
β β ββ 18.005 _call_impl torch/nn/modules/module.py:1184
β β ββ 18.005 forward torch/jit/_trace.py:94
β β ββ 16.789 wrapper torch/jit/_trace.py:104
β β ββ 12.148 _call_impl torch/nn/modules/module.py:1184
β β ββ 12.147 _slow_forward torch/nn/modules/module.py:1164
β ββ 98.451 _gru_test test_pytorch_onnx_onnxruntime.py:9350
β β ββ 98.128 run_test onnx_test_common.py:82
β β ββ 98.128 _run_test onnx_test_common.py:99
β β ββ 98.128 run_model_test onnx_test_common.py:36
β β ββ 97.712 verify torch/onnx/verification.py:605
β β ββ 92.431 _export torch/onnx/utils.py:1380
β β ββ 91.787 _model_to_graph torch/onnx/utils.py:1040
β β ββ 71.784 _optimize_graph torch/onnx/utils.py:541
β β β ββ 36.852 _run_symbolic_function torch/onnx/utils.py:1730
β β β β ββ 33.353 symbolic torch/onnx/symbolic_opset9.py:4564
β β β β ββ 33.350 wrapper torch/onnx/symbolic_helper.py:301
β β β β ββ 22.248 _rnn_packed torch/onnx/symbolic_opset9.py:4536
β β β β β ββ 22.202 _generic_rnn torch/onnx/symbolic_opset9.py:4171
β β β β β ββ 20.135 transform_weights torch/onnx/symbolic_opset9.py:4286
β β β β β ββ 17.671 <genexpr> torch/onnx/symbolic_opset9.py:4292
β β β β β ββ 17.669 reform_weights torch/onnx/symbolic_opset9.py:4265
β β β β β ββ 17.510 <listcomp> torch/onnx/symbolic_opset9.py:4267
β β β β β ββ 17.477 _slice_helper torch/onnx/symbolic_helper.py:704
β β β β β ββ 17.342 _slice torch/onnx/symbolic_opset10.py:336
β β β β β ββ 15.157 _graph_op torch/onnx/_patch_torch.py:19
β β β β β ββ 10.545 PyCapsule._jit_pass_onnx_node_shape_type_inference <built-in>:0
β β β β β [2 frames hidden] <built-in>
β β β β ββ 11.101 _rnn_full torch/onnx/symbolic_opset9.py:4507
β β β β ββ 11.081 _generic_rnn torch/onnx/symbolic_opset9.py:4171
β β β β ββ 10.036 transform_weights torch/onnx/symbolic_opset9.py:4286
β β β β ββ 8.806 <genexpr> torch/onnx/symbolic_opset9.py:4292
β β β β ββ 8.806 reform_weights torch/onnx/symbolic_opset9.py:4265
β β β β ββ 8.737 <listcomp> torch/onnx/symbolic_opset9.py:4267
β β β β ββ 8.712 _slice_helper torch/onnx/symbolic_helper.py:704
β β β β ββ 8.634 _slice torch/onnx/symbolic_opset10.py:336
β β β ββ 24.573 PyCapsule._jit_pass_onnx_graph_shape_type_inference <built-in>:0
β β β [2 frames hidden] <built-in>
β β ββ 14.916 _create_jit_graph torch/onnx/utils.py:916
β β ββ 14.862 _trace_and_get_graph_from_model torch/onnx/utils.py:848
β β ββ 14.817 _get_trace_graph torch/jit/_trace.py:1138
β β ββ 14.811 _call_impl torch/nn/modules/module.py:1184
β β ββ 14.811 forward torch/jit/_trace.py:94
β β ββ 13.683 wrapper torch/jit/_trace.py:104
β β ββ 9.260 _call_impl torch/nn/modules/module.py:1184
β β ββ 9.260 _slow_forward torch/nn/modules/module.py:1164
β ββ 87.405 _elman_rnn_test test_pytorch_onnx_onnxruntime.py:9181
β ββ 86.774 run_test onnx_test_common.py:82
β ββ 86.774 _run_test onnx_test_common.py:99
β ββ 86.773 run_model_test onnx_test_common.py:36
β ββ 85.647 verify torch/onnx/verification.py:605
β ββ 78.367 _export torch/onnx/utils.py:1380
β ββ 77.105 _model_to_graph torch/onnx/utils.py:1040
β ββ 42.625 _optimize_graph torch/onnx/utils.py:541
β β ββ 20.128 _run_symbolic_function torch/onnx/utils.py:1730
β β β ββ 13.245 symbolic torch/onnx/symbolic_opset9.py:4564
β β β ββ 13.240 wrapper torch/onnx/symbolic_helper.py:301
β β β ββ 8.815 _rnn_packed torch/onnx/symbolic_opset9.py:4536
β β β ββ 8.725 _generic_rnn torch/onnx/symbolic_opset9.py:4171
β β ββ 12.593 PyCapsule._jit_pass_onnx_graph_shape_type_inference <built-in>:0
β β [2 frames hidden] <built-in>
β ββ 26.437 _create_jit_graph torch/onnx/utils.py:916
β ββ 26.333 _trace_and_get_graph_from_model torch/onnx/utils.py:848
β ββ 26.261 _get_trace_graph torch/jit/_trace.py:1138
β ββ 26.248 _call_impl torch/nn/modules/module.py:1184
β ββ 26.246 forward torch/jit/_trace.py:94
β ββ 24.035 wrapper torch/jit/_trace.py:104
β ββ 15.227 _call_impl torch/nn/modules/module.py:1184
β β ββ 15.226 _slow_forward torch/nn/modules/module.py:1164
β ββ 8.795 <genexpr> torch/jit/_trace.py:114
ββ 271.394 wrapper pytorch_test_common.py:51
β ββ 29.974 test_index_put_loop test_pytorch_onnx_onnxruntime.py:2351
β β ββ 29.930 run_test onnx_test_common.py:82
β β ββ 29.913 _run_test onnx_test_common.py:99
β β ββ 29.913 run_model_test onnx_test_common.py:36
β β ββ 29.888 verify torch/onnx/verification.py:605
β β ββ 24.322 _export torch/onnx/utils.py:1380
β β ββ 23.918 _model_to_graph torch/onnx/utils.py:1040
β β ββ 14.552 _optimize_graph torch/onnx/utils.py:541
β β ββ 9.139 _run_symbolic_function torch/onnx/utils.py:1730
β β ββ 9.027 prim_loop torch/onnx/symbolic_opset9.py:6298
β ββ 10.137 wrapper pytorch_test_common.py:112
β ββ 8.031 test_index_put_slice_index test_pytorch_onnx_onnxruntime.py:2235
β β ββ 8.031 run_test onnx_test_common.py:82
β ββ 7.961 test_instancenorm3d_runningstats test_pytorch_onnx_onnxruntime.py:3771
ββ 11.971 test_avgpool_3d_ceil test_pytorch_onnx_onnxruntime.py:1420
β ββ 9.299 _VariableFunctionsClass.randn <built-in>:0
β [2 frames hidden] <built-in>
ββ 10.681 wrapper pytorch_test_common.py:112
```
| 3 |
4,792 | 93,665 |
Explore TBB for TorchInductor C++ backend
|
triaged, oncall: pt2, module: cpu inductor
|
Currently, the TorchInductor C++ backend uses OpenMP, which has a fairly high warmup cost to launch threads, especially for small kernels. We have also seen our 1-thread performance give bigger speedups over eager than our N-thread performance.
[Thread Building Blocks](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onetbb.html) might deliver better performance here, so we should explore it as an alternative.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 13 |
4,793 | 84,711 |
Add documentation about backward graph gc behavior
|
module: docs, triaged
|
### π The doc issue
At least add documentation about
> pyobject traversal only travels one-hop into the backward graph past the tensor, so appending that extra node to the graph prevents gc from clearing the cycle.
_Originally posted by @soulitzer in https://github.com/pytorch/pytorch/issues/84413#issuecomment-1234817185_
And add a link to this in the `checkpoint` documentation.
### Suggest a potential alternative/fix
_No response_
cc @svekars @holly1238
| 0 |
4,794 | 84,710 |
About the different ways to print models
|
module: printing, triaged
|
### π Describe the bug
Hi. I seem to have found a bug in how models are printed.
```
import torchvision.models as models
from torchsummary import summary  # assumed: the torchsummary package, which provides summary(model, input_size)

vgg = models.vgg16(pretrained=True)
net = vgg.features[0:1]
print(net)

net[0].out_channels = 32
# Note: the bug is here. The printed repr now shows
# Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
# which it should not, because the underlying weights are unchanged.
print(net)

# The parameters still have 64 output channels:
# torch.Size([64, 3, 3, 3]) torch.Size([64])
for parameters in net.parameters():
    print(parameters.shape)

summary(net, (3, 112, 112))
```
### Versions
```
Collecting environment information...
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.10
Python version: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-142-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.48.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] info-nce-pytorch==0.1.4
[pip3] numpy==1.20.3
[pip3] numpy-quaternion==2022.4.2
[pip3] torch==1.9.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.10.0
[pip3] torchvision==0.10.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] info-nce-pytorch 0.1.4 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.20.3 pypi_0 pypi
[conda] numpy-quaternion 2022.4.2 pypi_0 pypi
[conda] pytorch 1.9.0 py3.7_cuda11.1_cudnn8.0.5_0 pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.10.0 py37 pytorch
[conda] torchvision 0.10.0 py37_cu111 pytorch
```
| 0 |
4,795 | 84,705 |
Set dtype if tensor converted to numpy
|
needs reproduction, triaged, module: numpy
|
### π The feature, motivation and pitch
Currently I have tensors which I need to convert to numpy and then subtract. If I subtract them in torch, I get a vector with non-zero entries, but if I subtract the converted tensors in numpy, I get a zero tensor.
Very likely this is caused by precision loss in the conversion.
So I would like to be able to set the dtype when I call tensor.numpy(), which might solve the issue.
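For context, a minimal sketch (not from the report) of the explicit casts that a `dtype=` argument to `.numpy()` would shortcut:
```python
import numpy as np
import torch

t = torch.rand(3, dtype=torch.float32)

# Today .numpy() keeps the tensor's dtype; a different-precision copy has to be
# requested explicitly, either on the torch side or on the numpy side:
as_f64 = t.double().numpy()                # cast the tensor first
as_f64_alt = t.numpy().astype(np.float64)  # or cast the numpy array afterwards
```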
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @rgommers
| 1 |
4,796 | 84,697 |
NotImplementedError: The operator aten::native_group_norm_backward
|
triaged, module: mps
|
### π Describe the bug
While running [this](https://huggingface.co/blog/annotated-diffusion) notebook on PyTorch 1.13.0 on an Apple M1 chip, I ran into the error in the title on the following line of code: `loss.backward()`.
Here's the full error log:
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In [27], line 20
17 if step % 100 == 0:
18 print("Loss:", loss.item())
---> 20 loss.backward()
21 optimizer.step()
23 # save generated images
File /opt/homebrew/Caskroom/miniforge/base/envs/torch/lib/python3.9/site-packages/torch/_tensor.py:484, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
474 if has_torch_function_unary(self):
475 return handle_torch_function(
476 Tensor.backward,
477 (self,),
(...)
482 inputs=inputs,
483 )
--> 484 torch.autograd.backward(
485 self, gradient, retain_graph, create_graph, inputs=inputs
486 )
File /opt/homebrew/Caskroom/miniforge/base/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py:191, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
186 retain_graph = create_graph
188 # The reason we repeat same the comment below is that
189 # some Python versions print out the first line of a multi-line function
190 # calls in the traceback and some print out the last line
--> 191 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
192 tensors, grad_tensors_, retain_graph, create_graph, inputs,
193 allow_unreachable=True, accumulate_grad=True)
NotImplementedError: The operator 'aten::native_group_norm_backward' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```
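A minimal sketch of the temporary workaround suggested by the error message itself (the variable must be set before torch is imported, and the op then runs on CPU, i.e. slower):
```python
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # must be set before importing torch

import torch  # unsupported MPS ops such as this backward kernel now fall back to CPU
```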
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220908
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.24.1
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:01:00) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==1.13.0.dev20220908
[pip3] torchaudio==0.13.0.dev20220908
[pip3] torchvision==0.14.0.dev20220908
[conda] numpy 1.23.2 pypi_0 pypi
[conda] torch 1.13.0.dev20220908 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220908 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220908 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
4,797 | 93,662 |
Dynamo eager with sparse tensors gives wrong numeric results
|
triaged, oncall: pt2
|
Steps to reproduce:
1. Patch in https://github.com/pytorch/pytorch/pull/84641 (I think this fixes enough bugs that dynamo successfully captures some sparse code; you can compare with a run on master to confirm this)
2. Run `PYTORCH_TEST_WITH_DYNAMO=1 python test/test_sparse.py -k test_softmax_cpu_float64`
The unit test fails due to tensor difference.
Here is the info level log.
<details>
```
/data/users/ezyang/pytorch-tmp/torch/backends/cudnn/__init__.py:91: UserWarning: PyTorch was compiled without cuDNN/MIOpen support. To use cuDNN/MIOpen, rebuild PyTorch making sure the library is visible to the build system.
warnings.warn(
/data/users/ezyang/torchdynamo/torchdynamo/eval_frame.py:106: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
logging.warn(
WARNING:root:Using TorchDynamo with a context manager will be deprecated soon.Please read https://github.com/pytorch/torchdynamo#usage-example to use TorchDynamo using an annotation.
torchdynamo.convert_frame: [ERROR] WON'T CONVERT setUp /data/users/ezyang/pytorch-tmp/test/test_sparse.py line 97
due to:
Traceback (most recent call last):
File "/data/users/ezyang/torchdynamo/torchdynamo/variables/user_defined.py", line 77, in call_method
return super().call_method(tx, args, kwargs)
TypeError: call_method() missing 1 required positional argument: 'kwargs'
from user code:
File "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 98, in setUp
TestCase.setUp(self)
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.symbolic_convert: [WARNING] Graph break: inline in skipfiles: genSparseTensor /data/users/ezyang/pytorch-tmp/torch/testing/_internal/common_utils.py from user code at File "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 116, in _gen_sparse
x, i, v = self.genSparseTensor(with_size, sparse_dim, nnz, not coalesced, dtype=dtype, device=device)
torchdynamo.convert_frame: [INFO] ORIGINAL BYTECODE test_op /data/users/ezyang/pytorch-tmp/test/test_sparse.py line 3337
3338 0 LOAD_GLOBAL 0 (isinstance)
2 LOAD_FAST 2 (with_size)
4 LOAD_GLOBAL 1 (Number)
6 CALL_FUNCTION 2
8 POP_JUMP_IF_FALSE 20
3339 10 LOAD_FAST 2 (with_size)
12 BUILD_LIST 1
14 LOAD_FAST 0 (sparse_dims)
16 BINARY_MULTIPLY
18 STORE_FAST 2 (with_size)
3341 >> 20 LOAD_DEREF 2 (self)
22 LOAD_METHOD 2 (_gen_sparse)
24 LOAD_FAST 0 (sparse_dims)
26 LOAD_FAST 1 (nnz)
28 LOAD_FAST 2 (with_size)
30 LOAD_DEREF 1 (dtype)
32 LOAD_DEREF 0 (device)
34 LOAD_FAST 3 (coalesced)
36 CALL_METHOD 6
38 UNPACK_SEQUENCE 3
40 STORE_FAST 4 (x)
42 STORE_FAST 5 (i)
44 STORE_FAST 6 (v)
3343 46 LOAD_CONST 1 (<code object sparse_log at 0x7fda3338cbe0, file "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 3343>)
48 LOAD_CONST 2 ('TestSparse.test_softmax.<locals>.test_op.<locals>.sparse_log')
50 MAKE_FUNCTION 0
52 STORE_FAST 7 (sparse_log)
3347 54 LOAD_GLOBAL 3 (range)
56 LOAD_FAST 4 (x)
58 LOAD_METHOD 4 (sparse_dim)
60 CALL_METHOD 0
62 LOAD_FAST 4 (x)
64 LOAD_METHOD 5 (dense_dim)
66 CALL_METHOD 0
68 BINARY_ADD
70 CALL_FUNCTION 1
72 GET_ITER
>> 74 EXTENDED_ARG 1
76 FOR_ITER 462 (to 540)
78 STORE_FAST 8 (dim)
3351 80 LOAD_DEREF 6 (sparse_softmax)
82 LOAD_FAST 4 (x)
84 LOAD_FAST 8 (dim)
86 CALL_FUNCTION 2
88 STORE_FAST 9 (y)
3352 90 LOAD_DEREF 5 (softmax_to_dense)
92 LOAD_FAST 4 (x)
94 LOAD_FAST 8 (dim)
96 CALL_FUNCTION 2
98 STORE_FAST 10 (r1)
3353 100 LOAD_FAST 9 (y)
102 LOAD_METHOD 6 (to_dense)
104 CALL_METHOD 0
106 STORE_FAST 11 (r2)
3354 108 LOAD_GLOBAL 7 (print)
110 LOAD_FAST 10 (r1)
112 CALL_FUNCTION 1
114 POP_TOP
3355 116 LOAD_GLOBAL 7 (print)
118 LOAD_FAST 11 (r2)
120 CALL_FUNCTION 1
122 POP_TOP
3356 124 LOAD_DEREF 2 (self)
126 LOAD_METHOD 8 (assertEqual)
128 LOAD_FAST 10 (r1)
130 LOAD_FAST 11 (r2)
132 CALL_METHOD 2
134 POP_TOP
3359 136 LOAD_GLOBAL 9 (torch)
138 LOAD_ATTR 10 (sparse)
140 LOAD_METHOD 11 (softmax)
142 LOAD_FAST 4 (x)
144 LOAD_FAST 8 (dim)
146 CALL_METHOD 2
148 STORE_FAST 12 (y1)
3360 150 LOAD_DEREF 2 (self)
152 LOAD_METHOD 8 (assertEqual)
154 LOAD_FAST 9 (y)
156 LOAD_FAST 12 (y1)
158 CALL_METHOD 2
160 POP_TOP
3363 162 LOAD_GLOBAL 9 (torch)
164 LOAD_ATTR 10 (sparse)
166 LOAD_METHOD 12 (log_softmax)
168 LOAD_FAST 4 (x)
170 LOAD_FAST 8 (dim)
172 CALL_METHOD 2
174 STORE_FAST 13 (ly1)
3364 176 LOAD_DEREF 2 (self)
178 LOAD_METHOD 8 (assertEqual)
180 LOAD_FAST 13 (ly1)
182 LOAD_FAST 7 (sparse_log)
184 LOAD_FAST 12 (y1)
186 CALL_FUNCTION 1
188 CALL_METHOD 2
190 POP_TOP
3369 192 LOAD_DEREF 7 (to_dense)
194 LOAD_FAST 4 (x)
196 LOAD_GLOBAL 13 (float)
198 LOAD_CONST 3 ('-inf')
200 CALL_FUNCTION 1
202 LOAD_CONST 4 (('fill_value',))
204 CALL_FUNCTION_KW 2
206 STORE_FAST 14 (x1)
3370 208 LOAD_DEREF 3 (softmax_jacobian_analytic)
210 LOAD_FAST 14 (x1)
212 LOAD_FAST 8 (dim)
214 CALL_FUNCTION 2
216 STORE_FAST 15 (J)
3371 218 LOAD_FAST 15 (J)
220 LOAD_ATTR 14 (shape)
222 LOAD_CONST 5 (0)
224 BINARY_SUBSCR
226 LOAD_FAST 4 (x)
228 LOAD_ATTR 14 (shape)
230 LOAD_FAST 8 (dim)
232 BINARY_SUBSCR
234 COMPARE_OP 2 (==)
236 POP_JUMP_IF_TRUE 242
238 LOAD_ASSERTION_ERROR
240 RAISE_VARARGS 1
3372 >> 242 LOAD_FAST 15 (J)
244 LOAD_ATTR 14 (shape)
246 LOAD_FAST 8 (dim)
248 LOAD_CONST 6 (1)
250 BINARY_ADD
252 BINARY_SUBSCR
254 LOAD_FAST 4 (x)
256 LOAD_ATTR 14 (shape)
258 LOAD_FAST 8 (dim)
260 BINARY_SUBSCR
262 COMPARE_OP 2 (==)
264 EXTENDED_ARG 1
266 POP_JUMP_IF_TRUE 272
268 LOAD_ASSERTION_ERROR
270 RAISE_VARARGS 1
3375 >> 272 LOAD_DEREF 4 (softmax_jacobian_autograd)
274 LOAD_FAST 14 (x1)
276 LOAD_FAST 8 (dim)
278 CALL_FUNCTION 2
280 STORE_FAST 16 (J2)
3376 282 LOAD_DEREF 2 (self)
284 LOAD_METHOD 8 (assertEqual)
286 LOAD_FAST 15 (J)
288 LOAD_FAST 16 (J2)
290 CALL_METHOD 2
292 POP_TOP
3379 294 LOAD_DEREF 4 (softmax_jacobian_autograd)
296 LOAD_FAST 4 (x)
298 LOAD_FAST 8 (dim)
300 CALL_FUNCTION 2
302 STORE_FAST 17 (J3)
3380 304 LOAD_DEREF 2 (self)
306 LOAD_METHOD 8 (assertEqual)
308 LOAD_FAST 15 (J)
310 LOAD_FAST 17 (J3)
312 CALL_METHOD 2
314 POP_TOP
3390 316 LOAD_DEREF 4 (softmax_jacobian_autograd)
318 LOAD_FAST 14 (x1)
320 LOAD_FAST 8 (dim)
322 LOAD_CONST 7 (True)
324 LOAD_CONST 8 (('log',))
326 CALL_FUNCTION_KW 3
328 STORE_FAST 18 (J2_log)
3393 330 LOAD_DEREF 4 (softmax_jacobian_autograd)
332 LOAD_FAST 4 (x)
334 LOAD_FAST 8 (dim)
336 LOAD_CONST 7 (True)
338 LOAD_CONST 8 (('log',))
340 CALL_FUNCTION_KW 3
342 STORE_FAST 19 (J3_log)
3395 344 LOAD_FAST 15 (J)
346 LOAD_METHOD 15 (transpose)
348 LOAD_CONST 5 (0)
350 LOAD_FAST 8 (dim)
352 LOAD_CONST 6 (1)
354 BINARY_ADD
356 CALL_METHOD 2
358 STORE_FAST 15 (J)
3396 360 LOAD_FAST 18 (J2_log)
362 LOAD_METHOD 15 (transpose)
364 LOAD_CONST 5 (0)
366 LOAD_FAST 8 (dim)
368 LOAD_CONST 6 (1)
370 BINARY_ADD
372 CALL_METHOD 2
374 STORE_FAST 18 (J2_log)
3397 376 LOAD_FAST 19 (J3_log)
378 LOAD_METHOD 15 (transpose)
380 LOAD_CONST 5 (0)
382 LOAD_FAST 8 (dim)
384 LOAD_CONST 6 (1)
386 BINARY_ADD
388 CALL_METHOD 2
390 STORE_FAST 19 (J3_log)
3398 392 LOAD_DEREF 2 (self)
394 LOAD_METHOD 8 (assertEqual)
396 LOAD_FAST 15 (J)
398 LOAD_FAST 18 (J2_log)
400 LOAD_FAST 10 (r1)
402 BINARY_MULTIPLY
404 CALL_METHOD 2
406 POP_TOP
3399 408 LOAD_DEREF 2 (self)
410 LOAD_METHOD 8 (assertEqual)
412 LOAD_FAST 15 (J)
414 LOAD_FAST 19 (J3_log)
416 LOAD_FAST 10 (r1)
418 BINARY_MULTIPLY
420 CALL_METHOD 2
422 POP_TOP
3401 424 LOAD_FAST 8 (dim)
426 LOAD_CONST 5 (0)
428 COMPARE_OP 2 (==)
430 POP_JUMP_IF_FALSE 74
3403 432 LOAD_GLOBAL 9 (torch)
434 LOAD_ATTR 16 (float32)
436 STORE_FAST 20 (other_dtype)
3404 438 LOAD_GLOBAL 9 (torch)
440 LOAD_ATTR 10 (sparse)
442 LOAD_ATTR 11 (softmax)
444 LOAD_FAST 4 (x)
446 LOAD_FAST 8 (dim)
448 LOAD_FAST 20 (other_dtype)
450 LOAD_CONST 9 (('dtype',))
452 CALL_FUNCTION_KW 3
454 STORE_FAST 21 (y2)
3405 456 LOAD_DEREF 2 (self)
458 LOAD_METHOD 8 (assertEqual)
460 LOAD_FAST 21 (y2)
462 LOAD_ATTR 17 (dtype)
464 LOAD_FAST 20 (other_dtype)
466 CALL_METHOD 2
468 POP_TOP
3406 470 LOAD_DEREF 2 (self)
472 LOAD_METHOD 8 (assertEqual)
474 LOAD_FAST 21 (y2)
476 LOAD_FAST 12 (y1)
478 LOAD_METHOD 18 (type)
480 LOAD_FAST 20 (other_dtype)
482 CALL_METHOD 1
484 CALL_METHOD 2
486 POP_TOP
3408 488 LOAD_GLOBAL 9 (torch)
490 LOAD_ATTR 10 (sparse)
492 LOAD_ATTR 12 (log_softmax)
494 LOAD_FAST 4 (x)
496 LOAD_FAST 8 (dim)
498 LOAD_FAST 20 (other_dtype)
500 LOAD_CONST 9 (('dtype',))
502 CALL_FUNCTION_KW 3
504 STORE_FAST 22 (ly2)
3409 506 LOAD_DEREF 2 (self)
508 LOAD_METHOD 8 (assertEqual)
510 LOAD_FAST 22 (ly2)
512 LOAD_ATTR 17 (dtype)
514 LOAD_FAST 20 (other_dtype)
516 CALL_METHOD 2
518 POP_TOP
3410 520 LOAD_DEREF 2 (self)
522 LOAD_METHOD 8 (assertEqual)
524 LOAD_FAST 22 (ly2)
526 LOAD_FAST 13 (ly1)
528 LOAD_METHOD 18 (type)
530 LOAD_FAST 20 (other_dtype)
532 CALL_METHOD 1
534 CALL_METHOD 2
536 POP_TOP
538 JUMP_ABSOLUTE 74
>> 540 LOAD_CONST 0 (None)
542 RETURN_VALUE
torchdynamo.convert_frame: [INFO] MODIFIED BYTECODE test_op /data/users/ezyang/pytorch-tmp/test/test_sparse.py line 3337
3337 0 LOAD_DEREF 2 (self)
2 LOAD_ATTR 2 (_gen_sparse)
4 LOAD_FAST 0 (sparse_dims)
6 LOAD_FAST 1 (nnz)
8 LOAD_FAST 2 (with_size)
10 LOAD_DEREF 1 (dtype)
12 LOAD_DEREF 0 (device)
14 LOAD_FAST 3 (coalesced)
16 CALL_FUNCTION 6
18 LOAD_CLOSURE 0 (device)
20 LOAD_CLOSURE 1 (dtype)
22 LOAD_CLOSURE 2 (self)
24 LOAD_CLOSURE 3 (softmax_jacobian_analytic)
26 LOAD_CLOSURE 4 (softmax_jacobian_autograd)
28 LOAD_CLOSURE 5 (softmax_to_dense)
30 LOAD_CLOSURE 6 (sparse_softmax)
32 LOAD_CLOSURE 7 (to_dense)
34 BUILD_TUPLE 8
36 LOAD_CONST 10 (<code object <graph break in test_op> at 0x7fd8309aef50, file "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 3341>)
38 LOAD_CONST 11 ('__resume_at_38_0')
40 MAKE_FUNCTION 8 (closure)
42 ROT_TWO
44 CALL_FUNCTION 1
46 RETURN_VALUE
torchdynamo.convert_frame: [INFO] GUARDS:
-
local 'nnz' CONSTANT_MATCH"
{
'guard_types': ['EQUALS_MATCH'],
'code': ['___check_type_id(nnz, 94813225457600)', 'nnz == 10'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413d1180; to 'type' at 0x563b6c9f97c0 (int)>
}
-
local 'self' TYPE_MATCH"
{
'guard_types': ['TYPE_MATCH'],
'code': ['___check_type_id(self, 94814090822400)'],
'obj_weakref': <weakref at 0x7fd83087cc20; to 'TestSparseCPU' at 0x7fd831093880>
'guarded_class': <weakref at 0x7fd831508900; to 'type' at 0x563ba0340300 (TestSparseCPU)>
}
-
local 'dtype' CONSTANT_MATCH"
{
'guard_types': ['EQUALS_MATCH'],
'code': ["str(dtype) == 'torch.float64'"],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda32ff3270; to 'type' at 0x7fd9f88413a0 (dtype)>
}
-
local 'device' CONSTANT_MATCH"
{
'guard_types': ['EQUALS_MATCH'],
'code': ['___check_type_id(device, 94813225422688)', "device == 'cpu'"],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413e4e00; to 'type' at 0x563b6c9f0f60 (str)>
}
-
local 'coalesced' CONSTANT_MATCH"
{
'guard_types': ['ID_MATCH'],
'code': ['___check_obj_id(coalesced, 94813225495552)'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413d1450; to 'type' at 0x563b6ca02a40 (bool)>
}
-
local 'with_size' EQUALS_MATCH"
{
'guard_types': ['LIST_LENGTH', 'EQUALS_MATCH'],
'code': ['___check_type_id(with_size, 94813225459872)', 'len(with_size) == 1', '___check_type_id(with_size[0], 94813225457600)', 'with_size == [3]'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413d7d60; to 'type' at 0x563b6c9fa0a0 (list)>
}
-
local 'sparse_dims' CONSTANT_MATCH"
{
'guard_types': ['EQUALS_MATCH'],
'code': ['___check_type_id(sparse_dims, 94813225457600)', 'sparse_dims == 1'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413d1180; to 'type' at 0x563b6c9f97c0 (int)>
}
-
local 'with_size[0]' CONSTANT_MATCH"
{
'guard_types': ['EQUALS_MATCH'],
'code': ['___check_type_id(with_size[0], 94813225457600)', 'with_size[0] == 3'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413d1180; to 'type' at 0x563b6c9f97c0 (int)>
}
-
global 'Number' FUNCTION_MATCH"
{
'guard_types': None,
'code': None,
'obj_weakref': None
'guarded_class': None
}
-
global 'isinstance' BUILTIN_MATCH"
{
'guard_types': None,
'code': None,
'obj_weakref': None
'guarded_class': None
}
torchdynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in test_op> /data/users/ezyang/pytorch-tmp/test/test_sparse.py line 3341
due to:
Traceback (most recent call last):
File "/data/users/ezyang/torchdynamo/torchdynamo/variables/tensor.py", line 204, in create
torch.distributed.get_rank,
AttributeError: module 'torch.distributed' has no attribute 'get_rank'
from user code:
File "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 3347, in <graph break in test_op>
for dim in range(x.sparse_dim() + x.dense_dim()):
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.convert_frame: [ERROR] WON'T CONVERT sparse_softmax /data/users/ezyang/pytorch-tmp/test/test_sparse.py line 3172
due to:
Traceback (most recent call last):
File "/data/users/ezyang/torchdynamo/torchdynamo/variables/tensor.py", line 204, in create
torch.distributed.get_rank,
AttributeError: module 'torch.distributed' has no attribute 'get_rank'
from user code:
File "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 3187, in sparse_softmax
if dim < sparse.sparse_dim():
Set torchdynamo.config.verbose=True for more information
==========
torchdynamo.output_graph: [INFO] TRACED GRAPH
__compiled_fn_1 <eval_with_key>.1 opcode name target args kwargs
------------- -------- ------------------------------------------------------- ------------------ ------------------------------------------------------
placeholder sparse sparse () {}
call_method coalesce coalesce (sparse,) {}
call_function full <built-in method full of type object at 0x7fd9f88450e0> ((3,), -inf) {'dtype': torch.float64, 'device': device(type='cpu')}
call_method _indices _indices (coalesce,) {}
call_method t t (_indices,) {}
call_method _values _values (coalesce,) {}
call_function softmax <function softmax at 0x7fd9337c11f0> (full, 0) {}
call_function ne <built-in function ne> (softmax, softmax) {}
call_function setitem <built-in function setitem> (softmax, ne, 0) {}
output output output ((softmax,),) {}
torchdynamo.convert_frame: [INFO] ORIGINAL BYTECODE softmax_to_dense /data/users/ezyang/pytorch-tmp/test/test_sparse.py line 3147
3164 0 LOAD_FAST 0 (sparse)
2 LOAD_ATTR 0 (dtype)
4 STORE_FAST 2 (dtype)
3165 6 LOAD_FAST 0 (sparse)
8 LOAD_ATTR 1 (device)
10 STORE_FAST 3 (device)
3166 12 LOAD_DEREF 1 (to_dense)
14 LOAD_FAST 0 (sparse)
16 LOAD_GLOBAL 2 (float)
18 LOAD_CONST 1 ('inf')
20 CALL_FUNCTION 1
22 UNARY_NEGATIVE
24 LOAD_CONST 2 (('fill_value',))
26 CALL_FUNCTION_KW 2
28 STORE_FAST 4 (dense)
3167 30 LOAD_DEREF 0 (F)
32 LOAD_METHOD 3 (softmax)
34 LOAD_FAST 4 (dense)
36 LOAD_FAST 1 (dim)
38 CALL_METHOD 2
40 STORE_FAST 5 (r)
3169 42 LOAD_CONST 3 (0)
44 LOAD_FAST 5 (r)
46 LOAD_FAST 5 (r)
48 LOAD_FAST 5 (r)
50 COMPARE_OP 3 (!=)
52 STORE_SUBSCR
3170 54 LOAD_FAST 5 (r)
56 RETURN_VALUE
torchdynamo.convert_frame: [INFO] MODIFIED BYTECODE softmax_to_dense /data/users/ezyang/pytorch-tmp/test/test_sparse.py line 3147
3147 0 LOAD_GLOBAL 4 (__compiled_fn_1)
2 LOAD_FAST 0 (sparse)
4 CALL_FUNCTION 1
6 UNPACK_SEQUENCE 1
8 RETURN_VALUE
torchdynamo.convert_frame: [INFO] GUARDS:
-
local 'F' FUNCTION_MATCH"
{
'guard_types': ['ID_MATCH'],
'code': ['___check_obj_id(F, 140570849544208)'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda4138ca90; to 'type' at 0x563b6c9f6e60 (module)>
}
-
local 'dim' CONSTANT_MATCH"
{
'guard_types': ['EQUALS_MATCH'],
'code': ['___check_type_id(dim, 94813225457600)', 'dim == 0'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413d1180; to 'type' at 0x563b6c9f97c0 (int)>
}
-
local 'sparse' TENSOR_MATCH"
{
'guard_types': ['TENSOR_MATCH'],
'code': None,
'obj_weakref': <weakref at 0x7fd83087d7c0; to 'Tensor' at 0x7fd83087ce00>
'guarded_class': <weakref at 0x7fd9d63e2180; to 'torch._C._TensorMeta' at 0x563b706941b0 (Tensor)>
}
-
local 'to_dense' FUNCTION_MATCH"
{
'guard_types': ['ID_MATCH'],
'code': ['___check_obj_id(to_dense, 140566504178304)'],
'obj_weakref': None
'guarded_class': <weakref at 0x7fda413f04a0; to 'type' at 0x563b6c9d5c00 (function)>
}
-
global 'zip' BUILTIN_MATCH"
{
'guard_types': None,
'code': None,
'obj_weakref': None
'guarded_class': None
}
-
global 'float' BUILTIN_MATCH"
{
'guard_types': None,
'code': None,
'obj_weakref': None
'guarded_class': None
}
-
global 'torch' FUNCTION_MATCH"
{
'guard_types': None,
'code': None,
'obj_weakref': None
'guarded_class': None
}
F
======================================================================
FAIL: test_softmax_cpu_float64 (__main__.TestSparseCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/users/ezyang/pytorch-tmp/torch/testing/_internal/common_device_type.py", line 378, in instantiated_test
result = test(self, **param_kwargs)
File "/data/users/ezyang/pytorch-tmp/torch/testing/_internal/common_utils.py", line 3276, in wrapped
f(self, *args, **kwargs, coalesced=True)
File "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 3412, in test_softmax
test_op(1, 10, [3], coalesced)
File "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 3337, in test_op
def test_op(sparse_dims, nnz, with_size, coalesced):
File "/data/users/ezyang/pytorch-tmp/test/test_sparse.py", line 3356, in <graph break in test_op>
self.assertEqual(r1, r2)
File "/data/users/ezyang/pytorch-tmp/torch/testing/_internal/common_utils.py", line 2411, in assertEqual
assert_equal(
File "/data/users/ezyang/pytorch-tmp/torch/testing/_comparison.py", line 1093, in assert_equal
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 3 / 3 (100.0%)
Greatest absolute difference: 0.6566471149084403 at index (0,) (up to 1e-07 allowed)
Greatest relative difference: 1.0 at index (0,) (up to 1e-07 allowed)
----------------------------------------------------------------------
Ran 1 test in 0.384s
FAILED (failures=1)
tensor([0., 0., 0.], dtype=torch.float64)
tensor([0.6566, 0.1139, 0.2295], dtype=torch.float64)
```
</details>
In particular, notice there is a sparse tensor in the guard set:
```
-
local 'sparse' TENSOR_MATCH"
{
'guard_types': ['TENSOR_MATCH'],
'code': None,
'obj_weakref': <weakref at 0x7f730555f360; to 'Tensor' at 0x7f73055d2b80>
'guarded_class': <weakref at 0x7f74aa5913b0; to 'torch._C._TensorMeta' at 0x5627ea2b4af0 (Tensor)>
}
```
My suspicion is that it has to do with sparse tensor guards. I checked the TensorGuard logic, and there are no sparse-specific checks as far as I can tell. But I couldn't figure out how to easily tell whether a guard was matching or not; the debug level doesn't say.
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,798 | 84,692 |
Error when trying to export MONAI model to ONNX
|
module: onnx, triaged, onnx-needs-info
|
### π Describe the bug
When trying to export the SwinUNETR model from [MONAI](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/nets/swin_unetr.py), I get the error:
```
RuntimeError: Failed to export an ONNX attribute 'onnx::Gather', since it's not constant, please try to make things (e.g., kernel size) static if possible.
```
In a different issue, I read that this might get fixed by changing `x_shape = x.size()` to `x_shape = [int(s) for s in x.size()]` in the problematic code -- I found out that the problem manifests at `proj_out()`. Doing this, though, results in a different error:
```
RuntimeError: Unsupported: ONNX export of instance_norm for unknown channel size.
```
Making this change in all places where I find `x_shape = x.size()` results in a floating point exception!
Here is a minimal example demonstrating the issue:
```python
from monai.networks.nets import SwinUNETR
import torch
if __name__ == '__main__':
model = SwinUNETR(img_size=(96, 96, 96),
in_channels=1,
out_channels=5,
feature_size=48,
drop_rate=0.0,
attn_drop_rate=0.0,
dropout_path_rate=0.0,
use_checkpoint=True,
)
inputs = [torch.randn([1,1,96,96,96])]
input_names = ['input']
output_names = ['output']
torch.onnx.export(
model,
tuple(inputs), 'model.onnx',
verbose=False,
input_names=input_names,
output_names=output_names,
dynamic_axes=None,
opset_version=11,
)
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1.post200
Is debug build: False
CUDA used to build PyTorch: 11.2
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 3.20.4
Libc version: glibc-2.17
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:36:39) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
GPU 2: Tesla T4
GPU 3: Tesla T4
GPU 4: Tesla T4
Nvidia driver version: 510.39.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==1.12.1.post200
[conda] cudatoolkit 11.7.0 hd8887f6_10
[conda] magma 2.5.4 h6103c52_2
[conda] mkl 2022.1.0 h84fe81f_915
[conda] numpy 1.23.2 py310h53a5b5f_0
[conda] pytorch 1.12.1 cuda112py310h51fe464_200
| 4 |
4,799 | 84,691 |
test_public_bindings is not robust to various build options
|
triaged, topic: build
|
After building pytorch with USE_DISTRIBUTED=0, test_public_bindings fails locally with:
```
======================================================================
ERROR: test_correct_module_names (__main__.TestPublicBindings)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test/test_public_bindings.py", line 370, in test_correct_module_names
for _, modname, ispkg in pkgutil.walk_packages(path=torch.__path__, prefix=torch.__name__ + '.'):
File "/raid/rzou/pt/workspace-env/lib/python3.7/pkgutil.py", line 107, in walk_packages
yield from walk_packages(path, info.name+'.', onerror)
File "/raid/rzou/pt/workspace-env/lib/python3.7/pkgutil.py", line 107, in walk_packages
yield from walk_packages(path, info.name+'.', onerror)
File "/raid/rzou/pt/workspace-env/lib/python3.7/pkgutil.py", line 92, in walk_packages
__import__(info.name)
File "/raid/rzou/pt/workspace/torch/distributed/algorithms/_comm_hooks/__init__.py", line 2, in <module>
from . import default_hooks as default
File "/raid/rzou/pt/workspace/torch/distributed/algorithms/_comm_hooks/default_hooks.py", line 6, in <module>
class DefaultState(object):
File "/raid/rzou/pt/workspace/torch/distributed/algorithms/_comm_hooks/default_hooks.py", line 24, in DefaultState
process_group: dist.ProcessGroup
AttributeError: module 'torch.distributed' has no attribute 'ProcessGroup'
----------------------------------------------------------------------
```
It would be nice for test_public_bindings to ignore distributed if it wasn't built.
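One possible shape of the fix, sketched under the assumption that guarding on `torch.distributed.is_available()` is acceptable; the loop body is paraphrased from the test, not copied:
```python
import pkgutil
import torch
import torch.distributed as dist

def onerror(modname):
    # Ignore import failures under torch.distributed when PyTorch was built
    # with USE_DISTRIBUTED=0 (dist.is_available() is False in that case).
    if modname.startswith("torch.distributed") and not dist.is_available():
        return
    raise RuntimeError(f"failed to import {modname}")

for _, modname, ispkg in pkgutil.walk_packages(
    path=torch.__path__, prefix=torch.__name__ + ".", onerror=onerror
):
    ...
```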
| 0 |
4,800 | 84,685 |
[TensorExpr] applying `rfactor` for a Mul Reducer with init value different than 1 results in wrong results
|
oncall: jit
|
### π Describe the bug
`rfactor` initializes `rfac_init` with the initializer of the original reduce op.
If the original Reducer uses Mul as the ReduceInteraction with an init value different from 1, the result after `rfactor` will be wrong (the same issue occurs for an Add reducer with an init value different from 0).
https://github.com/pytorch/pytorch/blob/8bd9fe3f493073bf8f4a2e428c3048096fb36052/torch/csrc/jit/tensorexpr/loopnest.cpp#L3380-L3381
### CPP UT to reproduce:
The tensor of size `2 x 8` has values all set to 1.
The reduce axis is the last dim (where `size == 8`).
The Reducer has the init value = 2. The ReduceInteraction is Mul.
The expected result will be `tensor([ 2, 2])`.
With `rfactor`, the result becomes `tensor([ 4, 4])`.
```cpp
TEST(Reductions, ReduceCustomProductWithRfactor) {
const int M = 2;
const int N = 8;
BufHandle b("b", {M, N}, kFloat);
std::vector<float> in(M * N);
for (const auto i : c10::irange(M)) {
for (const auto j : c10::irange(N)) {
in[i * N + j] = 1;
}
}
std::vector<float> out(M, -1.f);
Reducer product(
ExprHandle(2.f), [](ExprHandle a, ExprHandle b) { return a * b; });
Tensor c = Reduce("product", {M}, product, b, {N});
LoopNest nest({c});
// rfactor
auto loops = nest.getLoopStmtsFor(c);
ForPtr mi, mo, tail;
BufPtr rf;
constexpr int kChunkSize = 8;
nest.splitWithTail(loops[1], kChunkSize, &mi, &tail);
TORCH_CHECK(nest.rfactor(nest.getLoopBodyFor(c), loops[1], &rf));
nest.prepareForCodegen();
StmtPtr s = nest.root_stmt();
s = IRSimplifier::simplify(s);
std::cout << "final stmt: \n" << *s << "\n";
SimpleIREvaluator cg(s, {b, c});
cg.call({in, out});
float expected = 2;
for (const auto i : c10::irange(N)) {
// NOLINTNEXTLINE(bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions)
expected *= 1;
}
for (const auto i : c10::irange(M)) {
ASSERT_EQ(out[i], expected);
}
}
```
#### Output log:
```bash
Expected equality of these values:
out[i]
Which is: 4
expected
Which is: 2
[ FAILED ] Reductions.ReduceCustomProductWithRfactor (1 ms)
[----------] 1 test from Reductions (1 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (1 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] Reductions.ReduceCustomProductWithRfactor
1 FAILED TEST
```
### Stmt without and with `rfactor`:
The **correct** final stmt without `rfactor`:
```
{
for (int i = 0; i < 2; i++) {
product[i] = 2.f;
for (int i_1 = 0; i_1 < 8; i_1++) {
product[i] = (product[i]) * (b[i_1 + 8 * i]);
}
}
}
```
The **wrong** final stmt with `rfactor`:
```
{
for (int i = 0; i < 2; i++) {
product[i] = 2.f;
product_rfac[i] = 2.f;
for (int i_inner = 0; i_inner < 8; i_inner++) {
product_rfac[i] = (product_rfac[i]) * (b[i_inner + 8 * i]);
}
product[i] = (product[i]) * (product_rfac[i]);
}
}
```
### Versions
```
PyTorch version: 1.13.0a0+git8bd9fe3
```
| 0 |