Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---|
601 | 110,361 |
BatchNorm layer 'num_batches_tracked' overwritten with default value when loading empty state_dict
|
module: nn, triaged
|
### 🐛 Describe the bug
BatchNorm layer 'num_batches_tracked' is overwritten with its default value when loading an empty state_dict.
This should probably be fixed in /torch/nn/modules/batchnorm.py, lines 103-108, by checking whether a value already exists before injecting the default.
```python
from collections import OrderedDict
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights
sd_key = 'features.0.1.num_batches_tracked'
model = mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V1)
print(f"before loading: {model.state_dict()[sd_key]}")
# Loading an empty state_dict with strict=False should be a no-op for existing buffers
empty_dict = OrderedDict()
model.load_state_dict(empty_dict, strict=False)
print(f"after loading: {model.state_dict()[sd_key]}")
```
expected output:
```
before loading: 736839
after loading: 736839
```
actual output:
```
before loading: 736839
after loading: 0
```
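A minimal sketch of the kind of guard the report suggests, shown as a standalone subclass so it runs on its own (the real fix would live in `_NormBase._load_from_state_dict` in `torch/nn/modules/batchnorm.py`; this is my approximation, not the merged patch):
```python
import torch
import torch.nn as nn

class PatchedBatchNorm2d(nn.BatchNorm2d):
    # Assumed fix: keep the module's existing counter instead of injecting a
    # fresh 0 when the key is absent from the incoming state_dict.
    def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
                              missing_keys, unexpected_keys, error_msgs):
        key = prefix + "num_batches_tracked"
        if key not in state_dict and self.num_batches_tracked is not None:
            state_dict[key] = self.num_batches_tracked.clone()
        super()._load_from_state_dict(state_dict, prefix, local_metadata, strict,
                                      missing_keys, unexpected_keys, error_msgs)

bn = PatchedBatchNorm2d(8)
bn.num_batches_tracked.fill_(736839)
bn.load_state_dict({}, strict=False)
print(bn.num_batches_tracked)  # tensor(736839) instead of tensor(0)
```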
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.10 (main, Jan 15 2022, 11:48:04) [Clang 13.0.0 (clang-1300.0.29.3)] (64-bit runtime)
Python platform: macOS-13.4-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.0.1
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.15.2
[pip3] torchviz==0.0.2
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
602 | 110,356 |
Dropout signature inconsistent between `torch.dropout`, `torch.nn.Dropout` and `torch.nn.functional.dropout`
|
module: nn, triaged, module: python frontend
|
### 🐛 Describe the bug
Both `torch.nn.Dropout` and `torch.nn.functional.dropout` default to `p=0.5` and `train=True`. However, `torch.dropout` requires them to be set manually.
```python
>>> import torch
>>> x = torch.randn(10, 20)
>>> torch.nn.Dropout()(x)
...
>>> torch.nn.functional.dropout(x)
...
>>> torch.dropout(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: dropout() missing 2 required positional argument: "p", "train"
```
Other functions are affected similarly:
- `torch.nn.functional.alpha_dropout`
- `torch.nn.functional.feature_alpha_dropout`
For consistency, we should also set the defaults for these functions in the `torch.*` module.
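For reference, the `torch.*` variant works today only with both arguments spelled out; a minimal sketch of the current behaviour and the proposed default:
```python
import torch

x = torch.randn(10, 20)
# Works today: p and train must be passed explicitly.
y = torch.dropout(x, p=0.5, train=True)
# Proposal: torch.dropout(x) would behave like torch.nn.functional.dropout(x),
# i.e. default to p=0.5, train=True.
```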
### Versions
Tested on `torch==2.0.1` and `torch==2.2.0.dev20230912+cu121`
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
603 | 110,354 |
[dynamo] Added support for type for generated custom object
|
triaged, open source, module: dynamo, ciflow/inductor
|
Fixes #109056
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ezyang for the issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 7 |
604 | 110,353 |
feat(inductor): Improve compilation speed and add `SGD` Optimizer back to Inductor
|
triaged, open source, topic: not user facing, module: inductor, module: dynamo, ciflow/inductor
|
Originally disabled in https://github.com/pytorch/pytorch/pull/105438
However, compilation times are plenty speedy to me:

It may have benefited from perf changes to the scheduler, such as cycle-detection optimizations.
CC @mlazos as original code author
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 8 |
605 | 110,351 |
feat(inductor): Add `RAdam` to Inductor by converting data-dependent control-flow to `torch.where`
|
triaged, open source, module: inductor, module: dynamo, ciflow/inductor, release notes: optim
|
For small epochs (adaptive learning rate = 1), this will be more costly than not computing `rect` and `adaptive_lr`, but likely not by much, since they are all fused into a single kernel anyway.
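For illustration, a sketch of the general pattern this PR relies on (names like `rect` and `adaptive_lr` follow the RAdam update, but the values here are made up and the exact kernel code may differ):
```python
import torch

rho_t = torch.tensor([4.0, 6.0])          # per-step term deciding rectification
rect = torch.tensor([0.9, 0.9])           # rectification factor (illustrative)
adaptive_lr = torch.tensor([0.01, 0.02])  # adaptive lr term (illustrative)
grad_term = torch.tensor([1.0, 1.0])

# Data-dependent branch (if rho_t > 5: use rect * adaptive_lr) rewritten as a
# branch-free torch.where, so both sides are computed but everything can fuse.
update = torch.where(rho_t > 5.0, adaptive_lr * rect * grad_term, grad_term)
print(update)  # tensor([1.0000, 0.0180])
```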
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 3 |
606 | 110,347 |
ONNX export: TransformerEncoder is exported with fixed input dims
|
module: onnx, triaged
|
## Issue description
When exporting a module that contains `torch.nn.TransformerEncoder` to ONNX, the time dimension is fixed to the one used during export, and the model fails to handle arbitrary sequence lengths.
## Code example
Given the following module:
```python
from torch import nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(15, 96)
        enc_layer = nn.TransformerEncoderLayer(
            96,
            nhead=12,
            dim_feedforward=96,
            dropout=0.2,
            batch_first=True
        )
        self.attn_layers = nn.TransformerEncoder(
            enc_layer,
            num_layers=10,
            enable_nested_tensor=True
        )

    def forward(self, x):
        x = self.emb(x)
        return self.attn_layers(x)
```
Exported as follows:
```python
import torch

model = Model()
torch.onnx.export(
    model,
    args=(torch.randint(0, 15, (1, 20)),),
    f="model.onnx",
    opset_version=16,
    export_params=True,
    do_constant_folding=True,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={
        "input": {0: "batch_size", 1: "time"},
        "output": {0: "batch_size", 1: "time"},
    }
)
```
and run using `onnxruntime` as follows:
```python
import numpy as np
import onnxruntime
sess = onnxruntime.InferenceSession("model.onnx")
input = np.expand_dims(
    np.array([1, 2, 3], dtype=np.int64),
    0
)
output = sess.run(None, {"input": input})
```
I get the following error:
```python
Traceback (most recent call last):
File "D:\lab\issue_with_enc.py", line 49, in <module>
output = sess.run(None, {"input": input})
File "D:\lab\.env\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 217, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'/attn_layers/layers.0/self_attn/Reshape_4' Status Message: C:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{3,1,96}, requested shape:{20,12,8}
```
## System Info
PyTorch version: 2.2.0.dev20230930+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: 15.0.2
CMake version: version 3.26.3
Libc version: N/A
Python version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce MX350
Nvidia driver version: 528.49
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2918
DeviceID=CPU0
Family=198
L2CacheSize=5120
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2918
Name=11th Gen Intel(R) Core(TM) i7-1195G7 @ 2.90GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.6
[pip3] torch==2.2.0.dev20230930+cpu
[pip3] torchmetrics==1.0.1
[conda] Could not collect
| 1 |
607 | 110,345 |
feat(inductor): Improve `Adamax` to be better fused by Inductor and enable it
|
triaged, open source, module: inductor, module: dynamo, ciflow/inductor, release notes: optim
|
In order to avoid running into https://github.com/pytorch/pytorch/issues/110342, we replace the complicated `cat + max` logic with a simple `torch.maximum` (see also [here](https://github.com/facebookresearch/fairseq/blob/7409af7f9a7b6ddac4cbfe7cafccc715b3c1b21e/fairseq/optim/adamax.py#L151) in the fairseq repo - the same operation with a different, less explicit syntax).
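For illustration, a minimal sketch of the two equivalent formulations (tensor values are made up; the real code operates on the optimizer state):
```python
import torch

exp_inf = torch.tensor([0.5, 2.0, 1.0])   # running infinity norm (illustrative)
grad_abs = torch.tensor([1.5, 0.1, 1.0])  # |grad| term (illustrative)

# cat/stack + max: materializes a stacked buffer, then reduces along dim 0.
out_cat = torch.max(torch.stack([exp_inf, grad_abs]), dim=0).values
# Element-wise torch.maximum: same result, fuses more readily in Inductor.
out_max = torch.maximum(exp_inf, grad_abs)

assert torch.equal(out_cat, out_max)
```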
Like https://github.com/pytorch/pytorch/pull/110339, we also ensure that `step` is on device.
As noted in https://github.com/pytorch/pytorch/issues/107006#issuecomment-1741841283, it is likely that `num_kernels=2` is optimal, unless one can move the `numel=1, shape=(0,)` scalar computations entirely onto the CPU.
That being said, in the foreach case, a parallel foreach scalar computation on the GPU doesn't seem too bad.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
608 | 110,342 |
[inductor]: Not handling `ConcatKernel/NopKernel` fusions leads to suboptimal fusions
|
oncall: pt2
|
### 🚀 The feature, motivation and pitch
Instead, perhaps try to refactor these nodes as `ComputedBuffer`.
Examples: Adamax
Already bad with `config.aggressive_fusion = True`:

Worse when `config.aggressive_fusion = False`:

With `config.aggressive_fusion = False`, these kernels are treated as not sharing reads/writes.
Further, the presence of these `ConcatKernel` nodes seems to obstruct fusion of other kernels, despite later being partly pruned away during kernel codegen.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
609 | 110,340 |
[dynamo] Add support for itertools.repeat
|
triaged, open source, topic: not user facing, module: dynamo, ciflow/inductor
|
Implemented a `call_repeat` function in `builtin.py` to handle `itertools.repeat` within the Dynamo context. This addresses issue #110286, where support for `itertools.repeat` was requested.
Fixes #110286
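A minimal usage sketch of what this enables (my own example, assuming a build that includes this change; not taken from the PR's tests):
```python
import itertools
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    # Bounded itertools.repeat consumed inside the compiled region.
    for bias in itertools.repeat(2.0, 3):
        x = x + bias
    return x

print(fn(torch.ones(4)))  # tensor([7., 7., 7., 7.])
```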
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 3 |
610 | 110,339 |
feat(optimizer): `Adagrad` will use `device` when `capturable` - True always when compiling with dynamo
|
triaged, open source, module: inductor, release notes: optim
|
Partial fix: https://github.com/pytorch/pytorch/issues/107006
CC: @mlazos as issue creator
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 3 |
611 | 110,338 |
[cpu] explicitly vectorize trigamma & polygamma
|
module: cpu, open source, topic: not user facing
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 3 |
612 | 110,336 |
fix(inductor): `ForeachKernelSchedulerNode` group shape should be opaque for graph debug
|
triaged, open source, release notes: fx, module: inductor, ciflow/inductor
|
~~Shape is assumed by `TensorMetadata` to be torch.Shape/tuple, however, some of the scheduler node groups utilize `int`, so convert to tuple.~~
The root cause is actually the `foreach` scheduler node having a silently wrong group of `int`, when in fact it ought to be the opaque `foreach` group.
**Previously:** silent error / confusing shape of (0,)

**Now:** clear that it is foreach which does not have well-defined shape:

~~An alternative might be to create a list of shapes for each of its subnodes. Actually, for debuggability's sake, I may prefer this. We can ensure that the recursive generation of this string is only done dynamically in a debug code path. Otherwise, incrementally computing it on initialization of ForeachKernel may also be feasible.~~ This is quite infeasible for 100s of params.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
613 | 110,334 |
Perf-Drop (factor=2) Ubuntu-vs-Windows on same PC (dual-boot)
|
module: windows, module: cuda, triaged
|
### 🐛 Describe the bug
On my dual-boot PC (Ubuntu + Windows), the same Torch code (same folder on an NTFS drive) shows a factor-2 runtime slowdown on Windows.
I disabled writing Parquet files and turned shuffling off to exclude filesystem or other issues. I do not use the compile feature, afaik.
Both: torch.device("cuda"), torch.get_default_dtype=torch.float32, torch.cuda.is_available()=True
python -m cProfile -o test.cprof mnistBenchmark.py # (.cprof attached for both systems)
Overall runtime: Windows=30s, Ubuntu=15s, Ubuntu+cProfile=23s (cProfile adds some Ubuntu-only slowdown, but that is not the issue here).
It would take some hours to clean the code of confidential details, so I don't want to upload it publicly at the moment, but I could send the Python files over a non-public channel. I think the issue is of a general nature, as it happens with textbook MNIST examples and affects all solvers (at least Adam and my own new solver in the same way).
### Versions
Hardware: PC, 32GB-DDR5, Intel i5-12400, RTX-4060-GPU, fast M2-PCIe4-SSD.
Common-Software: Python 3.11.5, CUDA-12.2-Update2, all System-Updates.
Linux: Ubuntu 22.04, NVIDIA-Linux-x86_64-535.113.01
Windows 10, 537.32 (but the last 2 versions had the same issue).
Simple net config: MNIST, 7960 parameters, float32, ReLU, Shuffle=Off, WriteToParquet=Off, 60000/256=235 steps x 10 epochs, many optimizers (e.g. Adam).
pip list # Windows 10 + Python 3.11.5 + Cuda-12.2
numpy 1.25.2
torch 2.2.0.dev20230929+cu121 # use dev for Cuda on Windows
// no nvidia- modules
pip list # Ubuntu 22.04 + Python 3.11.5 + Cuda-12.2
numpy 1.26.0
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
torch 2.0.1
##################################
Collecting environment information...
PyTorch version: 2.2.0.dev20230929+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060
Nvidia driver version: 537.42
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2500
DeviceID=CPU0
Family=205
L2CacheSize=7680
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2500
Name=12th Gen Intel(R) Core(TM) i5-12400
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.2.0.dev20230929+cu121
[pip3] torchaudio==2.2.0.dev20230929+cu121
[pip3] torchvideo==0.0.0
[pip3] torchvision==0.17.0.dev20230929+cu121
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @ptrblck
| 7 |
614 | 110,332 |
`torch.jit.load()` might become unresponsive on IBM s390x when loading certain TorchScript files saved by an x86 machine.
|
oncall: jit
|
### 🐛 Describe the bug
## Steps to reproduce
1. Load a `resnet50` model on an **x86 machine** and save it as a TorchScript file.
```python
# load modules
import torch
import numpy as np
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True).to('cuda')
model.eval()
# Try to export the model to torchscript
import torch.jit
jit_script = torch.jit.script(model)
jit_script.save('resnet50.pt')
```
2. Copy the exported TorchScript file `resnet50.pt` to a LinuxONE (IBM s390x) machine, then run the following code to load the model:
```python
import torch
import torch.jit
from torch.serialization import LoadEndianness
torch.serialization.set_default_load_endianness(LoadEndianness.LITTLE)
jit_script = torch.jit.load('resnet50.pt', map_location='cpu')
```
3. The loading process never completes, and if you inspect the OS with the `top` command, you will see that `python` keeps allocating memory until the system kills the process.

4. By contrast, if I complete Step 1 on an s390x machine, everything works as expected.
## Other information
I believe this error may be related to the `byteorder` used when loading the TorchScript file, as discussed in issues #101688 and #101973. Although this problem should have been resolved in version 2.1.0, there might still be some bugs when loading a TorchScript file, which could be causing the issue.
### Versions
1. x86 machine used to save the `resnet50.pt` torchscript:
```
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro Insider Preview
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: N/A
Python version: 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:47:18) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.23550-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 537.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3201
DeviceID=CPU0
Family=107
L2CacheSize=4096
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3201
Name=AMD Ryzen 7 5800H with Radeon Graphics
ProcessorType=3
Revision=20480
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] Could not collect
```
2. s390x machine (aka LinuxOne) used to load the `resnet50.pt` torchscript:
```
PyTorch version: 2.1.0a0+git591cb77
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (s390x)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-284.25.1.el9_2.s390x-s390x-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: s390x
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Big Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: IBM/S390
Machine type: 8561
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s) per book: 1
Book(s) per drawer: 1
Drawer(s): 2
CPU dynamic MHz: 5200
CPU static MHz: 5200
BogoMIPS: 3241.00
Dispatching mode: horizontal
Flags: esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx vxd vxe gs vxe2 vxp sort dflt sie
Hypervisor: z/VM 7.2.0
Hypervisor vendor: IBM
Virtualization type: full
L1d cache: 256 KiB (2 instances)
L1i cache: 256 KiB (2 instances)
L2d cache: 4 MiB (1 instance)
L2i cache: 4 MiB (1 instance)
L3 cache: 256 MiB
L4 cache: 960 MiB
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; etokens
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==2.1.0a0+git591cb77
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
615 | 110,331 |
torch.Tensor.__repr__ causes torch.compile to error: "got an unexpected keyword argument 'tensor_contents'"
|
oncall: pt2
|
### 🐛 Describe the bug
It seems PyTorch's compile machinery somewhere depends on `torch.Tensor.__repr__`! I always change `torch.Tensor.__repr__` so I can see the shape of the tensor right away, and it's a great debugging aid. This is normal in Python; one may change `__repr__`, and therefore it is not very "pythonic" for any code to depend strongly on it. The string returned by `__repr__` is for humans. However, after I change `torch.Tensor.__repr__`, the compiled model's forward throws the error below:
```
debug_wrapper raised TypeError: <lambda>() got an unexpected keyword argument 'tensor_contents'
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
File "/home/shitals/GitHubSrc/gptplay/scripts/play.py", line 301, in <lambda>
torch.Tensor.__repr__ = lambda self: f"{tuple(self.shape)}:{normal_repr(self)}" # type: ignore
TypeError: <lambda>() got an unexpected keyword argument 'tensor_contents'
The above exception was the direct cause of the following exception:
File "/home/shitals/GitHubSrc/gptplay/scripts/play.py", line 55, in forward
y = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=self.flash_attn_dropout_val if self.training else 0, is_causal=True)
File "/home/shitals/GitHubSrc/gptplay/scripts/play.py", line 95, in forward
x = x + self.attn(self.ln_1(x))
File "/home/shitals/GitHubSrc/gptplay/scripts/play.py", line 164, in forward
x = block(x)
File "/home/shitals/GitHubSrc/gptplay/scripts/play.py", line 322, in <module>
y = model(x)
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised TypeError: <lambda>() got an unexpected keyword argument 'tensor_contents'
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
The above behavior happens in the current 2.0.1 stable release as well as the 2.2 nightly build.
Below is the complete code that reproduces the error:
```
import math
import inspect
from dataclasses import dataclass
import torch
import torch.nn as nn
from torch.nn import functional as F
class LayerNorm(nn.Module):
""" LayerNorm but with an optional bias. PyTorch doesn't support simply bias=False """
def __init__(self, ndim, bias):
super().__init__()
self.weight = nn.Parameter(torch.ones(ndim))
self.bias = nn.Parameter(torch.zeros(ndim)) if bias else None
def forward(self, input):
return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5)
class CausalSelfAttention(nn.Module):
def __init__(self, config):
super().__init__()
assert config.n_embd % config.n_head == 0
# key, query, value projections for all heads, but in a batch
self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.attn_kv_bias)
# output projection
self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.attn_proj_bias)
# regularization
self.attn_dropout = nn.Dropout(config.attn_dropout)
self.resid_dropout = nn.Dropout(config.resid_dropout)
self.n_head = config.n_head
self.n_embd = config.n_embd
self.flash_attn_dropout_val = config.attn_dropout
# flash attention make GPU go brrrrr but support is only in PyTorch >= 2.0
self.flash = hasattr(torch.nn.functional, 'scaled_dot_product_attention')
if not self.flash:
print("WARNING: using slow attention. Flash Attention requires PyTorch >= 2.0")
# causal mask to ensure that attention is only applied to the left in the input sequence
self.register_buffer("bias", torch.tril(torch.ones(config.block_size, config.block_size))
.view(1, 1, config.block_size, config.block_size))
def forward(self, x):
B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd)
# calculate query, key, values for all heads in batch and move head forward to be the batch dim
q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
# causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
if self.flash:
# efficient attention using Flash Attention CUDA kernels
y = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=self.flash_attn_dropout_val if self.training else 0, is_causal=True)
else:
# manual implementation of attention
att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
att = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
att = self.attn_dropout(att)
y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
# output projection
y = self.resid_dropout(self.c_proj(y))
return y
class MLP(nn.Module):
def __init__(self, config):
super().__init__()
self.c_fc = nn.Linear(config.n_embd, 4 * config.n_embd, bias=config.mlp_bias)
self.gelu = nn.GELU()
self.c_proj = nn.Linear(4 * config.n_embd, config.n_embd, bias=config.mlp_bias)
self.dropout = nn.Dropout(config.mlp_dropout)
def forward(self, x):
x = self.c_fc(x)
x = self.gelu(x)
x = self.c_proj(x)
x = self.dropout(x)
return x
class Block(nn.Module):
def __init__(self, config):
super().__init__()
self.ln_1 = LayerNorm(config.n_embd, bias=config.layer_norm_bias)
self.attn = CausalSelfAttention(config)
self.ln_2 = LayerNorm(config.n_embd, bias=config.layer_norm_bias)
self.mlp = MLP(config)
def forward(self, x):
x = x + self.attn(self.ln_1(x))
x = x + self.mlp(self.ln_2(x))
return x
@dataclass
class GPTConfig:
block_size: int = 1024
vocab_size: int = 50304 # GPT-2 vocab_size of 50257, padded up to nearest multiple of 64 for efficiency
n_layer: int = 12
n_head: int = 12
n_embd: int = 768
mlp_bias: bool = True
attn_proj_bias: bool = True
attn_kv_bias: bool = False
layer_norm_bias: bool = True
attn_dropout: float = 0.0 # applied on softmax of QK^T
mlp_dropout: float = 0.0
resid_dropout: float = 0.0
embed_dropout: float = 0.0
class GPT(nn.Module):
def __init__(self, config):
super().__init__()
assert config.vocab_size is not None
assert config.block_size is not None
self.config = config
self.transformer = nn.ModuleDict(dict(
wte = nn.Embedding(config.vocab_size, config.n_embd),
wpe = nn.Embedding(config.block_size, config.n_embd),
embed_dropout = nn.Dropout(config.embed_dropout),
h = nn.ModuleList([Block(config) for _ in range(config.n_layer)]),
ln_f = LayerNorm(config.n_embd, bias=config.layer_norm_bias),
))
self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
# with weight tying when using torch.compile() some warnings get generated:
# "UserWarning: functional_call was passed multiple values for tied weights.
# This behavior is deprecated and will be an error in future versions"
# not 100% sure what this is, so far seems to be harmless. TODO investigate
self.transformer.wte.weight = self.lm_head.weight # https://paperswithcode.com/method/weight-tying
# init all weights
self.apply(self._init_weights)
# apply special scaled init to the residual projections, per GPT-2 paper
for pn, p in self.named_parameters():
if pn.endswith('c_proj.weight'):
torch.nn.init.normal_(p, mean=0.0, std=0.02/math.sqrt(2 * config.n_layer))
def _init_weights(self, module):
if isinstance(module, nn.Linear):
torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
if module.bias is not None:
torch.nn.init.zeros_(module.bias)
elif isinstance(module, nn.Embedding):
torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
def forward(self, idx, only_last=False):
device = idx.device
b, t = idx.size()
assert t <= self.config.block_size, f"Cannot forward sequence of length {t}, block size is only {self.config.block_size}"
pos = torch.arange(0, t, dtype=torch.long, device=device) # shape (t)
# forward the GPT model itself
tok_emb = self.transformer.wte(idx) # token embeddings of shape (b, t, n_embd)
pos_emb = self.transformer.wpe(pos) # position embeddings of shape (t, n_embd)
x = self.transformer.embed_dropout(tok_emb + pos_emb)
for block in self.transformer.h:
x = block(x)
x = self.transformer.ln_f(x)
if not only_last:
logits = self.lm_head(x)
else:
# inference-time mini-optimization: only forward the lm_head on the very last position
logits = self.lm_head(x[:, [-1], :]) # note: using list [-1] to preserve the time dim
return logits
def crop_block_size(self, block_size):
# model surgery to decrease the block size if necessary
# e.g. we may load the GPT2 pretrained model checkpoint (block size 1024)
# but want to use a smaller block size for some smaller, simpler model
assert block_size <= self.config.block_size
self.config.block_size = block_size
self.transformer.wpe.weight = nn.Parameter(self.transformer.wpe.weight[:block_size])
for block in self.transformer.h:
if hasattr(block.attn, 'bias'):
block.attn.bias = block.attn.bias[:,:,:block_size,:block_size]
@classmethod
def from_pretrained(cls, model_type, override_args=None):
assert model_type in {'gpt2', 'gpt2-medium', 'gpt2-large', 'gpt2-xl'}
override_args = override_args or {} # default to empty dict
# only dropout can be overridden see more notes below
assert all(k == 'dropout' for k in override_args)
from transformers import GPT2LMHeadModel
print("loading weights from pretrained gpt: %s" % model_type)
# n_layer, n_head and n_embd are determined from model_type
config_args = {
'gpt2': dict(n_layer=12, n_head=12, n_embd=768), # 124M params
'gpt2-medium': dict(n_layer=24, n_head=16, n_embd=1024), # 350M params
'gpt2-large': dict(n_layer=36, n_head=20, n_embd=1280), # 774M params
'gpt2-xl': dict(n_layer=48, n_head=25, n_embd=1600), # 1558M params
}[model_type]
print("forcing vocab_size=50257, block_size=1024, bias=True")
config_args['vocab_size'] = 50257 # always 50257 for GPT model checkpoints
config_args['block_size'] = 1024 # always 1024 for GPT model checkpoints
config_args['bias'] = True # always True for GPT model checkpoints
# we can override the dropout rate, if desired
if 'dropout' in override_args:
print(f"overriding dropout rate to {override_args['dropout']}")
config_args['dropout'] = override_args['dropout']
# create a from-scratch initialized minGPT model
config = GPTConfig(**config_args)
model = GPT(config)
sd = model.state_dict()
sd_keys = sd.keys()
sd_keys = [k for k in sd_keys if not k.endswith('.attn.bias')] # discard this mask / buffer, not a param
# init a huggingface/transformers model
model_hf = GPT2LMHeadModel.from_pretrained(model_type)
sd_hf = model_hf.state_dict()
# copy while ensuring all of the parameters are aligned and match in names and shapes
sd_keys_hf = sd_hf.keys()
sd_keys_hf = [k for k in sd_keys_hf if not k.endswith('.attn.masked_bias')] # ignore these, just a buffer
sd_keys_hf = [k for k in sd_keys_hf if not k.endswith('.attn.bias')] # same, just the mask (buffer)
transposed = ['attn.c_attn.weight', 'attn.c_proj.weight', 'mlp.c_fc.weight', 'mlp.c_proj.weight']
# basically the openai checkpoints use a "Conv1D" module, but we only want to use a vanilla Linear
# this means that we have to transpose these weights when we import them
assert len(sd_keys_hf) == len(sd_keys), f"mismatched keys: {len(sd_keys_hf)} != {len(sd_keys)}"
for k in sd_keys_hf:
if any(k.endswith(w) for w in transposed):
# special treatment for the Conv1D weights we need to transpose
assert sd_hf[k].shape[::-1] == sd[k].shape
with torch.no_grad():
sd[k].copy_(sd_hf[k].t())
else:
# vanilla copy over the other parameters
assert sd_hf[k].shape == sd[k].shape
with torch.no_grad():
sd[k].copy_(sd_hf[k])
return model
@torch.no_grad()
def generate(self, idx, max_new_tokens, temperature=1.0, top_k=None):
"""
Take a conditioning sequence of indices idx (LongTensor of shape (b,t)) and complete
the sequence max_new_tokens times, feeding the predictions back into the model each time.
Most likely you'll want to make sure to be in model.eval() mode of operation for this.
"""
for _ in range(max_new_tokens):
# if the sequence context is growing too long we must crop it at block_size
idx_cond = idx if idx.size(1) <= self.config.block_size else idx[:, -self.config.block_size:]
# forward the model to get the logits for the index in the sequence
logits, _ = self(idx_cond)
# pluck the logits at the final step and scale by desired temperature
logits = logits[:, -1, :] / temperature
# optionally crop the logits to only the top k options
if top_k is not None:
v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
logits[logits < v[:, [-1]]] = -float('Inf')
# apply softmax to convert logits to (normalized) probabilities
probs = F.softmax(logits, dim=-1)
# sample from the distribution
idx_next = torch.multinomial(probs, num_samples=1)
# append sampled index to the running sequence and continue
idx = torch.cat((idx, idx_next), dim=1)
return idx
def get_model(n_layer: int, n_embd: int, n_head: int,
vocab_size: int, context_length: int,
mlp_bias: bool,
attn_proj_bias: bool, # for projection layers in attention
attn_kv_bias: bool, # for kv in attention
attn_dropout: float, # dropout for attention layer
mlp_dropout: float, # dropout for feedforward layer
layer_norm_bias: bool, # for layer norm
resid_dropout: float, # dropout for residual in attention
embed_dropout: float # dropout for embedding layer
):
gpt_config = GPTConfig(block_size=context_length,
vocab_size=vocab_size,
n_layer=n_layer,
n_head=n_head,
n_embd=n_embd,
mlp_bias=mlp_bias,
attn_proj_bias=attn_proj_bias,
attn_kv_bias=attn_kv_bias,
layer_norm_bias=layer_norm_bias,
attn_dropout=attn_dropout,
mlp_dropout=mlp_dropout,
resid_dropout=resid_dropout,
embed_dropout=embed_dropout)
return GPT(gpt_config)
if __name__ == '__main__':
# comment out below to get rid of error
normal_repr = torch.Tensor.__repr__
torch.Tensor.__repr__ = lambda self: f"{tuple(self.shape)}:{normal_repr(self)}" # type: ignore
# simple test of model creation and forward pass
model = get_model(n_layer=6, n_embd=384, n_head=6,
vocab_size=230, context_length=256,
mlp_bias=False,
attn_proj_bias=False,
attn_kv_bias=True,
attn_dropout=0.2,
mlp_dropout=0.2,
layer_norm_bias=False,
resid_dropout=0.2,
embed_dropout=0.2)
model = model.to('cuda')
model = torch.compile(model)
model.train()
x = torch.randint(0, 10, (64, 256), dtype=torch.int64).to('cuda')
amp_ctx = torch.amp.autocast(device_type='cuda', dtype=torch.bfloat16)
with amp_ctx:
y = model(x)
print(y)
```
I think the bug is important because it indicates that somewhere in the PyTorch code base there is a strong dependency on exactly what `torch.Tensor.__repr__` returns. That makes the implementation very fragile, and it could most likely be fixed simply by using a proper API.
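As a hedged workaround sketch (my suggestion, not an official recommendation): make the replacement accept the optional `tensor_contents` keyword that the stock `Tensor.__repr__` takes, so internal callers that pass it keep working:
```python
import torch

_normal_repr = torch.Tensor.__repr__

def _shape_repr(self, *, tensor_contents=None):
    # Forward the keyword the stock implementation accepts so callers that
    # pass tensor_contents (e.g. debug wrappers) do not break.
    return f"{tuple(self.shape)}:{_normal_repr(self, tensor_contents=tensor_contents)}"

torch.Tensor.__repr__ = _shape_repr
print(torch.ones(2, 3))  # output is prefixed with (2, 3):
```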
### Versions
PyTorch version: 2.2.0.dev20230929+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.31
Python version: 3.11.4 (main, Jul 5 2023, 14:15:25) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1046-azure-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7V13 64-Core Processor
Stepping: 1
CPU MHz: 2445.433
BogoMIPS: 4890.86
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 12 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid fsrm
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] numpydoc==1.5.0
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0.dev20230929+cu118
[pip3] torchaudio==2.2.0.dev20230929+cu118
[pip3] torchvision==0.17.0.dev20230929+cu118
[pip3] triton==2.0.0
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.23.5 py311h08b1b3b_1
[conda] numpy-base 1.23.5 py311hf175353_1
[conda] numpydoc 1.5.0 py311h06a4308_0
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-triton 2.1.0+6e4932cda8 pypi_0 pypi
[conda] tensorflow 2.12.0 mkl_py311h34a0fa1_0
[conda] tensorflow-base 2.12.0 mkl_py311he5f8e37_0
[conda] torch 2.2.0.dev20230929+cu118 pypi_0 pypi
[conda] torchaudio 2.2.0.dev20230929+cu118 pypi_0 pypi
[conda] torchtriton 2.0.0 py311 pytorch
[conda] torchvision 0.17.0.dev20230929+cu118 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
616 | 110,329 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
617 | 110,327 |
[pytorch] add should_deepcopy flag to AveragedModel
|
fb-exported, release notes: optim
|
Summary:
# Context
AveragedModel always deepcopies the passed-in model. This can pose an issue if the model cannot be deepcopied, like FSDP models. Furthermore, users may want to handle this logic themselves.
# This diff
Adds a `should_deepcopy` flag (default=True for backwards compatibility) so that users who want to pass an FSDP model can do so by turning the flag off.
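A usage sketch of the proposed flag (parameter name taken from this summary; not a stable API yet):
```python
import torch
from torch.optim.swa_utils import AveragedModel

model = torch.nn.Linear(8, 8)
# Proposed: skip the internal deepcopy, e.g. for FSDP-wrapped modules that
# cannot be deepcopied; the caller then manages the averaged copy itself.
swa_model = AveragedModel(model, should_deepcopy=False)
```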
Test Plan: existing unit tests pass
Differential Revision: D49622071
| 4 |
618 | 110,325 |
[wip, dynamo] run all guard fail hooks after cache miss
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110325
Attempt number 2 at https://github.com/pytorch/pytorch/issues/108950.
Will do testing before this PR is ready for review.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
619 | 110,315 |
torch._dynamo.exc.Unsupported: unexpected sourceless type bases: (<class 'torchrec.streamable.Pipelineable'>,)
|
good first issue, ezyang's list, oncall: pt2, module: dynamo, module: export
|
This is happening when trying to use `torch.export` on some code involving KJTs.
Internal stack trace: P840857345
Repro: P840859224
Notes:
- This is happening when we trace the construction of KJT, then pass that KJT as input to a module that we have configured to be `preserve_module_call_signature`.
- If I remove the KJT construction, and pass a KJT we got directly from the input, things work (strange!)
- Ed suggested commenting out [this check](https://github.com/pytorch/pytorch/blob/359c2a53f59aab460fb15ba60f40fda537c33aeb/torch/_dynamo/variables/builtin.py#L1113). When we do this, things will pass, although I gather that this is unsafe because we would not be properly installing guards.
@ezyang says he knows what's going on and can post an OSS repro when he has some time
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @avikchaudhuri @gmagogsfm
| 3 |
620 | 110,313 |
[ONNX][DO NOT REVIEW] Experiment passing 'dynamic' to dort backend
|
open source, release notes: onnx, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110313
* #110477
* #110178
* #108376
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
621 | 110,311 |
[inductor] Decompose boolean min/max into all/any
|
open source, topic: not user facing, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110311
* #110310
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
622 | 110,310 |
[ATen] Support multi dim any and all reductions
|
open source, release notes: python_frontend, topic: new features, ciflow/mps
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110310
This adds a new overload to `all` and `any` with support for multiple reduction dims.
```
all.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
any.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
```
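A short usage sketch of the new overload (assuming a build that includes this PR):
```python
import torch

x = torch.rand(2, 3, 4) > 0.5
# Reduce over several dims at once instead of chaining single-dim calls.
print(torch.any(x, dim=(0, 2)).shape)                 # torch.Size([3])
print(torch.all(x, dim=(0, 2), keepdim=True).shape)   # torch.Size([1, 3, 1])
```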
| 13 |
623 | 110,304 |
pytorch_stargan: "_inductor/fx_passes/joint_graph.py", line 166, in constant_fold_uniform_value KeyError "val"
|
triaged, oncall: pt2
|
Repro:
```
python benchmarks/dynamo/torchbench.py --bfloat16 --accuracy --inference --device cuda --export-aot-inductor --only pytorch_stargan
```
| 0 |
624 | 110,303 |
Fix for out of bounds read in mobile interpreter FORMAT opcode handler
|
fb-exported, release notes: jit
|
Summary:
The FORMAT opcode for the mobile TorchScript interpreter contained an out of bounds read issue leading to memory corruption.
This change adds an explicit check that the number of inputs passed to the format method called when handling the FORMAT opcode is valid and within the bounds of the stack.
Test Plan: contbuild + OSS signals
Differential Revision: D49739095
| 2 |
625 | 110,301 |
Fix for out of bounds read in mobile interpreter INTERFACE_CALL opcode handler
|
fb-exported, release notes: mobile
|
Summary:
The INTERFACE_CALL opcode for the mobile TorchScript interpreter contained an out of bounds read issue leading to memory corruption.
This change adds an explicit check that the number of inputs passed when handling the INTERFACE_CALL opcode is valid and within the bounds of the stack.
Test Plan: contbuild + OSS signals
Differential Revision: D49739450
| 2 |
626 | 110,300 |
Fix for out of bounds registers_ access in mobile TorchScript interpreter
|
fb-exported, release notes: mobile
|
Summary:
The TorchScript interpreter had multiple opcodes whose logic had the potential to access the registers_ array out of bounds.
This change ensures that all registers_ accesses are in bounds or an exception will be thrown.
Test Plan: contbuild + OSS signals
Differential Revision: D49748737
| 2 |
627 | 110,295 |
Tests modify global state cause later tests to fail
|
module: ci, triaged, module: devx
|
This is more of a tracker for tests that modify global state somehow, and then tests that run after them change outcome depending on whether the first test was run or not. I want to keep track of this to see how often it happens and why it happens.
The tests afterwards generally get marked as flaky because they usually succeed on file level retry, which starts at the failing test and doesn't run the first test.
Examples of this include:
* Test 1 adds to dictionary that is imported or declared at the module level. Test 2 reads that dictionary and changes outcome after seeing the new addition
* Test 1 imports extra modules. Test 2 crawls through imports to check functions in each module. The new module causes the outcome to change.
* Test 1 sets an attribute. Test 2 sees the new attribute. (https://github.com/pytorch/pytorch/pull/110254)
Fixes can include cleaning up after the function or moving tests to run in different files (then CI will run in different processes).
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @kit1980 @huydhn
| 0 |
628 | 110,291 |
AOT Autograd Neg View Support
|
triaged, module: functionalization, oncall: pt2, module: aotdispatch
|
### 🐛 Describe the bug
The following fails with aot_eager. It would be great if aot autograd could both a) handle neg_view correctly and b) remove ._neg_view() from the graph that is passed to inductor.
```
import torch

device = torch.device

def forward(inp):
    a = inp._neg_view()
    b = a.add_(3)
    return a, b

inp = torch.rand([5, 5]).cuda()
out_eager = forward(inp.clone())
out_comp = torch._dynamo.optimize("aot_eager")(forward)(inp)
torch.testing.assert_close(out_comp, out_eager)
```
> AssertionError: Tensor-likes are not close!
### Versions
master
cc @bdhirsh @ezyang @msaroufim @wconstab @anijain2305
| 5 |
629 | 110,288 |
PYTORCH_TEST_WITH_DYNAMO=1 pytest -n 4 test/test_nestedtensor.py fails
|
high priority, triaged, module: nestedtensor, oncall: pt2
|
Running `PYTORCH_TEST_WITH_DYNAMO=1 pytest -n 4 test/test_nestedtensor.py` causes a non-deterministic number of failures (ranging between 14-70 for me). Perturbing the Dynamo config in CI (e.g. https://github.com/pytorch/pytorch/pull/110107) seems to expose the failures in CI, but CI is otherwise green. I might disable them for now so that we can land stuff without risking flaky CI. Opening this issue to track the problem.

cc @ezyang @gchanan @kadeng @cpuhrsch @jbschlosser @bhosmer @drisspg @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
630 | 110,285 |
TypeError: Got unsupported ScalarType BFloat16
|
triaged, module: numpy
|
### 🐛 Describe the bug
Converting bfloat16 to numpy fails:
```python
import torch

tensor = torch.ones((10,), dtype=torch.bfloat16)
tensor.numpy()  # TypeError: Got unsupported ScalarType BFloat16
```
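A common workaround (my suggestion; NumPy has no native bfloat16 dtype, so upcast first):
```python
import torch

tensor = torch.ones((10,), dtype=torch.bfloat16)
arr = tensor.float().numpy()  # upcast to float32 before converting
print(arr.dtype)  # float32
```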
### Versions
```python
(.venv) lightning git:(introduce_cache) python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.2 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.15 (main, Dec 5 2022, 15:51:18) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.5.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy==1.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] pytorch-lightning==1.9.0
[pip3] torch==2.0.1
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.1
[pip3] torchvision==0.15.1
[conda] Could not collect
```
cc @mruberry @rgommers
| 4 |
631 | 110,281 |
feat(Pipeline Parallelism): use mincut optimization for local communication optimization
|
oncall: distributed
|
### 🚀 The feature, motivation and pitch
Currently, the pipeline parallelism API requires the user to make a decision about the partitioning:
https://github.com/pytorch/pytorch/blob/758735b739e119dfa66c447b81e8bd391ab765f7/torch/distributed/pipeline/sync/pipe.py#L224
However, I believe an _automated partitioner_ that reduces the cross-device communication could be employed.
### Proposal
Of course, the device placement issue is hard enough (partitioning the bwd + fwd workload / memory to be evenly distributed across devices), but even once the graphs have been approximately placed, one may still want to search over a small radius to repartition operations so as to reduce cross-device communication.
To do so, one may employ the mincut optimization to find the minimum cut of communication in a local region, similar to the min_cut_partitioner in inductor:
https://github.com/pytorch/pytorch/blob/758735b739e119dfa66c447b81e8bd391ab765f7/torch/_functorch/partitioners.py#L253
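As a toy illustration of the idea (this uses `networkx` purely for the min-cut computation and is not part of any proposed PyTorch API; edge weights are made-up tensor sizes):
```python
# Toy sketch: choose a pipeline split that minimizes cross-device traffic,
# modeled as an s-t min cut over an op graph with edge capacity = tensor bytes.
import networkx as nx

g = nx.DiGraph()
g.add_edge("embed", "layer1", capacity=4.0)   # MB moved if we cut this edge
g.add_edge("layer1", "layer2", capacity=1.0)
g.add_edge("layer2", "head", capacity=4.0)

cut_value, (stage0, stage1) = nx.minimum_cut(g, "embed", "head")
print(cut_value)          # 1.0 -> cheapest boundary is between layer1 and layer2
print(stage0, stage1)
```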
### Considerations
1. In e.g. [torchgpipe](https://arxiv.org/pdf/2004.09910.pdf) only a coarse-grained global partitioning strategy (`torchgpipe.balance`) is utilized, but not a local strategy.
### Alternatives
1. Unclear if this should be the role of PyTorch, which should provide the _capability_ for pipeline parallelism but not necessarily the fine-grained decisions.
- Nonetheless, I believe that providing a default Torch partitioner / local repartitioner may still be useful to many users.
### References
1. [Alpa - Inter+Intra Operator Parallelism](https://www.usenix.org/system/files/osdi22-zheng-lianmin.pdf) - unclear if the "local" min cut approach would be complementary with e2e principled, topology-aware approach with cost model like Alpa.
CC @wconstab
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
632 | 110,277 |
Allow nn.Transformer to be exported as ONNX.
|
module: onnx, triaged, open source, release notes: onnx
|
Fixes #110255.
During ONNX export tracing, the `is_causal` parameter is not actually a `bool` but instead `tensor(bool)`. The native function `scaled_dot_product_attention` does not seem to handle this case, so we can just cast it to a Python `bool` for now.
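A minimal sketch of the cast described above (illustrative only; the actual change lives in the export path):
```python
import torch

is_causal = torch.tensor(True)  # what the ONNX tracer hands us during export
if isinstance(is_causal, torch.Tensor):
    is_causal = bool(is_causal)  # scaled_dot_product_attention expects a Python bool
```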
| 1 |
633 | 110,272 |
Update hipify mappings and rocm_version.h path
|
module: rocm, open source, topic: not user facing
|
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
634 | 110,269 |
[DO NOT MERGE] Fix CMake static build
|
triaged, open source, ciflow/binaries, ciflow/trunk, topic: not user facing
|
This PR tries to solve the static build issues in CMake build system. Fixes #110238
| 16 |
635 | 110,268 |
[S366352] Avoid creating ncclStartEvent for deadlocking
|
fb-exported, release notes: distributed (c10d)
|
Summary: enableTiming will make us create one more cudaEvent per collective: https://fburl.com/code/6hqzik63. From S366352, it looks like we run into a deadlock in cudaEventDestroy(), so reducing the number of events may help. Also, I don't really see ncclStartEvent used anywhere for desyncDebug.
Test Plan: sandcastle
Differential Revision: D49760869
| 2 |
636 | 110,266 |
[FX/const_fold][bugfix] Set codegen on wrapping GraphModule
|
fb-exported, release notes: fx
|
Summary: The split module pass does not respect retaining custom codegen. This meant const fold was broken if we used concrete args, torch.export, etc., that set codegen. Fix this by copying over the codegen from the original graph we're doing the folding on.
Test Plan: Added test to test_fx_const_fold.
Differential Revision: D49760518
| 12 |
637 | 110,265 |
[dynamo][guard-refactor] Global Guard Accessor
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110590
* #110735
* #110589
* __->__ #110265
* #108839
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
638 | 110,259 |
Cannot avoid kineto_LIBRARY-NOTFOUND error when using pre-built pytorch
|
oncall: binaries, oncall: profiler
|
### π Describe the bug
When using the pre-built pytorch like this example in the docs:
https://pytorch.org/cppdocs/installing.html
I am getting the following error:
```
CMake Warning at cmake-build-debug/_deps/pytorch-src/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
static library kineto_LIBRARY-NOTFOUND not found.
```
I inspected the zip and saw that Kineto is expected to be `ON`, and will warn with that message. However, Kineto is not included in the releases.
### Versions
N/A
cc @seemethere @malfet @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
| 0 |
639 | 110,255 |
ONNX export of torch.nn.Transformer still fails
|
module: onnx, triaged, oncall: transformer/mha
|
### 🐛 Describe the bug
```python
import numpy as np
import torch

model = torch.nn.Transformer(d_model=16,
                             nhead=4,
                             num_encoder_layers=1,
                             num_decoder_layers=1,
                             batch_first=False)
src = torch.tensor(np.random.randn(3, 1, 16).astype(np.float32))
tgt = torch.tensor(np.random.randn(5, 1, 16).astype(np.float32))
torch.onnx.export(model,
                  {'src': src, 'tgt': tgt},
                  'torch_nn_transformer.onnx',
                  verbose=False,
                  input_names=['src', 'tgt'],
                  opset_version=15,
                  output_names=['out'],
                  dynamic_axes={
                      'src': {0: 'src_len'},
                      'tgt': {0: 'tgt_len'},
                      'out': {0: 'tgt_len'}
                  })
```
```
C:\Program Files\Python310\lib\site-packages\torch\nn\modules\transformer.py:282: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
warnings.warn(f"enable_nested_tensor is True, but self.use_nested_tensor is False because {why_not_sparsity_fast_path}")
Traceback (most recent call last):
File "D:\ROOT\PythonExperiments\onnx_export_fail.py", line 13, in <module>
torch.onnx.export(model,
File "C:\Program Files\Python310\lib\site-packages\torch\onnx\utils.py", line 516, in export
_export(
File "C:\Program Files\Python310\lib\site-packages\torch\onnx\utils.py", line 1613, in _export
graph, params_dict, torch_out = _model_to_graph(
File "C:\Program Files\Python310\lib\site-packages\torch\onnx\utils.py", line 1135, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "C:\Program Files\Python310\lib\site-packages\torch\onnx\utils.py", line 1011, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "C:\Program Files\Python310\lib\site-packages\torch\onnx\utils.py", line 915, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "C:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 1290, in _get_trace_graph
outs = ONNXTracedModule(
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 138, in forward
graph, out = torch._C._create_graph_by_tracing(
File "C:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 129, in wrapper
outs.append(self.inner(*trace_inputs))
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1509, in _slow_forward
result = self.forward(*input, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\transformer.py", line 206, in forward
output = self.decoder(tgt, memory, tgt_mask=tgt_mask, memory_mask=memory_mask,
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1509, in _slow_forward
result = self.forward(*input, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\transformer.py", line 460, in forward
output = mod(output, memory, tgt_mask=tgt_mask,
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1509, in _slow_forward
result = self.forward(*input, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\transformer.py", line 847, in forward
x = self.norm2(x + self._mha_block(x, memory, memory_mask, memory_key_padding_mask, memory_is_causal))
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\transformer.py", line 865, in _mha_block
x = self.multihead_attn(x, mem, mem,
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1509, in _slow_forward
result = self.forward(*input, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\torch\nn\modules\activation.py", line 1241, in forward
attn_output, attn_output_weights = F.multi_head_attention_forward(
File "C:\Program Files\Python310\lib\site-packages\torch\nn\functional.py", line 5440, in multi_head_attention_forward
attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
TypeError: scaled_dot_product_attention(): argument 'is_causal' (position 6) must be bool, not Tensor
```
BTW:
1. I'm using nightly build because otherwise I have problems with torch.nn.MultiHeadAttention export (see #110070 )
2. Maybe it's a good idea to add some part of this to the onnx tests. What do you think? It would surface problems with the export of any part of the transformer (attention/decoder/encoder/normalization). And I think you will be getting a lot of issues like this one because of the popularity of transformers...
### Versions
Collecting environment information...
PyTorch version: 2.2.0.dev20230928+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.27.0-rc2
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU
Nvidia driver version: 536.23
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3601
DeviceID=CPU0
Family=107
L2CacheSize=8192
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3601
Name=AMD Ryzen 7 7745HX with Radeon Graphics
ProcessorType=3
Revision=24834
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.2.0.dev20230928+cpu
[pip3] torchaudio==2.2.0.dev20230928+cpu
[pip3] torchvision==0.17.0.dev20230928+cpu
[conda] Could not collect
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 1 |
640 | 110,253 |
[fbcode] s,\bnp\.bool\b,bool,
|
caffe2, module: mkldnn, fb-exported
|
Test Plan: sandcastle and visual inspection
Differential Revision: D49196618
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 11 |
641 | 110,252 |
cuda/tf32 docs are outdated
|
module: docs, module: cuda, triaged, module: tf32
|
### π The doc issue
Would it be possible to update:
https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
1. there is now a newer way to set the precision: not just `torch.backends.cuda.matmul.allow_tf32`, but three levels as defined by:
https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
it'd be awesome to update the cuda doc to reflect that, and to update the benchmarks and the expected numerical error for the medium precision setting (a short sketch of both APIs follows this list).
2. I'd imagine that `torch.set_float32_matmul_precision("medium")` is really relevant for H100 as it says:
> if a fast matrix multiplication algorithm using that datatype internally is available.
or is it available in A100 as well?
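For reference, a minimal sketch of the two APIs the note could cross-link (behavior summaries paraphrased from the `set_float32_matmul_precision` docs):
```python
import torch

# Older per-backend flags (what the CUDA note currently documents):
torch.backends.cuda.matmul.allow_tf32 = True   # allow TF32 in matmuls
torch.backends.cudnn.allow_tf32 = True         # allow TF32 in cuDNN convolutions

# Newer three-level API the note does not yet mention:
#   "highest" -> always full fp32, "high" -> allow TF32 (or bf16x3 where
#   supported), "medium" -> allow bf16-level internal precision.
torch.set_float32_matmul_precision("high")
print(torch.get_float32_matmul_precision())    # -> "high"
```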
Thank you!
cc @svekars @carljparker @ptrblck @zasdfgbnm
| 4 |
642 | 110,250 |
Accessing Particular Nightly Builds Don't Work
|
oncall: binaries, triaged
|
### π Describe the bug
I would like to use a particular nightly release version of libtorch. I need to do this because:
1. Official version releases are sparse
2. Relying on "latest" is dangerous -- consistent builds/deployments are much safer
On the official page:
https://pytorch.org/get-started/locally/
I can only see the official releases or the nightly releases:

However, in the github workflows, I can see that date stamped releases are being uploaded as well. For example, with this run flavor:
https://github.com/pytorch/pytorch/actions/runs/6335907738/job/17207923080
I can see:
```
record_file=libtorch-cxx11-abi-shared-with-deps-2.2.0.dev20230928+cpu.zip
..
record_file=libtorch-cxx11-abi-shared-with-deps-latest.zip
```
However, attempting to access these URLs results in a 404. If I attempt a similar URL with the wheel storage:
https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230901-cp310-none-macosx_11_0_arm64.whl
It works as expected. Can this be fixed?
Thanks,
Anthony
### Versions
N/A
cc @seemethere @malfet
| 7 |
643 | 110,249 |
`torch.func.functional_call` does not work with `__torch_function__ ` Tensor-like objects
|
triaged, enhancement, module: __torch_function__, module: functorch
|
### π Describe the bug
The [documentation](https://pytorch.org/docs/stable/notes/extending.html#extending-torch-with-a-tensor-like-type) indicates that `torch` can be extended with `torch.Tensor`-like objects via the `__torch_function__` mechanism for dynamic dispatch. However, `torch.func.functional_call` does not accept such objects as replacement parameter/buffer values. For example,
```py
# begin code copied from the documentation: https://pytorch.org/docs/stable/notes/extending.html#extending-torch-with-a-tensor-like-type
import torch

HANDLED_FUNCTIONS = {}

class ScalarTensor(object):
    def __init__(self, N, value):
        self._N = N
        self._value = value

    def __repr__(self):
        return "ScalarTensor(N={}, value={})".format(self._N, self._value)

    def tensor(self):
        return self._value * torch.eye(self._N)

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        if func not in HANDLED_FUNCTIONS or not all(
            issubclass(t, (torch.Tensor, ScalarTensor))
            for t in types
        ):
            return NotImplemented
        return HANDLED_FUNCTIONS[func](*args, **kwargs)

import functools

def implements(torch_function):
    """Register a torch function override for ScalarTensor"""
    def decorator(func):
        functools.update_wrapper(func, torch_function)
        HANDLED_FUNCTIONS[torch_function] = func
        return func
    return decorator

@implements(torch.mean)
def mean(input):
    return float(input._value) / input._N
# end code copied from documentation
model = torch.nn.Linear(3, 3, bias=False)
input = torch.randn(5, 3)
torch.func.functional_call(model, {"weight": ScalarTensor(3, 0.1)}, (input,))
```
produces an error
```
/usr/local/lib/python3.10/dist-packages/torch/nn/utils/_named_member_accessor.py in swap_tensor(module, name, tensor, allow_missing)
40 and tensor is not None
41 ):
---> 42 raise TypeError(f"{tensor} is not an instance of torch.Tensor")
43 if "." in name:
44 raise KeyError('tensor name can\'t contain "."')
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.120+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @hameerabbasi @rgommers @peterbell10 @ezyang @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 3 |
644 | 110,248 |
DISABLED test_detach_cpu_float16 (__main__.TestNestedTensorDeviceTypeCPU)
|
triaged, module: flaky-tests, module: nestedtensor, skipped, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_detach_cpu_float16&suite=TestNestedTensorDeviceTypeCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17230279068).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 33 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_detach_cpu_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
645 | 110,243 |
[inductor] Triton mm codegen uses `make_block_ptr`
|
triaged, open source, topic: not user facing, module: inductor, ciflow/inductor
|
Partially fixes: https://github.com/pytorch/pytorch/issues/109420
No perf degradation (on an RTX 4070 Laptop GPU, Ada Lovelace):
Before:

After:

The reason the curve does not match cublas/triton for smaller M/N is that the Triton autotune configs are not tuned for consumer GPUs. See: https://github.com/pytorch/pytorch/issues/109489
`uint4x2_mixed_mm` is skipped due to complicated use of the ptrs to do bitwise hacks.
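For readers who haven't used the block-pointer API this PR moves the mm template onto, here is a minimal standalone Triton sketch (a simple tile-copy kernel, not the actual code Inductor generates):
```python
import triton
import triton.language as tl

@triton.jit
def copy_tile_kernel(a_ptr, out_ptr, M, N, stride_m, stride_n,
                     BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    # A block pointer describes a (BLOCK_M, BLOCK_N) tile of an (M, N) matrix;
    # boundary_check handles the ragged edges that explicit masks would otherwise cover.
    src = tl.make_block_ptr(base=a_ptr, shape=(M, N), strides=(stride_m, stride_n),
                            offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
                            block_shape=(BLOCK_M, BLOCK_N), order=(1, 0))
    dst = tl.make_block_ptr(base=out_ptr, shape=(M, N), strides=(stride_m, stride_n),
                            offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
                            block_shape=(BLOCK_M, BLOCK_N), order=(1, 0))
    tile = tl.load(src, boundary_check=(0, 1))
    tl.store(dst, tile, boundary_check=(0, 1))
```
In a matmul template the same construct replaces hand-rolled pointer arithmetic, typically with `tl.advance` stepping the block pointer along the K dimension.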
CC: @jansel @drisspg
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 3 |
646 | 110,241 |
[ONNX] Replace torchscript with new graph building
|
module: onnx, open source, release notes: onnx
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110241
This PR works with https://github.com/microsoft/onnxscript/pull/1071, providing a general idea/starting point of new graph building without TorchScriptGraph. Tested with only `test_add` unittest case in `test_fx_to_onnx_onnxruntime.py`.
NOTE: still need to test with:
1. `add_module_call`
2. multiple outputs
3. initializers
4. sequence type and split node
More detail should be found in https://github.com/microsoft/onnxscript/pull/1071
| 1 |
647 | 110,238 |
PyTorch with non-shared build (building a single shared lib) is unsupported
|
module: build, triaged, module: static linking
|
### π Describe the bug
Building with `BUILD_SHARED_LIBS=0` to produce a single `libtorch.so` appears to be broken when using Python/torch rather than C++/libtorch.
Full build command:
```
BUILD_SHARED_LIBS=0 TORCH_CUDA_ARCH_LIST="9.0" CUDA_HOME="/usr/local/cuda" CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" USE_SYSTEM_NCCL=1 USE_MAGMA=1 USE_OPENCV=1 BUILD_CAFFE2=0 BUILD_CAFFE2_OPS=0 BUILD_TEST=0 USE_CUSPARSELT=1 USE_FBGEMM=0 USE_KINETO=0 USE_METAL=0 USE_NATIVE_ARCH=1 USE_NNPACK=0 USE_QNNPACK=0 USE_VALGRIND=0 USE_XNNPACK=0 USE_ITT=0 USE_C10D_UCC=0 USE_GLOO=0 USE_C10D_MPI=0 ONNX_ML=0 USE_TENSORPIPE=0 BUILD_NVFUSER=0 BUILD_FUNCTORCH=1 BUILD_SPLIT_CUDA=0 python3 setup.py develop
```
The following output is observed:
```
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspace/pytorch/torch/__init__.py", line 235, in <module>
_load_global_deps()
File "/workspace/pytorch/torch/__init__.py", line 194, in _load_global_deps
raise err
File "/workspace/pytorch/torch/__init__.py", line 175, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /workspace/pytorch/torch/lib/libtorch_global_deps.so: cannot open shared object file: No such file or directory
```
Running with ` TORCH_USE_RTLD_GLOBAL=1` to bypass the above failure seems to indicate that something is being loaded twice:
```
>>> import torch
terminate called after throwing an instance of 'c10::Error'
what(): Tried to register multiple backend fallbacks for the same dispatch key Conjugate; previous registration registered at /workspace/pytorch/aten/src/ATen/ConjugateFallback.cpp:17, new registration registered at /workspace/pytorch/aten/src/ATen/ConjugateFallback.cpp:17
Exception raised from registerFallback at /workspace/pytorch/aten/src/ATen/core/dispatch/Dispatcher.cpp:401 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0xfffe1caf24d8 in /workspace/pytorch/torch/lib/libshm.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xf8 (0xfffe16cbdd28 in /workspace/pytorch/torch/lib/libshm.so)
frame #2: c10::Dispatcher::registerFallback(c10::DispatchKey, c10::KernelFunction, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x558 (0xfffe27e16a88 in /workspace/pytorch/torch/lib/libtorch_python.so)
frame #3: torch::Library::_fallback(torch::CppFunction&&) & + 0x1c4 (0xfffe27e53e54 in /workspace/pytorch/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x21b8790 (0xfffe27bd8790 in /workspace/pytorch/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x21bc00c (0xfffe27bdc00c in /workspace/pytorch/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x16824d4 (0xfffe270a24d4 in /workspace/pytorch/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x5624 (0xfffe37405624 in /lib/ld-linux-aarch64.so.1)
frame #8: <unknown function> + 0x572c (0xfffe3740572c in /lib/ld-linux-aarch64.so.1)
frame #9: _dl_catch_exception + 0xd0 (0xfffe3722d1f0 in /usr/lib/aarch64-linux-gnu/libc.so.6)
frame #10: <unknown function> + 0xbf5c (0xfffe3740bf5c in /lib/ld-linux-aarch64.so.1)
frame #11: _dl_catch_exception + 0x78 (0xfffe3722d198 in /usr/lib/aarch64-linux-gnu/libc.so.6)
frame #12: <unknown function> + 0xc2fc (0xfffe3740c2fc in /lib/ld-linux-aarch64.so.1)
frame #13: <unknown function> + 0x796e4 (0xfffe371796e4 in /usr/lib/aarch64-linux-gnu/libc.so.6)
frame #14: _dl_catch_exception + 0x78 (0xfffe3722d198 in /usr/lib/aarch64-linux-gnu/libc.so.6)
frame #15: _dl_catch_error + 0x40 (0xfffe3722d260 in /usr/lib/aarch64-linux-gnu/libc.so.6)
frame #16: <unknown function> + 0x791c0 (0xfffe371791c0 in /usr/lib/aarch64-linux-gnu/libc.so.6)
frame #17: dlopen + 0x54 (0xfffe37179784 in /usr/lib/aarch64-linux-gnu/libc.so.6)
<omitting python frames>
Aborted (core dumped)
```
### Versions
(upstream source build)
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-1008-nvidia-64k-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA Graphics Device
Nvidia driver version: 535.115
```
cc @malfet @seemethere
| 7 |
648 | 110,234 |
add full permuted
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110234
* #110242
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
649 | 110,231 |
[fbcode] small fix on dataloader and content_understanding/main
|
fb-exported, release notes: dataloader
|
Summary:
* when the DataLoader shuts down, the order of object `__del__` calls and interpreter clean-up can occasionally get scrambled, which may raise an exception complaining that certain attributes of the object are not defined. Adding a catch keeps that from propagating (see the sketch after this list).
* remove the extra `register_new_resolver` as pointed in https://www.internalfb.com/diff/D49448666?dst_version_fbid=132094109961892&transaction_fbid=1285292328796395
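A minimal sketch of the defensive `__del__` pattern being described (illustration only, not the actual fbcode change):
```python
class _ShutdownSafe:
    def __del__(self):
        try:
            self._cleanup()
        except AttributeError:
            # During interpreter shutdown, attributes/globals this object relies on
            # may already have been torn down; never let __del__ raise because of that.
            pass

    def _cleanup(self):
        pass  # release worker resources, close queues, etc.
```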
Test Plan: cu-cli launch -p xray_video -c xrv_mae_vit_test --no-maybe-pull -l
Reviewed By: ejguan
Differential Revision: D49710645
| 8 |
650 | 110,222 |
[RELAND] Disallow skipping dynamo
|
ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor, module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110222
Previous discussion: https://github.com/pytorch/pytorch/pull/109476
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
651 | 110,210 |
DISABLED test_noncontiguous_samples__native_batch_norm_legit_cuda_float32 (__main__.TestCommonCUDA)
|
triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_noncontiguous_samples__native_batch_norm_legit_cuda_float32&suite=TestCommonCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17203542961).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_noncontiguous_samples__native_batch_norm_legit_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_ops.py`
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
652 | 110,205 |
RuntimeError: Expected packed scalar Tensor to be of dimension 1. Got 0 instead.
|
module: optimizer, triaged, module: regression, actionable
|
### π Describe the bug
This code used to work with previous versions of torch. The bug happens when passing the betas-parameter as a tensor. It only occurs when running on GPU.
```python
import torch
from torch import Tensor, jit, nn
device = torch.device("cuda")
model = nn.Linear(4,4).to(device=device)
optimizer_config = {
"lr": 0.001,
"betas": torch.tensor([0.9000, 0.9990]),
}
optim = torch.optim.Adam(model.parameters(), **optimizer_config)
x = torch.randn(5, 4, device=device)
model(x).norm().backward()
optim.step()
```
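A possible interim workaround, continuing the snippet above (untested sketch, not a fix for the underlying packing bug): pass the betas as plain Python floats so the foreach path never has to pack 0-d tensors.
```python
betas_t = torch.tensor([0.9000, 0.9990])
optim = torch.optim.Adam(model.parameters(), lr=0.001,
                         betas=(betas_t[0].item(), betas_t[1].item()))  # documented float form
model(x).norm().backward()
optim.step()
```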
### Versions
<details>
```
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-33-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 535.104.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4800,0000
CPU min MHz: 800,0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-annotations==3.0.1
[pip3] flake8-bugbear==23.9.16
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-docstrings==1.7.0
[pip3] flake8-pyi==23.6.0
[pip3] flake8-rst==0.8.0
[pip3] flake8-rst-docstrings==0.3.0
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[pip3] torchinfo==1.8.0
[pip3] triton==2.0.0
[conda] Could not collect
```
</details>
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 4 |
653 | 110,194 |
cudaMallocAsync cause too much fragmentation.
|
module: cuda, module: memory usage, triaged, module: CUDACachingAllocator
|
### π Describe the bug
I try to use [CUDAβs built-in asynchronous allocator](https://developer.nvidia.com/blog/using-cuda-stream-ordered-memory-allocator-part-1/) by setting `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync`. Then I encounter OOM with over 20GB memory 'missing':
```
torch.cuda.OutOfMemoryError: Allocation on device 5 would exceed allowed memory. (out of memory)
Currently allocated : 54.14 GiB
Requested : 686.41 MiB
Device limit : 79.35 GiB
Free (according to CUDA): 27.19 MiB
PyTorch limit (set by user-supplied memory fraction)
: 17179869184.00 GiB
```
This does not match what the [post](https://developer.nvidia.com/blog/using-cuda-stream-ordered-memory-allocator-part-1/) says:
> If a memory allocation request made using cudaMallocAsync can't be serviced due to fragmentation of the corresponding memory pool, the CUDA driver defragments the pool by remapping unused memory in the pool to a contiguous portion of the GPU's virtual address space. Remapping existing pool memory instead of allocating new memory from the OS also helps keep the application's memory footprint low.
This task can barely run with PyTorch's native allocator, and even then with significant fragmentation:
```
(after 3 iterations) memory (MB) | allocated: 50900.0244140625 | max allocated: 68742.84326171875 | reserved: 76064.0 | max reserved: 76064.0
```
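For completeness, a minimal sketch of how the backend is selected and how such numbers are read out (standard `torch.cuda` APIs; the allocator config is picked up when CUDA is first initialized, so setting the variable in the launching shell is safest):
```python
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "backend:cudaMallocAsync")

import torch

x = torch.empty(1024, 1024, device="cuda")   # first allocation initializes the allocator
print(torch.cuda.get_allocator_backend())    # "cudaMallocAsync" or "native"
print(torch.cuda.memory_allocated() / 2**20, "MiB allocated")
print(torch.cuda.memory_reserved() / 2**20, "MiB reserved")
```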
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Alibaba Group Enterprise Linux Server 7.2 (Paladin) (x86_64)
GCC version: (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.32
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform:
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.154
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.1
/usr/lib64/libcudnn_adv_infer.so.8.9.1
/usr/lib64/libcudnn_adv_train.so.8.9.1
/usr/lib64/libcudnn_cnn_infer.so.8.9.1
/usr/lib64/libcudnn_cnn_train.so.8.9.1
/usr/lib64/libcudnn_ops_infer.so.8.9.1
/usr/lib64/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-75,124-127
Off-line CPU(s) list: 76-123
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3440.970
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] torchmetrics==1.1.2
[pip3] torchvision==0.15.2+cu117
[pip3] triton==2.0.0
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2023.2.0 pypi_0 pypi
[conda] mkl-include 2023.2.0 pypi_0 pypi
[conda] numpy 1.19.5 pypi_0 pypi
[conda] torch 2.0.1+cu117 pypi_0 pypi
[conda] torchaudio 2.0.2+cu117 pypi_0 pypi
[conda] torchmetrics 1.1.2 pypi_0 pypi
[conda] torchvision 0.15.2+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @ptrblck
| 1 |
654 | 110,181 |
CI: rocm `(default, 1, 3, linux.rocm.gpu)` is very slow
|
module: rocm, module: ci, triaged, module: devx
|
## Current Status
ongoing
## Issue looks like
Taking ~2.5 hours on https://github.com/pytorch/pytorch/pull/110167

Taking 3.5+ hours on https://github.com/pytorch/pytorch/pull/109976

## User impact
Slower merging
## Root cause
Seems like it may have been introduced in https://github.com/pytorch/pytorch/pull/109817 @malfet

## Mitigation
Not sure
## Prevention/followups
Investigate cause of slow running time or split up tests into smaller test jobs. Try to make the tests run in similar time to CUDA tests (~1.5 hours)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @kit1980 @huydhn @clee2000
| 5 |
655 | 110,180 |
Squeeze and unsqeeze 3d to 4d for sdpa
| null |
# Summary
If the input to SDPA is 3-dimensional, we expand it to 4 dimensions by adding an outer "unsqueezed" dimension. This lets the 3-dimensional input pass the fused_attention checks instead of immediately failing them.
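Roughly, the wrapping described above looks like this from the Python side (illustrative sketch only; the exact position of the dummy dimension in the actual change is not shown here):
```python
import torch
import torch.nn.functional as F

q = torch.randn(8, 128, 64)   # 3-D input: (batch, seq_len, head_dim)
k = torch.randn(8, 128, 64)
v = torch.randn(8, 128, 64)

# Add an outer size-1 dim so the fused kernels see a 4-D input, then drop it again.
out = F.scaled_dot_product_attention(
    q.unsqueeze(0), k.unsqueeze(0), v.unsqueeze(0)
).squeeze(0)
assert out.shape == (8, 128, 64)
```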
| 2 |
656 | 110,178 |
[ONNX] Add sanity check in CI for onnxbench
|
open source, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110313
* #110477
* __->__ #110178
* #108376
ONNX CI to run benchmark with `--quick` to validate the onnxbench infra.
| 1 |
657 | 110,175 |
Module-level bufferization for torch dynamo module spanning multiple `fx.GraphModule`
|
feature, triaged, oncall: pt2, module: aotdispatch
|
### π The feature, motivation and pitch
Some details across:
https://github.com/pytorch/pytorch/issues/109240#issuecomment-1737764251
https://github.com/pytorch/pytorch/issues/106596#issuecomment-1737835937
In summary:
1. It is likely that since we do not analyze buffer aliases (read/write) which may occur in graph breaks, we require copying inputs whenever they are mutated by the fx graphs. (TODO: add source reference?)
2. This can be complicated for scenarios like https://github.com/pytorch/pytorch/issues/109240#issuecomment-1737764251 where we are unsure when we want to commit the mutations back to the original buffer - which would be unnecessarily constrained by the time-sensitive nature of potential eager-mode mutations.
Solution:
1. If we can analyze the code in the eager mode computations (r/w accesses to given buffer), then we can better plan for bufferization across the entire module, reducing copies in between graph breaks and fx graphs. However, pointer alias analysis is generally impossible to do in opaque code (due to pointer encapsulation), so this may be completely infeasible.
However, it _may_ be the case that we know that _certain_ fx graph outputs can _never_ be mutated by eager code - because they are **owned** (i.e. created) by the graph, hence there can be no pointer alias to them without an explicit reference to the pointer in the eager code.
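A tiny illustration of the situation being discussed (hypothetical example, not taken from either linked issue): a module buffer mutated by a compiled region that also contains a graph break, where the mutation has to be committed back because eager code running at the break might read or alias the buffer.
```python
import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("steps", torch.zeros(()))

    def forward(self, x):
        self.steps += 1               # buffer mutation inside the traced region
        torch._dynamo.graph_break()   # arbitrary eager code could run here and read `steps`
        return x * self.steps

m = Counter()
out = torch.compile(m)(torch.ones(2))
print(m.steps)  # the mutation must be visible to eager code around the break
```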
### Alternatives
Unclear
### Additional context
Unclear if any of this is already tracked in torch dynamo [`SideEffects`](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/side_effects.py)
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
658 | 110,170 |
[inductor] Added decomposition for _upsample_bilinear2d_aa
|
open source, topic: not user facing, module: inductor, ciflow/inductor
|
Description:
- Added decomposition for _upsample_bilinear2d_aa
Benchmark results:
```
[----------------------------------- Interpolate bilinear, AA=true, cpu ----------------------------------]
| Eager | Compiled
1 threads: ------------------------------------------------------------------------------------------------
Input (1, 3, 345, 456) -> (271, 272), torch.uint8, torch.contiguous_format | 623.0 | 1882.2
Input (1, 3, 345, 456) -> (271, 272), torch.float32, torch.contiguous_format | 1014.1 | 1466.0
Input (1, 3, 345, 456) -> (271, 272), torch.uint8, torch.channels_last | 251.3 | 2232.1
Input (1, 3, 345, 456) -> (271, 272), torch.float32, torch.channels_last | 1537.4 | 1764.9
Input (4, 3, 345, 456) -> (271, 272), torch.uint8, torch.contiguous_format | 2333.5 | 7733.9
Input (4, 3, 345, 456) -> (271, 272), torch.float32, torch.contiguous_format | 4366.2 | 5806.6
Input (4, 3, 345, 456) -> (271, 272), torch.uint8, torch.channels_last | 925.6 | 8910.6
Input (4, 3, 345, 456) -> (271, 272), torch.float32, torch.channels_last | 6194.6 | 7525.3
Input (1, 3, 345, 456) -> (567, 678), torch.uint8, torch.contiguous_format | 2170.2 | 3020.9
Input (1, 3, 345, 456) -> (567, 678), torch.float32, torch.contiguous_format | 2483.4 | 2512.8
Input (1, 3, 345, 456) -> (567, 678), torch.uint8, torch.channels_last | 534.7 | 4060.9
Input (1, 3, 345, 456) -> (567, 678), torch.float32, torch.channels_last | 5169.1 | 3782.0
Input (4, 3, 345, 456) -> (567, 678), torch.uint8, torch.contiguous_format | 7954.9 | 12137.4
Input (4, 3, 345, 456) -> (567, 678), torch.float32, torch.contiguous_format | 9919.4 | 9874.2
Input (4, 3, 345, 456) -> (567, 678), torch.uint8, torch.channels_last | 2021.2 | 16475.9
Input (4, 3, 345, 456) -> (567, 678), torch.float32, torch.channels_last | 20689.3 | 14815.4
Times are in microseconds (us).
[--------------------------------- Interpolate bilinear, AA=true, cuda ---------------------------------]
| Eager | Compiled
1 threads: ----------------------------------------------------------------------------------------------
Input (1, 3, 345, 456) -> (271, 272), torch.float32, torch.contiguous_format | 11.7 | 92.8
Input (1, 3, 345, 456) -> (271, 272), torch.float32, torch.channels_last | 31.8 | 101.2
Input (4, 3, 345, 456) -> (271, 272), torch.float32, torch.contiguous_format | 42.2 | 100.5
Input (4, 3, 345, 456) -> (271, 272), torch.float32, torch.channels_last | 84.8 | 109.9
Input (1, 3, 345, 456) -> (567, 678), torch.float32, torch.contiguous_format | 29.3 | 109.3
Input (1, 3, 345, 456) -> (567, 678), torch.float32, torch.channels_last | 52.8 | 109.6
Input (4, 3, 345, 456) -> (567, 678), torch.float32, torch.contiguous_format | 94.8 | 144.2
Input (4, 3, 345, 456) -> (567, 678), torch.float32, torch.channels_last | 187.4 | 112.3
Times are in microseconds (us).
```
[Source](https://github.com/vfdev-5/pth-inductor-dev/blob/master/perf_interp_bilinear_aa.py)
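For anyone who wants to reproduce the comparison locally, the benchmarked op boils down to the following (simplified sketch of the linked script; the tolerance is loose because the decomposition need not be bit-exact):
```python
import torch
import torch.nn.functional as F

def downscale(x):
    return F.interpolate(x, size=(271, 272), mode="bilinear",
                         antialias=True, align_corners=False)

compiled = torch.compile(downscale)
x = torch.randn(1, 3, 345, 456)
torch.testing.assert_close(compiled(x), downscale(x), rtol=1e-3, atol=1e-3)
```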
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 4 |
659 | 110,168 |
Add cuSPARSELt to the nightlies
|
oncall: binaries, module: build, triaged
|
## Issue description
cuSPARSELt is a library that offers accelerated semi-structured sparse matmul. We are looking to integrate v0.5.0.
The download link can be found here: https://developer.nvidia.com/cusparselt-downloads
The library is ~ 20 Mb.
We merged build support for cuSPARSELt in [this PR](https://github.com/pytorch/pytorch/pull/103700)
To build pytorch with cuSPARSELt, we need to set two env variables,
```
USE_CUSPARSELT=1
CUSPARSELT_ROOT=/path/to/cusparselt/download
```
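For context, this is roughly what the library accelerates from Python via the prototype semi-structured sparsity API (illustrative sketch; exact dtype/shape constraints and backend selection are not shown):
```python
import torch
from torch.sparse import to_sparse_semi_structured

# Weight with a 2:4 sparsity pattern (two zeros in every group of four).
mask = torch.tensor([0, 0, 1, 1], dtype=torch.bool).tile(128, 32).cuda()
W = torch.randn(128, 128, dtype=torch.float16, device="cuda") * mask

W_sparse = to_sparse_semi_structured(W)      # compressed 2:4 representation
x = torch.randn(128, 128, dtype=torch.float16, device="cuda")
y = torch.mm(W_sparse, x)                    # dispatched to the accelerated sparse matmul
```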
cc @seemethere @malfet
| 3 |
660 | 110,165 |
[DeviceMesh] Record PT-D API usage on parent mesh dim in MeshEnv
|
release notes: distributed (fsdp)
|
https://github.com/pytorch/pytorch/issues/109392
When we have different compositions, we need to know whether we need to turn on TP extension inside FSDP. Recording the PT-D API usage on parent mesh dim can help us detect this.
| 1 |
661 | 110,162 |
Fix for PyTorch mobile flatbuffer loader out of bounds reads
|
fb-exported, release notes: mobile
|
Summary:
The mobile_ivalue_size field in the mobile_bytecode flatbuffer schema can be larger than the ivalues vector. This introduces potential for memory corruption when parsing the mobile_bytecode Module.
This diff fixes the issue by ensuring that mobile_ivalue_size is less than the size of the ivalues vector.
Test Plan: contbuild & OSS CI
Differential Revision: D49687548
| 2 |
662 | 110,156 |
Add _worker_end_fn_t to the DataLoader
|
module: dataloader, triaged, module: data
|
### π The feature, motivation and pitch
For my particular use case, I need the dataset to perform some operations when the DataLoader's workers are terminated, i.e. when the `iteration_end` value in the `worker.py` file is True.
Similar to `_worker_init_fn_t` for setting the seeds, I would like to have a `_worker_end_fn_t` that takes the dataset as an argument (a sketch of what this could look like is below).
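A sketch of what the requested hook could look like, mirroring `worker_init_fn` (the `worker_end_fn` argument and the dataset method are hypothetical - they do not exist in today's DataLoader; `my_dataset`/`my_init_fn` are placeholders):
```python
from torch.utils.data import DataLoader, get_worker_info

def worker_end_fn(worker_id):                     # hypothetical, mirrors worker_init_fn
    dataset = get_worker_info().dataset
    dataset.persist_state()                       # hypothetical dataset hook

loader = DataLoader(my_dataset, num_workers=4,
                    worker_init_fn=my_init_fn,
                    worker_end_fn=worker_end_fn)  # <-- proposed argument, not yet in PyTorch
```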
### Alternatives
I haven't considered any other solutions but right now, I need to patch PyTorch.
### Additional context
This is required for the dataset to persist some data on termination.
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 2 |
663 | 110,155 |
[TESTING] Capture scalar/dynamic by default
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110155
* #109893
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
664 | 110,154 |
ValueError: args contained 2 None's after flattening. When exporting a ScriptModule or ScriptFunction, no args may be None because that breaks type propagation.
|
module: onnx, triaged, module: multi-headed-attention
|
### π Describe the bug
When I export the model to ONNX, I get the error `RuntimeError: ScalarType UNKNOWN_SCALAR is an unexpected tensor scalar type`.
Debugging it line by line, I found that `torch.nn.MultiheadAttention` throws it!
I picked a small unit to reproduce it:
```python
import torch
from torch import Tensor, nn
d_model = 256
dropout = 0.0
n_heads = 8
model = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
tgt = torch.randn(6, 1, 256, dtype=torch.float32)
q = torch.randn(6, 1, 256, dtype=torch.float32)
k = torch.randn(6, 1, 256, dtype=torch.float32)
args = (
tgt, q, k
)
scripted_model = torch.jit.script(model)
scripted_model.save("model.pt")
torch.onnx.export(model=scripted_model, args=args, f="model.onnx", verbose=True,opset_version=17)
```
then an error thrown
```sh
=========== Diagnostic Run torch.onnx.export version 2.1.0a0+b5021ba ===========
verbose: False, log level: 40
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "script_save_export_onnx.py", line 331, in <module>
torch.onnx.export(model=scripted_model, args=args, f="model.onnx", verbose=True,opset_version=17)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 507, in export
_export(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1567, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1124, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 968, in _create_jit_graph
_check_flatten_did_not_remove(args, flattened_args)
File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 956, in _check_flatten_did_not_remove
raise ValueError(
ValueError: args contained 2 None's after flattening. When exporting a ScriptModule or ScriptFunction, no args may be None because that breaks type propagation.
```
### Versions
docker pull nvcr.io/nvidia/pytorch:23.07-py3
| 0 |
665 | 110,148 |
Torch.onnx.dynamo_export stuck at reshape
|
module: onnx, triaged, module: dynamo, module: export
|
## Issue description
I am trying out dynamo_export to convert my pytorch model to onnx and it seemed to be stuck at reshaping a tensor of all things.
## Code example
The model code can be found [here](https://github.com/andresprados/SPIGA/blob/main/spiga/models/spiga.py), and the portion that caused the error is in the trace stack below:
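From the trace below, the failing pattern is a reshape of a tensor whose leading dimension is data-dependent; an illustrative sketch of that shape of failure (not the actual SPIGA code) is:
```python
import torch

def calculate_distances_like(pts_proj):
    # Boolean masking (or any nonzero-style op) yields a tensor whose first
    # dimension is an unbacked symbolic size i0 at trace time ...
    kept = pts_proj[pts_proj[..., 0] > 0]      # shape (i0, 2)
    # ... and reshaping it to a hard-coded 98 asks the tracer to decide
    # Eq(2*i0, 98), which it cannot do without the constrain_as_value /
    # constrain_as_size hints mentioned in the error message.
    return kept.reshape(1, 98, -1)
```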
<details>
<summary>Click to expand</summary>
> /home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py:130: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
> warnings.warn(
> /home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/nn/functional.py:4358: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
> warnings.warn(
> /home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/nn/functional.py:4296: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
> warnings.warn(
> Traceback (most recent call last):
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1433, in run_node
> return getattr(args[0], node.target)(*args[1:], **kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
> return fn(*args, **kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1304, in __torch_dispatch__
> return self.dispatch(func, types, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1510, in dispatch
> return decomposition_table[func](*args, **kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4450, in view
> return _reshape_view_helper(a, shape, allow_copy=False)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3583, in _reshape_view_helper
> shape = utils.infer_size(shape, a.numel())
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_prims_common/__init__.py", line 824, in infer_size
> numel == newsize or (dim is not None and newsize > 0 and numel % newsize == 0),
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/__init__.py", line 361, in __bool__
> return self.node.bool_()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1036, in bool_
> return self.guard_bool("", 0)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1006, in guard_bool
> r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 246, in wrapper
> return event.run(self)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 151, in run
> return self.f(*args, **kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 3834, in evaluate_expr
> concrete_val = self.size_hint(orig_expr)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 3633, in size_hint
> raise self._make_data_dependent_error(result_expr, expr)
> torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: It appears that you're trying to get a value out of symbolic int/float whose value is data-dependent (and thus we do not know the true value.) The expression we were trying to evaluate is Eq(2*i0, 98) (unhinted: Eq(2*i0, 98)). Scroll up to see where each of these data-dependent accesses originally occurred.
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.pyβ, line 1352, in get_fake_value
> return wrap_fake_exception(
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.pyβ, line 921, in wrap_fake_exception
> return fn()
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.pyβ, line 1353, in
> lambda: run_node(tx.output, node, args, kwargs, nnmodule)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.pyβ, line 1452, in run_node
> raise RuntimeError(fn_str + str(e)).with_traceback(e.traceback) from e
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.pyβ, line 1433, in run_node
> return getattr(args[0], node.target)(*args[1:], **kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/utils/_stats.pyβ, line 20, in wrapper
> return fn(*args, **kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.pyβ, line 1304, in torch_dispatch
> return self.dispatch(func, types, args, kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.pyβ, line 1510, in dispatch
> return decomposition_table[func](args, **kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_refs/init.pyβ, line 4450, in view
> return _reshape_view_helper(a, shape, allow_copy=False)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_refs/init.pyβ, line 3583, in reshape_view_helper
> shape = utils.infer_size(shape, a.numel())
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/prims_common/init.pyβ, line 824, in infer_size
> numel == newsize or (dim is not None and newsize > 0 and numel % newsize == 0),
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/init.pyβ, line 361, in bool
> return self.node.bool()
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.pyβ, line 1036, in bool
> return self.guard_bool(ββ, 0)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.pyβ, line 1006, in guard_bool
> r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/recording.pyβ, line 246, in wrapper
> return event.run(self)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/recording.pyβ, line 151, in run
> return self.f(args, **kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.pyβ, line 3834, in evaluate_expr
> concrete_val = self.size_hint(orig_expr)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.pyβ, line 3633, in size_hint
> raise self._make_data_dependent_error(result_expr, expr)
> RuntimeError: Failed running call_method reshape((FakeTensor(β¦, device=βcuda:0β, size=(1, i0, 2), grad_fn=), 1, 98, -1), **{}):
> It appears that youβre trying to get a value out of symbolic int/float whose value is data-dependent (and thus we do not know the true value.) The expression we were trying to evaluate is Eq(2i0, 98) (unhinted: Eq(2i0, 98)). Scroll up to see where each of these data-dependent accesses originally occurred.
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/onnx/_internal/exporter.pyβ, line 1195, in dynamo_export
> ).export()
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/onnx/_internal/exporter.pyβ, line 941, in export
> graph_module = self.options.fx_tracer.generate_fx(
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.pyβ, line 199, in generate_fx
> graph_module, graph_guard = torch._dynamo.export(
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/eval_frame.pyβ, line 1216, in inner
> result_traced = opt_f(*args, **kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/eval_frame.pyβ, line 406, in _fn
> return fn(*args, **kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.pyβ, line 154, in wrapped
> return output_adapter.apply(model_func(*args, **kwargs))
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/eval_frame.pyβ, line 554, in catch_errors
> return callback(frame, cache_entry, hooks, frame_state)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/convert_frame.pyβ, line 140, in _fn
> return fn(*args, **kwargs)
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/convert_frame.pyβ, line 380, in _convert_frame_assert
> return _compile(
> File β/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/convert_frame.pyβ, line 559, in _compile
> guarded_code = compile_inner(code, one_graph, hooks, transform)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 190, in time_wrapper
> r = func(*args, **kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 481, in compile_inner
> out_code = transform_code_object(code, transform)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
> transformations(instructions, code_options)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 451, in transform
> tracer.run()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2094, in run
> super().run()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 739, in run
> and self.step()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 702, in step
> getattr(self, inst.opname)(inst)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
> return inner_fn(self, inst)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1135, in CALL_FUNCTION
> self.call_function(fn, args, {})
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in call_function
> self.push(fn.call_function(self, args, kwargs))
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 307, in call_function
> return super().call_function(tx, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 261, in call_function
> return super().call_function(tx, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 90, in call_function
> return tx.inline_user_function_return(
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 612, in inline_user_function_return
> result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2221, in inline_call
> return cls.inline_call(parent, func, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2343, in inline_call
> tracer.run()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 739, in run
> and self.step()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 702, in step
> getattr(self, inst.opname)(inst)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
> return inner_fn(self, inst)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1135, in CALL_FUNCTION
> self.call_function(fn, args, {})
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in call_function
> self.push(fn.call_function(self, args, kwargs))
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 307, in call_function
> return super().call_function(tx, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 261, in call_function
> return super().call_function(tx, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 90, in call_function
> return tx.inline_user_function_return(
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 612, in inline_user_function_return
> result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2221, in inline_call
> return cls.inline_call(parent, func, args, kwargs)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2343, in inline_call
> tracer.run()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 739, in run
> and self.step()
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 702, in step
> getattr(self, inst.opname)(inst)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
> return inner_fn(self, inst)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1135, in CALL_FUNCTION
> self.call_function(fn, args, {})
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in call_function
> self.push(fn.call_function(self, args, kwargs))
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 617, in call_function
> return self.obj.call_method(tx, self.name, args, kwargs).add_options(self)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/tensor.py", line 693, in call_method
> return wrap_fx_proxy(
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1289, in wrap_fx_proxy
> return wrap_fx_proxy_cls(
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1376, in wrap_fx_proxy_cls
> example_value = get_fake_value(proxy.node, tx)
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1381, in get_fake_value
> raise UserError(
> torch._dynamo.exc.UserError: Tried to use data-dependent value in the subsequent computation. This can happen when we encounter unbounded dynamic value that is unknown during tracing time.You will need to explicitly give hint to the compiler. Please take a look at constrain_as_value OR constrain_as_size APIs
>
> from user code:
> File "/home/evas/Downloads/SPIGA/SPIGA/spiga/models/spiga.py", line 82, in forward
> embedded_ft = self.extract_embedded(pts_proj, visual_field, step)
> File "/home/evas/Downloads/SPIGA/SPIGA/spiga/models/spiga.py", line 124, in extract_embedded
> shape_ft = self.calculate_distances(pts_proj)
> File "/home/evas/Downloads/SPIGA/SPIGA/spiga/models/spiga.py", line 163, in calculate_distances
> dist_wo_self = dist[:, self.diagonal_mask, :].reshape(B, L, -1)
>
> dist: tensor([[[-0.0072, -0.0387],
> [-0.0144, -0.0772],
> [-0.0311, -0.1161],
> ...,
> [ 0.1014, -0.2628],
> [ 0.1312, -0.2631],
> [ 0.2310, -0.0109]]], device='cuda:0', grad_fn=<IndexBackward0>)
> shape: (1, 98)
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/home/evas/Downloads/SPIGA/SPIGA/app_CH.py", line 74, in
> tracked_obj = processor.process_frame(frame, [faces[0]])
> File "/home/evas/Downloads/SPIGA/SPIGA/spiga/demo/analyze/extract/spiga_processor.py", line 35, in process_frame
> features = self.processor.inference(frame, bboxes)
> File "/home/evas/Downloads/SPIGA/SPIGA/spiga/inference/framework.py", line 70, in inference
> outputs = self.net_forward(batch_crops)
> File "/home/evas/Downloads/SPIGA/SPIGA/spiga/inference/framework.py", line 98, in net_forward
> torch.onnx.dynamo_export(self.model, inputs).save("wflw.onnx")
> File "/home/evas/anaconda3/envs/CH_test/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1206, in dynamo_export
> raise OnnxExporterError(
> torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at "report_dynamo_export.sarif". SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer ([Sarif Viewer](https://microsoft.github.io/sarif-web-component/)). Please report a bug on PyTorch Github: [Issues · pytorch/pytorch · GitHub](https://github.com/pytorch/pytorch/issues)
</details>
OS: Ubuntu 20.04
| package | Version |
| ------------- | ------------- |
| torch | 2.1.0.dev20230830+cu118 |
| torchaudio | 2.1.0.dev20230830+cu118 |
| torchmetrics | 0.11.4 |
| torchvision | 0.16.0.dev20230830+cu118 |
| onnx | 1.14.0 |
| onnxruntime-gpu | 1.15.1 |
| onnxscript-preview | 0.1.0.dev20230905 |
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 4 |
666 | 110,146 |
[DONT MERGE][ROCm] Update magma to 2.7.2 version
|
module: rocm, open source, ciflow/trunk, topic: not user facing
|
cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 2 |
667 | 110,137 |
custom_ops._destroy("test::foo") doesn't remove abstract_impl
|
triaged, oncall: pt2
|
### π Describe the bug
self explanatory
cc @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
### Versions
main
| 0 |
668 | 110,136 |
Unbacked SymInts get reallocated whenever you repropagate fake tensors
|
triaged, oncall: pt2, module: fakeTensor, module: dynamic shapes
|
### π Describe the bug
(to be filled)
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @eellison
| 0 |
669 | 110,135 |
logging stack_info doesn't do anything
|
triaged, oncall: pt2, module: ProxyTensor
|
### π Describe the bug
as stated
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
670 | 110,131 |
Some ONNX tests have been disabled because of new tensor.split signature
|
module: onnx, triaged, onnx-triaged
|
### π Describe the bug
#107484 adds a new drop_remainder=False attribute to tensor.split.
This causes some ONNX tests to fail, because the [corresponding op in ONNXScript](https://github.com/microsoft/onnxscript/blob/b6c5e3bb9729f939e979a41d8d724009df9f3b58/onnxscript/function_libs/torch_lib/ops/core.py#L6918) has the previous signature.
`torch.onnx._internal.diagnostics.infra.context.RuntimeErrorWithDiagnostic: Cannot find any perfect/nearest match of symbolic function for aten::split.Tensor,which should be registered under aten.split.Tensor.
`
Once the PR is merged and the ONNX Script op is updated, I will fix this issue and re-enable the tests.
### Versions
Not needed
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 4 |
671 | 110,130 |
Create a new heuristic TD rule for failures coming from base commit of the pull requests
|
triaged, module: devx
|
Here is the thoughts behind this feature request:
* If a test is failing on the base commit, we might want to run it **last** on the PR, knowing that it's going to fail. This allows other failures to surface and prevents the situation where reverting a change surfaces other failures hidden by the broken trunk (see the sketch after this list).
* ~~On the other hand, if a pull request is deliberately created to forward fix trunk failures, running base failures **first** makes sense as this helps validate the fix faster.~~ However, this should be the minority because:
* We revert first, ask questions later
* Even if the fix works and the failed tests pass right away, it's the usual requirement to wait for the whole job to finish before merging, i.e. the dynamo job has passed
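A rough sketch of the ordering rule (purely illustrative; the function and names below are not the actual TD interfaces):
```python
def order_tests(tests, failing_on_base):
    # Tests already failing on the base commit run last, so new failures
    # introduced by the PR can surface first.
    healthy = [t for t in tests if t not in failing_on_base]
    known_bad = [t for t in tests if t in failing_on_base]
    return healthy + known_bad

print(order_tests(["test_a", "test_b", "test_c"], {"test_b"}))
# ['test_a', 'test_c', 'test_b']
```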
cc @ZainRizvi @kit1980 @clee2000
| 1 |
672 | 110,129 |
[manual] Rewrite remaining mock external_deps
|
caffe2, fb-exported
|
Summary:
Remove dependencies on mock from external_deps.
Test Plan: Visual inspection and sandcastle
Reviewed By: aleivag, igorsugak
Differential Revision: D49337369
| 6 |
673 | 110,120 |
[MPS] Fix output stride holistically when input isn't contiguous
|
open source, release notes: mps, ciflow/mps
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110120
* #109584
* #109574
* #109557
This is a long lasting bug, related issues: https://github.com/pytorch/pytorch/issues/107694, https://github.com/pytorch/pytorch/issues/94396 (and more).
In general, we want to gather cached contiguous input tensors so that we don't need to make non-contiguous input tensors contiguous before the MPS graph is run, since making tensors contiguous is an inefficient operation. However, when the output has been pre-set with a non-contiguous stride, which usually happens when the kernel is structured or in-place, the result becomes incorrect because the gathered input tensors are contiguous, thereby producing a contiguous output.
Therefore, this PR makes the output tensor's stride contiguous when cached input tensors are gathered.
This is a POC PR which only applies the fix to `clamp` & `masked_fill_`. If it is feasible, we could think about how to apply it to every MPS Graph kernel.
| 1 |
674 | 110,119 |
[WIP] Cache `FakeTensor` propagation.
|
open source, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110119
Discussion at PoC PR: #109485
| 1 |
675 | 110,116 |
Skip cuda kernel launch with torch.sum when dimension length is 0
|
module: performance, module: cuda, triaged
|
### π The feature, motivation and pitch
If there is a torch tensor, e.g. `a = torch.ones([1, 1000])`, and I call `a.sum(dim=0)`, torch will launch a cuda kernel for this when it's really just a view. Torch should special-case this or include it inside the compiler in `torch.compile`, as it leads to significant wins in certain settings.
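For illustration, a small micro-benchmark (my own sketch, not part of the original report) that contrasts the size-1 reduction with the equivalent view on CUDA:
```python
import torch

a = torch.ones([1, 1000], device="cuda")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

torch.cuda.synchronize()
start.record()
for _ in range(1000):
    a.sum(dim=0)      # launches a reduction kernel even though dim 0 has size 1
end.record()
torch.cuda.synchronize()
print(f"sum(dim=0): {start.elapsed_time(end):.3f} ms for 1000 calls")

start.record()
for _ in range(1000):
    a.squeeze(0)      # pure view, no kernel launch
end.record()
torch.cuda.synchronize()
print(f"squeeze(0): {start.elapsed_time(end):.3f} ms for 1000 calls")
```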
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck
| 0 |
676 | 110,107 |
torch._dynamo.reset() before and after running each test
|
ciflow/trunk, release notes: releng, module: dynamo, ciflow/inductor, keep-going
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110107
Previously, running tests with PYTORCH_TEST_WITH_DYNAMO=1 ran some tests in eager-mode pytorch because the Dynamo cache filled up. We need to reset the Dynamo state before each test in order to actually run each test using Dynamo instead of non-deterministically falling back to eager-mode.
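A minimal sketch of what this looks like at the test-case level (illustrative only; the actual change lives in the common test harness):
```python
import unittest
import torch._dynamo

class MyDynamoTest(unittest.TestCase):
    def setUp(self):
        super().setUp()
        torch._dynamo.reset()  # clear compiled-code caches so the test really runs under Dynamo

    def tearDown(self):
        torch._dynamo.reset()  # don't leak cache entries into the next test
        super().tearDown()
```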
Test Plan:
- CI
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
677 | 110,100 |
torch.export.export does not support already-fakefied inputs within FakeTensorMode
|
triaged, onnx-triaged, module: fakeTensor, module: export, release notes: export
|
### π Describe the bug
Apparently `torch.export.export` does not support already-fakefied inputs, which `torch._dynamo.export` does
```python
import torch
from torch._subclasses import fake_tensor
fake_mode = fake_tensor.FakeTensorMode()
class Model(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.linear = torch.nn.Linear(2, 2)
def forward(self, x):
out = self.linear(x)
return out
with fake_mode:
x = torch.rand(5, 2, 2)
model = Model()
# gm, _ = torch._dynamo.export(model, x) # this works
exported_program = torch.export.export(model, (x,)) # this fails with AssertionError: fake mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7fe4259ac760>) from active fake mode 0 doesn't match mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7fe355ae0760>) from fake tensor input 0
```
The error is
```bash
[2023-09-26 19:46:35,122] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_validation.py:114: UserWarning: WARNING: failed to get cudart_version from onnxruntime build info.
warnings.warn("WARNING: failed to get cudart_version from onnxruntime build info.")
Traceback (most recent call last):
File "repro_fake_tensor_exception_with_torch_export_export.py", line 19, in <module>
gm, _ = torch.export.export(model, x)
File "/opt/pytorch/torch/export/__init__.py", line 587, in export
return export__RC__(f, args, kwargs, dynamic_shapes=dynamic_shapes)
File "/opt/pytorch/torch/_export/__init__.py", line 92, in export__RC__
return export(f, args, kwargs)
File "/opt/pytorch/torch/_export/__init__.py", line 395, in export
raise UserError(UserErrorType.INVALID_INPUT,
torch._dynamo.exc.UserError: Expecting `args` to be a tuple of example positional inputs, got <class 'torch._subclasses.fake_tensor.FakeTensor'>
(ptca) root@88c3e49b4bcf:/opt/pytorch# python repro_fake_tensor_exception_with_torch_export_export.py
[2023-09-26 19:46:56,376] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_validation.py:114: UserWarning: WARNING: failed to get cudart_version from onnxruntime build info.
warnings.warn("WARNING: failed to get cudart_version from onnxruntime build info.")
> /opt/pytorch/torch/_export/__init__.py(430)export()
-> fake_args, fake_kwargs, fake_mode = _convert_input_to_fake(gm_torch_level, args, kwargs)
(Pdb) c
Traceback (most recent call last):
File "repro_fake_tensor_exception_with_torch_export_export.py", line 19, in <module>
gm, _ = torch.export.export(model, (x,))
File "/opt/pytorch/torch/export/__init__.py", line 587, in export
return export__RC__(f, args, kwargs, dynamic_shapes=dynamic_shapes)
File "/opt/pytorch/torch/_export/__init__.py", line 92, in export__RC__
return export(f, args, kwargs)
File "/opt/pytorch/torch/_export/__init__.py", line 430, in export
fake_args, fake_kwargs, fake_mode = _convert_input_to_fake(gm_torch_level, args, kwargs)
File "/opt/pytorch/torch/_export/__init__.py", line 289, in _convert_input_to_fake
if detected_fake_mode := detect_fake_mode(fake_inps):
File "/opt/pytorch/torch/_guards.py", line 829, in detect_fake_mode
assert fake_mode is m, (
AssertionError: fake mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7fe4259ac760>) from active fake mode 0 doesn't match mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7fe355ae0760>) from fake tensor input 0
fake mode from active fake mode 0 allocated at:
File "repro_fake_tensor_exception_with_torch_export_export.py", line 4, in <module>
fake_mode = fake_tensor.FakeTensorMode()
File "/opt/pytorch/torch/_subclasses/fake_tensor.py", line 1273, in __init__
self.stack = "".join(traceback.format_stack())
fake mode from fake tensor input 0 allocated at:
File "repro_fake_tensor_exception_with_torch_export_export.py", line 19, in <module>
gm, _ = torch.export.export(model, (x,))
File "/opt/pytorch/torch/export/__init__.py", line 587, in export
return export__RC__(f, args, kwargs, dynamic_shapes=dynamic_shapes)
File "/opt/pytorch/torch/_export/__init__.py", line 92, in export__RC__
return export(f, args, kwargs)
File "/opt/pytorch/torch/_export/__init__.py", line 407, in export
gm_torch_level, _ = torch._dynamo.export(
File "/opt/pytorch/torch/_dynamo/eval_frame.py", line 1216, in inner
result_traced = opt_f(*args, **kwargs)
File "/opt/pytorch/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/pytorch/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/pytorch/torch/_dynamo/eval_frame.py", line 406, in _fn
return fn(*args, **kwargs)
File "/opt/pytorch/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/pytorch/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/pytorch/torch/_dynamo/eval_frame.py", line 554, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/opt/pytorch/torch/_dynamo/convert_frame.py", line 140, in _fn
return fn(*args, **kwargs)
File "/opt/pytorch/torch/_dynamo/convert_frame.py", line 380, in _convert_frame_assert
return _compile(
File "/opt/pytorch/torch/_dynamo/convert_frame.py", line 559, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/pytorch/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/opt/pytorch/torch/_dynamo/convert_frame.py", line 481, in compile_inner
out_code = transform_code_object(code, transform)
File "/opt/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/opt/pytorch/torch/_dynamo/convert_frame.py", line 434, in transform
tracer = InstructionTranslator(
File "/opt/pytorch/torch/_dynamo/symbolic_convert.py", line 2007, in __init__
output=OutputGraph(
File "/opt/pytorch/torch/_dynamo/output_graph.py", line 291, in __init__
fake_mode = torch._subclasses.FakeTensorMode(
File "/opt/pytorch/torch/_subclasses/fake_tensor.py", line 1273, in __init__
self.stack = "".join(traceback.format_stack())
(ptca) root@88c3e49b4bcf:/opt/pytorch#
```
### Versions
pytorch main branch
cc @eellison @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 3 |
678 | 110,098 |
Dynamo tests in CI seem to not run at times
|
high priority, triaged, oncall: pt2
|
We landed https://github.com/pytorch/pytorch/pull/109427. ~10-20 commits later, the Dynamo CI shard consistently started failing (repro: `PYTORCH_TEST_WITH_DYNAMO=1 python test_ops.py -k test_out_warning__refs_cat_cpu`, [example logs](https://github.com/pytorch/pytorch/actions/runs/6315629259/job/17148986023)). We are not sure why this happened.
Possibly related:
- https://github.com/pytorch/pytorch/issues/107444
cc @ezyang @gchanan @kadeng @seemethere @malfet @pytorch/pytorch-dev-infra @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
679 | 110,096 |
GPT2ForSequenceClassification, LayoutLMForSequenceClassification: "torch._dynamo.exc.Unsupported: call_function BuiltinVariable(setattr) [HFPretrainedConfigVariable(), ConstantVariable(str), ConstantVariable(str)] {}"
|
triaged, oncall: pt2
|
Repro:
```
python benchmarks/dynamo/huggingface.py --bfloat16 --accuracy --inference --device cuda --export-aot-inductor --only GPT2ForSequenceClassification
```
Error:
```
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 559, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/binbao/pytorch/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 481, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/binbao/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 451, in transform
tracer.run()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2103, in run
super().run()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 743, in run
and self.step()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 706, in step
getattr(self, inst.opname)(inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1234, in STORE_ATTR
.call_function(
File "/home/binbao/pytorch/torch/_dynamo/variables/builtin.py", line 656, in call_function
return super().call_function(tx, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/variables/base.py", line 306, in call_function
unimplemented(f"call_function {self} {args} {kwargs}")
File "/home/binbao/pytorch/torch/_dynamo/exc.py", line 176, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_function BuiltinVariable(setattr) [HFPretrainedConfigVariable(), ConstantVariable(str), ConstantVariable(str)] {}
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
680 | 110,091 |
[inductor][Optimus]Improve logging for Optimus
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: It is based on the diff D49340843. We add more logs for better debug and logging purposes.
Test Plan: N4248219
Differential Revision: D49422563
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 12 |
681 | 110,089 |
sam: AssertionError at torch/_inductor/graph.py `assert isinstance(value, (TensorBox, sympy.Expr))`
|
triaged, oncall: pt2, module: inductor
|
Repro:
```
python benchmarks/dynamo/torchbench.py --bfloat16 --accuracy --inference --device cuda --inductor --only sam
```
Error:
```
File "/home/binbao/pytorch/torch/_inductor/graph.py", line 464, in run
return super().run(*args)
File "/home/binbao/pytorch/torch/fx/interpreter.py", line 138, in run
self.env[node] = self.run_node(node)
File "/home/binbao/pytorch/torch/_inductor/graph.py", line 735, in run_node
result = super().run_node(n)
File "/home/binbao/pytorch/torch/fx/interpreter.py", line 195, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/binbao/pytorch/torch/_inductor/graph.py", line 674, in output
assert isinstance(value, (TensorBox, sympy.Expr))
AssertionError
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 9 |
682 | 110,084 |
scatter_add: Mixing 0-dim and 1-dim tensors
|
module: docs, triaged, module: python frontend, module: edge cases
|
### π Describe the bug
scatter_add currently allows 0-dim and 1-dim tensors to be mixed together
```
import torch
op = torch.ops.aten.scatter_add.default
op(torch.zeros([1]), 0, torch.zeros([], dtype=torch.long), torch.ones([2]))
op(torch.zeros([]), 0, torch.zeros([2], dtype=torch.long), torch.ones([3]))
torch.zeros([1]).scatter_add_(0, torch.zeros([], dtype=torch.long), torch.ones([2]))
torch.zeros([]).scatter_add_(0, torch.zeros([2], dtype=torch.long), torch.ones([3]))
```
However, documentation explicitly says that all three tensors should have the same number of dimensions
https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html
Code comments and error messages also state that the tensors should have the same dimensionality.
The behavior is caused by the use of `ensure_nonempty_dim` in line 73 here:
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/ScatterGatherChecks.h#L66-L74
Notice code comment (line 66) and error message (line 74) stating number of dimensions must be the same
Is this behavior desired? If so, then we need to update code comment/error message/documentation.
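For contrast, a call that follows the documented contract (all three tensors 1-D) looks like this (my own minimal example, not from the report):
```python
import torch

# self, index and src all have the same number of dimensions (1-D here).
out = torch.zeros(3)
index = torch.tensor([0, 1])
src = torch.ones(2)
print(out.scatter_add_(0, index, src))  # tensor([1., 1., 0.])
```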
### Versions
.
cc @svekars @carljparker @albanD
| 0 |
683 | 110,080 |
Devices API
|
feature, triaged, needs design, topic: new features, module: python frontend
|
### π The feature, motivation and pitch
Hi,
I would like to propose a new device API that is device-type (cuda, xla, npu, ...) agnostic. For example, you could do:
```
torch.devices.count(type='cuda')
torch.devices.is_available(type='npu')
torch.devices.seed(42, type='xla')
```
Instead of:
```
torch.cuda.device_count()
torch_npu.npu.is_available()
torch_xla.set_rng_state(42)
```
This would greatly help with making 3rd party code device-type agnostic because, at the moment, each device type has its own API.
I would be happy to implement such an API if there is support from the community.
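A possible implementation sketch (purely illustrative; the wrapper names below are my assumption, not an agreed design) that dispatches to the existing per-backend modules:
```python
import torch

# Hypothetical helpers: look up the backend module by device type.
# 'cuda' maps to torch.cuda; out-of-tree backends (npu, xla, ...) would
# need to register an equivalent module for this to work.
def device_count(type: str = "cuda") -> int:
    backend = getattr(torch, type, None)
    return backend.device_count() if backend is not None else 0

def is_available(type: str = "cuda") -> bool:
    backend = getattr(torch, type, None)
    return bool(backend is not None and backend.is_available())

print(is_available("cuda"), device_count("cuda"))
```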
#108046 is loosely related and could be put under the same API namespace
(cc team: @AnthonyBarbier @HMellor)
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| 2 |
684 | 110,078 |
Fix linalg_vector_norm ONNX export with wrong output dtype
|
module: onnx, triaged, open source, release notes: onnx
|
Up to now, the export of `aten::linalg_vector_norm` wrongly produced `torch.float32` as the output dtype even though the input may have a different dtype. This does not match PyTorch behavior (e.g. see https://github.com/pytorch/pytorch/blob/23938640706214921c0c0c29b1ed4855a0536cd8/torch/_refs/linalg/__init__.py#L72 or https://github.com/pytorch/pytorch/blob/23938640706214921c0c0c29b1ed4855a0536cd8/aten/src/ATen/native/LinearAlgebra.cpp#L2713 - though the second one is quite unreadable to me; I would need to compile pytorch with `DEBUG=1` and use `TORCH_SHOW_DISPATCH_TRACE=1` to understand where the output is allocated).
Example of the issue:
```python
import torch
import torch.nn as nn
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(10, 20)
self.eps = 1e-12
self.dim = -1
def forward(self, x):
denom = x.norm(2.0, self.dim, keepdim=True)
print("denom", denom.dtype)
return denom
model = MyModel()
model = model.eval().to(torch.float16).to("cuda")
inp = torch.rand(8, 10, device="cuda", dtype=torch.float16)
with torch.no_grad():
res = model(inp)
torch.onnx.export(model, (inp,), f="normalize.onnx")
```
giving out

with the current implementation, while giving

in this proposed PR.
WDYT @justinchuby @BowenBao? This is an issue for the ONNX export in fp16 for some models, where ORT later raises at load time the error `FAIL : Load model from speecht5_onnx/decoder_model.onnx failed:Type Error: Type parameter (T) of Optype (MatMul) bound to different types (tensor(float) and tensor(float16)` due to this bug.
| 8 |
685 | 110,074 |
ImportError: libc10_cuda.so: cannot open shared object file: No such file or directory
|
oncall: binaries, module: build
|
### π Describe the bug
So, I am trying to run a command of mmdeploy, a repo from OpenMMLab, and I am having a hard time with this error. Can somebody help me, please?
For you to have the full scope of the bug:
reipig1@varntvar:~/Documents/binedge$ python mmdeploy/tools/deploy.py mmdeploy/configs/mmpose/pose-detection_onnxruntime_static.py mmpose/configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/cspnext-m_udp_8xb64-210e_coco-wholebody-256x192.py cspnext-m_udp-coco-wholebody_pt-in1k_210e-256x192-320fa258_20230123.pth mmpose/tests/data/coco/000000000785.jpg --work-dir mmdeploy_model/cpsnext_optimized --device cpu --dump-info
Traceback (most recent call last):
File "/m/home/home3/34/reipig1/data/Documents/binedge/mmdeploy/tools/deploy.py", line 335, in <module>
main()
File "/m/home/home3/34/reipig1/data/Documents/binedge/mmdeploy/tools/deploy.py", line 129, in main
export2SDK(
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/backend/sdk/export_info.py", line 352, in export2SDK
deploy_info = get_deploy(deploy_cfg, model_cfg, work_dir, device)
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/backend/sdk/export_info.py", line 267, in get_deploy
_, customs = get_model_name_customs(
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/backend/sdk/export_info.py", line 61, in get_model_name_customs
task_processor = build_task_processor(
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/apis/utils/utils.py", line 46, in build_task_processor
import_codebase(codebase_type, custom_module_list)
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/codebase/__init__.py", line 37, in import_codebase
codebase.register_all_modules()
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/codebase/mmpose/deploy/pose_detection.py", line 133, in register_all_modules
cls.register_deploy_modules()
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/codebase/mmpose/deploy/pose_detection.py", line 123, in register_deploy_modules
import mmdeploy.codebase.mmdet.models
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/__init__.py", line 3, in <module>
from . import dense_heads # noqa: F401,F403
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/__init__.py", line 2, in <module>
from . import base_dense_head # noqa: F401,F403
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/base_dense_head.py", line 5, in <module>
from mmdet.models.dense_heads import PAAHead
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdet/models/__init__.py", line 2, in <module>
from .backbones import * # noqa: F401,F403
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdet/models/backbones/__init__.py", line 2, in <module>
from .csp_darknet import CSPDarknet
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdet/models/backbones/csp_darknet.py", line 11, in <module>
from ..layers import CSPLayer
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdet/models/layers/__init__.py", line 3, in <module>
from .bbox_nms import fast_nms, multiclass_nms
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/site-packages/mmdet/models/layers/bbox_nms.py", line 5, in <module>
from mmcv.ops.nms import batched_nms
File "/u/34/reipig1/unix/.local/lib/python3.10/site-packages/mmcv/ops/__init__.py", line 2, in <module>
from .active_rotated_filter import active_rotated_filter
File "/u/34/reipig1/unix/.local/lib/python3.10/site-packages/mmcv/ops/active_rotated_filter.py", line 10, in <module>
ext_module = ext_loader.load_ext(
File "/u/34/reipig1/unix/.local/lib/python3.10/site-packages/mmcv/utils/ext_loader.py", line 13, in load_ext
ext = importlib.import_module('mmcv.' + name)
File "/u/34/reipig1/unix/.conda/envs/mmdeploy/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libc10_cuda.so: cannot open shared object file: No such file or directory
### Error logs
_No response_
### Minified repro
_No response_
### Versions
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 21709 100 21709 0 0 67116 0 --:--:-- --:--:-- --:--:-- 67003
mmdeploy) reipig1@varntvar:~/Documents/binedge$ python3 collect_env.py
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 9.0.176
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 4
CPU max MHz: 3900.0000
CPU min MHz: 1200.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 8.3 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchvision==0.15.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] numpy-base 1.25.2 py310hb5e798b_0
[conda] pytorch 2.0.0 py3.10_cpu_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.0.0 py310_cpu pytorch
[conda] torchvision 0.15.0 py310_cpu pytorch
cc @seemethere @malfet @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
686 | 110,072 |
test_complex_div_underflow_overflow: increase amount of input data
|
triaged, open source, topic: not user facing
|
This ensures that, even for complex float and AVX512, there is enough data to use the vectorized path. Without this change, the test test_complex_div_underflow_overflow_cpu_complex64 would use the non-vectorized algorithm while test_complex_div_underflow_overflow_cpu_complex128 would use the vectorized algorithm on s390x.
The same behaviour happens on x86_64 with AVX2.
| 2 |
687 | 110,071 |
Fix clang-tidy errors misc-definitions-in-headers
|
fb-exported, release notes: jit
|
Summary: The issue in graph_task.h seems to be very real because each translation unit that includes this header will have its own copy of the variable, so IDs might repeat.
Differential Revision: D49635426
| 8 |
688 | 110,067 |
Add IWYU pragma for umbrella header
|
fb-exported
|
Summary:
Include-cleaner tools would benefit from IWYU pragmas that define implementation headers and exported headers. See the link below for more information.
```
https://github.com/include-what-you-use/include-what-you-use/blob/master/docs/IWYUPragmas.md
```
Test Plan: sandcastle_pass
Differential Revision: D49632855
| 4 |
689 | 110,065 |
[BUG] Elastic cannot kill all subprocesses after sending sigterm.
|
oncall: distributed, module: multiprocessing, triaged, module: elastic
|
### π Describe the bug
```python
def _close(self, death_sig: signal.Signals, timeout: int = 30) -> None:
if not self.subprocess_handlers:
return
for handler in self.subprocess_handlers.values():
if handler.proc.poll() is None:
log.warning(
"Sending process %s closing signal %s", handler.proc.pid, death_sig.name
)
handler.close(death_sig=death_sig)
end = time.monotonic() + timeout
for handler in self.subprocess_handlers.values():
time_to_wait = end - time.monotonic()
if time_to_wait <= 0:
break
try:
handler.proc.wait(time_to_wait)
except subprocess.TimeoutExpired:
# Ignore the timeout expired exception, since
# the child process will be forcefully terminated via SIGKILL
pass
for handler in self.subprocess_handlers.values():
if handler.proc.poll() is None:
log.warning(
"Unable to shutdown process %s via %s, forcefully exiting via %s",
handler.proc.pid, death_sig, _get_kill_signal()
)
handler.close(death_sig=_get_kill_signal())
handler.proc.wait()
```
The main process has timed out for 30 seconds and has not been terminated by the function 'handler.close'. Using SIGKILL to terminate the main process will cause the ppid of the child processes under the main process to change to 1, making a complete cleanup impossible and leaving the remaining processes occupying resources. In some third-party device scenarios, the training script cannot be launched a second time; it only works after using pkill. Thus, I offer a solution which records the pids of all subprocesses and checks whether all of them have been killed; if not, SIGKILL is sent to those subprocesses.
```python
import os
import psutil

def get_child_processes(parent_pid):
    # Collect every descendant (recursively) of the given process.
    parent = psutil.Process(parent_pid)
    return parent.children(recursive=True)

# before kill: remember the descendants of each worker process
child_processes = []
for handler in self.subprocess_handlers.values():
    child_processes.append(get_child_processes(handler.proc.pid))

# after kill: force-kill any recorded descendant that is still alive
for children in child_processes:
    for child in children:
        if child.is_running():
            os.kill(child.pid, _get_kill_signal())
```
### Versions
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.11.1
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] torch==2.1.0.dev20230810+cpu
[pip3] torchvision==0.15.1
[conda] numpy 1.23.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @VitalyFedyunin @dzhulgakov
| 2 |
690 | 110,062 |
DISABLED test_tags_recomputed_rand (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tags_recomputed_rand&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17123686365).
Over the past 3 hours, it has been determined flaky in 15 workflow(s) with 45 failures and 15 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tags_recomputed_rand`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 5 |
691 | 110,057 |
[dynamo] Fix comparison between SymNodeVariable and ListVariable
|
triaged, open source, topic: not user facing, module: dynamo, ciflow/inductor
|
Fixes #109504
Shuffled around an if statement so that comparing mismatched types with the 'is' operator returns False instead of triggering an unimplemented exception.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
692 | 110,056 |
torch.onnx.export causes floating point exception with core dump for empty slice assignment
|
module: onnx, triaged
|
### π Describe the bug
A simple slice assignment called inside a module causes a floating point exception when the module is exported into the ONNX via `torch.onnx.export`.
* The full error message on a Ubuntu server is `Floating point exception (core dumped)`.
* Note that I was able to reproduce this behavior only on the Ubuntu server. The behavior on Macbook M1 Pro was just a few warning messages without failure. See the last section for more details.
# The code for reproducing the error
Let's call it `repro.py`
```python
import io
import torch
class SliceAssignZeros(torch.nn.Module):
def forward(self, x: torch.Tensor):
x[1:-1] = 0
return x
def main():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-s', '--size', type=int, default=2)
args = parser.parse_args()
x = torch.ones(args.size)
model = SliceAssignZeros()
print(f'input: {x}')
print(f'output: {model(x)}')
with io.BytesIO() as f:
torch.onnx.export(model, (x, ), f)
if __name__ == '__main__':
main()
```
# Steps to reproduce
## 1. Set up environment
See the details about the environment in the **Versions** section.
```
conda create -n issue python=3.10 -y
conda activate issue
conda install pytorch -c pytorch -y
pip install onnx
```
## 2. Run the following command
```
python repro.py -s 2
```
**NOTE**: If you give any positive integer value to other than 2 to the `-s` flag, the script runs normally.
# The Output
In my opinion, the expected behaviors are:
1. the model is exported into ONNX normally; or
2. torch warns (or throws an exception) with message indicating that the index range used for slice assignment is empty.
However, the error message on Ubuntu wasn't helpful for finding the root cause of the core dump. (An empty slice assignment doesn't look very relevant to a floating point exception, in my opinion.)
It was nice that the code worked on MacOS, but the warning message was still not very helpful.
## Ubuntu 22.04
```
input: tensor([1., 1.])
output: tensor([1., 1.])
Floating point exception (core dumped)
```
## MacOS Ventura 13.4.1
```
input: tensor([1., 1.])
output: tensor([1., 1.])
/Users/choijiwoong/miniconda3/envs/issue/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py:306: UserWarning: ComputeShapeFromReshape(), shape_ratio overflows, skip shape inference. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1682343686130/work/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:495.)
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
/Users/choijiwoong/miniconda3/envs/issue/lib/python3.10/site-packages/torch/onnx/utils.py:689: UserWarning: ComputeShapeFromReshape(), shape_ratio overflows, skip shape inference. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1682343686130/work/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:495.)
_C._jit_pass_onnx_graph_shape_type_inference(
/Users/choijiwoong/miniconda3/envs/issue/lib/python3.10/site-packages/torch/onnx/utils.py:1186: UserWarning: ComputeShapeFromReshape(), shape_ratio overflows, skip shape inference. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1682343686130/work/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:495.)
_C._jit_pass_onnx_graph_shape_type_inference(
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
```
### Versions
# Ubuntu server
```bash
(issue) $ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3955WX 16-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4402.7339
CPU min MHz: 2200.0000
BogoMIPS: 7786.12
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] numpy 1.26.0 pypi_0 pypi
[conda] pytorch 2.0.1 py3.10_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
(issue) $ conda env export
name: issue
channels:
- pytorch
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- blas=1.0=mkl
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2023.08.22=h06a4308_0
- filelock=3.9.0=py310h06a4308_0
- gmp=6.2.1=h295c915_3
- gmpy2=2.1.2=py310heeb90bb_0
- intel-openmp=2023.1.0=hdb19cb5_46305
- jinja2=3.1.2=py310h06a4308_0
- ld_impl_linux-64=2.38=h1181459_1
- libffi=3.4.4=h6a678d5_0
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- libuuid=1.41.5=h5eee18b_0
- markupsafe=2.1.1=py310h7f8727e_0
- mkl=2023.1.0=h213fc3f_46343
- mpc=1.1.0=h10f8cd9_1
- mpfr=4.0.2=hb69a4c5_1
- mpmath=1.3.0=py310h06a4308_0
- ncurses=6.4=h6a678d5_0
- networkx=3.1=py310h06a4308_0
- openssl=3.0.11=h7f8727e_2
- pip=23.2.1=py310h06a4308_0
- python=3.10.13=h955ad1f_0
- pytorch=2.0.1=py3.10_cpu_0
- pytorch-mutex=1.0=cpu
- readline=8.2=h5eee18b_0
- setuptools=68.0.0=py310h06a4308_0
- sqlite=3.41.2=h5eee18b_0
- sympy=1.11.1=py310h06a4308_0
- tbb=2021.8.0=hdb19cb5_0
- tk=8.6.12=h1ccaba5_0
- typing_extensions=4.7.1=py310h06a4308_0
- tzdata=2023c=h04d1e81_0
- wheel=0.38.4=py310h06a4308_0
- xz=5.4.2=h5eee18b_0
- zlib=1.2.13=h5eee18b_0
- pip:
- numpy==1.26.0
- onnx==1.14.1
- protobuf==4.24.3
prefix: /home/jiwoongchoi/anaconda3/envs/issue
```
# MacOS laptop
```bash
(issue) $ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.25.1
Libc version: N/A
Python version: 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[conda] numpy 1.26.0 pypi_0 pypi
[conda] pytorch 2.0.1 py3.10_0 pytorch
(issue) $ conda env export
name: issue
channels:
- pytorch
- defaults
dependencies:
- bzip2=1.0.8=h620ffc9_4
- ca-certificates=2023.08.22=hca03da5_0
- filelock=3.9.0=py310hca03da5_0
- gmp=6.2.1=hc377ac9_3
- gmpy2=2.1.2=py310h8c48613_0
- jinja2=3.1.2=py310hca03da5_0
- libcxx=14.0.6=h848a8c0_0
- libffi=3.4.4=hca03da5_0
- markupsafe=2.1.1=py310h1a28f6b_0
- mpc=1.1.0=h8c48613_1
- mpfr=4.0.2=h695f6f0_1
- mpmath=1.3.0=py310hca03da5_0
- ncurses=6.4=h313beb8_0
- networkx=3.1=py310hca03da5_0
- openssl=3.0.11=h1a28f6b_2
- pip=23.2.1=py310hca03da5_0
- python=3.10.13=hb885b13_0
- pytorch=2.0.1=py3.10_0
- readline=8.2=h1a28f6b_0
- setuptools=68.0.0=py310hca03da5_0
- sqlite=3.41.2=h80987f9_0
- sympy=1.11.1=py310hca03da5_0
- tk=8.6.12=hb8d0fd4_0
- typing_extensions=4.7.1=py310hca03da5_0
- tzdata=2023c=h04d1e81_0
- wheel=0.38.4=py310hca03da5_0
- xz=5.4.2=h80987f9_0
- zlib=1.2.13=h5a0b063_0
- pip:
- numpy==1.26.0
- onnx==1.14.1
- protobuf==4.24.3
prefix: /Users/choijiwoong/miniconda3/envs/issue
```
| 0 |
693 | 110,052 |
[Dynamo][Test] reland testcase with state again
|
triaged, open source, topic: not user facing, module: dynamo
|
The PR https://github.com/pytorch/pytorch/pull/109713 was reverted due to a mistake. Reland the code again.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @PaliC
| 4 |
694 | 110,051 |
DISABLED test_tags_rand (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tags_rand&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17118349065).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 30 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tags_rand`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 6 |
695 | 110,049 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
696 | 110,047 |
Grouped query attention
| null |
# Summary
Closed #106730 in favor of this PR since the rebase would have been more difficult than just recreating it.
There are still some edge cases I need to handle appropriately, but this is how it could look. #109887
## Design choices
This is currently implemented by looking at the "num heads" dimension, size(1), and determining whether it can be repeat_interleaved to fill the query num_heads.
Another option would be to pass an explicit `groups` parameter to the top-level SDPA and propagate it down.
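For reference, a minimal sketch of the first design choice (my own illustration, not the code in this PR): repeat-interleave the key/value heads up to the query head count and call SDPA.
```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    # q: (B, Hq, L, E); k, v: (B, Hkv, S, E) with Hq a multiple of Hkv
    n_rep = q.size(1) // k.size(1)
    if n_rep > 1:
        k = k.repeat_interleave(n_rep, dim=1)
        v = v.repeat_interleave(n_rep, dim=1)
    return F.scaled_dot_product_attention(q, k, v)

q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 2, 16, 64)
v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```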
| 2 |
697 | 110,040 |
DISABLED test_circular_dependencies (__main__.TestImports)
|
module: rocm, module: tests, triaged, module: flaky-tests, skipped
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_circular_dependencies&suite=TestImports) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17106258267).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_circular_dependencies`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_testing.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @mruberry
| 3 |
698 | 110,038 |
DISABLED test_tags_multiple_checkpoints (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, skipped, oncall: pt2
|
Platforms: linux, rocm, slow
https://github.com/pytorch/pytorch/actions/runs/6303967068/job/17115576334
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/dynamo%2Ftest_activation_checkpointing.py%3A%3AActivationCheckpointingViaTagsTests%3A%3Atest_tags_multiple_checkpoints)).
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 8 |
699 | 110,037 |
[experiment] rename distributed tests with backend
|
topic: not user facing, test-config/distributed
|
Fixes #ISSUE_NUMBER
| 1 |
700 | 110,036 |
Update caffe2 with LLVM-18 API change to CreateMalloc/CreateFree
|
fb-exported, NNC, release notes: jit
|
Summary: https://reviews.llvm.org/D158861 and https://reviews.llvm.org/D159418 modified the APIs for `CreateMalloc` and `CreateFree` respectively from being based off of `CallInst` to `IRBuilderBase`. Use the new API under LLVM version guards
Test Plan: CI
Differential Revision: D49612136
cc @EikanWang @jgong5
| 6 |