Serial Number (int64, 1–6k) | Issue Number (int64, 75.6k–112k) | Title (string, 3–357 chars) | Labels (string, 3–241 chars, ⌀ allowed) | Body (string, 9–74.5k chars, ⌀ allowed) | Comments (int64, 0–867) |
---|---|---|---|---|---|
2,101 | 104,623 |
I propose a new overview section in the documentation
|
triaged, topic: docs
|
### 📚 The doc issue
<<Edited - 11/07/23: I found a lot of this material already existed, so I have reworded this issue.>>
I am new to PyTorch, and what would have been useful to me was an overview of the project at a level higher than the code.
I think this would be an 'and' rather than an 'or' with the existing docs, and it may help users who like this sort of conceptual view of the project.
I am picturing an overview of a typical PyTorch application: main and training data, model creation, loss and optimizer, training loop, and testing. I know these are also standard ML concepts, but I think it would help as they tie to specific named PyTorch concepts.
### Suggest a potential alternative/fix
_No response_
| 8 |
2,102 | 104,620 |
`torch.distributed.rpc.backend_registry.register_backend` fails to update `BackendType` enum
|
oncall: distributed, triaged, module: rpc
|
### 🐛 Describe the bug
The following code will fail:
```python
import torch.distributed.rpc as rpc
def construct_rpc_backend_options_handler(*args, **kwargs):
pass
def init_backend_handler(*args, **kwargs):
pass
print("before registering, id(rpc.backend_registry.BackendType):",
id(rpc.backend_registry.BackendType))
print("before registering, id(rpc.BackendType):", id(rpc.BackendType))
rpc.backend_registry.register_backend("MY_BACKEND_NAME",
construct_rpc_backend_options_handler,
init_backend_handler)
print("after registering, id(rpc.backend_registry.BackendType):",
id(rpc.backend_registry.BackendType))
print("after registering, id(rpc.BackendType):", id(rpc.BackendType))
assert hasattr(rpc.backend_registry.BackendType, "MY_BACKEND_NAME") # passes
assert hasattr(rpc.BackendType, "MY_BACKEND_NAME") # fails
```
It produces the following output for me:
```
before registering, id(rpc.backend_registry.BackendType): 93950199775984
before registering, id(rpc.BackendType): 93950199775984
after registering, id(rpc.backend_registry.BackendType): 93950206447680
after registering, id(rpc.BackendType): 93950199775984
Traceback (most recent call last):
File "/localdata/georgew/scratch/my_rpc_backend.py", line 27, in <module>
assert hasattr(rpc.BackendType, "MY_BACKEND_NAME") # fails
AssertionError
```
This appears to be because `register_backend` replaces `BackendType` with an updated enum object, while the symbol re-exported in `torch.distributed.rpc` still refers to the original `BackendType`.
(Also, as a minor detail, it appears that `__all__` is set twice in `backend_registry.py`.)
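A minimal sketch of the underlying Python name-binding behavior (not PyTorch code, just an illustration): a module that re-exports a name keeps pointing at the original object even after the defining module rebinds that name.
```python
import enum
import types

# Stand-ins for torch.distributed.rpc.backend_registry and torch.distributed.rpc
registry = types.ModuleType("registry")
registry.BackendType = enum.Enum("BackendType", ["TENSORPIPE"])

rpc = types.ModuleType("rpc")
rpc.BackendType = registry.BackendType  # re-export, as done at import time

# "register_backend" builds a *new* enum and rebinds the name in the registry module
registry.BackendType = enum.Enum("BackendType", ["TENSORPIPE", "MY_BACKEND_NAME"])

assert hasattr(registry.BackendType, "MY_BACKEND_NAME")   # passes
assert not hasattr(rpc.BackendType, "MY_BACKEND_NAME")    # rpc still sees the old enum
```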
### Versions
```
PyTorch version: 2.0.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.6 (https://github.com/conda-forge/clangdev-feedstock ceeebe884c3cfd7160cf5a43e147f94439fafee3)
CMake version: version 3.23.3
Libc version: glibc-2.35
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:36:39) [GCC 10.4.0] (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
[... cut ...]
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.0+cpu
[pip3] torchvision==0.15.0+cpu
[conda] numpy 1.24.1 pypi_0 pypi
[conda] torch 2.0.0+cpu pypi_0 pypi
[conda] torchvision 0.15.0+cpu pypi_0 pypi
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @pietern @jjlilley @mrzzd
| 1 |
2,103 | 104,610 |
torch.compile fails with "INTERNAL ASSERT FAILED" when compiling GPT-2
|
module: onnx, triaged, oncall: pt2
|
### 🐛 Describe the bug
I am running an experiment that passes the open-source GPT-2 model through torch.compile to get an optimized, fused graph and then examine it.
On doing so, I get internal assertions firing which ask me to report the bug to PyTorch.
Could someone help explain why the assertion is firing, and what a workaround would be?
```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch.fx.experimental.symbolic_shapes import ShapeEnv
from torch._inductor.compile_fx import compile_fx_inner
import torch.fx as fx
from torch._subclasses import FakeTensorMode
import onnx
from onnx import numpy_helper
from transformers import GPT2Model, GPT2LMHeadModel, GPT2Tokenizer
def flatten(inputs):
return [[flatten(i) for i in inputs] if isinstance(inputs, (list, tuple)) else inputs]
device = torch.device("cuda")
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
model = model.to(device)
input_ids_1 = torch.tensor(
[[tokenizer.encode("Here is some text to encode Hello World", add_special_tokens=True)]], device='cuda')
input_names = None
inputs_flatten = flatten(input_ids_1)
if input_names is None:
input_names = []
for i, _ in enumerate(inputs_flatten):
input_names.append('input' + str(i+1))
opt_model = torch.compile(model, mode='max-autotune', fullgraph=True)
torch.onnx.export(opt_model, input_ids_1, "./", verbose=True, input_names=input_names,
operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
```
```
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: list_trace[idx] == nullptr INTERNAL ASSERT FAILED at "../torch/csrc/jit/frontend/tracer.cpp":1016, please report a bug to PyTorch.
```
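Not part of the report, but if the goal is only to examine the optimized FX graph, a minimal sketch of one possible approach is to register a custom `torch.compile` backend that captures the traced graph instead of going through `torch.onnx.export`; the names `capture_backend` and `captured` below are hypothetical, and this only sidesteps the TorchScript tracer the assertion comes from rather than producing an ONNX file.
```python
import torch

captured = []

def capture_backend(gm: torch.fx.GraphModule, example_inputs):
    # Keep a reference to the traced graph for later inspection,
    # then fall back to eager execution.
    captured.append(gm)
    return gm.forward

opt_model = torch.compile(model, backend=capture_backend)
opt_model(input_ids_1)        # triggers tracing
print(captured[0].graph)
```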
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230701+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
Stepping: 9
CPU MHz: 4200.000
CPU max MHz: 4500.0000
CPU min MHz: 800.0000
BogoMIPS: 8400.00
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0.dev20230701+cu121
[pip3] torchaudio==2.1.0.dev20230702+cu121
[pip3] torchvision==0.16.0.dev20230702+cu121
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 10 |
2,104 | 104,606 |
Failure in optimize_for_mobile when using conv1d(..., padding='same')
|
triaged, oncall: mobile
|
### 🐛 Describe the bug
If a module uses `torch.nn.functional.conv1d` with the argument `padding='same'`, `optimize_for_mobile` fails. If an integral padding argument is provided, it works as expected.
**Example:**
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
class TestModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
return torch.nn.functional.conv1d(input, weight, padding='same')
module = TestModule()
module.eval()
scripted_module = torch.jit.script(module)
mobile_module = optimize_for_mobile(scripted_module)
```
**Output:**
```
Traceback (most recent call last):
File "/Users/gtyukasz/TorchFail/torch_fail.py", line 16, in <module>
mobile_module = optimize_for_mobile(scripted_module)
File "/Users/gtyukasz/TorchFail/venv/lib/python3.10/site-packages/torch/utils/mobile_optimizer.py", line 62, in optimize_for_mobile
optimized_cpp_module = torch._C._jit_pass_optimize_for_mobile(
RuntimeError: isList() INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/ivalue_inl.h":2034, please report a bug to PyTorch. Expected GenericList but got String
```
### Versions
```
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.12 (main, Jun 20 2023, 19:43:52) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] torch==2.0.1
[conda] Could not collect
```
| 2 |
2,105 | 104,602 |
F.adaptive_avg_pool3d(input, 1) returns infinity in half precision
|
module: numerical-stability, module: nn, module: cpu, triaged
|
### 🐛 Describe the bug
When training a network using half precision, I experience NaN values.
When looking into the network, I saw that F.adaptive_avg_pool3d(input, 1) outputs infinity on some inputs.
[bad_input.zip](https://github.com/pytorch/pytorch/files/11950096/bad_input.zip)
It reproduces with different PyTorch versions (1.10-1.12).
It also reproduces with different CUDA versions (10-11.7).
Code to reproduce:
```python
import torch
import torch.nn.functional as F

bad_input = torch.load("bad_input.pt").half()  # tensor from the attached bad_input.zip
result = F.adaptive_avg_pool3d(bad_input, 1)
print(torch.max(result))
```
The print will output infinity.
This is possibly a duplicate of https://github.com/pytorch/pytorch/issues/52719.
However, the workaround of x.mean(dim=(2, 3, 4)) does not reproduce the same output as adaptive_avg_pool3d.
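Not from the report, but a minimal illustration of one mechanism consistent with the symptom: float16 cannot represent values above roughly 65504, so an intermediate sum computed in half precision can overflow to infinity even when the true average is small. Whether adaptive_avg_pool3d actually accumulates in float16 on the affected versions would need to be confirmed against the kernel implementation.
```python
import torch

a = torch.tensor(60000.0, dtype=torch.float16)
print(a + a)                                  # inf: 120000 exceeds the float16 maximum (~65504)
print(((a.float() + a.float()) / 2).half())   # 60000.0 when accumulated in float32
```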
### Versions
[pip3] numpy==1.22.1
[pip3] pytorch-ranger==0.1.1
[pip3] pytorchvideo==0.1.5
[pip3] torch==1.10.1+cu113
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.11.2+cu113
And also here:
[pip3] numpy==1.21.5
[pip3] pytorch-ranger==0.1.1
[pip3] pytorchvideo==0.1.5
[pip3] torch==1.13.1+cu117
[pip3] torch-optimizer==0.3.0
[pip3] torchaudio==0.13.1+cu117
[pip3] torchelastic==0.2.0
[pip3] torchmetrics==0.9.3
[pip3] torchtext==0.13.0
[pip3] torchvision==0.14.1+cu117
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 0 |
2,106 | 104,598 |
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE.
|
needs reproduction, oncall: binaries, triaged
|
### 🐛 Describe the bug
I am trying to build a Docker container for a package that has PyTorch as a requirement, but the same stage always fails.
Dockerfile
```dockerfile
FROM python:3.11
# requirements.txt must be copied into the image before installing
COPY requirements.txt .
RUN pip install -r requirements.txt
```
requirements.txt
```
sentence-transformers==2.2.1
streamlit==1.21.0
streamlit-chat==0.0.2.2
requests==2.28.2
openai==0.27.6
langchain==0.0.191
tiktoken==0.4.0
```
I run
```
docker build . -t "demo"
```
While collecting torch, I get the following error:
```
#0 7.771 Collecting torch>=1.6.0 (from sentence-transformers==2.2.1->-r requirements.txt (line 1))
#0 7.830 Downloading torch-2.0.1-cp311-cp311-manylinux1_x86_64.whl (619.9 MB)
#0 10.95 ━━━╸ 62.4/619.9 MB 18.7 MB/s eta 0:00:30
#0 11.07 ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
#0 11.07 torch>=1.6.0 from https://files.pythonhosted.org/packages/c8/21/25020cfdd9f564a72f400ee491610e50cb212e8add8031abaa959af6451e/torch-2.0.1-cp311-cp311-manylinux1_x86_64.whl (from sentence-transformers==2.2.1->-r requirements.txt (line 1)):
#0 11.07 Expected sha256 e617b1d0abaf6ced02dbb9486803abfef0d581609b09641b34fa315c9c40766d
#0 11.07 Got 90e815ec0502ca6c5070e7930f00296ea048a88a225287138e70f21cdcb645a3
------
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.2 (main, Feb 16 2023, 03:07:35) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.4-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[conda] blas 1.0 mkl
[conda] mkl 2021.2.0 hecd8cb5_269
[conda] mkl-service 2.3.0 py38h9ed2024_1
[conda] mkl_fft 1.3.0 py38h4a7008c_2
[conda] mkl_random 1.2.1 py38hb2f4e1b_2
[conda] numpy 1.18.1 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch-lightning 1.5.8 pypi_0 pypi
[conda] pytorch-tabnet 4.0 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchmetrics 0.7.2 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @seemethere @malfet
| 4 |
2,107 | 104,594 |
[ONNX] Exported Graph Has Different Behavior from Eager Mode for CUDA FP16 Tensor Times a Number
|
module: onnx, triaged
|
### 🐛 Describe the bug
Use the module below to reproduce the issue. Cast the model to FP16 (model.to(torch.float16)), provide an FP16 tensor as input, and run on a CUDA device.
```python
import torch

x_shape = [32, 196, 512]
class TestModule(torch.nn.Module):
def __init__(self):
super(TestModule, self).__init__()
self.linear = torch.nn.Linear(x_shape[-1], x_shape[-1])
def forward(self, x):
scale = x.size()[-3] ** -0.5
x = x * scale
print(x.dtype)
return self.linear(x)
```
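The report does not include the export call itself; a minimal sketch under the assumptions stated above (FP16 model and input on CUDA) might look like the following, where the output filename is hypothetical.
```python
model = TestModule().to(torch.float16).to("cuda")
x = torch.randn(x_shape, dtype=torch.float16, device="cuda")
model(x)  # eager mode: prints torch.float16
torch.onnx.export(model, (x,), "test_module.onnx")
```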
Here x is an FP16 tensor on the device and scale is a Python float computed on the CPU. In eager mode, print(x.dtype) shows torch.float16, meaning the Mul CUDA kernel computes in FP16 (even though scale is a float).
But the exported graph is:
<img width="897" alt="截屏2023-07-04 17 51 13" src="https://github.com/pytorch/pytorch/assets/11661208/780208bf-b9dd-4fc1-b350-1a43a5c76713">
The graph shows that the Mul op is computed in float, which causes all following inputs to be cast to float.
This differs from the eager PyTorch behavior. Because many Cast nodes are inserted into the graph and most of the computation runs in float, which is slower than FP16, the graph execution is very inefficient, especially for large transformer models.
### Versions
PyTorch version: 2.1.0.dev20230523+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-DGXS-32GB
GPU 1: Tesla V100-DGXS-32GB
GPU 2: Tesla V100-DGXS-32GB
GPU 3: Tesla V100-DGXS-32GB
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1200.000
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4397.44
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 5 MiB
L3 cache: 50 MiB
NUMA node0 CPU(s): 0-39
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] numpydoc==1.1.0
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230523+cu117
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.16.0.dev20230523+cu117
[conda] blas 1.0 mkl
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2020.2 256
[conda] mkl-include 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.21.6 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 1.13.0.dev20220724+cu113 pypi_0 pypi
[conda] torchmetrics 0.7.3 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230523+cu117 pypi_0 pypi
| 0 |
2,108 | 104,593 |
Bug in Conv/BN fuser with torch.fx
|
triaged, module: fx, oncall: fx
|
### 🐛 Describe the bug
The Conv/BN fuser, as introduced in https://github.com/pytorch/pytorch/pull/47657 , only checks whether the output of a conv node is used in multiple places. It does not check every usage of the conv/bn modules themselves.
A failing example:
```python
import torch
from torch import nn
class ToyModel(nn.Module):
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self.conv1 = nn.Conv2d(1, 1, 1)
self.bn1 = nn.BatchNorm2d(1)
self.bn1.weight.data.normal_()
def forward(self, x, y):
return self.conv1(x) + self.bn1(self.conv1(y))
model = ToyModel()
model.eval()
a = torch.rand(64, 1, 32, 32)
b = torch.rand(64, 1, 32, 32)
from torch.fx.experimental.optimization import fuse
output = model(a, b)
print(torch.fx.symbolic_trace(model).graph)
fused_model = fuse(model)
print(fused_model.graph)
output2 = fused_model(a, b)
print((output - output2).abs().max())
```
Output:
```
graph():
%x : [#users=1] = placeholder[target=x]
%y : [#users=1] = placeholder[target=y]
%conv1 : [#users=1] = call_module[target=conv1](args = (%x,), kwargs = {})
%conv1_1 : [#users=1] = call_module[target=conv1](args = (%y,), kwargs = {})
%bn1 : [#users=1] = call_module[target=bn1](args = (%conv1_1,), kwargs = {})
%add : [#users=1] = call_function[target=operator.add](args = (%conv1, %bn1), kwargs = {})
return add
graph():
%x : [#users=1] = placeholder[target=x]
%y : [#users=1] = placeholder[target=y]
%conv1 : [#users=1] = call_module[target=conv1](args = (%x,), kwargs = {})
%conv1_1 : [#users=1] = call_module[target=conv1](args = (%y,), kwargs = {})
%add : [#users=1] = call_function[target=operator.add](args = (%conv1, %conv1_1), kwargs = {})
return add
tensor(0.3768, grad_fn=<MaxBackward1>)
```
The solution is to check the global usage count of conv/bn modules, and to only fuse modules that are used once.
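A minimal sketch (not the actual torch.fx implementation) of the proposed check: count call_module nodes per target and only treat a conv/bn pair as fusable when each module is called exactly once. `module_use_counts` is a hypothetical helper, applied here to the `model` from the repro above.
```python
from collections import Counter
import torch.fx as fx

def module_use_counts(gm: fx.GraphModule) -> Counter:
    counts = Counter()
    for node in gm.graph.nodes:
        if node.op == "call_module":
            counts[node.target] += 1
    return counts

uses = module_use_counts(fx.symbolic_trace(model))
print(uses)  # e.g. Counter({'conv1': 2, 'bn1': 1}) -> conv1 is shared, so skip fusing it
```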
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.25.1
Libc version: N/A
Python version: 3.8.13 (default, Mar 28 2022, 06:13:39) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] torch==1.12.0
[pip3] torchvision==0.13.0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.12.0 pypi_0 pypi
[conda] torchvision 0.13.0 pypi_0 pypi
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 1 |
2,109 | 104,591 |
version libcudnn_ops_infer.so.8 not defined in file libcudnn_ops_infer.so.8 with link time reference
|
module: cuda, triaged
|
## Issue description
When I load one Torch model, it works, but when I load two Torch models, it shows:
```
Could not load library libcudnn_cnn_infer.so.8. Error: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8: symbol _ZN5cudnn3ops18GetInternalStreamsEP12cudnnContextiPP11CUstream_st, version libcudnn_ops_infer.so.8 not defined in file libcudnn_ops_infer.so.8 with link time reference
Please make sure libcudnn_cnn_infer.so.8 is in your library path!
```
But I can find the libcudnn_cnn_infer.so.8 file at this path: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
## Code example
```python
from TTS.api import TTS
from faster_whisper import WhisperModel
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=True, gpu=True)
tts_clone = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=True, gpu=True)
whispter_model = WhisperModel(WHISPER_MODEL_PATH, device="cuda", compute_type="float16")
segments, info = whispter_model.transcribe(audio_file_path, beam_size=5, word_timestamps=True)
```
The error information:
```
Could not load library libcudnn_cnn_infer.so.8. Error: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8: symbol _ZN5cudnn3ops18GetInternalStreamsEP12cudnnContextiPP11CUstream_st, version libcudnn_ops_infer.so.8 not defined in file libcudnn_ops_infer.so.8 with link time reference
Please make sure libcudnn_cnn_infer.so.8 is in your library path!
```
When I load just one TTS model, it works:
```python
from TTS.api import TTS
from faster_whisper import WhisperModel
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=True, gpu=True)
#tts_clone = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=True, gpu=True)
whispter_model = WhisperModel(WHISPER_MODEL_PATH, device="cuda", compute_type="float16")
segments, info = whispter_model.transcribe(audio_file_path, beam_size=5, word_timestamps=True)
```
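Not from the report, but one common cause of this kind of symbol-version mismatch is mixing two different cuDNN installations (for example, one bundled with a pip wheel and one under /usr/local/cuda). A hedged diagnostic sketch that prints which libcudnn files the current process has actually mapped (Linux only):
```python
import os

with open(f"/proc/{os.getpid()}/maps") as maps:
    libs = {line.split()[-1] for line in maps if "libcudnn" in line}
for path in sorted(libs):
    print(path)  # two different install prefixes here would indicate a library mix-up
```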
## System Info

cc @ptrblck
| 4 |
2,110 | 104,589 |
FSDP does not reduce memory footprint when scaling up GPUs
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
We are running FSDP with a model using the size_based_auto_wrap_policy on A800 GPUs. When scaling up GPUs by extending to multiple nodes, we expect each GPU to hold a smaller shard and overall memory use to drop proportionally. On the contrary, as the number of GPUs increases, we have not observed a decrease in VRAM usage; the VRAM usage of a single card actually increases with the number of cards.

We get the allocated memory using `torch.cuda.memory_allocated()`.
We have defined a test model that has no practical meaning itself; below is the code we used for testing.
```python
import os
import argparse
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data.distributed import DistributedSampler
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.fully_sharded_data_parallel import (
CPUOffload,
BackwardPrefetch,
)
from torch.distributed.fsdp.wrap import (
size_based_auto_wrap_policy,
enable_wrap,
wrap,
)
from loguru import logger
def setup():
# initialize the process group
dist.init_process_group()
def cleanup():
dist.destroy_process_group()
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.temp=nn.ModuleList([nn.Linear(9216,9216)for _ in range(40)])
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = torch.flatten(x, 1)
for sub in self.temp:
x=sub(x)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
def train(args, model, rank, world_size, train_loader, optimizer, epoch, sampler=None):
model.train()
ddp_loss = torch.zeros(2).to(rank)
if sampler:
sampler.set_epoch(epoch)
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(rank), target.to(rank)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target, reduction='sum')
loss.backward()
optimizer.step()
ddp_loss[0] += loss.item()
ddp_loss[1] += len(data)
dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM)
if rank == 0:
print('Train Epoch: {} \tLoss: {:.6f}'.format(epoch, ddp_loss[0] / ddp_loss[1]))
def test(model, rank, world_size, test_loader):
model.eval()
correct = 0
ddp_loss = torch.zeros(3).to(rank)
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(rank), target.to(rank)
output = model(data)
ddp_loss[0] += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
ddp_loss[1] += pred.eq(target.view_as(pred)).sum().item()
ddp_loss[2] += len(data)
dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM)
if rank == 0:
test_loss = ddp_loss[0] / ddp_loss[2]
print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, int(ddp_loss[1]), int(ddp_loss[2]),
100. * ddp_loss[1] / ddp_loss[2]))
def fsdp_main(world_size, args):
setup()
rank=int(os.environ["LOCAL_RANK"])
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
dataset1 = datasets.MNIST('../data', train=True, download=True,
transform=transform)
dataset2 = datasets.MNIST('../data', train=False,
transform=transform)
sampler1 = DistributedSampler(dataset1, rank=rank, num_replicas=world_size, shuffle=True)
sampler2 = DistributedSampler(dataset2, rank=rank, num_replicas=world_size)
train_kwargs = {'batch_size': args.batch_size, 'sampler': sampler1}
test_kwargs = {'batch_size': args.test_batch_size, 'sampler': sampler2}
cuda_kwargs = {'num_workers': world_size,
'pin_memory': True,
'shuffle': False}
train_kwargs.update(cuda_kwargs)
test_kwargs.update(cuda_kwargs)
train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)
test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)
my_auto_wrap_policy = functools.partial(
size_based_auto_wrap_policy,recurse=True ,min_num_params=100
)
torch.cuda.set_device(rank)
init_start_event = torch.cuda.Event(enable_timing=True)
init_end_event = torch.cuda.Event(enable_timing=True)
model = Net().to(rank)
topara=0
for item in model.parameters():
topara+=item.numel()
logger.info('total memory cost for param is %r G'%(topara*4/1024**3))
be=torch.cuda.memory_allocated(rank)
model = FSDP(model,sharding_strategy=torch.distributed.fsdp.ShardingStrategy.FULL_SHARD,auto_wrap_policy=my_auto_wrap_policy,device_id=torch.cuda.current_device())
af=torch.cuda.memory_allocated(rank)
logger.info('memory_allocated %r G for parameters'%((af-be)/1024**3))
optimizer = optim.Adadelta(model.parameters(), lr=args.lr)
scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
init_start_event.record()
for epoch in range(1, args.epochs + 1):
train(args, model, rank, world_size, train_loader, optimizer, epoch, sampler=sampler1)
test(model, rank, world_size, test_loader)
scheduler.step()
init_end_event.record()
if rank == 0:
print(f"CUDA event elapsed time: {init_start_event.elapsed_time(init_end_event) / 1000}sec")
print(f"{model}")
if args.save_model:
# use a barrier to make sure training is done on all ranks
dist.barrier()
# state_dict for FSDP model is only available on Nightlies for now
states = model.state_dict()
if rank == 0:
torch.save(states, "mnist_cnn.pt")
cleanup()
if __name__ == '__main__':
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
help='number of epochs to train (default: 14)')
parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
help='learning rate (default: 1.0)')
parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
help='Learning rate step gamma (default: 0.7)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--save-model', action='store_true', default=False,
help='For Saving the current Model')
args = parser.parse_args()
torch.manual_seed(args.seed)
WORLD_SIZE = torch.cuda.device_count()
fsdp_main(world_size=WORLD_SIZE,args=args)
```
We run our test code with torchrun:
```
torchrun --standalone --nnodes=1 --nproc-per-node=$CARDNUM test.py
```
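A minimal sketch of an additional check (an assumption on my part, reusing `model`, `topara`, and `logger` from the script above, and relying on the PyTorch 2.0 FSDP default where `model.parameters()` yields the local sharded flat parameters after wrapping): comparing the local parameter numel against the full model shows whether the parameter shards themselves shrink with world size, separately from activations and temporarily gathered shards.
```python
local_numel = sum(p.numel() for p in model.parameters())  # local (sharded) flat params
logger.info('local sharded params: %.3f G (%.1f%% of full model)'
            % (local_numel * 4 / 1024**3, 100.0 * local_numel / topara))
```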
We noticed that a previous [issue](https://github.com/pytorch/pytorch/issues/82001) raised a similar question, but the problem was not resolved satisfactorily. How should FSDP be used so that the VRAM usage of the model decreases as the number of GPUs increases? We would like to know whether this is a problem with the FSDP code or with our usage.
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
BIOS Model name: Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 2601.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: Load fences, __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.25.0 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,111 | 104,587 |
Conv1d step-by-step numerical error
|
needs reproduction, triaged, module: macos, module: determinism, module: arm
|
### 🐛 Describe the bug - numerical inconsistency, conv1d, conv2d (and other tags)
When I apply `F.conv1d` with some kernel and stride, I want to get the same result when I run it on the input feature split manually, shifting by that stride. That is important because the error accumulates over the layers of my system and creates significant noise in total.
All code runs on CPU only.
Here is my code, which shows the inconsistency (non-zero numerical error):
```python
import torch
import torch.nn.functional as F
import numpy as np               # used below for the error computation
import matplotlib.pyplot as plt  # used by the visualization helpers
# params for running F.conv1d:
_hop = 1
_n_frames = 10
_window = _hop * 2
_frames_step = 1
# I also provide some deterministic values to match params above
# feature = torch.randn(_n_frames * _hop, dtype=torch.float32)
feature = torch.arange(_n_frames * _hop, dtype=torch.float32)
# _kernel = torch.randn(8, 1, _window, dtype=torch.float32)
_kernel = torch.tensor([
[[-1.2982e-04, -2.1533e-01]],
[[-2.2290e-01, 2.4641e-01]],
[[-7.0638e-01, 8.7808e-01]],
[[ 9.2179e-01, 3.0907e-01]],
[[-2.3548e-01, 1.6494e+00]],
[[-1.1634e+00, 2.9117e-01]],
[[-1.6536e+00, 1.1718e+00]],
[[ 1.0997e+00, -1.2599e+00]]
], dtype=torch.float32)
print("feature", feature.shape)
print("kernel", _kernel.shape)
with torch.no_grad():
# original usage
output_orig = F.conv1d(
feature.reshape(1, 1, -1),
_kernel,
stride=_hop,
padding=0,
).cpu().transpose(1,2)
# with splits
output_splits = []
for i in range(0, _n_frames, _frames_step):
_feat = feature[i*_hop:(i+_frames_step)*_hop + (_window - _hop)]
if _feat.shape[0] < _window:
continue
output_cached_split = F.conv1d(
_feat.reshape(1, 1, -1),
_kernel,
stride=_hop,
padding=0,
).cpu().transpose(1,2)
output_splits.append(output_cached_split)
output_split = torch.cat(output_splits, dim=1)
print(output_orig.shape, output_orig.dtype, output_orig.min(), output_orig.max())
print(output_split.shape, output_split.dtype, output_split.min(), output_split.max())
error = (output_orig[0] - output_split[0]).T
error_1d = np.abs(error).mean(0)
print("Errors for each timestamp:", error_1d)
# my functions for visualization, they just print better view
from my_utils import plot_output, plot_error
plot_output(output_orig, output_split, f=None)
plt.show()
plot_error(output_orig, output_split, shift=0, f=None)
plt.show()
```
output:
```
feature torch.Size([10])
kernel torch.Size([8, 1, 2])
torch.Size([1, 9, 8]) torch.float32 tensor(-6.6867) tensor(12.9608)
torch.Size([1, 9, 8]) torch.float32 tensor(-6.6867) tensor(12.9608)
Errors for each timestamp: tensor([0.0000e+00, 0.0000e+00, 1.4901e-08, 0.0000e+00, 2.2352e-08, 1.5274e-07,
1.4901e-08, 0.0000e+00, 1.1921e-07])
```


But I expect an error of 0.0 everywhere.
I have also run the same code **with double precision**.
Output:
```
feature torch.Size([10])
kernel torch.Size([8, 1, 2])
torch.Size([1, 9, 8]) torch.float64 tensor(-6.6867, dtype=torch.float64) tensor(12.9608, dtype=torch.float64)
torch.Size([1, 9, 8]) torch.float64 tensor(-6.6867, dtype=torch.float64) tensor(12.9608, dtype=torch.float64)
Errors for each timestamp: tensor([0.0000e+00, 0.0000e+00, 2.7756e-17, 0.0000e+00, 1.5266e-16, 6.2450e-17,
1.6653e-16, 0.0000e+00, 0.0000e+00], dtype=torch.float64)
```


I think there should be no numerical error at all.
I've also seen another issue about this, but I think the multichannel and strided cases are supposed to be computed differently:
https://github.com/pytorch/pytorch/issues/23042
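Not from the report, but a small illustration of why errors on the order of 1e-8 (float32) and 1e-16 (float64) can appear here: floating-point addition is not associative, and convolving the full signal versus the manual splits can reduce the same products in a different order internally.
```python
import torch

a, b, c = (torch.tensor(v, dtype=torch.float32) for v in (1e8, -1e8, 1.0))
print((a + b) + c)   # tensor(1.)
print(a + (b + c))   # tensor(0.) -- the 1.0 is absorbed when added to -1e8 first
```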
### Versions
I use homebrew
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.26.3
Libc version: N/A
Python version: 3.9.17 (main, Jun 6 2023, 14:44:03) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-12.3.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] pytorch-lightning==1.7.4
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.0.0
[pip3] torch-audiomentations==0.10.1
[pip3] torch-optimizer==0.1.0
[pip3] torch-pitch-shift==1.2.4
[pip3] torch-stoi==0.1.2
[pip3] torch-time-stretch==1.0.2
[pip3] torchaudio==2.0.1
[pip3] torchmetrics==0.9.1
[pip3] torchsummary==1.5
[conda] Could not collect
cc @malfet @albanD @mruberry @kurtamohler
| 12 |
2,112 | 104,586 |
cpu reduce: output accumulate type when input is bfloat16 or float16
|
module: cpu, open source, Stale
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #104586
* #104585
* #104584
* #104583
* #104239
* #104238
cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
2,113 | 104,579 |
Updated the documentation of torch.sparse to make it more readable.
|
triaged, open source, ciflow/trunk, release notes: sparse
|
Made a small change in the docs of [`torch.sparse`](https://pytorch.org/docs/stable/sparse.html#why-and-when-to-use-sparsity) by removing the redundant word "stores".
| 20 |
2,114 | 104,569 |
Remove cpp_custom_type_hack
|
module: cpu, open source, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #104569
* #72303
* #104535
Closes #72263
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 5 |
2,115 | 104,568 |
init_rpc() errors when running the test code from the TorchRPC documentation on two different machines
|
triaged, module: rpc
|
### 🐛 Describe the bug
I ran the test code in https://pytorch.org/docs/1.13/rpc.html on **two different machines** and got the following errors:
worker0:
python torchrpc_master.py
```
[W tensorpipe_agent.cpp:530] RPC agent for worker0 encountered error when accepting incoming pipe: eof (this error originated at tensorpipe/transport/ibv/connection_impl.cc:302)
[W tensorpipe_agent.cpp:726] RPC agent for worker0 encountered error when reading incoming request from worker1: eof (this error originated at tensorpipe/transport/ibv/connection_impl.cc:302)
```
worker1:
python torchrpc_slave.py
```
[W tensorpipe_agent.cpp:940] RPC agent for worker1 encountered error when reading incoming response from worker0: transport retry counter exceeded (this error originated at tensorpipe/transport/ibv/connection_impl.cc:478)
Traceback (most recent call last):
File "/code/cityeyes/torchrpc_slave.py", line 7, in <module>
rpc.init_rpc("worker1", rank=1, world_size=2)
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 196, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 231, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 360, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 82, in wrapper
return func(*args, **kwargs)
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 224, in _all_gather
rpc_sync(
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 82, in wrapper
return func(*args, **kwargs)
File "/env/anaconda3/envs/nerf/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 809, in rpc_sync
return fut.wait()
RuntimeError: transport retry counter exceeded (this error originated at tensorpipe/transport/ibv/connection_impl.cc:478)
```
But the same code runs fine on a single machine. What causes this?
This is my code:
torchrpc_master.py
```
import os
import torch
import torch.distributed as dist
import torch.distributed.rpc as rpc
os.environ['MASTER_ADDR'] = '192.168.211.12'
os.environ['MASTER_PORT'] = '5678'
rpc.init_rpc("worker0", rank=0, world_size=2)
ret = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3))
print(ret)
rpc.shutdown()
```
torchrpc_slave.py
```
import os
import torch.distributed as dist
import torch.distributed.rpc as rpc
import time
os.environ['MASTER_ADDR'] = '192.168.211.12'
os.environ['MASTER_PORT'] = '5678'
rpc.init_rpc("worker1", rank=1, world_size=2)
rpc.shutdown()
```
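Not from the report: since the failure originates in TensorPipe's InfiniBand transport (`tensorpipe/transport/ibv`), one thing that may be worth trying is pinning the network interface both workers should use before calling `init_rpc()`. The interface name `eth0` below is an assumption, and whether this resolves the error depends on the cluster's network setup.
```python
import os

# Hypothetical interface name; pick the NIC that actually connects the two machines.
os.environ["TP_SOCKET_IFNAME"] = "eth0"
```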
### Versions
torch version: 1.13.1
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @jjlilley @osalpekar @jiayisuse @mrzzd
| 1 |
2,116 | 104,536 |
torch compile for jacrev'ed function
|
triaged, oncall: pt2, module: functorch
|
### 🐛 Describe the bug
I would like to compute the divergence of a function (∇·f), which one can express as the trace of the Jacobian for a function from N inputs to N outputs. Additionally, I would love for this to work with extra inputs, e.g. for a function f(x, t) from N+1 to N, taking the divergence w.r.t. x.
See repro below for code.
The regular divf(x, t) works, but divf_compiled crashes, with two types of errors.
Here's my get_div_fn:
```
def get_div_fn(f):
@torch._dynamo.allow_in_graph
def _f(x, t):
return torch.trace(jacrev(f)(x,t))
return lambda x, t : _f(x, t)
```
If I just "return _f" without the extra lambda at the end, I get
```
torch._dynamo.exc.InternalTorchDynamoError
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
With the lambda in the return, I get a more specific error:
```
RuntimeError: Failed running call_function <function get_div_fn.<locals>._f at 0x7f8278b018b0>(*(FakeTensor(FakeTensor(..., device='meta', size=(3,)), cpu), FakeTensor(FakeTensor(..., device='meta', size=(1,)), cpu)), **{}):
Batching rule not implemented for aten::is_same_size. We could not generate a fallback.
(scroll up for backtrace)
```
Thanks for taking a look!
Extra motivation: divergences are very useful for diffusion and flow-matching models to compute likelihoods. Most people estimate them stochastically using tricks, but it would be nice to have some convenient-to-code exact implementations, whether they are (fast, high memory) or (slow, low memory) or somewhere in between.
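For context, a minimal sketch (my own, not the author's code) of the stochastic estimator alluded to above, using torch.func.vjp; `div_estimate` and `n_samples` are hypothetical names.
```python
import torch
from torch.func import vjp

def div_estimate(f, x, t, n_samples=1000):
    # Hutchinson-style estimate: E[eps^T J eps] = trace(J) for eps ~ N(0, I).
    _, vjp_fn = vjp(lambda x_: f(x_, t), x)
    est = x.new_zeros(())
    for _ in range(n_samples):
        eps = torch.randn_like(x)
        est = est + (vjp_fn(eps)[0] * eps).sum()
    return est / n_samples

f = lambda x, t: t * 3 * x
x = torch.tensor([1., 2., 3.])
t = torch.tensor([0.5])
print(div_estimate(f, x, t))  # close to 4.5 in expectation
```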
### Error logs
```
(myenv) 10-19-162-236:~ markgoldstein$ python hello.py
tensor(4.5000)
Traceback (most recent call last):
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1194, in run_node
return node.target(*args, **kwargs)
File "/Users/markgoldstein/hello.py", line 10, in _f
return torch.trace(jacrev(f)(x,t))
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_functorch/eager_transforms.py", line 598, in wrapper_fn
flat_jacobians_per_input = compute_jacobian_stacked()
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_functorch/eager_transforms.py", line 529, in compute_jacobian_stacked
chunked_result = vmap(vjp_fn)(basis)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_functorch/vmap.py", line 434, in wrapped
return _flat_vmap(
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_functorch/vmap.py", line 619, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_functorch/eager_transforms.py", line 325, in wrapper
result = _autograd_grad(flat_primals_out, flat_diff_primals, flat_cotangents,
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_functorch/eager_transforms.py", line 113, in _autograd_grad
grad_inputs = torch.autograd.grad(diff_outputs, inputs, grad_outputs,
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/autograd/__init__.py", line 288, in grad
grad_outputs_ = _make_grads(t_outputs, grad_outputs_, is_grads_batched=is_grads_batched)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/autograd/__init__.py", line 56, in _make_grads
if not torch.is_same_size(out, first_grad):
RuntimeError: Batching rule not implemented for aten::is_same_size. We could not generate a fallback.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1152, in get_fake_value
return wrap_fake_exception(
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 808, in wrap_fake_exception
return fn()
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1153, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1206, in run_node
raise RuntimeError(
RuntimeError: Failed running call_function <function get_div_fn.<locals>._f at 0x7fe7985b98b0>(*(FakeTensor(FakeTensor(..., device='meta', size=(3,)), cpu), FakeTensor(FakeTensor(..., device='meta', size=(1,)), cpu)), **{}):
Batching rule not implemented for aten::is_same_size. We could not generate a fallback.
(scroll up for backtrace)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/markgoldstein/hello.py", line 25, in <module>
print(divf_compiled(x,t))
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
and self.step()
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 342, in wrapper
return inner_fn(self, inst)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 965, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 474, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/variables/torch.py", line 548, in call_function
tensor_variable = wrap_fx_proxy(
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 754, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 789, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/Users/markgoldstein/miniconda3/envs/myenv/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1173, in get_fake_value
raise TorchRuntimeError() from e
torch._dynamo.exc.TorchRuntimeError:
from user code:
File "/Users/markgoldstein/hello.py", line 12, in <lambda>
return lambda x, t : _f(x, t)
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
(myenv) 10-19-162-236:~ markgoldstein$
```
### Minified repro
# torch version 2.0
```
import torch
from torch.func import jacrev
import torch._dynamo
torch._dynamo.config.verbose = True
def get_div_fn(f):
@torch._dynamo.allow_in_graph
def _f(x, t):
return torch.trace(jacrev(f)(x,t))
return lambda x, t : _f(x, t)
# example function
f = lambda x, t: t * 3 * x
# example inputs
x = torch.tensor([1.,2.,3.])
t = torch.tensor([0.5])
# expected answer is 4.5
divf = get_div_fn(f)
divf_compiled = torch.compile(divf)
print(divf(x,t)) # works
print(divf_compiled(x,t)) # breaks
```
### Versions
```
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (x86_64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.27.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.0 (default, Nov 15 2020, 06:25:35) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==1.4.9
[pip3] torch==2.0.0
[pip3] torch-ema==0.3
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==2.0.1
[pip3] torchdiffeq==0.2.3
[pip3] torchlaplace==0.0.4
[pip3] torchmetrics==0.5.1
[pip3] torchvision==0.15.1
[conda] numpy 1.24.2 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-lightning 1.4.9 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torch-ema 0.3 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchaudio 2.0.1 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchlaplace 0.0.4 pypi_0 pypi
[conda] torchmetrics 0.5.1 pypi_0 pypi
[conda] torchvision 0.14.0 pypi_0 pypi
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 6 |
2,117 | 104,535 |
Remove deprecated fbgemm operators
|
open source, ciflow/trunk, release notes: quantization, topic: bc breaking, merging
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #104569
* #72303
* __->__ #104535
These operators are not used and have been deprecated since #72690 (Feb 2022). Additionally, the `torch.jit.quantized` interface has been deprecated since #40102 (June 2020).
| 4 |
2,118 | 104,531 |
[RFC] Optional Modular Representation for FX Graph from `torch.compile()`
|
triaged, oncall: pt2
|
### 🚀 The feature, motivation and pitch
Currently, when using TorchDynamo to generate an FX graph, it inlines all modules and function calls. Such inlining can place an extra burden on some custom compilers, since the same layer needs to be recompiled multiple times. Ideally, a layer that is called multiple times should only be compiled once. There are two main solutions.
The first, which is what we are doing right now, is to reverse-engineer the layer information from the flattened FX graph. However, this only works well for well-known models, since we know what a layer looks like. For other custom models it would be hard to tell (although we can still retrieve some information from the module_stack metadata).
The second, which is what we are proposing, is to enable a modular representation inside the FX graph (i.e., we do not inline modules). However, this approach could require extra effort from compilers to perform optimizations, so we propose making it an optional feature.
The following is an example of what this modular representation looks like.
```python
# input program
class MyModule(torch.nn.Module):
class MySubModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x+2
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(4, 5)
self.submod = self.MySubModule()
def forward(self, x):
return self.linear(self.submod(x))
# generated code
class GraphModule(torch.nn.Module):
class SubGraphModule(torch.nn.Module):
def forward(self, L_x_ : torch.Tensor):
l_x_ = L_x_
add = l_x_ + 2; l_x_ = None
return (add,)
def forward(self, L_x_ : torch.Tensor):
l_x_ = L_x_
l__self___submod = self.SubGraphModule()
l__self___submod_1 = l__self___submod(l_x_); l_x_ = None
l__self___linear = self.L__self___linear(l__self___submod_1); l__self___submod_1 = None
return (l__self___linear,)
```
To enable this representation, we will need the following:
1. Introduce two new FX graph nodes: one for the submodule itself, the other for instantiating and calling the submodule.
2. Introduce a new FX pass that inlines the submodule.
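For comparison, here is a minimal sketch with stock `torch.fx` (independent of the prototype linked below) that already shows the difference between the inlined and modular representations, by marking the submodule as a leaf module in a custom tracer:
```python
import torch
import torch.fx as fx

class MySubModule(torch.nn.Module):
    def forward(self, x):
        return x + 2

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 5)
        self.submod = MySubModule()
    def forward(self, x):
        return self.linear(self.submod(x))

class ModularTracer(fx.Tracer):
    # Keep MySubModule as a single call_module node instead of inlining its body.
    def is_leaf_module(self, m, module_qualified_name):
        return isinstance(m, MySubModule) or super().is_leaf_module(m, module_qualified_name)

mod = MyModule()
inlined = fx.symbolic_trace(mod)                           # submodule body is inlined (an `add` node appears)
modular = fx.GraphModule(mod, ModularTracer().trace(mod))  # submodule stays as one call_module node
print(inlined.graph)
print(modular.graph)
```
The proposal goes beyond this leaf-module trick by also capturing the submodule's own graph, as in the generated code above.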
Link to prototype: https://github.com/seanlatias/pytorch/tree/modular
cc: @SherlockNoMad @wconstab
### Alternatives
_No response_
### Additional context
Our final goal is to connect this flow with torch-xla and generate StableHLO code. In other words, we want to keep this modular representation at the MLIR level and inline there if needed.
Another use case that we have in mind for this feature is to generate a unified training graph from `torch.compile()` and use this modular representation to mark the boundaries between forward, backward, and the optimizer. With this approach, the compiler would have more flexibility over whether to inline or not.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
2,119 | 104,526 |
DISABLED test_sparse_all_reduce_sum_cuda (__main__.TestDistBackendWithSpawn)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sparse_all_reduce_sum_cuda&suite=TestDistBackendWithSpawn) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14731815606).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 12 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sparse_all_reduce_sum_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_distributed_spawn.py`
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,120 | 104,516 |
vec_test_all_types_xxx with dtype c10::complex<float> and c10::complex<double> has failures on division
|
module: cpu, triaged, module: complex, module: vectorization
|
### 🐛 Describe the bug
The vectorization tests `vec_test_all_types_AVX2` and `vec_test_all_types_AVX512` have failing division cases when the dtype is complex float or complex double.
Log:
```bash
# this is avx2 run:
[ FAILED ] 2 tests, listed below:
[ FAILED ] Arithmetics/2.Division, where TypeParam = at::vec::AVX2::Vectorized<c10::complex<float> >
[ FAILED ] Arithmetics/3.Division, where TypeParam = at::vec::AVX2::Vectorized<c10::complex<double> >
# this is avx512 run:
[ FAILED ] 2 tests, listed below:
[ FAILED ] Arithmetics/2.Division, where TypeParam = at::vec::AVX512::Vectorized<c10::complex<float> >
[ FAILED ] Arithmetics/3.Division, where TypeParam = at::vec::AVX512::Vectorized<c10::complex<double> >
```
### Versions
PyTorch version: 2.1.0a0+git048a975
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.17
Python version: 3.8.16 (default, Jan 17 2023, 23:13:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-693.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 3202.636
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 5000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.14.2
[pip3] ema-pytorch==0.2.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.3
[pip3] torch==2.1.0a0+gitfc85124
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torch-sparse==0.6.17
[pip3] torch-struct==0.5
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchrec-nightly==2023.5.28
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[pip3] vector-quantize-pytorch==1.6.6
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.14.2 pypi_0 pypi
[conda] ema-pytorch 0.2.3 pypi_0 pypi
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.21.2 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.3 pypi_0 pypi
[conda] torch 2.1.0a0+gitfc85124 dev_0 <develop>
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-sparse 0.6.17 dev_0 <develop>
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchrec-nightly 2023.5.28 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
[conda] vector-quantize-pytorch 1.6.6 pypi_0 pypi
cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 1 |
2,121 | 104,510 |
Using the latest version of Torch, when the code executes tcpstore, there is no response
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
project:
[https://github.com/svc-develop-team/so-vits-svc](https://github.com/svc-develop-team/so-vits-svc)
runtime command:
D:\SoVitsSvc/train.py -c configs/config.json -m 44k
No problem occurred when using 1.8.2, but the project does not seem to work with 1.8.2, so I still need to use the latest version of torch. I don't know whether this is a problem with torch or with the project, but through breakpoint debugging I found that it gets stuck in the TCPStore function.
Complete project and python environment attachment:
Python 3.9.16:
[https://drive.google.com/file/d/1sr9kCD2PIv2WOZcHeWZI7GppSEMSI1Q0/view?usp=sharing](https://drive.google.com/file/d/1sr9kCD2PIv2WOZcHeWZI7GppSEMSI1Q0/view?usp=sharing)
My SoVitsSvc Project:
[https://drive.google.com/file/d/1ptWiZAUwKfkOb-my0HZMp2ULNl1GsGLp/view?usp=sharing](https://drive.google.com/file/d/1ptWiZAUwKfkOb-my0HZMp2ULNl1GsGLp/view?usp=sharing)
Video:
[https://drive.google.com/file/d/1uwBUdjGC2_AN6iv7qnRiqSIzWXs9qBaQ/view?usp=sharing](https://drive.google.com/file/d/1uwBUdjGC2_AN6iv7qnRiqSIzWXs9qBaQ/view?usp=sharing)
You can download my complete project from the links above for easy debugging and reproduction, and follow the exact steps in the video.
You may notice from the video that when I switch the version to 1.8.x, TCPStore runs normally.
F:/Anaconda3/envs/SoVitsSvc/Lib/site-packages/torch/distributed/rendezvous.py:line:150
### Versions
(SoVitsSvc) PS D:\SoVitsSvc> python D:\SoVitsSvc\collect_env.py
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Professional Workstation Edition
GCC version: (x86_64-win32-sjlj-rev0, Built by MinGW-W64 project) 8.1.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: N/A
Python version: 3.9.16 (main, May 17 2023, 17:49:16) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.18363-SP0
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU
Nvidia driver version: 536.40
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2918
DeviceID=CPU0
Family=207
L2CacheSize=7680
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2918
Name=12th Gen Intel(R) Core(TM) i9-12900H
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchcrepe==0.0.20
[pip3] torchvision==0.15.1+cu118
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torchaudio 2.0.1+cu118 pypi_0 pypi
[conda] torchcrepe 0.0.20 pypi_0 pypi
[conda] torchvision 0.15.1+cu118 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 3 |
2,122 | 104,507 |
[error] while implementing PyTorch DistributedDataParallel
|
oncall: distributed, triaged
|
## Issue description
I am working from the POMO code [POMO](https://github.com/yd-kwon/POMO), changing it to run on a single machine with multiple GPUs.
Let me explain the specific code logic:
First, each epoch has `train_num_episode = self.trainer_params['train_episodes']` episodes of data to run. Because there is not enough GPU memory to run them all at once, they run in multiple batches (the while loop): `batch_size = min(self.trainer_params['train_batch_size'], remaining)`. However, even when run in multiple batches, the required GPU memory is still large and training is slow, so I want to run it on multiple GPUs instead.
Each iteration loads `batch_size` samples, creates the sampler and dataloader to split the data across GPUs, sets the epoch, and finally executes the batches one by one.
## Core Code
```
def _train_one_epoch(self,epoch):
score_AM = AverageMeter()
loss_AM = AverageMeter()
train_num_episode = self.trainer_params['train_episodes']
episode = 0
loop_cnt = 0
while episode < train_num_episode:
remaining = train_num_episode - episode
batch_size = min(self.trainer_params['train_batch_size'], remaining)
# load every epoch to DataLoader
dis_up, dis_down = self.env.load_problems(batch_size)
# (batch,node,node)->(batch,node,node,2)
batch_data = torch.stack([dis_up, dis_down], dim=-1)
single_batch_size = batch_size // 3
# create Dataloader
sampler = torch.utils.data.DistributedSampler(batch_data)
batch_dataloader = torch.utils.data.DataLoader(batch_data,batch_size = single_batch_size,shuffle=False,sampler=sampler)
sampler.set_epoch(epoch)
for batch_idx,batch in enumerate(batch_dataloader):
batch_up = batch[:,:,:,0].to(self.device)
batch_down = batch[:,:,:,1].to(self.device)
# avg_score, avg_loss = self._train_one_batch(batch_size)
current_gpu = torch.cuda.current_device()
avg_score, avg_loss = self._train_one_batch(batch_up, batch_down, current_gpu)
score_AM.update(avg_score, batch_size)
loss_AM.update(avg_loss, batch_size)
dist.barrier()
episode += batch_size
# Log First 10 Batch, only at the first epoch
if epoch == self.start_epoch:
loop_cnt += 1
if loop_cnt <= 10:
self.logger.info('Epoch {:3d}: Train {:3d}/{:3d}({:1.1f}%) Score: {:.4f}, Loss: {:.4f}'
.format(epoch, episode, train_num_episode, 100. * episode / train_num_episode,
score_AM.avg, loss_AM.avg))
# Log Once, for each epoch
self.logger.info('Epoch {:3d}: Train ({:3.0f}%) Score: {:.4f}, Loss: {:.4f}'
.format(epoch, 100. * episode / train_num_episode,
score_AM.avg, loss_AM.avg))
```
This produces an error, but I don't know how to solve it. Can someone help me?
```
for batch_idx,batch in enumerate(batch_dataloader):
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
for batch_idx,batch in enumerate(batch_dataloader):
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()data = self._next_data()
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
data = self._next_data()
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
index = self._next_index() # may raise StopIteration
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
index = self._next_index() # may raise StopIteration
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
index = self._next_index() # may raise StopIteration
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
return next(self._sampler_iter) # may raise StopIteration
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
return next(self._sampler_iter) # may raise StopIteration
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
for idx in self.sampler:
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/distributed.py", line 101, in __iter__
for idx in self.sampler:
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/distributed.py", line 101, in __iter__
for idx in self.sampler:
File "/home/.conda/envs/lib/python3.8/site-packages/torch/utils/data/distributed.py", line 101, in __iter__
indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore[arg-type]
indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore[arg-type]
RuntimeErrorRuntimeError: : Expected a 'cuda' device type for generator but found 'cpu'Expected a 'cuda' device type for generator but found 'cpu'
indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore[arg-type]
RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
```
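For context, here is one minimal sketch that reproduces the same `randperm`/generator error message (assuming a CUDA build); it is one known way to hit this error, not necessarily the exact cause in this setup:
```python
import torch

# DistributedSampler builds a CPU generator internally (when shuffle=True, its default);
# if tensors default to CUDA, torch.randperm dispatches to CUDA and rejects the CPU generator.
torch.set_default_tensor_type('torch.cuda.FloatTensor')  # assumption: a CUDA device is available
g = torch.Generator()            # CPU generator, like the one DistributedSampler creates
torch.randperm(10, generator=g)  # RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
```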
the complete code implementation is as follows:
```
Trainer.py
import torch
from logging import getLogger
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from TSPEnv import TSPEnv as Env
from TSPModel import TSPModel as Model
from torch.optim import Adam as Optimizer
from torch.optim.lr_scheduler import MultiStepLR as Scheduler
from utils.utils import *
class TSPTrainer:
def __init__(self,
env_params,
model_params,
optimizer_params,
trainer_params):
# save arguments
self.env_params = env_params
self.model_params = model_params
self.optimizer_params = optimizer_params
self.trainer_params = trainer_params
# saved_models folder, logger
self.logger = getLogger(name='trainer')
self.result_folder = get_result_folder()
self.result_log = LogData()
#
torch.distributed.init_process_group(backend="nccl")
# device
local_rank = torch.distributed.get_rank()
torch.cuda.set_device(local_rank)
device = torch.device("cuda",local_rank)
self.device = device
# cuda
# USE_CUDA = self.trainer_params['use_cuda']
# if USE_CUDA:
# cuda_device_num = self.trainer_params['cuda_device_num']
# torch.cuda.set_device('cuda:{}'.format(cuda_device_num[0]))
# device = torch.device('cuda', cuda_device_num[0])
# torch.set_default_tensor_type('torch.cuda.FloatTensor')
# else:
# device = torch.device('cpu')
# torch.set_default_tensor_type('torch.FloatTensor')
# Main Components
self.model = Model(**self.model_params)
self.model.to(device)
# self.model.pre_forward() = self.model.pre_forward().to(device)
self.model = torch.nn.parallel.DistributedDataParallel(self.model,device_ids=[local_rank],output_device=local_rank,find_unused_parameters=True)
self.env = Env(**self.env_params)
self.optimizer = Optimizer(self.model.parameters(), **self.optimizer_params['optimizer'])
self.scheduler = Scheduler(self.optimizer, **self.optimizer_params['scheduler'])
# Restore
self.start_epoch = 1
model_load = trainer_params['model_load']
if model_load['enable']:
checkpoint_fullname = '{path}/checkpoint-{epoch}.pt'.format(**model_load)
checkpoint = torch.load(checkpoint_fullname, map_location=device)
self.model.load_state_dict(checkpoint['model_state_dict'])
self.start_epoch = 1 + model_load['epoch']
self.result_log.set_raw_data(checkpoint['result_log'])
self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
self.scheduler.last_epoch = model_load['epoch']-1
self.logger.info('Saved Model Loaded !!')
# utility
self.time_estimator = TimeEstimator()
def run(self):
self.time_estimator.reset(self.start_epoch)
for epoch in range(self.start_epoch, self.trainer_params['epochs']+1):
self.logger.info('=================================================================')
# LR Decay
self.scheduler.step()
# Train
train_score, train_loss = self._train_one_epoch(epoch)
self.result_log.append('train_score', epoch, train_score)
self.result_log.append('train_loss', epoch, train_loss)
############################
# Logs & Checkpoint
############################
elapsed_time_str, remain_time_str = self.time_estimator.get_est_string(epoch, self.trainer_params['epochs'])
self.logger.info("Epoch {:3d}/{:3d}: Time Est.: Elapsed[{}], Remain[{}]".format(
epoch, self.trainer_params['epochs'], elapsed_time_str, remain_time_str))
all_done = (epoch == self.trainer_params['epochs'])
model_save_interval = self.trainer_params['logging']['model_save_interval']
img_save_interval = self.trainer_params['logging']['img_save_interval']
if epoch > 1: # save latest images, every epoch
self.logger.info("Saving log_image")
image_prefix = '{}/latest'.format(self.result_folder)
util_save_log_image_with_label(image_prefix, self.trainer_params['logging']['log_image_params_1'],
self.result_log, labels=['train_score'])
util_save_log_image_with_label(image_prefix, self.trainer_params['logging']['log_image_params_2'],
self.result_log, labels=['train_loss'])
if all_done or (epoch % model_save_interval) == 0:
self.logger.info("Saving trained_model")
checkpoint_dict = {
'epoch': epoch,
'model_state_dict': self.model.state_dict(),
'optimizer_state_dict': self.optimizer.state_dict(),
'scheduler_state_dict': self.scheduler.state_dict(),
'result_log': self.result_log.get_raw_data()
}
torch.save(checkpoint_dict, '{}/checkpoint-{}.pt'.format(self.result_folder, epoch))
if all_done or (epoch % img_save_interval) == 0:
image_prefix = '{}/img/checkpoint-{}'.format(self.result_folder, epoch)
util_save_log_image_with_label(image_prefix, self.trainer_params['logging']['log_image_params_1'],
self.result_log, labels=['train_score'])
util_save_log_image_with_label(image_prefix, self.trainer_params['logging']['log_image_params_2'],
self.result_log, labels=['train_loss'])
if all_done:
self.logger.info(" *** Training Done *** ")
self.logger.info("Now, printing log array...")
util_print_log_array(self.logger, self.result_log)
def _train_one_epoch(self,epoch):
score_AM = AverageMeter()
loss_AM = AverageMeter()
train_num_episode = self.trainer_params['train_episodes']
episode = 0
loop_cnt = 0
while episode < train_num_episode:
remaining = train_num_episode - episode
batch_size = min(self.trainer_params['train_batch_size'], remaining)
#
dis_up, dis_down = self.env.load_problems(batch_size)
# (batch,node,node)->(batch,node,node,2)
batch_data = torch.stack([dis_up, dis_down], dim=-1)
single_batch_size = batch_size // 3
# Dataloader
sampler = torch.utils.data.DistributedSampler(batch_data)
batch_dataloader = torch.utils.data.DataLoader(batch_data,batch_size=single_batch_size,shuffle=False,sampler=sampler)
sampler.set_epoch(epoch)
for batch_idx,batch in enumerate(batch_dataloader):
batch_up = batch[:,:,:,0].to(self.device)
batch_down = batch[:,:,:,1].to(self.device)
# avg_score, avg_loss = self._train_one_batch(batch_size)
current_gpu = torch.cuda.current_device()
avg_score, avg_loss = self._train_one_batch(batch_up, batch_down, current_gpu)
score_AM.update(avg_score, batch_size)
loss_AM.update(avg_loss, batch_size)
dist.barrier()
episode += batch_size
# Log First 10 Batch, only at the first epoch
if epoch == self.start_epoch:
loop_cnt += 1
if loop_cnt <= 10:
self.logger.info('Epoch {:3d}: Train {:3d}/{:3d}({:1.1f}%) Score: {:.4f}, Loss: {:.4f}'
.format(epoch, episode, train_num_episode, 100. * episode / train_num_episode,
score_AM.avg, loss_AM.avg))
# Log Once, for each epoch
self.logger.info('Epoch {:3d}: Train ({:3.0f}%) Score: {:.4f}, Loss: {:.4f}'
.format(epoch, 100. * episode / train_num_episode,
score_AM.avg, loss_AM.avg))
return score_AM.avg, loss_AM.avg
def _train_one_batch(self, dis_up, dis_down, current_gpu):
# Prep
###############################################
self.model.train()
batch_size = dis_up.size(0)
#
reset_state_up,reset_state_down, _, _ = self.env.reset(dis_up,dis_down)
device = dis_up.device
prob_list = torch.zeros(size=(batch_size, self.env.pomo_size, 0)).to(device)
# shape: (batch, pomo, 0~)
# POMO Rollout
###############################################
state, reward, done = self.env.pre_step()
while not done:
selected, prob = self.model(state,reset_state_up,reset_state_down)
selected.to(self.device)
prob.to(self.device)
# shape: (batch, pomo)
# state, reward, done = self.env.step(selected)
state, reward, done = self.env.step(selected,current_gpu)
prob_list = torch.cat((prob_list, prob[:, :, None]), dim=2)
# Loss
###############################################
advantage = reward - reward.float().mean(dim=1, keepdims=True)
# shape: (batch, pomo)
log_prob = prob_list.log().sum(dim=2)
# size = (batch, pomo)
loss = -advantage * log_prob # Minus Sign: To Increase REWARD
# shape: (batch, pomo)
loss_mean = loss.mean()
# Score
###############################################
max_pomo_reward, _ = reward.max(dim=1) # get best results from pomo
score_mean = -max_pomo_reward.float().mean() # negative sign to make positive value
# Step & Return
###############################################
self.model.zero_grad()
loss_mean.backward()
self.optimizer.step()
return score_mean.item(), loss_mean.item()
```
```
main.py
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank',
default=-1,
type=int,
help='node rank for distributed training')
args = parser.parse_args()
print(args.local_rank)
def main():
if DEBUG_MODE:
_set_debug_mode()
create_logger(**logger_params)
_print_config()
trainer = Trainer(env_params=env_params,
model_params=model_params,
optimizer_params=optimizer_params,
trainer_params=trainer_params,
)
copy_all_src(trainer.result_folder)
trainer.run()
```
## System Info
You can get the script and run it with:
```
python -m torch.distributed.launch --master_port=29501 --nnodes=1 --nproc_per_node=3 train.py
```
- PyTorch or Caffe2: Pytorch
- How you installed PyTorch (conda, pip, source): conda
- Build command you used (if compiling from source): python -m torch.distributed.launch --master_port=29501 --nnodes=1 --nproc_per_node=3 train.py
- OS: Linux 20-3xtitanxp 5.19.0-43-generic #44~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon May 22 13:39:36 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
- PyTorch version: 1.12.1
- Python version: 3.8.13
- CUDA/cuDNN version: 11.7
- GPU models and configuration: NVIDIA TITAN Xp 3GPU
- Versions of any other relevant libraries:
matplotlib: 3.5.3
pip: 22.1.2
tensorboard: 2.11.0
tqdm: 4.64.1
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
2,123 | 104,506 |
Timeout in NCCL doesn't work
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
The documentation for `init_process_group` (https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group)
suggests that `timeout` is available for the NCCL backend, but I cannot get the following to work:
```python
import os
import time
from datetime import timedelta
import torch.distributed as dist
from torch.distributed.elastic.multiprocessing.errors import record
print(os.environ["NCCL_BLOCKING_WAIT"])
local_rank = int(os.environ["RANK"])
local_world_size = int(os.environ["WORLD_SIZE"])
server_store = dist.TCPStore(
host_name="localhost",
port=24950,
world_size=local_world_size,
is_master=(local_rank == 0),
# timeout=timedelta(seconds=60),
)
dist.init_process_group(
backend="nccl",
world_size=local_world_size,
rank=local_rank,
store=server_store,
timeout=timedelta(seconds=2),
)
@record
def main():
if dist.get_rank() == 1:
dist.barrier()
else:
time.sleep(1000)
main()
```
and run it by
```
NCCL_BLOCKING_WAIT=1 torchrun --nnode 1 --nproc_per_node 2 test.py
```
But the code still hangs and doesn't raise a timeout error. Is this a bug, or did I misunderstand the usage of `timeout` here?
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (conda-forge gcc 11.3.0-19) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1497.171
CPU max MHz: 2450.0000
CPU min MHz: 1500.0000
BogoMIPS: 4890.81
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-254
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==1.6.4
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.24.3 py39h6183b62_0 conda-forge
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-lightning 1.6.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchaudio 2.0.2 py39_cu117 pytorch
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtriton 2.0.0 py39 pytorch
[conda] torchvision 0.15.2 py39_cu117 pytorch
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 13 |
2,124 | 104,505 |
Wrong functionalization of as_strided leads to wrong results
|
high priority, module: docs, triaged, oncall: pt2
|
### 🐛 Describe the bug
According to the documentation (https://pytorch.org/docs/stable/generated/torch.as_strided.html), `as_strided` should override the existing strides and offset of the tensor's storage (as opposed to other view operations, which are composite in nature).
Thus, when functionalizing `as_strided`, we should not apply it on top of previous view operations.
This issue leads to incorrect results from `torch.compile`.
reproducer:
* note that when not using a custom compiler, torch fails to compile altogether
* using core_aten_decompositions doesn't change the issue
```
import torch
import torch._dynamo
import logging
import numpy as np
import torchvision.models as models
from torch._dynamo.backends.common import aot_autograd
from torch._decomp import core_aten_decompositions
print(torch.__version__)
def main():
def inner_compiler(gm: torch.fx.GraphModule, example_inputs):
gm.graph.print_tabular()
return gm.forward
def foo(a):
e = a.diagonal()
f = e.as_strided((2,), (1,), 0)
f.add_(1.0)
return a
a = torch.randn(2, 4)
a_ref = a.clone()
torch._dynamo.reset()
aot_backend = aot_autograd(fw_compiler=inner_compiler)#, decompositions=core_aten_decompositions())
compiled_model = torch.compile(foo, dynamic=True, backend=aot_backend)
out_ref = foo(a_ref)
print(out_ref)
out = compiled_model(a)
print(out)
print(out == out_ref)
main()
```
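For reference, a minimal eager-mode sketch of the documented `as_strided` semantics that the functionalized graph should reproduce (the size, stride, and offset are taken over the underlying storage, not composed with the diagonal view):
```python
import torch

a = torch.arange(8.).reshape(2, 4)
e = a.diagonal()                 # views a[0, 0] and a[1, 1]
f = e.as_strided((2,), (1,), 0)  # views the first two storage elements: a[0, 0] and a[0, 1]
f.add_(100.0)
print(a)  # a[0, 0] and a[0, 1] are incremented; a[1, 1] is untouched
```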
### Error logs
when not using custom compiler:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py](https://localhost:8080/#) in call_user_compiler(self, gm)
669 else:
--> 670 compiled_fn = compiler_fn(gm, self.fake_example_inputs())
671 _step_logger()(logging.INFO, f"done compiler function {name}")
42 frames
AttributeError: The underlying op of 'aten.sym_storage_offset' has no overload name 'name'
While executing %sym_storage_offset : [#users=3] = call_function[target=torch.ops.aten.sym_storage_offset](args = (%arg0_1,), kwargs = {})
Original traceback:
File "<ipython-input-3-6e27d662c8bc>", line 18, in foo
e = a.diagonal()
The above exception was the direct cause of the following exception:
BackendCompilerFailed Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py](https://localhost:8080/#) in call_user_compiler(self, gm)
673 except Exception as e:
674 compiled_fn = gm.forward
--> 675 raise BackendCompilerFailed(self.compiler_fn, e) from e
676 return compiled_fn
677
BackendCompilerFailed: debug_wrapper raised AttributeError: The underlying op of 'aten.sym_storage_offset' has no overload name 'name'
While executing %sym_storage_offset : [#users=3] = call_function[target=torch.ops.aten.sym_storage_offset](args = (%arg0_1,), kwargs = {})
Original traceback:
File "<ipython-input-3-6e27d662c8bc>", line 18, in foo
e = a.diagonal()
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
### Minified repro
_No response_
### Versions
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 21676 100 21676 0 0 102k 0 --:--:-- --:--:-- --:--:-- 102k
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.107+-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
Stepping: 3
CPU MHz: 2000.144
BogoMIPS: 4000.28
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 1 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @svekars @carljparker @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
2,125 | 104,502 |
Get errors after compiling and running PyTorch MINIMAL EXAMPLE for c++ Mac M1 with make
|
module: build, module: cpp, triaged
|
### 🐛 Describe the bug
I am able to download and configure the MINIMAL EXAMPLE for C++ on Mac with CMake. I run the cmake command and everything finishes with no errors. However, when I execute the "make" command, I get "Undefined symbols for architecture arm64" errors related to c10.
### **_This command works_**
❯ cmake -DCMAKE_PREFIX_PATH=/Users/raulcardona/pytorch/libtorch ..
-- The C compiler identification is AppleClang 14.0.3.14030022
-- The CXX compiler identification is AppleClang 14.0.3.14030022
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Torch: /Users/raulcardona/pytorch/libtorch/lib/libtorch.dylib
-- Configuring done (0.7s)
-- Generating done (0.0s)
-- Build files have been written to: /Users/raulcardona/pytorch/Projects/Basic_PyTorch_Project/build
### **_But when I run:_**
❯ cmake --build . --config Release
[ 50%] Building CXX object CMakeFiles/example-app.dir/main.cpp.o
[100%] Linking CXX executable example-app
ld: warning: ignoring file /Users/raulcardona/pytorch/libtorch/lib/libc10.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
ld: warning: ignoring file /Users/raulcardona/pytorch/libtorch/lib/libtorch_cpu.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
ld: warning: ignoring file /Users/raulcardona/pytorch/libtorch/lib/libtorch.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
ld: warning: ignoring file /Users/raulcardona/pytorch/libtorch/lib/libc10.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
ld: warning: ignoring file /Users/raulcardona/pytorch/libtorch/lib/libkineto.a, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
Undefined symbols for architecture arm64:
"at::_ops::rand::call(c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>)", referenced from:
at::rand(c10::ArrayRef<long long>, c10::TensorOptions) in main.cpp.o
"at::print(std::__1::basic_ostream<char, std::__1::char_traits<char>>&, at::Tensor const&, long long)", referenced from:
at::operator<<(std::__1::basic_ostream<char, std::__1::char_traits<char>>&, at::Tensor const&) in main.cpp.o
"c10::TensorImpl::set_autograd_meta(std::__1::unique_ptr<c10::AutogradMetaInterface, std::__1::default_delete<c10::AutogradMetaInterface>>)", referenced from:
torch::autograd::make_variable(at::Tensor, bool, bool) in main.cpp.o
"c10::UndefinedTensorImpl::_singleton", referenced from:
c10::UndefinedTensorImpl::singleton() in main.cpp.o
"c10::AutogradMetaInterface::~AutogradMetaInterface()", referenced from:
torch::autograd::AutogradMeta::AutogradMeta(c10::TensorImpl*, bool, torch::autograd::Edge) in main.cpp.o
"c10::impl::ExcludeDispatchKeyGuard::ExcludeDispatchKeyGuard(c10::DispatchKeySet)", referenced from:
at::AutoDispatchBelowADInplaceOrView::AutoDispatchBelowADInplaceOrView() in main.cpp.o
"c10::impl::ExcludeDispatchKeyGuard::~ExcludeDispatchKeyGuard()", referenced from:
at::AutoDispatchBelowADInplaceOrView::~AutoDispatchBelowADInplaceOrView() in main.cpp.o
"c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)", referenced from:
c10::fromIntArrayRefSlow(c10::ArrayRef<long long>) in main.cpp.o
"c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*)", referenced from:
torch::autograd::AutogradMeta::AutogradMeta(c10::TensorImpl*, bool, torch::autograd::Edge) in main.cpp.o
c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, c10::detail::CompileTimeEmptyString) in main.cpp.o
"c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)", referenced from:
c10::intrusive_ptr_target::~intrusive_ptr_target() in main.cpp.o
c10::Device::validate() in main.cpp.o
"c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, char const*)", referenced from:
c10::intrusive_ptr<c10::VariableVersion::VersionCounter, c10::detail::intrusive_target_default_null_type<c10::VariableVersion::VersionCounter>>::intrusive_ptr(c10::VariableVersion::VersionCounter*) in main.cpp.o
c10::intrusive_ptr_target::~intrusive_ptr_target() in main.cpp.o
c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::retain_() in main.cpp.o
c10::ArrayRef<c10::SymInt>::debugCheckNullptrInvariant() in main.cpp.o
"caffe2::TypeMeta::error_unsupported_typemeta(caffe2::TypeMeta)", referenced from:
caffe2::TypeMeta::toScalarType() in main.cpp.o
"vtable for c10::AutogradMetaInterface", referenced from:
c10::AutogradMetaInterface::AutogradMetaInterface() in main.cpp.o
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
"vtable for torch::autograd::AutogradMeta", referenced from:
torch::autograd::AutogradMeta::AutogradMeta(c10::TensorImpl*, bool, torch::autograd::Edge) in main.cpp.o
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [example-app] Error 1
make[1]: *** [CMakeFiles/example-app.dir/all] Error 2
make: *** [all] Error 2
### **This is my main program:**
#include <torch/torch.h>
#include <iostream>
int main() {
torch::Tensor tensor = torch::rand({2, 3});
std::cout << tensor << std::endl;
}
### **This is my CMakeLists.txt file:**
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(example-app main.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 17)
### Versions
zsh: command not found: wget
zsh: command not found: python
cc @malfet @seemethere @jbschlosser
| 3 |
2,126 | 104,501 |
Add inverse gamma distribution and fix `sign` bug in `PowerTransform`.
|
module: distributions, triaged, open source, ciflow/trunk
|
This PR comprises a few small contributions:
1. `PowerTransform` returned a sign of `+1` irrespective of exponent. However, it should return the sign of the exponent because the gradient has the same sign as the exponent. That issue has been fixed.
2. Added tests to catch errors akin to 1. in the future.
3. Added an `InverseGamma` distribution as a `TransformedDistribution` with `PowerTransform(-1)` and `Gamma` base distribution. The `InverseGamma` is often used as a prior for the length scale of Gaussian processes to aggressively suppress short length scales (see [here](https://betanalpha.github.io/assets/case_studies/gaussian_processes.html#323_Informative_Prior_Model) for a discussion).
Note: I added a `positive` constraint for the support of the inverse gamma distribution because the `PowerTransform(-1)` can fail for `nonnegative` constraints if the random variable is zero.
```python
>>> torch.distributions.InverseGamma(0.5, 1.0).log_prob(torch.zeros(1))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-758aa22deacd> in <module>
----> 1 torch.distributions.InverseGamma(0.5, 1.0).log_prob(torch.zeros(1))
~/git/pytorch/torch/distributions/transformed_distribution.py in log_prob(self, value)
140 """
141 if self._validate_args:
--> 142 self._validate_sample(value)
143 event_dim = len(self.event_shape)
144 log_prob = 0.0
~/git/pytorch/torch/distributions/distribution.py in _validate_sample(self, value)
298 valid = support.check(value)
299 if not valid.all():
--> 300 raise ValueError(
301 "Expected value argument "
302 f"({type(value).__name__} of shape {tuple(value.shape)}) "
ValueError: Expected value argument (Tensor of shape (1,)) to be within the support (GreaterThan(lower_bound=0.0)) of the distribution InverseGamma(), but found invalid values:
tensor([0.])
```
This differs from the scipy implementation.
```python
>>> scipy.stats.invgamma(0.5).pdf(0)
0.0
```
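For illustration, a user-level sketch of the composition described in point 3, written with the existing public primitives (the `make_inverse_gamma` helper is illustrative, not this PR's API):
```python
import torch
from torch.distributions import Gamma, TransformedDistribution
from torch.distributions.transforms import PowerTransform

def make_inverse_gamma(concentration, rate):
    # The inverse-gamma distribution as the PowerTransform(-1) pushforward of a Gamma base.
    return TransformedDistribution(Gamma(concentration, rate),
                                   PowerTransform(torch.tensor(-1.0)))

d = make_inverse_gamma(torch.tensor(2.0), torch.tensor(1.0))
x = d.sample((5,))
print(x, d.log_prob(x))
```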
cc @fritzo @neerajprad @alicanb @nikitaved @lezcano
| 7 |
2,127 | 104,487 |
[NCCL][CUDA][CUDA Graphs] Flush enqueued work before starting a graph capture
|
module: cuda, triaged, module: nccl, open source, Merged, Reverted, module: cuda graphs, ciflow/trunk, topic: not user facing, ciflow/periodic
|
#103503 addresses the situation where additional work is enqueued for the NCCL watchdog to poll during a graph capture---something we want to avoid as the subsequent polling will query an event and crash the capture.
However, there is currently no check that there was not work _already_ enqueued for the watchdog to poll. If there was already work that was enqueued and not cleaned up before the start of a graph capture, then we run into a similar problem where the event query will crash the capture. We've observed this causing crashes on several workloads, although it is somewhat flaky (if the watchdog happens to poll just before the graph capture and cleanup, then we dodge the crash).
This is a bit of a tricky issue as it involves making sure that no process group has enqueued work at the start of a capture, and as such the simplest solution is to add a bit of global state to track all process groups. This PR forces the start of the graph capture to wait until all enqueued work is completed and cleaned up or times out.
I did consider the alternative of simply having the watchdog skip cleanup if we detect that we are in the middle of a graph capture, but I think deferring the cleanup until later could result in false positive timeouts (e.g., if we defer cleanup on work that has completed long ago, checking timers after the graph capture could yield a "timeout").
CC @Aidyn-A
@bottler @kwen2501 @ptrblck
cc @ptrblck @mcarilli @ezyang
| 60 |
2,128 | 104,479 |
FSDP Optimizer Overlap - follow ups
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
FSDP optimizer overlap, added in https://github.com/pytorch/pytorch/pull/98667, needs some follow-up work:
- We reallocate the _cpu_grad for CPU offload every iteration. This is potentially wasteful and we should run some profiling to understand the memory / allocation overhead tradeoff.
- We wait on optimizer step completion in post-backward, when it could be moved to the next pre-forward
- For CPU offload, we should explore running the step() on the GPU, right now it runs on CPU
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
2,129 | 104,472 |
Investigate numerical stability of forward-mode AD of some foreach functions
|
triaged, module: forward ad
|
see: https://github.com/pytorch/pytorch/blob/624d20c3de03316c4ad1928ae5243723c2c8bb03/test/test_foreach.py#L925-L932
Rel:
- #58833
- #102409
cc @soulitzer
| 0 |
2,130 | 104,470 |
[test-only] Tensor load endianness default value
|
module: serialization, triaged, open source, Stale, release notes: package/deploy
|
Test only PR for s390x CI.
Not for merging.
cc @mruberry @mikaylagawarecki
| 4 |
2,131 | 104,466 |
`torch.view_as_real(tensor)` should return `nn.identity(tensor)` if it's not complex instead of raising an error
|
triaged, module: complex
|
### 🚀 The feature, motivation and pitch
I think it would be easier if `torch.view_as_real(tensor)` just returned the original tensor (maybe with a warning) instead of raising an error. This would simplify usage from something like
```
import torchaudio
import torch
spectrogram = torchaudio.transforms.Spectrogram()(audio)
spectrogram = torch.view_as_real(spectrogram) if torch.is_complex(spectrogram) else spectrogram
```
to
```
import torchaudio
import torch
spectrogram = torchaudio.transforms.Spectrogram()(audio)
spectrogram = torch.view_as_real(spectrogram) #handles any case
```
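In the meantime, a small user-side shim with the proposed behaviour is straightforward (illustrative sketch; `view_as_real_safe` is not an existing API):
```python
import torch

def view_as_real_safe(t: torch.Tensor) -> torch.Tensor:
    return torch.view_as_real(t) if torch.is_complex(t) else t

print(view_as_real_safe(torch.randn(3, dtype=torch.cfloat)).shape)  # torch.Size([3, 2])
print(view_as_real_safe(torch.randn(3)).shape)                      # torch.Size([3])
```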
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 2 |
2,132 | 104,458 |
[Feature Request] Add a new overload of torch::jit::load to restore traced shape and type
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
I have an inference system consisting of an offline part and a runtime part. In the offline phase, we need to convert TorchScript models to our own model format, so I'd like to be able to access the shape and type information of the TorchScript inputs. Luckily, someone has done something similar. PR: https://github.com/pytorch/pytorch/pull/89541
I only need to add a new API:
TORCH_API Module load(
const std::string& filename, bool restore_shapes,
c10::optional<c10::Device> device = c10::nullopt,
bool load_debug_files = true);
### Alternatives
_No response_
### Additional context
Usage:
auto module = torch::jit::load(model_path, true);
After that, you can get the input shapes and types.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,133 | 104,457 |
Deprecated the device usage without device_type
|
triaged, open source, module: deprecation, Stale, topic: deprecation, topic: not user facing
|
Fixes #ISSUE_NUMBER
1. Currently, operators and APIs can be called with `device=index` and no `device_type`; for example, `torch.device(0)` resolves to `cuda:0`, and `torch.rand(2, device=0)` also uses `cuda:0`. This is not friendly to other device types, so I added a deprecation warning suggesting the use of `device="cuda:0"` instead.
| 9 |
2,134 | 104,455 |
add tsan workflow
|
open source, Stale, topic: not user facing
|
Fixes #ISSUE_NUMBER
| 5 |
2,135 | 104,454 |
DISABLED test_nnc_correctness_frac_cpu_bfloat16 (__main__.TestNNCOpInfoCPU)
|
skipped, NNC
|
Platforms: asan
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_jit_fuser_te.py%3A%3ATestNNCOpInfoCPU%3A%3Atest_nnc_correctness_frac_cpu_bfloat16)).
Disable this XFAIL while we try to figure out why this successes only on ASAN. This is related to the change in https://github.com/pytorch/pytorch/pull/103647. This test was set as XFAIL in https://github.com/pytorch/pytorch/issues/76047
cc @EikanWang @jgong5 @jeanschmidt @cyyever @davidberard98
| 1 |
2,136 | 104,452 |
[ONNX][TypePromo] Automate codegen type promotion rules
|
module: onnx, triaged
|
Maybe we should have an exclusive file with the auto generated content? That would make future update easier, maybe even automated without having to parse stuff like "# DO NOT EDIT MANUALLY !!!"
_Originally posted by @thiagocrepaldi in https://github.com/pytorch/pytorch/pull/104063#discussion_r1247218389_
| 1 |
2,137 | 104,450 |
Numpy/scipy module works fine with Torch modules, but not TorchScript. How to torchscript a numpy/scipy module?
|
oncall: jit
|
### 🐛 Numpy module works fine with Torch modules, but not TorchScript.
```python
import numpy
import torch
from scipy.signal import find_peaks
batch_size = 1
input_data_shape = 1000
input_shape = (batch_size, input_data_shape)
reference_inputs = numpy.random.random(input_shape)
reference_outputs, _ = find_peaks(reference_inputs[0, :])
class FindPeaks(torch.nn.Module):
def __init__(self):
super(FindPeaks, self).__init__()
def forward(self, xs):
xs_numpy = xs.numpy()[0, :]
peaks, _ = find_peaks(xs_numpy)
return torch.tensor(peaks, dtype=int)
inputs = torch.tensor(reference_inputs, dtype=float)
torch_model = FindPeaks()
torch_outputs = torch_model(inputs)
torchscript_model = torch.jit.trace(torch_model, example_inputs=[inputs])
torchscript_model.save(f"./artifacts/{torch_model.__class__.__name__}.pt")
torchscript_outputs = torchscript_model(inputs).detach()
assert isinstance(torchscript_outputs, torch.Tensor)
assert torchscript_outputs.shape == reference_outputs.shape
assert numpy.allclose(
reference_outputs, torchscript_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5
)
for i in range(5):
reference_inputs = numpy.random.random(input_shape)
reference_outputs, _ = find_peaks(reference_inputs[0, :])
inputs = torch.tensor(reference_inputs, dtype=float)
torch_outputs = torch_model(inputs).detach()
assert isinstance(torch_outputs, torch.Tensor)
assert torch_outputs.shape == reference_outputs.shape # works fine
assert numpy.allclose(
reference_outputs, torch_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5
) # works fine
torchscript_outputs = torchscript_model(inputs).detach()
assert isinstance(torchscript_outputs, torch.Tensor)
assert torchscript_outputs.shape == reference_outputs.shape, \
(torchscript_outputs, reference_outputs) # not working, seems memorizing the input/output when compiling the model.
assert numpy.allclose(
reference_outputs, torchscript_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5
)
```
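For reference, one way around this is to express the peak search in pure tensor ops so it can be scripted rather than traced. A minimal sketch, assuming only the default `find_peaks` behaviour (strict local maxima; plateau handling is ignored, which is fine for continuous random data):
```python
import torch

class TorchFindPeaks(torch.nn.Module):
    # A sample is a peak if it is strictly greater than both direct neighbours.
    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        x = xs[0]
        mid = x[1:-1]
        is_peak = (mid > x[:-2]) & (mid > x[2:])
        return is_peak.nonzero().flatten() + 1  # +1 maps back to original indices

scripted = torch.jit.script(TorchFindPeaks())  # scripting keeps the data-dependent logic
```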
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3 (x86_64)
GCC version: Could not collect
Clang version: 16.0.3
CMake version: version 3.21.1
Libc version: N/A
Python version: 3.8.16 (default, Dec 7 2022, 01:39:17) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.3-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.0
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 4 |
2,138 | 104,442 |
Functorch scan
|
triaged, open source, Stale, no-stale
|
This pull request contains the dense and autograd implementation of the functorch scan operator, as well as some unit tests.
The unit tests can be run with:
```
python test/functorch/test_control_flow.py -k TestControlFlow.test_scan
```
from the repo root directory.
Some details:
1. The scan operator interface adheres to the JAX scan operator found in https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html (see the reference sketch below)
2. `lengths`, `reverse` and `unroll` are not yet supported
3. You may find that there is a *reverse* keyword argument in the op implementation that is not exposed. The reverse argument is there so that we can reuse scan_dense for the backward computation. In the future, we can expose the *reverse* kwarg at the scan operator top level for a reversed forward computation
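For reference, the call convention being followed, written as a plain-Python sketch of the `jax.lax.scan` semantics (an illustration of the interface only, not the functorch operator itself):
```python
import torch

def scan_reference(f, init, xs):
    # f maps (carry, x_t) -> (new_carry, y_t); returns (final_carry, stacked_ys).
    carry, ys = init, []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, torch.stack(ys)

final_carry, ys = scan_reference(lambda c, x: (c + x, c + x), torch.zeros(3), torch.randn(5, 3))
```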
| 10 |
2,139 | 104,439 |
Silent incorrect result for addmm for noncontiguous input
|
high priority, triaged, module: mkldnn, module: correctness (silent), module: intel
|
### 🐛 Describe the bug
```python
import torch
def f_non_contiguous(x, y, z):
z_t = torch.ops.aten.t.default(z)
return torch.ops.aten.addmm.default(x, y, z_t)
def f_contiguous(x, y, z):
z_t = torch.ops.aten.t_copy.default(z)
return torch.ops.aten.addmm.default(x, y, z_t)
x = torch.randn(256)
y = torch.randn(6, 960)
z = torch.randn(256, 960)
contiguous_result = f_contiguous(x, y, z)
non_contiguous_result = f_non_contiguous(x, y, z)
print(contiguous_result.is_contiguous()) # prints True
print(non_contiguous_result.is_contiguous()) # prints True
print(torch.allclose(contiguous_result, non_contiguous_result)) # prints False
print(torch.allclose(contiguous_result.contiguous(), non_contiguous_result.contiguous())) # prints False
```
### Versions
latest main
cc @ezyang @gchanan @zou3519 @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @frank-wei
| 12 |
2,140 | 104,435 |
torch.compiled model output gets overwritten despite tensor.detach()
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
related to https://github.com/pytorch/pytorch/blob/5ab1d2c2cc4e9c83b15c98974d6610a03322f40e/torch/_inductor/cudagraph_trees.py#L1889-L1893
At times when you would get this error, if instead of doing `out = model(input)` you do `out = model(input).detach()` to try to fix it, you suppress the error message while not fixing the problem: the value of `out` will still change if you run `model(input).detach()` again. You have to do `model(input) + 0` or something similar to actually fix the problem.
At a high level, I think this bug is either
A) about tensor.detach() suppressing an error message without fixing the error, or
B) model outputs getting overwritten despite tensor.detach(),
depending on whether B is expected or not.
Either the error message should not be suppressed, or the output value should behave as expected.
@eellison
### Error logs
n/a
### Minified repro
n/a
My own repro below; try running it with/without `@torch.compile()` and with/without `.detach()`.
Running it as-is should either throw the error message or give the same result as running without `@torch.compile`.
import torch

@torch.compile()
def foo(x):
return x * x * x
inp = torch.rand([2], device="cuda")
out = foo(inp).detach()
sum_val_1 = out+out
out2 = foo(inp).detach()
sum_val_2 = out+out
print(sum_val_1, sum_val_2, out2 + out2)
assert sum_val_1.sum()==sum_val_2.sum()
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git1dba81f
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2500.000
BogoMIPS: 5000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0a0+git1dba81f
[pip3] torchvision==0.16.0a0+e5bf7cf
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2023.0.0 h06a4308_25399
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0a0+git1dba81f dev_0 <develop>
[conda] torchvision 0.16.0a0+e5bf7cf dev_0 <develop>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
2,141 | 104,432 |
Make decomps opt-in for upsample_nearest 1D / 2D / 3D
|
Stale, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #104432
Fixes the decomps for `upsample_nearest_{1d, 2d, 3d}` so they are opt-in rather than always run. This keeps `upsample_nearest_*d.default` in the graph during normal tracing.
| 4 |
2,142 | 104,421 |
LibTorch 2.0.1 scripting in Debug mode on Windows
|
module: windows, module: cpp, triaged
|
### 🐛 Describe the bug
I've saved a basic model in Python and I can load and execute it fine in Visual Studio 2022 Preview in Release mode. However, it asserts at various points in Debug mode when forward is called.
I'm using libtorch-win-shared-with-deps-debug-2.0.1+cu118.zip. I've tried both CPU and CUDA models/inputs and they both assert. I've tried both my own setup and the setup created by this plugin https://github.com/mszhanyi/VSIXTorch. I double-checked the paths to .lib files and copied DLLs to the Debug directory directly.
Executing tensor calculations works fine in Debug. I haven't tried compiling libtorch myself yet.
Errors when executing from the plugin:
```
Assertion failed: nthr_ == nthr, file C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\src\common/dnnl_thread.hpp, line 308
```
And from my setup:
```
torch_cpu.dll!operator delete(void * block) Line 38 C++
torch_cpu.dll!operator delete(void * block, unsigned __int64 __formal) Line 32 C++
torch_cpu.dll!std::_Deallocate<16,0>(void * _Ptr, unsigned __int64 _Bytes) Line 255 C++
torch_cpu.dll!std::allocator<c10::IValue>::deallocate(c10::IValue * const _Ptr, const unsigned __int64 _Count) Line 831 C++
torch_cpu.dll!std::vector<c10::IValue,std::allocator<c10::IValue>>::_Change_array(c10::IValue * const _Newvec, const unsigned __int64 _Newsize, const unsigned __int64 _Newcapacity) Line 2092 C++
torch_cpu.dll!std::vector<c10::IValue,std::allocator<c10::IValue>>::_Emplace_reallocate<c10::IValue>(c10::IValue * const _Whereptr, c10::IValue && <_Val_0>) Line 920 C++
torch_cpu.dll!std::vector<c10::IValue,std::allocator<c10::IValue>>::emplace<c10::IValue>(std::_Vector_const_iterator<std::_Vector_val<std::_Simple_types<c10::IValue>>> _Where, c10::IValue && <_Val_0>) Line 1062 C++
torch_cpu.dll!std::vector<c10::IValue,std::allocator<c10::IValue>>::insert(std::_Vector_const_iterator<std::_Vector_val<std::_Simple_types<c10::IValue>>> _Where, c10::IValue && _Val) Line 1070 C++
> torch_cpu.dll!torch::jit::Method::operator()(std::vector<c10::IValue,std::allocator<c10::IValue>> stack, const std::unordered_map<std::string,c10::IValue,std::hash<std::string>,std::equal_to<std::string>,std::allocator<std::pair<std::string const ,c10::IValue>>> & kwargs) Line 206 C++
torch_cpu.dll!torch::jit::Module::forward(std::vector<c10::IValue,std::allocator<c10::IValue>> inputs, const std::unordered_map<std::string,c10::IValue,std::hash<std::string>,std::equal_to<std::string>,std::allocator<std::pair<std::string const ,c10::IValue>>> & kwargs) Line 114 C++
```
```
torch::jit::script::Module module = torch::jit::load("u:/modules/mod.pt");
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::rand({ 3, 55 }));
at::Tensor output = module.forward(inputs).toTensor();
std::cout << output << std::endl;
```
Update: I've replaced the CUDA libtorch version with https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-debug-2.0.1%2Bcpu.zip and it also asserts the same as the CUDA version. I also tried the nightly https://download.pytorch.org/libtorch/nightly/cpu/libtorch-win-shared-with-deps-debug-latest.zip with the same result.
### Versions
Couldn't download the script at the moment. Windows 10.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jbschlosser
| 2 |
2,143 | 104,417 |
Support CUDA 12.2
|
module: cuda, triaged
|
### 🚀 The feature, motivation and pitch
Interesting feature:
This release introduces Heterogeneous Memory Management (HMM), allowing seamless sharing of data between host memory and accelerator devices. HMM is supported on Linux only and requires a recent kernel (6.1.24+ or 6.2.11+).
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck
| 17 |
2,144 | 104,411 |
RuntimeError: t == DeviceType::CUDA INTERNAL ASSERT FAILED at HIPGuardImplMasqueradingAsCUDA.h:60, please report a bug to PyTorch
|
module: rocm, triaged
|
### 🐛 Describe the bug
I get the following runtime error when trying to use the Koala LLM and Microsoft Guidance :
```
Exception in thread Thread-8 (generate):
Traceback (most recent call last):
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/transformers/generation/utils.py", line 1522, in generate
return self.greedy_search(
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/transformers/generation/utils.py", line 2339, in greedy_search
outputs = self(
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 691, in forward
outputs = self.model(
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 579, in forward
layer_outputs = decoder_layer(
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 293, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 195, in forward
query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
File "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/x00/Bureau/libs/gptq/quant.py", line 279, in forward
quant_cuda.vecquant4matmul(x.float(), self.qweight, out, self.scales.float(), self.qzeros, self.g_idx)
RuntimeError: t == DeviceType::CUDA INTERNAL ASSERT FAILED at "/home/x00/anaconda3/envs/GPT/lib/python3.10/site-packages/torch/include/ATen/hip/impl/HIPGuardImplMasqueradingAsCUDA.h":60, please report a bug to PyTorch.
```
```
import os
import guidance
import transformers
import llama_inference
llama_inference.transformers = transformers
tokenizer = transformers.LlamaTokenizer.from_pretrained("TheBloke/koala-7B-GPTQ-4bit-128g")
model = llama_inference.load_quant("TheBloke/koala-7B-GPTQ-4bit-128g","koala-7B-4bit-128g.safetensors",4,128,0)
llm = guidance.llms.transformers.Koala(model=model, tokenizer=tokenizer)
valid_weapons = ["sword", "axe", "mace", "spear", "bow", "crossbow"]
character_maker = guidance("""The following is a character profile for an RPG game in JSON format.
```json
{
"id": "{{id}}",
"description": "{{description}}",
"name": "{{gen 'name'}}",
"age": {{gen 'age' pattern='[0-9]+' stop=','}},
"armor": "{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}",
"weapon": "{{select 'weapon' options=valid_weapons}}",
"class": "{{gen 'class'}}",
"mantra": "{{gen 'mantra' temperature=0.7}}",
"strength": {{gen 'strength' pattern='[0-9]+' stop=','}},
"items": [{{#geneach 'items' num_iterations=5 join=', '}}"{{gen 'this' temperature=0.7}}"{{/geneach}}]
}```""")
character_maker(
id="e1f491f7-7ab8-4dac-8c20-c92b5e7d883d",
description="A quick and nimble fighter.",
valid_weapons=valid_weapons, llm=llm
)
```
Expected behavior: This code should return the JSON completed by the LLM
I use this model but get the same error with others : https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g/tree/main
The llama_inference library comes from GPTQ-for-LLaMa (Cuda branch) : https://github.com/qwopqwop200/GPTQ-for-LLaMa
### Versions
PyTorch version: 2.0.1+rocm5.4.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.4.22803-474e8620
OS: Linux Mint 21.1 (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 6800 XT
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.4.22803
MIOpen runtime version: 2.19.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5500
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: disabled
CPU max MHz: 3800.0000
CPU min MHz: 1400.0000
BogoMIPS: 7600.33
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-lightning==2.0.3
[pip3] pytorch-triton-rocm==2.0.1
[pip3] torch==2.0.1+rocm5.4.2
[pip3] torchaudio==2.0.2+rocm5.4.2
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2+rocm5.4.2
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-lightning 2.0.3 pypi_0 pypi
[conda] pytorch-triton-rocm 2.0.1 pypi_0 pypi
[conda] torch 2.0.1+rocm5.4.2 pypi_0 pypi
[conda] torchaudio 2.0.2+rocm5.4.2 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.15.2+rocm5.4.2 pypi_0 pypi
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 0 |
2,145 | 104,405 |
Detailed error: Tensor-likes are not close! when using torch.jit.trace
|
oncall: jit
|
### 🐛 Describe the bug
When running the following code:
```
import torch
decoder_layer = torch.nn.TransformerDecoderLayer(
d_model=512, nhead=8, batch_first=True,
dropout=0.0, dtype=torch.bfloat16
)
model = torch.nn.TransformerDecoder(decoder_layer, num_layers=1)
src = torch.rand((1, 10, 512), dtype=torch.bfloat16)
tgt = torch.rand((1, 20, 512), dtype=torch.bfloat16)
model_trace = flatten_trace(model, src, tgt) # my flatten trace function
model_jit = torch.jit.trace(model_trace, (src, tgt))
out_jit = model_jit(src, tgt)
```
An error occurred:
TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error: Tensor-likes are not close!
Mismatched elements: 1607 / 5120 (31.4%)
Greatest absolute difference: 0.015625 at index (0, 0, 63) (up to 1e-05 allowed)
Greatest relative difference: 2.4893617021276597 at index (0, 0, 368) (up to 1e-05 allowed)
flatten_trace is my custom trace function; calling it generates this model:
```
def forward(self, tensor_0, tensor_1):
with torch.autograd.profiler.record_function("FlattenModule"):
layers_0_self_attn_in_proj_weight = self.layers_0_self_attn_in_proj_weight
layers_0_self_attn_in_proj_bias = self.layers_0_self_attn_in_proj_bias
layers_0_self_attn_out_proj_weight = self.layers_0_self_attn_out_proj_weight
layers_0_self_attn_out_proj_bias = self.layers_0_self_attn_out_proj_bias
layers_0_multihead_attn_in_proj_weight = self.layers_0_multihead_attn_in_proj_weight
layers_0_multihead_attn_in_proj_bias = self.layers_0_multihead_attn_in_proj_bias
layers_0_multihead_attn_out_proj_weight = self.layers_0_multihead_attn_out_proj_weight
layers_0_multihead_attn_out_proj_bias = self.layers_0_multihead_attn_out_proj_bias
layers_0_linear1_weight = self.layers_0_linear1_weight
layers_0_linear1_bias = self.layers_0_linear1_bias
layers_0_linear2_weight = self.layers_0_linear2_weight
layers_0_linear2_bias = self.layers_0_linear2_bias
layers_0_norm1_weight = self.layers_0_norm1_weight
layers_0_norm1_bias = self.layers_0_norm1_bias
layers_0_norm2_weight = self.layers_0_norm2_weight
layers_0_norm2_bias = self.layers_0_norm2_bias
layers_0_norm3_weight = self.layers_0_norm3_weight
layers_0_norm3_bias = self.layers_0_norm3_bias
transpose = torch.ops.aten.transpose(tensor_0, 1, 0)
t = torch.ops.aten.t(layers_0_self_attn_in_proj_weight); layers_0_self_attn_in_proj_weight = None
view = torch.ops.aten.view(transpose, [10, 512]); transpose = None
addmm = torch.ops.aten.addmm(layers_0_self_attn_in_proj_bias, view, t); layers_0_self_attn_in_proj_bias = view = t = None
view_1 = torch.ops.aten.view(addmm, [10, 1, 1536]); addmm = None
split = torch.ops.aten.split(view_1, 512, -1); view_1 = None
getitem = split[0]
getitem_1 = split[1]
getitem_2 = split[2]; split = None
clone = torch.ops.aten.clone(getitem, memory_format = torch.contiguous_format); getitem = None
view_2 = torch.ops.aten.view(clone, [10, 8, 64]); clone = None
transpose_1 = torch.ops.aten.transpose(view_2, 0, 1); view_2 = None
clone_1 = torch.ops.aten.clone(getitem_1, memory_format = torch.contiguous_format); getitem_1 = None
view_3 = torch.ops.aten.view(clone_1, [10, 8, 64]); clone_1 = None
transpose_2 = torch.ops.aten.transpose(view_3, 0, 1); view_3 = None
clone_2 = torch.ops.aten.clone(getitem_2, memory_format = torch.contiguous_format); getitem_2 = None
view_4 = torch.ops.aten.view(clone_2, [10, 8, 64]); clone_2 = None
transpose_3 = torch.ops.aten.transpose(view_4, 0, 1); view_4 = None
div = torch.ops.aten.div(transpose_1, 8.0); transpose_1 = None
transpose_4 = torch.ops.aten.transpose(transpose_2, -2, -1); transpose_2 = None
bmm = torch.ops.aten.bmm(div, transpose_4); div = transpose_4 = None
_softmax = torch.ops.aten._softmax(bmm, -1, False); bmm = None
bmm_1 = torch.ops.aten.bmm(_softmax, transpose_3); _softmax = transpose_3 = None
transpose_5 = torch.ops.aten.transpose(bmm_1, 0, 1); bmm_1 = None
clone_3 = torch.ops.aten.clone(transpose_5, memory_format = torch.contiguous_format); transpose_5 = None
view_5 = torch.ops.aten.view(clone_3, [10, 512]); clone_3 = None
t_1 = torch.ops.aten.t(layers_0_self_attn_out_proj_weight); layers_0_self_attn_out_proj_weight = None
addmm_1 = torch.ops.aten.addmm(layers_0_self_attn_out_proj_bias, view_5, t_1); layers_0_self_attn_out_proj_bias = view_5 = t_1 = None
view_6 = torch.ops.aten.view(addmm_1, [10, 1, 512]); addmm_1 = None
transpose_6 = torch.ops.aten.transpose(view_6, 1, 0); view_6 = None
add = torch.ops.aten.add(tensor_0, transpose_6); tensor_0 = transpose_6 = None
native_layer_norm = torch.ops.aten.native_layer_norm(add, [512], layers_0_norm1_weight, layers_0_norm1_bias, 1e-05); add = layers_0_norm1_weight = layers_0_norm1_bias = None
getitem_3 = native_layer_norm[0]; native_layer_norm = None
transpose_7 = torch.ops.aten.transpose(getitem_3, 1, 0)
transpose_8 = torch.ops.aten.transpose(tensor_1, 1, 0); tensor_1 = None
split_with_sizes = torch.ops.aten.split_with_sizes(layers_0_multihead_attn_in_proj_weight, [512, 1024]); layers_0_multihead_attn_in_proj_weight = None
getitem_6 = split_with_sizes[0]
getitem_7 = split_with_sizes[1]; split_with_sizes = None
split_with_sizes_1 = torch.ops.aten.split_with_sizes(layers_0_multihead_attn_in_proj_bias, [512, 1024]); layers_0_multihead_attn_in_proj_bias = None
getitem_8 = split_with_sizes_1[0]
getitem_9 = split_with_sizes_1[1]; split_with_sizes_1 = None
t_2 = torch.ops.aten.t(getitem_6); getitem_6 = None
view_7 = torch.ops.aten.view(transpose_7, [10, 512]); transpose_7 = None
addmm_2 = torch.ops.aten.addmm(getitem_8, view_7, t_2); getitem_8 = view_7 = t_2 = None
view_8 = torch.ops.aten.view(addmm_2, [10, 1, 512]); addmm_2 = None
t_3 = torch.ops.aten.t(getitem_7); getitem_7 = None
view_9 = torch.ops.aten.view(transpose_8, [20, 512]); transpose_8 = None
addmm_3 = torch.ops.aten.addmm(getitem_9, view_9, t_3); getitem_9 = view_9 = t_3 = None
view_10 = torch.ops.aten.view(addmm_3, [20, 1, 1024]); addmm_3 = None
split_1 = torch.ops.aten.split(view_10, 512, -1); view_10 = None
getitem_10 = split_1[0]
getitem_11 = split_1[1]; split_1 = None
view_11 = torch.ops.aten.view(view_8, [10, 8, 64]); view_8 = None
transpose_9 = torch.ops.aten.transpose(view_11, 0, 1); view_11 = None
clone_4 = torch.ops.aten.clone(getitem_10, memory_format = torch.contiguous_format); getitem_10 = None
view_12 = torch.ops.aten.view(clone_4, [20, 8, 64]); clone_4 = None
transpose_10 = torch.ops.aten.transpose(view_12, 0, 1); view_12 = None
clone_5 = torch.ops.aten.clone(getitem_11, memory_format = torch.contiguous_format); getitem_11 = None
view_13 = torch.ops.aten.view(clone_5, [20, 8, 64]); clone_5 = None
transpose_11 = torch.ops.aten.transpose(view_13, 0, 1); view_13 = None
div_1 = torch.ops.aten.div(transpose_9, 8.0); transpose_9 = None
transpose_12 = torch.ops.aten.transpose(transpose_10, -2, -1); transpose_10 = None
bmm_2 = torch.ops.aten.bmm(div_1, transpose_12); div_1 = transpose_12 = None
_softmax_1 = torch.ops.aten._softmax(bmm_2, -1, False); bmm_2 = None
bmm_3 = torch.ops.aten.bmm(_softmax_1, transpose_11); _softmax_1 = transpose_11 = None
transpose_13 = torch.ops.aten.transpose(bmm_3, 0, 1); bmm_3 = None
clone_6 = torch.ops.aten.clone(transpose_13, memory_format = torch.contiguous_format); transpose_13 = None
view_14 = torch.ops.aten.view(clone_6, [10, 512]); clone_6 = None
t_4 = torch.ops.aten.t(layers_0_multihead_attn_out_proj_weight); layers_0_multihead_attn_out_proj_weight = None
addmm_4 = torch.ops.aten.addmm(layers_0_multihead_attn_out_proj_bias, view_14, t_4); layers_0_multihead_attn_out_proj_bias = view_14 = t_4 = None
view_15 = torch.ops.aten.view(addmm_4, [10, 1, 512]); addmm_4 = None
transpose_14 = torch.ops.aten.transpose(view_15, 1, 0); view_15 = None
add_1 = torch.ops.aten.add(getitem_3, transpose_14); getitem_3 = transpose_14 = None
native_layer_norm_1 = torch.ops.aten.native_layer_norm(add_1, [512], layers_0_norm2_weight, layers_0_norm2_bias, 1e-05); add_1 = layers_0_norm2_weight = layers_0_norm2_bias = None
getitem_12 = native_layer_norm_1[0]; native_layer_norm_1 = None
t_5 = torch.ops.aten.t(layers_0_linear1_weight); layers_0_linear1_weight = None
view_16 = torch.ops.aten.view(getitem_12, [10, 512])
addmm_5 = torch.ops.aten.addmm(layers_0_linear1_bias, view_16, t_5); layers_0_linear1_bias = view_16 = t_5 = None
view_17 = torch.ops.aten.view(addmm_5, [1, 10, 2048]); addmm_5 = None
relu = torch.ops.aten.relu(view_17); view_17 = None
t_6 = torch.ops.aten.t(layers_0_linear2_weight); layers_0_linear2_weight = None
view_18 = torch.ops.aten.view(relu, [10, 2048]); relu = None
addmm_6 = torch.ops.aten.addmm(layers_0_linear2_bias, view_18, t_6); layers_0_linear2_bias = view_18 = t_6 = None
view_19 = torch.ops.aten.view(addmm_6, [1, 10, 512]); addmm_6 = None
add_2 = torch.ops.aten.add(getitem_12, view_19); getitem_12 = view_19 = None
native_layer_norm_2 = torch.ops.aten.native_layer_norm(add_2, [512], layers_0_norm3_weight, layers_0_norm3_bias, 1e-05); add_2 = layers_0_norm3_weight = layers_0_norm3_bias = None
getitem_15 = native_layer_norm_2[0]; native_layer_norm_2 = None
return getitem_15
```
After calling the torch.jit.trace() function, it generates this model:
```
def forward(self,
tensor_0: Tensor,
tensor_1: Tensor) -> Tensor:
layers_0_norm3_bias = self.layers_0_norm3_bias
layers_0_norm3_weight = self.layers_0_norm3_weight
layers_0_linear2_bias = self.layers_0_linear2_bias
layers_0_linear2_weight = self.layers_0_linear2_weight
layers_0_linear1_bias = self.layers_0_linear1_bias
layers_0_linear1_weight = self.layers_0_linear1_weight
layers_0_norm2_bias = self.layers_0_norm2_bias
layers_0_norm2_weight = self.layers_0_norm2_weight
layers_0_multihead_attn_out_proj_bias = self.layers_0_multihead_attn_out_proj_bias
layers_0_multihead_attn_out_proj_weight = self.layers_0_multihead_attn_out_proj_weight
layers_0_multihead_attn_in_proj_bias = self.layers_0_multihead_attn_in_proj_bias
layers_0_multihead_attn_in_proj_weight = self.layers_0_multihead_attn_in_proj_weight
layers_0_norm1_bias = self.layers_0_norm1_bias
layers_0_norm1_weight = self.layers_0_norm1_weight
layers_0_self_attn_out_proj_bias = self.layers_0_self_attn_out_proj_bias
layers_0_self_attn_out_proj_weight = self.layers_0_self_attn_out_proj_weight
layers_0_self_attn_in_proj_bias = self.layers_0_self_attn_in_proj_bias
layers_0_self_attn_in_proj_weight = self.layers_0_self_attn_in_proj_weight
_0 = ops.profiler._record_function_enter("FlattenModule")
_1 = torch.transpose(tensor_0, 1, 0)
_2 = torch.t(layers_0_self_attn_in_proj_weight)
_3 = torch.addmm(layers_0_self_attn_in_proj_bias, torch.view(_1, [10, 512]), _2)
_4 = torch.split(torch.view(_3, [10, 1, 1536]), 512, -1)
_5, _6, _7, = _4
_8 = torch.view(torch.clone(_5, memory_format=0), [10, 8, 64])
_9 = torch.transpose(_8, 0, 1)
_10 = torch.view(torch.clone(_6, memory_format=0), [10, 8, 64])
_11 = torch.transpose(_10, 0, 1)
_12 = torch.view(torch.clone(_7, memory_format=0), [10, 8, 64])
_13 = torch.transpose(_12, 0, 1)
_14 = torch.bmm(torch.div(_9, CONSTANTS.c0), torch.transpose(_11, -2, -1))
_15 = torch.bmm(torch._softmax(_14, -1, False), _13)
_16 = torch.clone(torch.transpose(_15, 0, 1), memory_format=0)
_17 = torch.view(_16, [10, 512])
_18 = torch.t(layers_0_self_attn_out_proj_weight)
_19 = torch.addmm(layers_0_self_attn_out_proj_bias, _17, _18)
_20 = torch.transpose(torch.view(_19, [10, 1, 512]), 1, 0)
_21, _22, _23 = torch.native_layer_norm(torch.add(tensor_0, _20), [512], layers_0_norm1_weight, layers_0_norm1_bias, 1.0000000000000001e-05)
_24 = torch.transpose(_21, 1, 0)
_25 = torch.transpose(tensor_1, 1, 0)
_26 = torch.split_with_sizes(layers_0_multihead_attn_in_proj_weight, [512, 1024])
_27, _28, = _26
_29 = torch.split_with_sizes(layers_0_multihead_attn_in_proj_bias, [512, 1024])
_30, _31, = _29
_32 = torch.t(_27)
_33 = torch.addmm(_30, torch.view(_24, [10, 512]), _32)
_34 = torch.view(_33, [10, 1, 512])
_35 = torch.t(_28)
_36 = torch.addmm(_31, torch.view(_25, [20, 512]), _35)
_37 = torch.split(torch.view(_36, [20, 1, 1024]), 512, -1)
_38, _39, = _37
_40 = torch.transpose(torch.view(_34, [10, 8, 64]), 0, 1)
_41 = torch.view(torch.clone(_38, memory_format=0), [20, 8, 64])
_42 = torch.transpose(_41, 0, 1)
_43 = torch.view(torch.clone(_39, memory_format=0), [20, 8, 64])
_44 = torch.transpose(_43, 0, 1)
_45 = torch.bmm(torch.div(_40, CONSTANTS.c0), torch.transpose(_42, -2, -1))
_46 = torch.bmm(torch._softmax(_45, -1, False), _44)
_47 = torch.clone(torch.transpose(_46, 0, 1), memory_format=0)
_48 = torch.view(_47, [10, 512])
_49 = torch.t(layers_0_multihead_attn_out_proj_weight)
_50 = torch.addmm(layers_0_multihead_attn_out_proj_bias, _48, _49)
_51 = torch.transpose(torch.view(_50, [10, 1, 512]), 1, 0)
_52, _53, _54 = torch.native_layer_norm(torch.add(_21, _51), [512], layers_0_norm2_weight, layers_0_norm2_bias, 1.0000000000000001e-05)
_55 = torch.t(layers_0_linear1_weight)
_56 = torch.addmm(layers_0_linear1_bias, torch.view(_52, [10, 512]), _55)
_57 = torch.relu(torch.view(_56, [1, 10, 2048]))
_58 = torch.t(layers_0_linear2_weight)
_59 = torch.addmm(layers_0_linear2_bias, torch.view(_57, [10, 2048]), _58)
_60 = torch.add(_52, torch.view(_59, [1, 10, 512]))
_61, _62, _63 = torch.native_layer_norm(_60, [512], layers_0_norm3_weight, layers_0_norm3_bias, 1.0000000000000001e-05)
ops.profiler._record_function_exit(_0)
return _61
```
I know that the above error may ultimately affect the accuracy of the model, so I want to confirm: what could be the specific reasons for the above error?
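For context (not part of the repro): the reported mismatch (~0.0156) is on the order of bfloat16's rounding granularity, so deviations of this size are expected whenever intermediates are computed along slightly different code paths:
```python
import torch
# bfloat16 has an 8-bit significand (7 stored mantissa bits), so values near 1.0
# are spaced about 0.0078 apart.
print(torch.finfo(torch.bfloat16).eps)  # 0.0078125
```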
### Versions
version:
Torch 1.13.0
cuda11.7
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 5 |
2,146 | 104,403 |
Inconsistencies in ONNX exporting of operation `torch.full()`
|
module: onnx, low priority, triaged, OSS contribution wanted
|
### 🐛 Describe the bug
Hello all! At our team we have been testing the ONNX exporting functionality, and we found a slight inconsistency with respect to tensor datatypes and using tensors with different datatypes through `torch.full()`.
Our main aim is to be able to use a list of tensors that we know are one-dimensional as an input to the shape of `torch.full()`. However, these tensors are `int32`, which seems not to be supported by the export (see below). We've tried some things, and we're struggling to understand the differences between three ways of treating the issue. At each alternative you'll see the exported graph, obtained with `verbose=True` when calling `torch.onnx.export`:
When converting from `int32` to `int64` through `tensor_object.long()` (final graph [here](https://gist.github.com/Icemole/3617ba90038a4e8af6efe116d06b4651)), we don't get any warning. However, when we try NOT to convert (final graph [here](https://gist.github.com/Icemole/3eedac24a108f63cb441defb7897d039)), we get the following warning (which is practically an error for us since we want the model to run inference):
```python
UserWarning: The exported ONNX model failed ONNX shape inference.
The model will not be executable by the ONNX Runtime.If this is unintended
and you believe there is a bug,please report an issue at
https://github.com/pytorch/pytorch/issues.Error reported by strict ONNX shape inference:
[ShapeInferenceError] (op_type:ConstantOfShape, node name: /ConstantOfShape):
input typestr: T1, has unsupported type: tensor(int32)
(Triggered internally at ../torch/csrc/jit/serialization/export.cpp:1407.)
_C._check_onnx_proto(proto)
```
When I try to run the model in inference mode, the model doesn't work (as expected), and fails with the following error:
```python
Traceback (most recent call last):
File "/home/nbeneitez/Documentos/work/repos/returnn_pytorch/testonnxinference.py", line 9, in <module>
session = ort.InferenceSession(onnx_model)
File "/home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/onnxruntime
/capi/onnxruntime_inference_collection.py", line 383, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/onnxruntime
/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH :
Load model from my_filename.onnx failed:This is an invalid model.
Type Error: Type 'tensor(int32)' of input parameter (/Concat_output_0) of operator
(ConstantOfShape) in node (/ConstantOfShape) is invalid.
```
A minimal example is shown below ([gist](https://gist.github.com/Icemole/55c68dc814cf20b639e2592eda8cb14e)). Note that for the code to run without any warning, at the beginning of `forward()` you should add `d1 = d1.long()`:
```python
import torch
class DummyModel(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, d1):
# To run without warnings, add: d1 = d1.long()
return torch.full([d1], 1)
dummy_model = DummyModel()
d1 = torch.tensor(2, dtype=torch.int32)
torch.onnx.export(
dummy_model,
(d1,),
f="my_filename.onnx",
verbose=True,
input_names=["d1"],
output_names=["casted_data"],
)
```
A minimal example for inference ([gist](https://gist.github.com/Icemole/ee48f313587c568c7ae460cee48008d8)):
```python
import onnxruntime as ort
import torch
import sys
onnx_model = sys.argv[1]
dummy_data = torch.randint(low=1, high=10, size=[], dtype=torch.int32)
session = ort.InferenceSession(onnx_model)
outputs_onnx = torch.FloatTensor(session.run(None, {"data": dummy_data.numpy()})[0])
```
My main questions are:
- In the latter warning, there's a mention to an `onnx::ConstantOfShape` operator. This operator does establish in the [corresponding ONNX docs](https://onnx.ai/onnx/operators/onnx__ConstantOfShape.html#type-constraints) that its shape must be given as an `int64`. Why isn't this operator logged in the final graph? **Edit**: I see now that the `ConstantOfShape` operator refers to the last operation, that stores the result in `%casted_data`. Still, the code seems to be working with PyTorch, but is failing with ONNX.
- Is this behavior of not being able to use `torch.full()` with the shape being `int32` values expected? If so, what's the best way of handling this? With the reported `tensor.long()` approach?
Thank you in advance!
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 23.04 (x86_64)
GCC version: (Ubuntu 12.2.0-17ubuntu1) 12.2.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.37
Python version: 3.10.12 (main, Jun 16 2023, 17:09:59) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-23-generic-x86_64-with-glibc2.37
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-1340P
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 46%
CPU max MHz: 4600,0000
CPU min MHz: 400,0000
BogoMIPS: 4377,60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 448 KiB (12 instances)
L1i cache: 640 KiB (12 instances)
L2 cache: 9 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchdata==0.6.1
[pip3] triton==2.0.0
[conda] Could not collect
```
Note: it is important that the version is 2.0.1! As far as I remember, running the code in version 1.13.1 doesn't yield the `UserWarning` explicitly, although the model still cannot be used for inference.
| 3 |
2,147 | 104,389 |
distributed hooks want to support custom device
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
For distributed hooks, there are some hard-coded device assumptions. For custom devices (privateuse1 backend, CUDA-like devices), we also want these hooks to be supported.
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
2,148 | 104,367 |
FakeTensor can't handle meta impls that perform device conversion
|
triaged, module: fakeTensor
|
Minified after finding an internal example from @suo.
```
import torch
from torch.fx.experimental.proxy_tensor import make_fx
def foo_meta(x):
return x.to(device='cpu')
lib = torch.library.Library("test", "FRAGMENT")
lib.define("foo(Tensor x) -> Tensor")
lib.impl("test::foo", foo_meta, "Meta")
def f(x):
return torch.ops.test.foo.default(x)
x = torch.ones(4, device='cuda')
fx_g = make_fx(f, tracing_mode='symbolic')(x)
print(fx_g.code)
```
Fails with:
```
File "/scratch/hirsheybar1/work/pytorch/tmp3.py", line 6, in foo_meta
return x.to(device='cpu')
File "/scratch/hirsheybar1/work/pytorch/torch/_decomp/decompositions.py", line 1646, in _to_copy
x = torch._prims.device_put(x, device)
File "/scratch/hirsheybar1/work/pytorch/torch/_ops.py", line 429, in __call__
return self._op(*args, **kwargs or {})
File "/scratch/hirsheybar1/work/pytorch/torch/_prims/__init__.py", line 2069, in _device_put_meta
return TensorMeta(a, device=utils.canonicalize_device(device))
File "/scratch/hirsheybar1/work/pytorch/torch/_prims/__init__.py", line 258, in TensorMeta
return torch.empty_strided(shape, strides, dtype=dtype, device=device)
RuntimeError: /scratch/hirsheybar1/work/pytorch/build/aten/src/ATen/RegisterCPU.cpp:5200: SymIntArrayRef expected to contain only concrete integers
```
FakeTensor treads a fine line by trying to ensure that any operators that run underneath it dispatch to meta. It looks like that doesn't work properly when performing device conversion ops inside of your meta function, like `.to()`.
| 4 |
2,149 | 104,366 |
DISABLED test_conv3d_64bit_indexing_cuda (__main__.TestConvolutionNNDeviceTypeCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on ROCm5.5 CI upgrade. This test is also being skipped in ROCm5.4.2 upstream CI.
Copied from: https://github.com/pytorch/pytorch/issues/104348
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
2,150 | 104,361 |
ReduceLROnPlateau will throw IndexError: list index out of range with modified optimizer's param_groups.
|
module: optimizer, triaged, actionable, module: LrScheduler
|
### 🐛 Describe the bug
I am aware of similar posts which already notify this issue:
- https://github.com/pytorch/pytorch/issues/20997
- https://github.com/pytorch/pytorch/issues/53712
- And a lightning issue: https://github.com/Lightning-AI/lightning/issues/8727
But it seems that the error should be caught sooner to help the user understand what's going on.
Using ReduceLROnPlateau: https://github.com/pytorch/pytorch/blob/044a8e3305bdff28780cdab757b859abf2fc76d9/torch/optim/lr_scheduler.py#L913
If we use [Optimizer.add_param_group](https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.add_param_group.html#torch.optim.Optimizer.add_param_group(param_group)) on the optimizer attached to the scheduler while the ReduceLROnPlateau is instantiated, the method [_reduce_lr(self, epoch)](https://github.com/pytorch/pytorch/blob/044a8e3305bdff28780cdab757b859abf2fc76d9/torch/optim/lr_scheduler.py#L1033), once called, will throw an **IndexError: list index out of range** due to this line [`new_lr = max(old_lr * self.factor, self.min_lrs[i])`](https://github.com/pytorch/pytorch/blob/044a8e3305bdff28780cdab757b859abf2fc76d9/torch/optim/lr_scheduler.py#L1036C12-L1036C64), which tries to access an index of `min_lrs` that does not exist.
This is something that usually happens in a finetuning process when we unfreeze some layers of a network and add a new group of parameters to the optimizer attached to the scheduler.
The property `min_lrs` of the ReduceLROnPlateau instance is already populated at this point based on the previous number of groups in the optimizer (the length of `param_groups`) and will not be updated accordingly.
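A minimal sketch that reproduces the failure mode (the tiny model and loss values are illustrative only):
```python
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD([{"params": model.weight}], lr=0.1)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, patience=0)

# Unfreeze / add more parameters after the scheduler was created:
opt.add_param_group({"params": model.bias, "lr": 0.1})

for loss in (1.0, 1.0, 1.0):
    sched.step(loss)  # eventually calls _reduce_lr -> IndexError: list index out of range
```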
To make it clearer that the problem comes from the fact that `len(self.min_lrs) != len(optimizer.param_groups)` we could notify the user with some ValueError encapsulated in a property.
```python
class ReduceLROnPlateau:
    def __init__(self, optimizer, mode='min', factor=0.1, patience=10,
                 threshold=1e-4, threshold_mode='rel', cooldown=0,
                 min_lr=0, eps=1e-8, verbose=False):
        ....
        self.optimizer = optimizer
        self._min_lrs = None
        self.min_lrs = min_lr

    @property
    def min_lrs(self):
        if len(self._min_lrs) != len(self.optimizer.param_groups):
            raise ValueError("expected `min_lrs` length of {}, got {}. The number of elements in `min_lrs` must match the length of the {}'s `param_groups`. Set the `min_lrs` of the scheduler each time the optimizer's add_param_group() method is called.".format(
                len(self.optimizer.param_groups), len(self._min_lrs), self.optimizer.__class__))
        return self._min_lrs

    @min_lrs.setter
    def min_lrs(self, min_lrs):
        if isinstance(min_lrs, (list, tuple)):
            if len(min_lrs) != len(self.optimizer.param_groups):
                raise ValueError("expected {} min_lrs, got {}".format(
                    len(self.optimizer.param_groups), len(min_lrs)))
            self._min_lrs = list(min_lrs)
        else:
            self._min_lrs = [min_lrs] * len(self.optimizer.param_groups)
    ....
```
This is a general idea and could even be dealt with using a list extension that matches the length of the param_groups instead of raising an error.
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.2 (main, Feb 9 2023, 12:03:02) [Clang 14.0.0 (clang-1400.0.29.102)] (64-bit runtime)
Python platform: macOS-13.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2
[conda] Could not collect
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 5 |
2,151 | 104,360 |
[testing only] Enable inlining modules by default
|
Stale, release notes: quantization, module: inductor, module: dynamo, ciflow/inductor, keep-going
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #104360
* #103676
* #103987
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel @anijain2305
| 6 |
2,152 | 104,342 |
Segmentation error while using F.cross_entropy with mps (for code that works fine with device="cpu")
|
triaged, module: mps
|
### 🐛 Describe the bug
import torch
import torch.nn as nn
from torch.nn import functional as F
batch_size = 32
block_size = 8
device = "mps"
vocab_size = 1000
X = torch.randint(0, vocab_size, (32, 8), device = device)
Y = torch.randint(0, vocab_size, (32, 8), device = device)
token_embedding_table = nn.Embedding(vocab_size, vocab_size, device=device)
logits = token_embedding_table(X)
B, T, C = logits.shape
logits = logits.view(B*T, C)
targets = Y.view(B*T)
loss = F.cross_entropy(logits, targets)
### Versions
vrindavan@Vrindavans-MBP Code % python3 collect_env.py
Collecting environment information...
PyTorch version: 2.0.0.dev20230208
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.25.2
Libc version: N/A
Python version: 3.9.6 (default, May 7 2023, 23:32:44) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230208
[pip3] torchaudio==2.0.0.dev20230207
[pip3] torchvision==0.15.0.dev20230207
[conda] Could not collect
vrindavan@Vrindavans-MBP Code %
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
2,153 | 104,328 |
DISABLED test_backward_ddp_inside (__main__.TensorPipeDdpUnderDistAutogradTest)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_backward_ddp_inside&suite=TensorPipeDdpUnderDistAutogradTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14609542854).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_backward_ddp_inside`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/rpc/test_tensorpipe_agent.py`
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,154 | 104,322 |
Illegal Memory Access on H100 `TestSparseCompressedTritonKernelsCUDA.test_triton_sampled_addmm_block_size_16_cuda_bfloat16`
|
module: sparse, module: cuda, triaged
|
### 🐛 Describe the bug
The test case seems to be added by https://github.com/pytorch/pytorch/pull/101163
```
$ cd /path/to/pytorch/test
$ pytest test_sparse_csr.py -k test_triton_sampled_addmm_block_size_16_cuda_bfloat16 -v
```
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/local/lib/python3.10/unittest/case.py", line 587, in run
self._callSetUp()
File "/usr/local/lib/python3.10/unittest/case.py", line 546, in _callSetUp
self.setUp()
File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 2358, in setUp
set_rng_seed(SEED)
File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 1486, in set_rng_seed
torch.manual_seed(seed)
File "/opt/pytorch/pytorch/torch/random.py", line 40, in manual_seed
torch.cuda.manual_seed_all(seed)
File "/opt/pytorch/pytorch/torch/cuda/random.py", line 114, in manual_seed_all
_lazy_call(cb, seed_all=True)
File "/opt/pytorch/pytorch/torch/cuda/__init__.py", line 202, in _lazy_call
callable()
File "/opt/pytorch/pytorch/torch/cuda/random.py", line 112, in cb
default_generator.manual_seed(seed)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+git803c144
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.11 (main, Jun 27 2023, 00:48:59) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7413 24-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3630.8101
CPU min MHz: 1500.0000
BogoMIPS: 5300.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.1.0a0+git803c144
[pip3] torchvision==0.16.0a0+52eb503
[pip3] triton==2.1.0
[conda] Could not collect
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @ptrblck
| 0 |
2,155 | 104,315 |
Torch randperm with device mps does not sample exactly uniformly from all possible permutations
|
triaged, module: random, module: mps
|
### 🐛 Describe the bug
I don't think this bug is very urgent but I just wanted to capture a note stemming from discussion in #104171 so it wasn't forgotten.
Currently the MPS fused(?) kernel for randperm works by generating, for each index, a random value between `min_int64` and `max_int64` and then sorting the indexes by these values in ascending order. Unfortunately, due to collisions, the resulting distribution over permutations of the indexes will not be perfectly uniform (see [this comment](https://github.com/pytorch/pytorch/pull/104171#issuecomment-1608754004) for a more in-depth explanation).
The CUDA kernel solves this by doing an extra step after the sort to shuffle all "islands" of consecutive indexes with the same value. See the [algorithm description](https://github.com/pytorch/pytorch/blob/05ebd538d4b064b19ba960c2b198bf991a49ca89/aten/src/ATen/native/cuda/Randperm.cu#L230) for a more detailed view.
This extra step would likely have to be implemented as a custom metal kernel but could probably just follow the structure of the [cuda implementation](https://github.com/pytorch/pytorch/blob/05ebd538d4b064b19ba960c2b198bf991a49ca89/aten/src/ATen/native/cuda/Randperm.cuh#L13).
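For what it's worth, a minimal sketch of how one could empirically compare permutation frequencies against the uniform expectation (assumes an MPS-capable machine and falls back to CPU otherwise; note that with 64-bit sort keys the collision probability is so small that a test like this will not actually detect the bias, it only shows the shape of such a check):
```python
import math
from collections import Counter

import torch

# Sample many small permutations and compare each permutation's observed
# frequency with the uniform expectation num_samples / n!.
device = "mps" if torch.backends.mps.is_available() else "cpu"
n, num_samples = 4, 10_000

counts = Counter(
    tuple(torch.randperm(n, device=device).tolist()) for _ in range(num_samples)
)
expected = num_samples / math.factorial(n)
for perm, count in sorted(counts.items()):
    print(perm, count, f"(expected ~{expected:.0f})")
```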
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+git2967116
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.11.3 (main, May 15 2023, 18:01:31) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.1.0a0+git2967116
[conda] numpy 1.25.0 pypi_0 pypi
[conda] torch 2.1.0a0+git2967116 dev_0 <develop>
```
cc @pbelevich @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
2,156 | 104,301 |
Attempt to use minifier on sam model fails
|
triaged, oncall: pt2, module: dynamic shapes, module: minifier
|
### 🐛 Describe the bug
Calling `TORCHDYNAMO_REPRO_AFTER="dynamo" TORCHDYNAMO_REPRO_LEVEL=4 gpurun2 python experimental/segment_anything/accuracy_spot_check.py --quantized=0 --compiled=2 &>log.txt` results in an error, whereas the model runs without issue when called normally; torch.compile, however, returns a model with poor accuracy.
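For reference, the same repro settings can also be set programmatically via the dynamo config (shown here on a toy function rather than the actual `accuracy_spot_check.py` invocation; the model and flags used in the real run are as described above):
```python
import torch
import torch._dynamo

# In-code counterparts of TORCHDYNAMO_REPRO_AFTER="dynamo" and
# TORCHDYNAMO_REPRO_LEVEL=4 (level 4 = accuracy minification).
torch._dynamo.config.repro_after = "dynamo"
torch._dynamo.config.repro_level = 4


@torch.compile
def f(x):
    return torch.nn.functional.gelu(x) * 2


f(torch.randn(8, 8, device="cuda" if torch.cuda.is_available() else "cpu"))
```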
### Error logs
Namespace(model_type='vit_h', checkpoint_path='/data/home/vasiliy/cluster/sam_dataset/sam_vit_h_4b8939.pth', run_shadow_sqnr=0, quantized=0, compiled=2, image_dir='/data/home/cdhernandez/cluster/saf/notebooks/images', run_name='test', save_model=0, generate_images=0)
fp32-dog.png
fp32-truck.png
fp32-groceries.png
AUTOTUNE mm(4900x1280, 1280x3840)
triton_mm_8 0.2427 ms 100.0%
mm 0.2468 ms 98.3%
triton_mm_9 0.2478 ms 97.9%
triton_mm_10 0.2990 ms 81.2%
triton_mm_11 0.3000 ms 80.9%
triton_mm_7 0.3308 ms 73.4%
triton_mm_15 0.3502 ms 69.3%
triton_mm_14 0.4321 ms 56.2%
triton_mm_17 0.5437 ms 44.6%
triton_mm_16 0.6636 ms 36.6%
SingleProcess AUTOTUNE takes 6.3883 seconds
AUTOTUNE mm(4900x1280, 1280x1280)
mm 0.0788 ms 100.0%
triton_mm_69 0.0942 ms 83.7%
triton_mm_68 0.0952 ms 82.8%
triton_mm_71 0.1137 ms 69.4%
triton_mm_70 0.1147 ms 68.7%
triton_mm_67 0.1270 ms 62.1%
triton_mm_75 0.1280 ms 61.6%
triton_mm_74 0.1526 ms 51.7%
triton_mm_77 0.2017 ms 39.1%
triton_mm_72 0.2304 ms 34.2%
SingleProcess AUTOTUNE takes 6.0125 seconds
AUTOTUNE mm(4096x1280, 1280x5120)
mm 0.2222 ms 100.0%
triton_mm_80 0.2703 ms 82.2%
triton_mm_81 0.2724 ms 81.6%
triton_mm_83 0.3267 ms 68.0%
triton_mm_82 0.3277 ms 67.8%
triton_mm_79 0.3799 ms 58.5%
triton_mm_87 0.3871 ms 57.4%
triton_mm_86 0.4833 ms 46.0%
triton_mm_89 0.6031 ms 36.8%
triton_mm_84 0.7322 ms 30.3%
SingleProcess AUTOTUNE takes 6.0187 seconds
AUTOTUNE mm(4096x5120, 5120x1280)
mm 0.2038 ms 100.0%
triton_mm_92 0.2652 ms 76.8%
triton_mm_93 0.2652 ms 76.8%
triton_mm_95 0.3082 ms 66.1%
triton_mm_94 0.3092 ms 65.9%
triton_mm_99 0.3686 ms 55.3%
triton_mm_91 0.4045 ms 50.4%
triton_mm_98 0.4680 ms 43.5%
triton_mm_101 0.6605 ms 30.9%
triton_mm_96 0.7014 ms 29.1%
SingleProcess AUTOTUNE takes 6.1106 seconds
AUTOTUNE mm(4096x1280, 1280x3840)
mm 0.1638 ms 100.0%
triton_mm_680 0.2038 ms 80.4%
triton_mm_681 0.2048 ms 80.0%
triton_mm_683 0.2468 ms 66.4%
triton_mm_682 0.2478 ms 66.1%
triton_mm_679 0.2785 ms 58.8%
triton_mm_687 0.2918 ms 56.1%
triton_mm_686 0.3574 ms 45.8%
triton_mm_689 0.4618 ms 35.5%
triton_mm_688 0.5550 ms 29.5%
SingleProcess AUTOTUNE takes 5.9864 seconds
AUTOTUNE mm(4096x1280, 1280x1280)
mm 0.0625 ms 100.0%
triton_mm_740 0.0768 ms 81.3%
triton_mm_741 0.0768 ms 81.3%
triton_mm_742 0.0901 ms 69.3%
triton_mm_743 0.0901 ms 69.3%
triton_mm_747 0.1055 ms 59.2%
triton_mm_739 0.1106 ms 56.5%
triton_mm_746 0.1219 ms 51.3%
triton_mm_749 0.1731 ms 36.1%
triton_mm_744 0.1925 ms 32.4%
SingleProcess AUTOTUNE takes 5.9320 seconds
reduction over non-contiguous dims
[2023-06-27 21:55:13,070] torch._dynamo.debug_utils: [ERROR] While minifying the program in accuracy minification mode, ran into a runtime exception which is likely an unrelated issue. Skipping this graph
Traceback (most recent call last):
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/debug_utils.py", line 453, in backend_accuracy_fails
compiled_gm = compiler_fn(
File "/fsx/users/cdhernandez/pytorch/torch/__init__.py", line 1530, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/fsx/users/cdhernandez/pytorch/torch/_inductor/compile_fx.py", line 730, in compile_fx
return compile_fx(
File "/fsx/users/cdhernandez/pytorch/torch/_inductor/compile_fx.py", line 912, in compile_fx
return aot_autograd(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/backends/common.py", line 55, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 3713, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 3202, in create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 715, in inner
flat_f_outs = f(*flat_f_args)
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 3319, in functional_call
out = Interpreter(mod).run(*args[params_len:], **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/fx/interpreter.py", line 138, in run
self.env[node] = self.run_node(node)
File "/fsx/users/cdhernandez/pytorch/torch/fx/interpreter.py", line 195, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/fx/interpreter.py", line 312, in call_module
return submod(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
While executing %getattr_l__self___blocks_0___1___attn_qkv : [#users=1] = call_module[target=getattr_L__self___blocks_0___1___attn_qkv](args = (%view_1,), kwargs = {})
Original traceback:
File "/fsx/users/cdhernandez/saf/segment_anything/modeling/image_encoder.py", line 111, in forward
x = blk(x)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/saf/segment_anything/modeling/image_encoder.py", line 171, in forward
x = self.attn(x)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/saf/segment_anything/modeling/image_encoder.py", line 223, in forward
qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
[2023-06-27 21:55:13,190] torch.fx.experimental.symbolic_shapes: [WARNING] 0.0: Failing guard allocated at:
File "/fsx/users/cdhernandez/ao_benchmarks/experimental/segment_anything/accuracy_spot_check.py", line 336, in <module>
run()
File "/fsx/users/cdhernandez/pytorch/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/fsx/users/cdhernandez/ao_benchmarks/experimental/segment_anything/accuracy_spot_check.py", line 309, in run
act[args.run_name+"out"]["image-encoder"].append(sam.image_encoder(input).detach())
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/eval_frame.py", line 295, in _fn
return fn(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/eval_frame.py", line 448, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 526, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 127, in _fn
return fn(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert
return _compile(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 430, in _compile
out_code = transform_code_object(code, transform)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
transformations(instructions, code_options)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 415, in transform
tracer.run()
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/symbolic_convert.py", line 2026, in run
super().run()
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/symbolic_convert.py", line 708, in run
and self.step()
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/symbolic_convert.py", line 668, in step
getattr(self, inst.opname)(inst)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/symbolic_convert.py", line 2114, in RETURN_VALUE
self.output.compile_subgraph(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/output_graph.py", line 763, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/fsx/users/cdhernandez/conda/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/output_graph.py", line 859, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/output_graph.py", line 911, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/repro/after_dynamo.py", line 81, in debug_wrapper
if backend_accuracy_fails(gm, example_inputs, compiler_fn):
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/debug_utils.py", line 453, in backend_accuracy_fails
compiled_gm = compiler_fn(
File "/fsx/users/cdhernandez/pytorch/torch/__init__.py", line 1530, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/fsx/users/cdhernandez/pytorch/torch/_inductor/compile_fx.py", line 730, in compile_fx
return compile_fx(
File "/fsx/users/cdhernandez/pytorch/torch/_inductor/compile_fx.py", line 912, in compile_fx
return aot_autograd(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/backends/common.py", line 55, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 3713, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 3202, in create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 715, in inner
flat_f_outs = f(*flat_f_args)
File "/fsx/users/cdhernandez/pytorch/torch/_functorch/aot_autograd.py", line 3319, in functional_call
out = Interpreter(mod).run(*args[params_len:], **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/fx/interpreter.py", line 138, in run
self.env[node] = self.run_node(node)
File "/fsx/users/cdhernandez/pytorch/torch/fx/interpreter.py", line 195, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/fx/interpreter.py", line 312, in call_module
return submod(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
File "/fsx/users/cdhernandez/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_subclasses/fake_tensor.py", line 1161, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_subclasses/fake_tensor.py", line 1381, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_subclasses/fake_tensor.py", line 612, in conv
conv_backend = torch._C._select_conv_backend(**kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/fx/experimental/symbolic_shapes.py", line 852, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint)
Traceback (most recent call last):
File "/fsx/users/cdhernandez/ao_benchmarks/experimental/segment_anything/accuracy_spot_check.py", line 336, in <module>
run()
File "/fsx/users/cdhernandez/pytorch/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/fsx/users/cdhernandez/ao_benchmarks/experimental/segment_anything/accuracy_spot_check.py", line 309, in run
act[args.run_name+"out"]["image-encoder"].append(sam.image_encoder(input).detach())
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/eval_frame.py", line 295, in _fn
return fn(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/eval_frame.py", line 448, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 526, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 127, in _fn
return fn(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert
return _compile(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/convert_frame.py", line 478, in _compile
check_fn = CheckFunctionManager(
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/guards.py", line 863, in __init__
guard.create(local_builder, global_builder)
File "/fsx/users/cdhernandez/pytorch/torch/_guards.py", line 208, in create
return self.create_fn(self.source.select(local_builder, global_builder), self)
File "/fsx/users/cdhernandez/pytorch/torch/_dynamo/guards.py", line 540, in SHAPE_ENV
guards = output_graph.shape_env.produce_guards(
File "/fsx/users/cdhernandez/pytorch/torch/fx/experimental/symbolic_shapes.py", line 2520, in produce_guards
guard_expr = ShapeGuardPrinter(symbol_to_source, source_ref, self.var_to_sources).doprint(g)
File "/fsx/users/cdhernandez/conda/lib/python3.9/site-packages/sympy-1.12rc1-py3.9.egg/sympy/printing/printer.py", line 292, in doprint
return self._str(self._print(expr))
File "/fsx/users/cdhernandez/conda/lib/python3.9/site-packages/sympy-1.12rc1-py3.9.egg/sympy/printing/printer.py", line 331, in _print
return printmethod(expr, **kwargs)
File "/fsx/users/cdhernandez/conda/lib/python3.9/site-packages/sympy-1.12rc1-py3.9.egg/sympy/printing/str.py", line 778, in _print_Relational
return '%s %s %s' % (self.parenthesize(expr.lhs, precedence(expr)),
File "/fsx/users/cdhernandez/conda/lib/python3.9/site-packages/sympy-1.12rc1-py3.9.egg/sympy/printing/str.py", line 38, in parenthesize
return self._print(item)
File "/fsx/users/cdhernandez/conda/lib/python3.9/site-packages/sympy-1.12rc1-py3.9.egg/sympy/printing/printer.py", line 331, in _print
return printmethod(expr, **kwargs)
File "/fsx/users/cdhernandez/pytorch/torch/fx/experimental/symbolic_shapes.py", line 1493, in _print_Symbol
assert self.symbol_to_source.get(expr), (
AssertionError: s1 (could be from ['__meta_utils_unknown_tensor527.size()[2]', '__meta_utils_unknown_tensor527.size()[3]']) not in {s1: []}. If this assert is failing, it could be due to the issue described in https://github.com/pytorch/pytorch/pull/90665
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
srun: error: a100-st-p4d24xlarge-49: task 0: Exited with exit code 1
### Minified repro
minifier itself errors, repro info above
### Versions
(base) cdhernandez@ip-10-200-79-136:/fsx/users/cdhernandez/ao_benchmarks$ gpurun2 python3 collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0+git1dba81f
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2999.998
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0a0+git1dba81f
[pip3] torchvision==0.16.0a0+e5bf7cf
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2023.0.0 h06a4308_25399
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0a0+git1dba81f dev_0 <develop>
[conda] torchvision 0.16.0a0+e5bf7cf dev_0 <develop>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
2,157 | 104,297 |
torch.distributed.all_to_all_single & alltoall_base, size limit INT_MAX
|
oncall: distributed
|
### 🐛 Describe the bug
I am using the torch.distributed.all_to_all_single function to do all-to-all communication. My input and output tensors are both 2D. From the error message, it seems this function only supports tensors whose size is <= INT_MAX. Moreover, the `size` being checked is not the `length` of dimension 0; it is actually the total number of elements of the tensor. However, from my reading of the source code, I think it should be the length of the first dimension of the tensor. I did find such a check in an older version (branch `1x1`, function `computeLengthsAndOffsets`), but not in PyTorch 2.0.1. In PyTorch 2.0.1 I also see that the C++ code uses `int64_t` as the datatype, so there should not be an INT_MAX limitation.
I want to know whether there is anything wrong with my software version or code. I have some self-defined functions and the whole script is long, so it is not easy to upload the code for reproduction, but I have printed some information about the input/output tensors and splits below.
Thank you for your attention and help.
```python
# all_to_all_single
print_all("rank, ", my_rank, "shape[input|output], ", input.shape, output.shape)
print_all("rank, ", my_rank, "type[input|output], ", input.dtype, output.dtype)
print_all("rank, ", my_rank, "input split, ", batch_split_lengths)
print_all("rank, ", my_rank, "output split, ", table_split_lengths)
req = dist.all_to_all_single(
output,
input,
output_split_sizes = table_split_lengths,
input_split_sizes = batch_split_lengths,
async_op=True)
```
Here is my error report.
```
rank, 1 shape[input|output], torch.Size([26214400, 128]) torch.Size([29491200, 128])
rank, 1 type[input|output], torch.uint8 torch.uint8
rank, 1 input split, [6553600, 6553600, 6553600, 6553600]
rank, 1 output split, [9830400, 6553600, 6553600, 6553600]
rank, 3 shape[input|output], torch.Size([26214400, 128]) torch.Size([29491200, 128])
rank, 3 type[input|output], torch.uint8 torch.uint8
rank, 3 input split, [6553600, 6553600, 6553600, 6553600]
rank, 3 output split, [9830400, 6553600, 6553600, 6553600]
rank, 2 shape[input|output], torch.Size([26214400, 128]) torch.Size([29491200, 128])
rank, 2 type[input|output], torch.uint8 torch.uint8
rank, 2 input split, [6553600, 6553600, 6553600, 6553600]
rank, 2 output split, [9830400, 6553600, 6553600, 6553600]
16:36:53
Using All2All_v_Req
embedding len is, 128
rank, 0 shape[input|output], torch.Size([39321600, 128]) torch.Size([29491200, 128])
rank, 0 type[input|output], torch.uint8 torch.uint8
rank, 0 input split, [9830400, 9830400, 9830400, 9830400]
rank, 0 output split, [9830400, 6553600, 6553600, 6553600]
Traceback (most recent call last):
File "/geode2/home/u030/haofeng/BigRed200/dlrm/ext_dist_quan.py", line 63, in <module>
Traceback (most recent call last):
File "/geode2/home/u030/haofeng/BigRed200/dlrm/ext_dist_quan.py", line 63, in <module>
a2q_v_req = ext_dist.alltoall_v(send_tensor, send_cnt = send_cnt, receive_cnt = receive_cnt)
a2q_v_req = ext_dist.alltoall_v(send_tensor, send_cnt = send_cnt, receive_cnt = receive_cnt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/geode2/home/u030/haofeng/BigRed200/dlrm/extend_distributed.py", line 693, in alltoall_v
output = All2All_v_Req.apply(a2a_info, inputs) # I remove * , it used to be *inputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/N/slate/haofeng/anaconda3/envs/new_dlrm/lib/python3.11/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/geode2/home/u030/haofeng/BigRed200/dlrm/extend_distributed.py", line 572, in forward
req = dist.all_to_all_single(
^^^^^^^^^^^^^^^^^^^^^^^
File "/N/slate/haofeng/anaconda3/envs/new_dlrm/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 3155, in all_to_all_single
work = default_pg.alltoall_base(
^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: length <= std::numeric_limits<int>::max() && offset <= std::numeric_limits<int>::max() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1682343970094/work/torch/csrc/distributed/c10d/Utils.hpp":463, please report a bug to PyTorch. Length or offset larger than INT_MAX not supported
```
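For what it's worth, the flattened element count of the rank-0 output above does exceed INT_MAX, which is consistent with the failing assert; a quick sketch of the arithmetic (shapes taken from the printout above):
```python
import torch

# The check appears to be applied to the total number of elements,
# not to the length of dimension 0.
output_shape = (29_491_200, 128)            # rank-0 output shape from the log
numel = output_shape[0] * output_shape[1]   # 3_774_873_600
int_max = torch.iinfo(torch.int32).max      # 2_147_483_647
print(numel, int_max, numel > int_max)      # ... True
```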
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: SUSE Linux Enterprise Server 15 SP3 (x86_64)
GCC version: (SUSE Linux) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.3 (main, May 15 2023, 15:45:52) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.3.18-150300.59.90-default-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.3.58
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No devices found.
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7713 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2224.573
CPU max MHz: 2000.0000
CPU min MHz: 1500.0000
BogoMIPS: 3992.40
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] numpy 1.25.0 pypi_0 pypi
[conda] pytorch 2.0.1 py3.11_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtriton 2.0.0 py311 pytorch
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,158 | 104,296 |
affine_grid and grid_sample operators merge/acceleration
|
module: performance, feature, triaged
|
### 🚀 The feature, motivation and pitch
Hi,
To warp some data according to a (batch) of affine transformations, two functions called sequentially need to be used:
1. [affine_grid](https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html) to calculate the transformed coordinates followed by
2. [grid_sample](https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html#torch.nn.functional.grid_sample) to do the actual warping.
Would it also make sense to have the option of a function that does the two operations in a single pass, rather than having to call the two sequentially?
The advantage would be purely speed: warping according to an affine transformation would avoid storing the result of affine_grid (shape (N, H, W, 2) for a spatial warp); instead the transformed coordinates could be calculated locally and then used immediately to warp the input data.
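For concreteness, a minimal sketch of the current two-call pattern (sizes here are arbitrary); the intermediate `grid` of shape (N, H, W, 2) is what a fused operator could avoid materializing:
```python
import torch
import torch.nn.functional as F

N, C, H, W = 8, 3, 64, 64
x = torch.randn(N, C, H, W)
theta = torch.eye(2, 3).unsqueeze(0).repeat(N, 1, 1)  # batch of (2, 3) affine matrices

# Step 1: materialize the sampling grid, shape (N, H, W, 2).
grid = F.affine_grid(theta, size=(N, C, H, W), align_corners=False)
# Step 2: warp the input using that grid.
warped = F.grid_sample(x, grid, align_corners=False)

# A hypothetical fused call (name and signature are illustrative only) would
# compute the grid coordinates on the fly instead of storing `grid`:
#   warped = F.warp_affine(x, theta, size=(N, C, H, W))
```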
I am exploring the possibility of working on the topic and wanted to know if a contribution in this direction could be useful.
Thanks in advance for your feedback
### Alternatives
_No response_
### Additional context
_No response_
| 29 |
2,159 | 104,289 |
getattr on `__slots__` object potentially suspicious
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
@jansel pointed out that https://github.com/pytorch/pytorch/blob/afc788a99c1853e815283854ef6f168b250eaf2c/torch/_dynamo/variables/user_defined.py#L367 looked suspicious. The check should be something like: "if the object has `__slots__` AND no custom `__getattr__`, then we know how to handle it; otherwise, the getattr may be unsafe".
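A toy illustration (plain Python, not dynamo code) of why the presence of `__slots__` alone is not enough: a class can declare `__slots__` and still run arbitrary code through a custom `__getattr__`:
```python
class Plain:
    __slots__ = ("x",)

    def __init__(self):
        self.x = 1


class Sneaky:
    __slots__ = ("x",)

    def __init__(self):
        self.x = 1

    def __getattr__(self, name):
        # Runs on any attribute miss, so a plain getattr() is no longer a
        # side-effect-free slot lookup.
        print(f"side effect while looking up {name!r}")
        return 42


print(getattr(Plain(), "x"))   # safe: resolved directly from the slot
print(getattr(Sneaky(), "y"))  # falls through to the custom __getattr__
```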
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
2,160 | 104,284 |
`F.conv1d` and `F.conv2d` propagate `nan`'s incorrectly when minibatch > 15
|
needs reproduction, module: nn, triaged, module: macos, module: NaNs and Infs, module: arm
|
### 🐛 Describe the bug
Both `F.conv1d` and `F.conv2d` propagate `nan`'s incorrectly when minibatch > 15.
The test function below checks if the number of `nan`'s is the same for a minibatch of 15 and a minibatch of 16.
```python
import torch
def test_conv(in_channels, out_channels, kernel_size, input_size):
weight = torch.randn(out_channels, in_channels, *kernel_size)
input = torch.randn(16, in_channels, *input_size)
input[...,-1:] = float("nan")
assert len(kernel_size) == len(input_size)
if len(kernel_size) == 1:
conv_fun = torch.nn.functional.conv1d
elif len(kernel_size) == 2:
conv_fun = torch.nn.functional.conv2d
elif len(kernel_size) == 3:
conv_fun = torch.nn.functional.conv3d
output1 = conv_fun(input[:15], weight)
output2 = conv_fun(input, weight)
assert output1[:15].isnan().sum() == output2[:15].isnan().sum()
```
Some examples for `F.conv1d` and `F.conv2d` that fail are:
```python
test_conv(8, 6, (4,), (10,))
test_conv(4, 5, (4, 8), (6, 16))
```
However, I have not been able to replicate this with `F.conv3d`.
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.6
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 (main, Aug 29 2022, 10:06:59) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy==1.3.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.0
[pip3] torchtext==0.13.1
[pip3] torchvision==0.15.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| 7 |
2,161 | 104,282 |
Rename `topic: not user facing` to `release notes: not user facing`
|
triaged
|
This has some implications for the release notes tooling.
See https://github.com/pytorch/pytorch/issues/101694 for details.
| 0 |
2,162 | 104,265 |
torch._dynamo.exc.TorchRunTimeError in get_fake_value while performing quantization aware training
|
oncall: quantization, triaged, oncall: pt2
|
### 🐛 Describe the bug
I've been trying to implement quantization-aware training on a convnext_small model. It errors out with `RuntimeError: Failed running call_module self_loss_fn(*(FakeTensor(FakeTensor(..., device='meta', size=(256, 1)), cuda:0), FakeTensor(FakeTensor(..., device='meta', size=(256,), dtype=torch.float64), cuda:0)), **{}): RuntimeError: gather(): Expected dtype int64 for index, but got torch.float64`. I'm not sure what is causing this error or how to fix it. Any leads about what is going on here would be very helpful.
**Sample code to reproduce the problem**
```
class ConvNextSmallQAT(pl.LightningModule):
def __init__(self, num_classes):
super().__init__()
self.model = convnext_small(weights="DEFAULT")
self.model.classifier[2] = torch.nn.Linear(
in_features=self.model.classifier[2].in_features, out_features=num_classes
)
self.quant = QuantStub()
self.dequant = DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.model(x)
x = self.dequant(x)
return x
class ExperimentExectutor:
def __init__(
self,
experiment_config,
num_gpus: int = 1,
num_workers: int = 8,
convert_model: bool = True,
):
self.batch_size: int = experiment_config.batch_size
self.max_steps: int = experiment_config.max_steps
self.optimizer: torch.optim = experiment_config.optimizer
self.model: torch.nn.Module = experiment_config.model
self.model_settings: dict = experiment_config.model_settings
self.data: pl.LightningDataModule = experiment_config.data
self.num_gpus: int = num_gpus
self.num_workers: int = num_workers
self.convert_model: bool = convert_model
self.experiment_config_name = experiment_config().__class__.__name__
def __call__(self):
dm = self.data(
batch_size=self.batch_size,
num_workers=self.num_workers,
num_gpus=self.num_gpus,
)
if self.prepare_data:
dm.prepare_data()
dm.setup()
model = self.trainer(
self.model,
self.model_settings,
self.label_smoothing,
self.optimizer_settings,
self.optimizer,
self.lr_scheduler_settings,
self.lr_scheduler,
)
compiled_model = torch.compile(model)
tensorboard_logger = TensorBoardLogger("../outputs/tensorboard/")
trainer = pl.Trainer(
max_steps=self.max_steps,
val_check_interval=self.validation_frequency,
logger=[ tensorboard_logger],
accelerator="auto",
callbacks=[
checkpoint_callback,
TQDMProgressBar(refresh_rate=10),
LearningRateMonitor(logging_interval="step"),
],
devices=self.num_gpus,
# precision=16,
)
print("Before QAT:")
print_size_of_model(compiled_model)
compiled_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(compiled_model, inplace=True)
trainer.fit(compiled_model, dm)
```
**Error message**
```
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Downloading: "https://download.pytorch.org/models/convnext_small-0c510722.pth" to /root/.cache/torch/hub/checkpoints/convnext_small-0c510722.pth
100%|██████████| 192M/192M [00:04<00:00, 47.9MB/s]
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Before QAT:
model size: 188.658MB
/usr/local/lib/python3.10/dist-packages/torch/ao/quantization/observer.py:214: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
warnings.warn(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
---------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 1199, in run_node
return nnmodule(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py", line 1174, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 3029, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 987, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1140, in dispatch
return decomposition_table[func](*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_decomp/decompositions.py", line 2804, in nll_loss_forward
result = -torch.gather(self, channel_dim, safe_target_).squeeze(channel_dim)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 987, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1170, in dispatch
r = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 287, in __call__
return self._op(*args, **kwargs or {})
File "/usr/local/lib/python3.10/dist-packages/torch/_meta_registrations.py", line 1868, in meta_gather
check(
File "/usr/local/lib/python3.10/dist-packages/torch/_prims_common/__init__.py", line 1563, in check
raise exc_type(s())
RuntimeError: gather(): Expected dtype int64 for index, but got torch.float64
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 1152, in get_fake_value
return wrap_fake_exception(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 808, in wrap_fake_exception
return fn()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 1153, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 1206, in run_node
raise RuntimeError(
RuntimeError: Failed running call_module self_loss_fn(*(FakeTensor(FakeTensor(..., device='meta', size=(256, 1)), cuda:0), FakeTensor(FakeTensor(..., device='meta', size=(256,), dtype=torch.float64), cuda:0)), **{}):
gather(): Expected dtype int64 for index, but got torch.float64
(scroll up for backtrace)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/plx-context/artifacts/ee60add6a77f4cf289ffde3ab9b75f41/uploads/main.py", line 35, in <module>
main()
File "/plx-context/artifacts/ee60add6a77f4cf289ffde3ab9b75f41/uploads/main.py", line 29, in main
ExperimentExectutor(
File "/plx-context/artifacts/ee60add6a77f4cf289ffde3ab9b75f41/uploads/experiment_executor_qat.py", line 106, in __call__
trainer.fit(compiled_model, dm)
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 531, in fit
call._call_and_handle_interrupt(
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 570, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 975, in _run
results = self._run_stage()
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1016, in _run_stage
self._run_sanity_check()
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_sanity_check
val_loop.run()
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/utilities.py", line 177, in _decorator
return loop_run(self, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 115, in run
self._evaluation_step(batch, batch_idx, dataloader_idx)
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 375, in _evaluation_step
output = call._call_strategy_hook(trainer, hook_name, *step_kwargs.values())
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 287, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/strategies/strategy.py", line 379, in validation_step
return self.model.validation_step(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 576, in run.
and self.step().
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 342, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 965, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 474, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 203, in call_function
return wrap_fx_proxy(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 754, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 789, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 1173, in get_fake_value
raise TorchRuntimeError() from e
torch._dynamo.exc.TorchRuntimeError:
from user code:
File "/plx-context/artifacts/ee60add6a77f4cf289ffde3ab9b75f41/uploads/trainers/classification.py", line 194, in validation_step
loss = self.loss_fn(logits, y)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
### Versions
```
pytorch-lightning-2.0.3
torchvision==0.15.2
requests-2.31.0
setuptools-59.5.0
sympy-1.12
torch-2.0.1+cu117
torchmetrics-0.10.3
tqdm-4.65.0
triton-2.0.0
typing-extensions-4.4.0
urllib3-1.26.16
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
2,163 | 104,259 |
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
|
needs reproduction, oncall: binaries, module: cuda, triaged
|
### 🐛 Describe the bug
import torch, then:
### Versions
import torch,then:

cc @seemethere @malfet @ptrblck
| 14 |
2,164 | 104,258 |
[FSDP] `ignored_states` is broken with auto wrap
|
triaged, module: fsdp
|
Passing ignored parameters to `ignored_states` with auto wrap does not propagate them to the nested FSDP instances.
My current thinking is that, for a quick fix, we can add `"ignored_states": self._ignored_params` to these `fsdp_kwargs`:
https://github.com/pytorch/pytorch/blob/18dacf7e793ec3ca3fb5c3d1e85485068e004d6e/torch/distributed/fsdp/fully_sharded_data_parallel.py#L417-L429
Then, we need to add some unit tests.
However, I am thinking about what the right design should be with `ignored_modules` / `ignored_states` more broadly. Passing ignored parameters to `ignored_states` _does not_ match the semantics of passing ignored modules to `ignored_modules` even excluding partially ignored modules. Via `ignored_states`, FSDP will still construct a `FullyShardedDataParallel` instance that just manages 0 parameters, which is undesirable.
cc @zhaojuanmao @mrshenli @rohan-varma @fegin
| 0 |
2,165 | 104,252 |
[RFC] Make `_HYBRID_SHARD_ZERO2` public as `HYBRID_SHARD_GRAD_OP`
|
triaged, module: fsdp
|
This is for consistency with our existing `SHARD_GRAD_OP` sharding strategy and to avoid mentioning ZeRO-2 in our public API (given that (1) we should not expect all FSDP users to know about ZeRO and (2) there are technical differences between our `SHARD_GRAD_OP` and ZeRO-2).
https://github.com/pytorch/pytorch/blob/18dacf7e793ec3ca3fb5c3d1e85485068e004d6e/torch/distributed/fsdp/api.py#L69C5-L69C24
cc @zhaojuanmao @mrshenli @rohan-varma @fegin
| 0 |
2,166 | 104,248 |
[inductor] Updated upsample_bicubic2d decomposition
|
open source, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #104248
* #104182
* #104181
- fixed dispatch for upsample_bicubic2d_vec and use inductor lowering
- added support for uint8
- updated tests
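For context, a minimal sketch (input and output sizes are arbitrary) of the kind of call these benchmarks exercise, comparing eager and `torch.compile`d bicubic upsampling:
```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 345, 456)

def upsample(t):
    return F.interpolate(t, size=(270, 270), mode="bicubic", antialias=False)

eager_out = upsample(x)
compiled_out = torch.compile(upsample)(x)
print((eager_out - compiled_out).abs().max())  # should be small
```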
Performance benchmarks results
```
[-------------------------- Interpolate bicubic, AA=false, cpu -------------------------]
| Eager | Compiled
1 threads: ------------------------------------------------------------------------------
- main/nightly:
Input (3, 345, 456), torch.uint8, torch.contiguous_format | 647.2 | 2828.6
Input (3, 345, 456), torch.float32, torch.contiguous_format | 2163.4 | 2076.7
Input (3, 345, 456), torch.uint8, torch.channels_last | 263.5 | 3284.6
Input (3, 345, 456), torch.float32, torch.channels_last | 3585.1 | 2400.3
- PR:
Input (3, 345, 456), torch.uint8, torch.contiguous_format | 634.5 | 3579.6 <--- worse than in main
Input (3, 345, 456), torch.float32, torch.contiguous_format | 2143.6 | 2237.8 <--- worse than in main
Input (3, 345, 456), torch.uint8, torch.channels_last | 261.2 | 2883.2
Input (3, 345, 456), torch.float32, torch.channels_last | 3544.3 | 1814.4
Times are in microseconds (us).
[------------------------ Interpolate bicubic, AA=false, cuda -------------------------]
| Eager | Compiled
1 threads: -----------------------------------------------------------------------------
- main/nightly:
Input (3, 345, 456), torch.float32, torch.contiguous_format | 17.2 | 352.0
Input (3, 345, 456), torch.float32, torch.channels_last | 17.5 | 357.4
- PR:
Input (3, 345, 456), torch.float32, torch.contiguous_format | 14.3 | 43.2
Input (3, 345, 456), torch.float32, torch.channels_last | 14.4 | 40.1
Times are in microseconds (us).
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
2,167 | 104,247 |
[proposal] "Name" string attribute for modules, parameters, buffers, tensors for more pleasant debugging (especially for graph printouts / export / studying compiled generated code)
|
feature, triaged, needs design
|
### 🚀 The feature, motivation and pitch
This is useful for debugging complex tree-structured models: being able to read `.name` helps to understand where a module sits within the whole model tree.
This idea is currently used only for parameters within `state_dict()` serialization. I suggest that enabling `.name` attribute/property would be useful for debugging and various formatting/debug-printing in the general context. Such `.name` could be used in `__str__` implementations of modules and tensors, could be used for creating easier-visualizable ONNX graphs. Access to module name might also be useful for various finer-grained logic hacks in module hooks (although maybe not the best coding practice in all cases - but for hacks might be okay).
I propose to:
1. introduce a `.name` property (backed by a `._name` attribute which may not exist if we torch.load an old model file) or just a `.name` attribute if deserializing old module objects without some attributes is not a problem. It should be an empty string by default.
2. introduce an instance `.set_names_recursively()` (modulo bikeshedding) method on `torch.nn.Module` which would go around and produce names similar to what's now found in `state_dict()` keys formatting
I also propose to support setting/getting such an attribute for any tensor. It's understood that most tensors won't have it assigned and the user would need to set it manually for it to be useful, but it's still good (e.g. it can be set for tensors to be returned from a function). Propagation of such tensor names is a more complex task, and I propose that it's out of scope, as the feature is already useful if only manual tensor names are supported (for the cases useful for debugging). Alternatively, a `tensor.name()` / `tensor.name(value)` method could maybe be used instead of the attribute so that the setter would `return self` for fluency and convenience, so that one can write `return (x + 1).name("mytensorplus1")` - although it would mean that the `name(...)` method would return either a string or a Tensor/something else, depending on its argument.
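As a rough illustration of point 2, a user-land approximation of the proposed helper is already possible today with `named_modules()` (the `set_names_recursively` function and the monkey-patched `name` attribute below are hypothetical, not an existing API):
```python
from torch import nn


def set_names_recursively(root: nn.Module) -> None:
    # Store a dotted path on each submodule, matching the prefixes that
    # state_dict() keys would use; the root gets the empty string.
    for name, module in root.named_modules():
        module.name = name  # plain monkey-patched attribute for illustration


model = nn.Sequential(nn.Linear(4, 8), nn.Sequential(nn.ReLU(), nn.Linear(8, 2)))
set_names_recursively(model)
for m in model.modules():
    print(type(m).__name__, repr(m.name))
```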
Another consideration is that currently ONNX exporter already produces some automatic module/op names, and some code might have taken some dependencies on this. So the ONNX naming behavior should be unchanged unless explicitly `.set_names_recursively()` was called (so empty names would be overridden by existing automatic names)
Also probably if `.name`s are set, state_dict() should use them. Another consideration is that some people are probably monkey-patching `.name` attributes on tensors/modules themselves, so if state_dict starts using them, it might be surprising, so maybe some attribute name bikeshedding is required
e.g. https://github.com/facebookresearch/fvcore/blob/main/fvcore/nn/jit_analysis.py is doing sth similar wrt to module names
cc @ezyang @albanD
| 10 |
2,168 | 104,244 |
DISABLED test_mem_get_info (__main__.TestCudaMultiGPU)
|
module: cuda, triaged, module: flaky-tests, skipped
|
Platforms: linux, win
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mem_get_info&suite=TestCudaMultiGPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14578954385).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mem_get_info`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_cuda_multigpu.py`
cc @ptrblck
| 31 |
2,169 | 104,241 |
Enable quantization dispatch for backend QuantizedPrivateUse1
|
triaged, open source, Stale, release notes: quantization
|
Support QuantizedPrivateUse1 dispatch, which enables a customized quantization backend.
As mentioned in #103663, PyTorch now supports the new backend "QuantizedPrivateUse1", but to enable PyTorch to dispatch and implement the quantization functions, these changes are needed.
| 7 |
2,170 | 104,230 |
[ONNX] Investigate `nn.functional.nll_loss` skip/xfail reason
|
module: onnx, triaged
|
After type promotion this test starts to fail a few subtests with unexpected pass, which appears to be inconsistent with the tagged xfail reason.
_Originally posted by @BowenBao in https://github.com/pytorch/pytorch/pull/104064#discussion_r1243093115_
| 0 |
2,171 | 104,195 |
Torchscript with dynamic quantization produces inconsistent model outputs
|
oncall: jit, oncall: quantization, triaged
|
### 🐛 Describe the bug
Hello, I've been experimenting with torchscript and dynamic quantization and often have the issue that results of the models that are dynamically quantized are not consistent between Python and Java.
To reproduce the issue I created a fork of the python java-demo: https://github.com/westphal-jan/java-demo.
To set up, you need to download libtorch and set its location in `build.gradle` (https://github.com/westphal-jan/java-demo/blob/master/build.gradle#L16)
Download Link: https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-1.13.1%2Bcpu.zip
I created a simple dummy model with one linear layer and export it unquantized and quantized here: https://github.com/westphal-jan/java-demo/blob/master/create_dummy_models.py
(The code can also be run using the dependencies defined in https://github.com/westphal-jan/java-demo/blob/master/requirements.txt, but I also committed the dummy models.)
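For convenience, a minimal sketch of the kind of export flow the linked script follows (reconstructed here, not the exact script; layer sizes and file names are illustrative):
```python
import torch
import torch.nn as nn

class DummyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 5)

    def forward(self, x):
        return self.linear(x)

model = DummyModel().eval()
example_input = torch.randn(1, 10)

# Unquantized TorchScript export.
torch.jit.trace(model, example_input).save("dummy_fp32.pt")

# Dynamically quantized export. Note that the quantized engine differs between
# platforms (fbgemm by default on x86 Python vs. the qnnpack warning in the Java
# output), which may itself contribute to small numeric differences.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
torch.jit.trace(quantized, example_input).save("dummy_int8.pt")

print(model(example_input))
print(quantized(example_input))
```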
Python:
```
Unquantized model:
[[-2.758167028427124, 2.0038578510284424, -4.114053726196289, -1.2928203344345093, 1.4940322637557983]]
Quantized model:
[[-2.747678756713867, 1.9912285804748535, -4.110795021057129, -1.2891944646835327, 1.4982664585113525]]
```
You can run the java code with `./gradlew run`.
Java:
```
Unquantized model:
data: [-2.758167, 2.0038579, -4.1140537, -1.2928203, 1.4940323]
[W qlinear_dynamic.cpp:239] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. (function apply_dynamic_impl)
Quantized model:
data: [-2.7473624, 1.9966378, -4.110954, -1.283469, 1.4918814]
```
As you can see, the output of the unquantized model is perfectly consistent, while the output of the dynamically quantized model is slightly inconsistent. It might seem insignificant, but with larger models like a transformer it becomes more obvious (differences usually already in the first decimal place). Am I misunderstanding something conceptually? I thought that since the code is compiled down to C, the result should be the same even when using dynamic quantization.
Note: I made sure that Python and Java use the same version of Torch `1.13.1` which is the latest published mvn version (https://mvnrepository.com/artifact/org.pytorch/pytorch_java_only)
### Versions
1.13.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @Xia-Weiwen @leslie-fang-intel
| 4 |
2,172 | 104,194 |
View ops on fake tensors can dispatch `detach`es to backend kernels
|
triaged, module: fakeTensor
|
### 🐛 Describe the bug
When calling an op on a fake tensor, a new fake tensor is made for the result via a call to `detach` ([src](https://github.com/pytorch/pytorch/blob/4e204ff87b1fbbe2f1820704e45559a6f448ce17/torch/csrc/autograd/python_variable.cpp#L587)).
When the op called is a view op (and I think it also needs to be one without a meta Python decomposition?), the `detach` is dispatched to the backend kernel (e.g. `CPU`), instead of to `Meta`.
This usually isn't an issue as `detach` is declared only `CompositeExplicitAutograd`, but out-of-tree backends that _do_ intercept `detach` will end up getting an op dispatched with a `Meta` tensor; this becomes an issue with Dynamo.
# Reproducer
```python
import torch
t = torch.randn(5, 5)
with torch._subclasses.FakeTensorMode() as mode:
ft = mode.from_tensor(t)
res = ft.view(25)
print(res)
```
Running this with `TORCH_SHOW_DISPATCH_TRACE=1` gives:
```
[call] op=[aten::randn], key=[BackendSelect]
[redispatch] op=[aten::randn], key=[CPU]
[call] op=[aten::empty.memory_format], key=[BackendSelect]
[redispatch] op=[aten::empty.memory_format], key=[CPU]
[call] op=[aten::normal_], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[Meta]
[call] op=[aten::detach], key=[AutogradMeta]
[redispatch] op=[aten::detach], key=[ADInplaceOrView]
[redispatch] op=[aten::detach], key=[Meta]
[call] op=[aten::view], key=[PythonTLSSnapshot]
[redispatchBoxed] op=[aten::view], key=[AutogradCPU]
[redispatch] op=[aten::view], key=[ADInplaceOrView]
[redispatch] op=[aten::view], key=[Python]
[callBoxed] op=[aten::view], key=[Meta]
[call] op=[aten::detach], key=[CPU] <<<<<===== The issue
FakeTensor(..., size=(25,))
```
Running in a debugger, in `THPVariable_make_subclass` (called from `FakeTensor.__new__`), printing `toString(r.tensor(1).key_set())` gives `DispatchKeySet(CPU, ADInplaceOrView, AutogradCPU, AutocastCPU)`.
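For completeness, the dispatch keys can also be inspected from Python without attaching a debugger (this relies on the private `torch._C._dispatch_keys` helper, which may change between releases):
```python
import torch

t = torch.randn(5, 5)
with torch._subclasses.FakeTensorMode() as mode:
    ft = mode.from_tensor(t)
    # Prints the dispatch key set carried by the fake tensor and by its view result.
    print(torch._C._dispatch_keys(ft))
    print(torch._C._dispatch_keys(ft.view(25)))
```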
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git4ff3108
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.25.0
[pip3] torch==2.1.0a0+git9593b3a
[conda] numpy 1.25.0 pypi_0 pypi
[conda] torch 2.1.0a0+git9593b3a pypi_0 pypi
| 2 |
2,173 | 104,193 |
Conversion from strided to batched sparse compressed tensor with a non-constant number of zeros in batches fails
|
module: sparse, triaged
|
## Issue description
As in the title.
The issue is created to discuss various approaches to supporting the strided-to-sparse-compressed conversion for the cases where the number of zeros in different batches is not equal.
## Code example
Consider the following batched tensor of 2-by-2 tensors:
```
x = [[[0, 1], # batch 1
[2, 0]],
[[3, 0], # batch 2
[0, 4]],
]
```
that can be represented as a batched CSR tensor:
```python
>>> torch.tensor(x).to_sparse_csr()
tensor(crow_indices=tensor([[0, 1, 2],
[0, 1, 2]]),
col_indices=tensor([[1, 0],
[0, 1]]),
values=tensor([[1, 2],
[3, 4]]), size=(2, 2, 2), nnz=2, layout=torch.sparse_csr)
```
because both batches have an equal number of zeros: 2.
Next, consider a batched tensor with an unequal number of zeros in batches:
```
y = [[[0, 1], # batch 1
[2, 9]],
[[3, 0], # batch 2
[0, 4]],
]
```
that currently cannot be represented as a batched CSR tensor:
```python
>>> torch.tensor(y).to_sparse_csr()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expect the same number of specified elements per batch.
```
because the number of zeros in batches is different: 1 and 2, respectively.
## Discussion
The following approaches exist to create a batched CSR tensor from batches having unequal numbers of zeros.
### Approach 1: allow materialization of certain zeros
Notice that in the conversion of a strided to a CSR tensor, non-zero elements and specified elements are considered as synonyms. If we relax this condition and allow certain zero elements to become specified elements for the CSR representation, the example tensor `y` defined above can be represented as a batched CSR tensor. In fact, there exist many such representations, for example:
```python
>>> torch.sparse_csr_tensor([[0, 1, 3], [0, 1, 3]], [[1, 0, 1], [0, 0, 1]], [[1, 2, 9], [3, 0, 4]]).to_dense()
tensor([[[0, 1],
[2, 9]],
[[3, 0],
[0, 4]]])
>>> torch.sparse_csr_tensor([[0, 1, 3], [0, 2, 3]], [[1, 0, 1], [0, 1, 1]], [[1, 2, 9], [3, 0, 4]]).to_dense()
tensor([[[0, 1],
[2, 9]],
[[3, 0],
[0, 4]]])
>>> torch.sparse_csr_tensor([[0, 2, 4], [0, 2, 4]], [[0, 1, 0, 1], [0, 1, 0, 1]], [[0, 1, 2, 9], [3, 0, 0, 4]]).to_dense()
tensor([[[0, 1],
[2, 9]],
[[3, 0],
[0, 4]]])
```
that differ in the choice of materialized zeros.
Pros:
- solves the issue using the existing sparse compressed tensor implementation
Cons:
- requires introducing a complex and non-parallelizable strided->sparse compressed conversion algorithm that materializes zeros (suboptimal storage), with ambiguity in selecting which zeros to materialize (providing a mask would resolve the ambiguity)
- batches with smaller NSE have the same memory usage as the batches with the largest NSE
- the maximum NSE in batches can be larger than the number of non-zeros in the batch of a minimum number of zeros
### Approach 2: allow a variable number of specified elements in batches
A prototype of this approach is implemented at https://github.com/pytorch/pytorch/pull/84843
The example tensor `y` defined above can be represented as a batched CSR tensor uniquely:
```python
>>> z = torch.tensor(y).to_sparse_csr()
>>> z
tensor(crow_indices=tensor([[0, 1, 3],
[0, 1, 2]]),
col_indices=tensor([[1, 0, 1],
[0, 1, 0]]),
values=tensor([[1, 2, 9],
[3, 4, 0]]), size=(2, 2, 2), nnz=3,
layout=torch.sparse_csr)
```
where each batch has the expected NSE count:
```python
>>> z[0]
tensor(crow_indices=tensor([0, 1, 3]),
col_indices=tensor([1, 0, 1]),
values=tensor([1, 2, 9]), size=(2, 2), nnz=3, layout=torch.sparse_csr)
>>> z[1]
tensor(crow_indices=tensor([0, 1, 2]),
col_indices=tensor([0, 1]),
values=tensor([3, 4]), size=(2, 2), nnz=2, layout=torch.sparse_csr)
```
Pros:
- solves the issue without explicitly materializing zeros
- the batched sparse compressed tensor representation is unique
- the strided->sparse compressed conversion algorithm is simple and parallelizable
- the maximum NSE is equal to the number of non-zeros in the batch of a minimum number of zeros
- the performance of `to_sparse_csr()` on CUDA tensors increased by 15%
Cons:
- requires relaxing the sparse compressed invariant: `compressed_index[..., -1] == nnz` becomes `compressed_index[..., -1] <= nnz`
- batches with smaller NSE have the same memory usage as the batches with the largest NSE (the optimal storage would require ragged tensor support)
- the indices and values of unused elements in batches with smaller NSE must be initialized to avoid confusing third-party libraries if the batched tensor has variable NSE (if the libraries treat batches as independent, there should be no confusion)
## System Info
- PyTorch version: main
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
| 2 |
2,174 | 104,191 |
torch.embedding: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.
|
triaged, enhancement, module: bfloat16, module: mps
|
### 🐛 Describe the bug
Code to reproduce
``` python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
path = "gpt2" # any LM would result the same
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map={"":"mps"})
t = tokenizer("anything", return_attention_mask=False, return_tensors='pt')
with torch.inference_mode():
model(**t)
```
results in
``` python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[6], line 2
1 with torch.inference_mode():
----> 2 model(**t)
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py:1080, in GPT2LMHeadModel.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1072 r"""
1073 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1074 Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
1075 `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
1076 are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
1077 """
1078 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1080 transformer_outputs = self.transformer(
1081 input_ids,
1082 past_key_values=past_key_values,
1083 attention_mask=attention_mask,
1084 token_type_ids=token_type_ids,
1085 position_ids=position_ids,
1086 head_mask=head_mask,
1087 inputs_embeds=inputs_embeds,
1088 encoder_hidden_states=encoder_hidden_states,
1089 encoder_attention_mask=encoder_attention_mask,
1090 use_cache=use_cache,
1091 output_attentions=output_attentions,
1092 output_hidden_states=output_hidden_states,
1093 return_dict=return_dict,
1094 )
1095 hidden_states = transformer_outputs[0]
1097 # Set device for model parallelism
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py:846, in GPT2Model.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions, output_hidden_states, return_dict)
843 head_mask = self.get_head_mask(head_mask, self.config.n_layer)
845 if inputs_embeds is None:
--> 846 inputs_embeds = self.wte(input_ids)
847 position_embeds = self.wpe(position_ids)
848 hidden_states = inputs_embeds + position_embeds
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/torch/nn/modules/sparse.py:162, in Embedding.forward(self, input)
161 def forward(self, input: Tensor) -> Tensor:
--> 162 return F.embedding(
163 input, self.weight, self.padding_idx, self.max_norm,
164 self.norm_type, self.scale_grad_by_freq, self.sparse)
File /opt/homebrew/Caskroom/miniconda/base/envs/torch/lib/python3.10/site-packages/torch/nn/functional.py:2210, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2204 # Note [embedding_renorm set_grad_enabled]
2205 # XXX: equivalent to
2206 # with torch.no_grad():
2207 # torch.embedding_renorm_
2208 # remove once script supports set_grad_enabled
2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.
```
Probably here:
https://github.com/pytorch/pytorch/blob/b2277075b0fe5cf085d369a313863f64c6fb50c3/aten/src/ATen/native/mps/OperationUtils.mm#L30-L44
I wasn't able to test this on nightly, because apparently it's been blocked currently:
https://github.com/pytorch/pytorch/blob/31f311a816c026bbfca622d6121d6a7fab44260d/aten/src/ATen/mps/EmptyTensor.cpp#L46
BF16 support was recently added to the OS version I use (macOS Sonoma); see here (with timestamp):
https://developer.apple.com/wwdc23/10050?time=590
> Starting with macOS Sonoma, MPSGraph adds support for a new data type, bfloat16.
https://developer.apple.com/wwdc23/10050?time=659
> Adding Automatic Mixed Precision support to your network is a very easy process. First, add autocast. Both float16 and bfloat16 are supported.
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.28.1.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (main, May 17 2023, 14:30:36) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] numpy 1.25.0 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
2,175 | 104,188 |
Add memory management information for Apple silicon mps backend
|
triaged, enhancement, module: mps
|
### 🚀 The feature, motivation and pitch
Torch has useful memory management information for CUDA devices such as
- `torch.cuda.mem_get_info`
- `torch.cuda.memory_summary`
- `torch.cuda.max_memory_allocated`
- etc...
which are useful for debugging purposes. Nevertheless, these functionalities are very limited for the Apple silicon mps backend. Is it possible to include more memory management information in `torch.mps`? I think these would be the most relevant memory management functions for mps devices that are missing (a comparison sketch of what is available today follows the list):
- memory summary
- available/free GPU memory
- maximum allocated memory
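For comparison, a rough sketch of what can be queried today (the CUDA calls are the existing ones listed above; the `torch.mps` counters are the only allocator queries I'm aware of and are guarded with `hasattr` in case they are missing from a given build):
```python
import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"CUDA free/total: {free_bytes:,} / {total_bytes:,} bytes")
    print(f"CUDA max allocated: {torch.cuda.max_memory_allocated():,} bytes")
    print(torch.cuda.memory_summary())

if torch.backends.mps.is_available():
    # No mem_get_info / memory_summary / max_memory_allocated equivalents here.
    if hasattr(torch.mps, "current_allocated_memory"):
        print(f"MPS current allocated: {torch.mps.current_allocated_memory():,} bytes")
    if hasattr(torch.mps, "driver_allocated_memory"):
        print(f"MPS driver allocated: {torch.mps.driver_allocated_memory():,} bytes")
```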
### Alternatives
_No response_
### Additional context
_No response_
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 3 |
2,176 | 104,182 |
[inductor] Updated upsample_bilinear2d decomposition
|
open source, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #104248
* __->__ #104182
* #104181
Description:
- Updated upsample_bilinear2d decomposition
- added uint8 dtype support
- code improvements
- Added uint8 dtype tests
The goal of this PR is performance improvements on CPU by reducing the number of generated blocks (3 blocks -> 1 block):
```
[------------------------- Interpolate bilinear, AA=false, cpu -------------------------]
| Eager | Compiled
1 threads: ------------------------------------------------------------------------------
- PR
Input (3, 345, 456), torch.float32, torch.contiguous_format | 1435.3 | 942.2
Input (3, 345, 456), torch.float32, torch.channels_last | 984.3 | 944.0
Input (3, 345, 456), torch.uint8, torch.contiguous_format | 682.4 | 1366.9
Input (3, 345, 456), torch.uint8, torch.channels_last | 194.0 | 1387.4
- main/nightly
Input (3, 345, 456), torch.float32, torch.contiguous_format | 1474.4 | 1234.5
Input (3, 345, 456), torch.float32, torch.channels_last | 1001.2 | 1371.3
Input (3, 345, 456), torch.uint8, torch.contiguous_format | 596.2 | 1287.0
Input (3, 345, 456), torch.uint8, torch.channels_last | 195.6 | 1402.4
Times are in microseconds (us).
```
No regression for CUDA
```
[------------------------ Interpolate bilinear, AA=false, cuda ------------------------]
| Eager | Compiled
1 threads: -----------------------------------------------------------------------------
- PR
Input (3, 345, 456), torch.float32, torch.contiguous_format | 11.7 | 40.6
Input (3, 345, 456), torch.float32, torch.channels_last | 10.9 | 41.6
- main/nightly
Input (3, 345, 456), torch.float32, torch.contiguous_format | 12.2 | 42.6
Input (3, 345, 456), torch.float32, torch.channels_last | 11.5 | 44.5
Times are in microseconds (us).
```
[Source](https://gist.github.com/vfdev-5/eac35472981c09ea4898f0059011dae7)
- [main/nightly C++ generated code](https://gist.github.com/vfdev-5/eac35472981c09ea4898f0059011dae7#file-main-generated-code-cpp) (3 blocks) vs [PR C++ generated code](https://gist.github.com/vfdev-5/eac35472981c09ea4898f0059011dae7#file-pr-generated-code-cpp) (1 block)
- [main/nightly triton generated code](https://gist.github.com/vfdev-5/eac35472981c09ea4898f0059011dae7#file-main-triton-code-py), [PR triton generated code](https://gist.github.com/vfdev-5/eac35472981c09ea4898f0059011dae7#file-pr-triton-code-py)
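For anyone who prefers not to open the gist, a simplified sketch of this kind of eager-vs-compiled comparison (shapes and settings here are illustrative, not the exact benchmark script):
```python
import torch
import torch.nn.functional as F
from torch.utils import benchmark

def upsample(x):
    return F.interpolate(x, size=(270, 270), mode="bilinear", antialias=False)

compiled_upsample = torch.compile(upsample)

x = torch.randn(1, 3, 345, 456)  # contiguous float32 input
compiled_upsample(x)  # warm-up so compilation time is excluded from the timing

for label, fn in [("Eager", upsample), ("Compiled", compiled_upsample)]:
    timer = benchmark.Timer(stmt="fn(x)", globals={"fn": fn, "x": x}, num_threads=1)
    print(label, timer.blocked_autorange(min_run_time=1))
```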
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78
| 1 |
2,177 | 104,175 |
No document for parameter `load_debug_files` in `torch::jit::load` in C++ API
|
module: docs, triaged, actionable
|
### 📚 The doc issue
Thanks a lot for your work.
I found that there is no document for parameter `load_debug_files` in `torch::jit::load` in C++ API.
This parameter can be found in various overloads of the function; however, none of them specifies what `load_debug_files` is, and whether it affects the loaded pre-trained model when it is set to false.
Ref:
* https://pytorch.org/cppdocs/api/function_namespacetorch_1_1jit_1ac9b78087c3a653b13868cf7e6f8ed171.html#exhale-function-namespacetorch-1-1jit-1ac9b78087c3a653b13868cf7e6f8ed171
* https://github.com/pytorch/pytorch/blob/3c34a00d1b94aa4ceeede498ab878f1c5a25afb1/torch/csrc/jit/serialization/import.h#L100
I encountered this when I deployed a pre-trained model with libtorch. For my own model, I found that `torch::jit::load` works no matter whether `load_debug_files` is set to true or false, and the inference results are also correct.
So can I safely set `load_debug_files` to false for __all models in release mode__, i.e. with no "debug symbols" included, if my understanding is correct?
Thanks.
P.S.
I dug a little bit and noticed that `load_debug_files` makes `PyTorchStreamReader` read some "Records"; however, I didn't find a definition for "Records".
https://github.com/pytorch/pytorch/blob/3c34a00d1b94aa4ceeede498ab878f1c5a25afb1/caffe2/serialize/inline_container.cc#L306
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker
| 0 |
2,178 | 104,174 |
distributed.scatter memory leak in source rank
|
oncall: distributed, module: memory usage
|
### 🐛 Describe the bug
distributed scatter doesn't free memory on the source rank. The allocated and reserved memory values are correct, but less free memory is left on the source-rank device.
```python
import gc
import os

import torch
import torch.distributed as dist

# Setup assumed from the surrounding script (launched with torchrun).
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = rank
torch.cuda.set_device(device)
dist.init_process_group("nccl")

shape = (1_000_000_000,)
chunk = torch.empty(shape, device=rank)
if rank == 0:
    # The source rank provides one input tensor per rank.
    torch.distributed.scatter(chunk, [torch.empty(shape, device=rank) for _ in range(world_size)], src=rank)
else:
    torch.distributed.scatter(chunk, src=0)
torch.cuda.synchronize()
gc.collect()
torch.cuda.empty_cache()
print(f'{rank}:'
      f' Free: {torch.cuda.mem_get_info(device)[0] // 1_000_000 :,}MB,'
      f' Reserved: {torch.cuda.memory_reserved(device) // 1_000_000 :,}MB,'
      f' Alloc: {torch.cuda.memory_allocated() // 1_000_000 :,}MB'
)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 240
On-line CPU(s) list: 0-239
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 240
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7J13 64-Core Processor
Stepping: 1
CPU MHz: 2449.998
BogoMIPS: 4899.99
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 15 MiB
L1i cache: 15 MiB
L2 cache: 120 MiB
L3 cache: 3.8 GiB
NUMA node0 CPU(s): 0-119
NUMA node1 CPU(s): 120-239
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip pku ospke vaes vpclmulqdq rdpid fsrm arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.3 py310h5f9d8c6_1
[conda] numpy-base 1.24.3 py310hb5e798b_1
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 py310_cu118 pytorch
[conda] torchdata 0.6.1 py310 pytorch
[conda] torchtext 0.15.2 py310 pytorch
[conda] torchvision 0.15.2 py310_cu118 pytorch
[conda] triton 2.0.0 pypi_0 pypi
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,179 | 104,168 |
Incorrect Reduce collective result with `_coalescing_manager`
|
oncall: distributed
|
Hi,
I am trying to execute a series of Reduce operations. I can execute a single Reduce operation using the `torch.distributed.reduce(...)` API. To better utilize the resources (network/compute), NCCL also provides a grouped collective execution API called `ncclGroupStart/End`; when a series of collective operations is surrounded by `ncclGroupStart/End` calls, NCCL will coalesce the individual calls into an optimized collective kernel. PyTorch seems to have the `_coalescing_manager` in distributed_c10d, which seems to do exactly what I'm trying to achieve. However, upon closer inspection, I see that the reduced values are incorrect.
Here is the exact minified Python script which fails to compute the correct value. In this script, I reduce tensors with and without group mode and observe a difference in the result. On rank 0, `grad_slice2` and `grad_slice2_validation` operate on identical buffers, yet reduce to different values.
```python
# cat > application.py
import os
import torch
import torch.distributed as dist
device = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(device)
default_group = dist.init_process_group('nccl')
world_size = dist.get_world_size()
rank = dist.get_rank()
numel = 2
grad_slice1 = torch.rand((numel,), dtype=torch.bfloat16, device=rank%8)
grad_slice2 = torch.rand((numel,), dtype=torch.bfloat16, device=rank%8)
root_ranks = [1, 0]
grad_slices = [grad_slice1, grad_slice2]
grad_slice1_validation = grad_slice1.clone().detach()
grad_slice2_validation = grad_slice2.clone().detach()
validation_slices = [grad_slice1_validation, grad_slice2_validation]
# Sequential reduces.
async_handles = []
for dst_rank, grad_slice in zip(root_ranks, grad_slices):
async_handles.append(dist.reduce(grad_slice, dst=dst_rank, async_op=True))
for h in async_handles: h.wait()
torch.cuda.synchronize()
torch.distributed.barrier()
# Coalesced reduces.
async_handles = []
device = grad_slice1.device
with torch.distributed.distributed_c10d._coalescing_manager(
group=torch.distributed.group.WORLD, device=device, reqs=async_handles):
for dst_rank, grad_slice in zip(root_ranks, validation_slices):
async_handles.append(dist.reduce(grad_slice, dst=dst_rank, async_op=True))
for h in async_handles: h.wait()
torch.cuda.synchronize()
torch.distributed.barrier()
if not torch.all(torch.eq(grad_slice1, grad_slice1_validation) == True).item() or \
not torch.all(torch.eq(grad_slice2, grad_slice2_validation) == True).item():
print(f"{rank} Incorrect result grad_slice1={grad_slice1} grad_slice1_validation={grad_slice1_validation}")
print(f"{rank} Incorrect result grad_slice2={grad_slice2} grad_slice2_validation={grad_slice2_validation}")
print("")
else:
...
```
Here is the output:
```bash
[1,0]<stdout>:NCCL version 2.16.2+cuda11.8
[1,0]<stdout>:0 Incorrect result grad_slice1=tensor([0.0101, 0.0140], device='cuda:0', dtype=torch.bfloat16) grad_slice1_validation=tensor([0.0101, 0.0140], device='cuda:0', dtype=torch.bfloat16)
[1,0]<stdout>:0 Incorrect result grad_slice2=tensor([15.9375, 14.5000], device='cuda:0', dtype=torch.bfloat16) grad_slice2_validation=tensor([16.0000, 14.4375], device='cuda:0', dtype=torch.bfloat16)
[1,0]<stdout>:
```
`15.9375`->`16.0000` and `14.5000`->`14.4375`.
One may argue that the difference here is minimal, but the computations should be exact here. So I'm trying to understand whether the coalescing semantics are being used incorrectly OR there is a bug in ProcessGroupNCCL.
I've also written a C++ application to validate if this is actually an issue with NCCL group mode execution. That is available here: https://gist.github.com/0x6b64/6b52d912c98e5c55ac826dec86a11744
I'm working on 4 Amazon EC2 P4D instances, here is my run command.
```bash
mpirun -np $((8*4)) --hostfile ~/hostfile python application.py
```
I've not been able to reproduce the same incorrect computation directly working with NCCL.
Note: the computation isn't always wrong, it's only wrong "sometimes" (whatever that means, but at least it appears for the example I've attached here).
Any feedback here will help greatly!
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.10.173-154.642.amzn2.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
2,180 | 104,164 |
DDP enhancement
|
oncall: distributed
|
### 🚀 The feature, motivation and pitch
The DDP feature is super cool and efficient. However, it is hardware agnostic, and there is no specification regarding the machines used as the backend. Making this feature hardware aware can help a lot to optimize execution.
With some tips it is possible to get better performance in this area. I added some lines of code based on this idea. It is something which does not go too deep into the code structure and can be exposed to the user as an argument in torchrun. The results I obtained are interesting in terms of speedup and training-time reduction.
I will be happy to share my experience and my code with the community, and I am looking forward to your feedback.
Regards
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,181 | 104,163 |
Nested Tensor with PyG dataset custom class
|
triaged, module: nestedtensor, actionable
|
### 🐛 Describe the bug
Hello everyone,
I am trying to implement a custom dataset class using PyTorch Geometric. The initial issue was that, in order for the `__inc__` method to work, we must have tensors. After being directed to the function `torch.nested.nested_tensor`, I realized that I could use that instead of padding the lists with negative values so I could make them a tensor later. However, when implementing it, I get the following error:
Could not run 'aten::cat' with arguments from the 'NestedTensorCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::cat' is only available for these backends: [CPU, CUDA, HIP, MPS, IPU, XPU, HPU, VE, Meta, MTIA, PrivateUse1, PrivateUse2, PrivateUse3, FPGA, ORT, Vulkan, Metal, QuantizedCPU, QuantizedCUDA, QuantizedHIP, QuantizedMPS, QuantizedIPU, QuantizedXPU, QuantizedHPU, QuantizedVE, QuantizedMeta, QuantizedMTIA, QuantizedPrivateUse1, QuantizedPrivateUse2, QuantizedPrivateUse3, CustomRNGKeyId, MkldnnCPU, SparseCPU, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
The batching techniques of PyG is introduced here: https://pytorch-geometric.readthedocs.io/en/latest/advanced/batching.html
Do you have any suggestions of how to solve my issue?
```py
import torch
from torch_geometric.data import Batch, Data
from typing import Any
class MyData(Data):
def __inc__(self, key, value, *args, **kwargs):
if 'adj' in key:
return torch.tensor([[getattr(self, 'x').size(0)], [getattr(self, 'x').size(0)]])
else:
return super().__inc__(key, value, *args, **kwargs)
def __cat_dim__(self, key: str, value: Any, *args, **kwargs) -> Any:
if 'adj' in key:
return 1
else:
return 0
adj = torch.tensor([[0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 7, 7, 8,
8, 8, 9, 10, 10, 10, 11, 11, 12, 12, 12, 13, 13, 14, 14, 15, 15, 15,
16, 16, 16, 16, 17, 18, 19, 19, 19, 20, 20, 21, 21, 21, 22, 23, 23, 24,
24, 25, 25, 26, 26, 27, 27, 27, 28, 28],
[ 1, 0, 2, 1, 3, 28, 2, 4, 3, 5, 4, 6, 27, 5, 7, 6, 8, 7,
9, 10, 8, 8, 11, 27, 10, 12, 11, 13, 26, 12, 14, 13, 15, 14, 16, 25,
15, 17, 18, 19, 16, 16, 16, 20, 24, 19, 21, 20, 22, 23, 21, 21, 24, 19,
23, 15, 26, 12, 25, 5, 10, 28, 2, 27]])
relations = {0:torch.tensor([2, 28, 27, 10, 8, 7, 6, 5, 4, 3]), 1:torch.tensor([2, 28, 27, 5, 4, 3]), 2:torch.tensor([2, 3, 4, 5, 27, 28])}
list_relations = torch.nested.nested_tensor(list(relations.values()))
data = MyData(x = torch.randn([29,5]), adj = adj, list_relations = list_relations)
batch = Batch.from_data_list([data, data])
print(batch)
```
Thanks a lot.
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.0.1 (x86_64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.16 (main, Mar 8 2023, 04:29:44) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.0.1
[pip3] torch-cluster==1.6.1
[pip3] torch-geometric==2.3.1
[pip3] torch-scatter==2.1.1
[pip3] torch-sparse==0.6.17
[pip3] torch-spline-conv==1.2.2
[pip3] torchmetrics==0.11.4
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] mkl 2023.1.0 h59209a4_43558
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch 2.0.1 py3.9_0 pytorch
[conda] pytorch-lightning 2.0.3 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torch-cluster 1.6.1 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.1 pypi_0 pypi
[conda] torch-sparse 0.6.17 pypi_0 pypi
[conda] torch-spline-conv 1.2.2 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
cc @cpuhrsch @jbschlosser @bhosmer @drisspg
| 2 |
2,182 | 104,162 |
Network does not return anything, not even None, and breaks loops
|
needs reproduction, module: windows, triaged, module: third_party
|
### 🐛 Describe the bug
I just installed torch via `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117`.
I have created a network
```python
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
def forward(self, x):
return x
```
and a training loop
```python
net = NeuralNetwork()
criterion = CrossEntropyLoss()
N_EPOCHS = 10
for e in range(N_EPOCHS):
print(e)
for x, y in dataLoader: # `dataLoader` is a DataLoader object
logit = net(x)
loss = criterion(logit, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
Running the code only prints "0" but no error is thrown.
I have narrowed it down to it being the `net(x)` call.
If I do a short test
```python
net = NeuralNetwork()#.to(device)
x = torch.rand(1, 28, 28)
net(x)
```
then the last `net(x)` does not return anything.
Doing
```python
a = net(x)
print(a)
```
results in `no variable named "a"`
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A3000 Laptop GPU
Nvidia driver version: 531.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2611
DeviceID=CPU0
Family=207
L2CacheSize=10240
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2611
Name=11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] torchvision==0.15.2+cu117
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 5 |
2,183 | 104,155 |
add dist hooks support for custom device
|
triaged, open source, release notes: distributed (ddp)
|
Fixes https://github.com/pytorch/pytorch/issues/104389
1. Now, for distributed hooks, there is some hard-coded logic tied to `cuda`; we want to support these hooks for a custom device (privateuse1 backend), so we use the abstract device module to run some functions.
2. In `torch/nn/parallel/distributed.py`, I wanted to define a variable `self.device_module = getattr(torch, self.device_type, None)` so that we can reuse it, but it causes an error in serialization, `TypeError: cannot pickle 'module' object`. So we fetch the module via `getattr` whenever it is needed, as in the sketch below.
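For illustration, a small sketch of the device-module pattern described in point 2 (the backend name is whatever the privateuse1 backend registers under `torch.<name>`; `cuda` is used here only as a stand-in):
```python
import torch

def get_device_module(device_type: str):
    # Resolve torch.cuda / torch.xpu / torch.<privateuse1 name> lazily instead
    # of hard-coding torch.cuda inside the hook implementation.
    device_module = getattr(torch, device_type, None)
    if device_module is None:
        raise RuntimeError(f"torch has no device module named '{device_type}'")
    return device_module

# Looking the module up at call time avoids storing a module object on the DDP
# instance, which would fail to pickle ("cannot pickle 'module' object").
if torch.cuda.is_available():
    print(get_device_module("cuda").current_stream())
```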
| 6 |
2,184 | 104,154 |
Numbers bigger than the range should be inf while the implementation just keeps the original.
|
module: docs, triaged, module: NaNs and Infs
|
### 🐛 Describe the bug
Numbers bigger than the representable range should become inf, while the implementation sometimes just keeps a finite value close to the original.
I just wonder when a finite value is kept and when the number is replaced by inf.
```python
Python 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:18)
[GCC 10.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.11.0'
>>> torch.finfo(torch.float16)
finfo(resolution=0.001, min=-65504, max=65504, eps=0.000976562, tiny=6.10352e-05, dtype=float16)
>>> a = torch.tensor([65505.], dtype=torch.float16)
>>> a
tensor([65504.], dtype=torch.float16)
>>> b = torch.tensor([65570.], dtype=torch.float16)
>>> b
tensor([inf], dtype=torch.float16)
```
Thanks for any help in advance.
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:18) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 515.105.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                  46 bits physical, 48 bits virtual
CPU(s):                              24
On-line CPU(s) list:                  0-23
Thread(s) per core:                 2
Core(s) per socket:                   12
Socket(s):                               1
NUMA node(s):                        1
Vendor ID:                        GenuineIntel
CPU family:                          6
Model:                             63
Model name:                        Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
Stepping:                             2
CPU MHz:                         1200.000
CPU max MHz:                    3300.0000
CPU min MHz:                    1200.0000
BogoMIPS:                        4998.73
Virtualization:                          VT-x
L1d cache:                        384 KiB
L1i cache:                        384 KiB
L2 cache:                          3 MiB
L3 cache:                          30 MiB
NUMA node0 CPU(s):                   0-23
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[conda] _tflow_select 2.3.0 mkl defaults
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults
[conda] libblas 3.9.0 14_linux64_mkl conda-forge
[conda] libcblas 3.9.0 14_linux64_mkl conda-forge
[conda] liblapack 3.9.0 14_linux64_mkl conda-forge
[conda] mkl 2022.0.1 h06a4308_117 defaults
[conda] numpy 1.19.2 pypi_0 pypi
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] tensorflow 2.4.1 mkl_py38hb2083e0_0 defaults
[conda] tensorflow-base 2.4.1 mkl_py38h43e0292_0 defaults
```
cc @svekars @carljparker
| 6 |
2,185 | 104,152 |
Error 101: invalid device ordinal (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
|
module: cuda, triaged, module: third_party
|
### 🐛 Describe the bug
```python
nvidia-smi
Sun Jun 25 11:30:36 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:06:00.0 Off | N/A |
| 48% 47C P0 129W / 390W | 0MiB / 24576MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
./venv/bin/python3
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
/data/app/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:107: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 101: invalid device ordinal (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
False
>>> torch.__version__
'2.0.1+cu118'
>>> import torchvision
>>> torchvision.__version__
'0.15.2+cu118'
```
### Versions
./venv/bin/python3 ./collect_env.py
Collecting environment information...
/data/app/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:107: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 101: invalid device ordinal (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                        x86_64
CPU op-mode(s):                    32-bit, 64-bit
Address sizes:                  46 bits physical, 48 bits virtual
Byte Order:                          Little Endian
CPU(s):                                12
On-line CPU(s) list:                  0-11
Vendor ID:                          GenuineIntel
Model name:                        Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz
CPU family:                          6
Model:                               79
Thread(s) per core:                 2
Core(s) per socket:                   6
Socket(s):                                 1
Stepping:                               1
CPU max MHz:                     3800.0000
CPU min MHz:                     1200.0000
BogoMIPS:                        6796.32
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
L1d cache:                        192 KiB (6 instances)
L1i cache:                        192 KiB (6 instances)
L2 cache:                          1.5 MiB (6 instances)
L3 cache:                          15 MiB (1 instance)
NUMA node(s):                        1
NUMA node0 CPU(s):                   0-11
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.7.0
[pip3] pytorch-lightning==1.9.4
[pip3] torch==2.0.1+cu118
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @ptrblck
| 2 |
2,186 | 104,150 |
[RFC] TorchInductor with X86 CPU as backend of Quantization in PyTorch 2.0 Export
|
oncall: quantization, triaged, oncall: pt2
|
## 🚀 The feature, motivation and pitch
This RFC proposes to add [TorchInductor](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747) with X86 CPU device as one of the backends for [Quantization 2.0 in Export](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq_static.html).
* New Quantization 2.0 flow uses the PT2 Export workflow (`torch._dynamo.export`) to capture the model into a graph and perform quantization transformations on top of the ATen dialect graph. This approach is expected to have significantly higher model coverage, better programmability, and a simplified UX.
* TorchInductor is the new compiler backend that compiles the FX Graphs generated by TorchDynamo into optimized C++/Triton kernels.
* This RFC focuses on the X86 CPU device to combine the Quantization 2.0 flow and TorchInductor. For the support of other devices, please refer to the [open discussion section](#open-discussion).
The proposed high-level architecture of quantization 2.0 with Inductor could look like this:
```
float_model(Python) Input
\ /
\ /
—-------------------------------------------------------
| Dynamo Export |
—-------------------------------------------------------
|
FX Graph in ATen
| X86InductorQuantizer
| /
| /
—--------------------------------------------------------
| prepare_pt2e |
—--------------------------------------------------------
|
Calibrate/Train
|
—--------------------------------------------------------
| convert_pt2e |
—--------------------------------------------------------
|
Reference Quantized Model
|
—--------------------------------------------------------
| Lowering |
—--------------------------------------------------------
|
Inductor
```
The proposed UX is as below:
```
import copy

import torch
import torch._dynamo as torchdynamo
from torch.ao.quantization._quantize_pt2e import convert_pt2e, prepare_pt2e
import torch.ao.quantization._pt2e.quantizer.x86_inductor_quantizer as xiq
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(5, 10)
def forward(self, x):
return self.linear(x)
example_inputs = (torch.randn(1, 5),)
model = M().eval()
# Step 1: Trace the model into an FX graph of flattened ATen operators
exported_graph_module, guards = torchdynamo.export(
model,
*copy.deepcopy(example_inputs),
aten_graph=True,
)
# Step 2: Insert observers or fake quantize modules
quantizer = xiq.X86InductorQuantizer()
operator_config = xiq.get_symmetric_quantization_config(is_per_channel=True)
quantizer.set_global(operator_config)
prepared_graph_module = prepare_pt2e(exported_graph_module, quantizer)
# Doing calibration here.
# Step 3: Quantize the model
convert_graph_module = convert_pt2e(prepared_graph_module)
# Step 4: Lower Reference Quantized Model into the backend
compile_model = torch.compile(convert_graph_module)
```
## Alternatives
### Frontend Changes
The frontend will follow the design of Quantizer as in [PyTorch 2.0 Export Post Training Static Quantization](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq_static.html).
* `prepare_pt2e` and `convert_pt2e` are the APIs provided by standard Quantization 2.0 flow.
* `X86InductorQuantizer` will be enabled for the quantization recipes on the X86 platform as in this draft [PR-98730](https://github.com/pytorch/pytorch/pull/98730). Although the quantization annotation API is not finalized, `X86InductorQuantizer` can follow the changes in the annotation API.
* `Quantized Model Representation` is the IR the user sees after the `convert_pt2e` step of the quantization flow. It's important to the backend developer for doing the quantization fusion and lowering. The final design and implementation of `Quantized Model Representation` are not ready, so we will rely on the current `Quantized Model Representation` to enable quantization fusion and kernel code-gen inside Inductor for now. The current `Quantized Model Representation` is as follows:
* Take convolution as an example, the quantization pattern is like `dequantize_per_tensor -> fp32_convolution -> quant_per_tensor`
* `dequantize_per_tensor` and `quantize_per_tensor` are not decomposed into primary aten operators.
We will match the quantization patterns based on the above `Quantized Model Representation` inside Inductor for now. Meanwhile, the changes to the `Quantized Model Representation` will gradually take effect per operator, and the backend pattern matcher will follow those changes accordingly.
### Backend Changes
We will use the `torch.compile` API to lower the `Quantized Model Representation` into Inductor. Since the `Quantized Model Representation` is already captured by `torch._dynamo.export`, we probably don't need the `torch.compile` API to trigger the `dynamo` FX graph capture again.
Inside Inductor, we will enable external calls for computation-intensive operators/patterns (convolution, linear) and C++ backend code-gen for memory-intensive operators/patterns.
#### Computation-intensive operators/patterns
For computation-intensive operators/patterns (convolution, linear), we will match the quantization pattern and lower it into Inductor external-call.
##### Pattern Matcher
Refer to the [PR-101164](https://github.com/pytorch/pytorch/pull/101164) for general implementations of the pattern matcher and external-call lowering process:
* `dequantize_per_tensor` and `quantize_per_tensor` will be decomposed into primary operators like `mul`, `div`, `to` inside AOT-Autograd.
* We will use Inductor's `register_lowering_pattern` function to match the `decomposed dequantize_per_tensor -> aten.convolution -> decomposed quantize_per_tensor` pattern and substitute it with a `QConv` external call. The `QConv` external call will code-gen C++ code that invokes the new quantized convolution operator implementation.
* As mentioned in the `Quantized Model Representation` part of the `Frontend` section, the representation is not finalized, so the quantization patterns that Inductor sees will be subject to change accordingly.
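As a simplified, hypothetical illustration of what this matching does (not the actual `register_lowering_pattern` API, and matching the un-decomposed quantized ops for brevity), one can think of it as a scan over the FX graph for the dequant -> convolution -> quant chain:
```python
import torch

_DEQ = "quantized_decomposed.dequantize_per_tensor"
_Q = "quantized_decomposed.quantize_per_tensor"

def find_qconv_candidates(gm: torch.fx.GraphModule):
    """Collect aten.convolution nodes fed by a dequantize and consumed by a quantize."""
    matches = []
    for node in gm.graph.nodes:
        if node.op != "call_function" or node.target is not torch.ops.aten.convolution.default:
            continue
        producer = node.args[0]
        users = list(node.users)
        if (
            isinstance(producer, torch.fx.Node)
            and str(producer.target).startswith(_DEQ)
            and len(users) == 1
            and str(users[0].target).startswith(_Q)
        ):
            matches.append(node)  # candidate to replace with a QConv external call
    return matches
```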
##### New quantized operators implementation
We plan to enable new quantized operators implementation for quantization 2.0 and put them in PyTorch to support Inductor backend as we did for FP32 and BF16 operators in Inductor.
* Previously, we had [quantized convolution operators](https://github.com/pytorch/pytorch/blob/e3ee5b00beff9401d69206b84b82323f7a63a048/aten/src/ATen/native/quantized/library.cpp#L66) which accept quantized tensors of activation and weight as inputs. A quantized activation carries the scale and zero_point information needed for the quantized convolution calculation. For quantization 2.0, we will only see plain tensors of uint8/int8 data type, so the schema of the new quantized convolution operator needs to change to accept extra scale and zero_point inputs for the activation, weight, and output tensors.
* Previously, the convolution weight was encapsulated in a `Conv2dPackedParamsBase` object instead of a tensor, and the qconv implementation was encapsulated inside the `PackedConvWeight` class, so runtime information for the calculation could be fetched from attributes of the `PackedConvWeight` object. For quantization 2.0, we will implement the new quantized convolution as a functional op that takes only plain Tensor objects as inputs.
Due to these differences between the 1.X and 2.X paths, we need to enable a new quantized operator implementation.
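For illustration only, a possible shape of such a functional schema (the namespace, operator name, and argument list below are assumptions made up for this sketch, not the final operator):
```python
from torch.library import Library

# Hypothetical namespace; defines a schema only, to show plain tensors plus explicit
# scale/zero_point arguments replacing the packed-parameter object of the 1.X path.
_sketch_lib = Library("x86_quant_sketch", "DEF")
_sketch_lib.define(
    "qconv2d(Tensor x, float x_scale, int x_zero_point, "
    "Tensor weight, Tensor w_scales, Tensor w_zero_points, Tensor? bias, "
    "int[] stride, int[] padding, int[] dilation, int groups, "
    "float out_scale, int out_zero_point) -> Tensor"
)
```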
##### Weight prepack implementation
The oneDNN convolution needs a prepacked weight to do the calculation. The weight prepack can be done in the graph preparation phase instead of at runtime, to avoid runtime overhead. As discussed [here](https://github.com/pytorch/pytorch/pull/101164#discussion_r1195953417), weight prepack requires constant-folding support in Inductor.
* The current constant folding feature doesn't support FX graphs captured by `torch._dynamo.export`, as reported in [issue-103582](https://github.com/pytorch/pytorch/issues/103582). How to resolve this needs further discussion.
* The current constant folding feature folds constants whenever possible. It will fold `fp32_weight->quant_per_channel->dequant_per_channel` into a new `fp32` weight, as discussed [here](https://github.com/pytorch/pytorch/pull/100652#discussion_r1218789255). We need to keep the `dequant_per_channel` node in the graph; then `fp32_weight->quant_per_channel` can be constant-folded into an `int8_weight`, and `dequant_per_channel->conv` can be further lowered into an `int8_conv` node.
Assuming we resolve the above issues, the weight prepack design can be:
* Step 1: `fp32_weight->quant_per_channel` is constant-folded into an `int8_weight`.
* Step 2: Prepack this `int8_weight` into an `int8 Mkldnn Tensor`. Meanwhile, the `dequant_per_channel->aten.convolution` pattern is substituted with a `dynamic_quant_convolution` node which accepts an `fp32 tensor activation` and the `int8 Mkldnn Tensor weight` as inputs.
* Step 3: Pattern match the pattern of
```
decomposed_dequantize_per_tensor int8_mkldnn_tensor_weight
\ /
                    dynamic_quant_convolution
|
optional(decomposed_quantize_per_tensor)
```
and substitute the pattern with the `QConv` external call node.
##### Primitive Cache Design
The oneDNN primitive cache is a feature of the quantized 1.X FX path of the oneDNN convolution implementation. It eliminates the overhead of re-creating oneDNN primitives. For the 2.X path, this is still an open design that needs to be explored further.
##### CPPWrap Support
CPPWrap can reduce the Python overhead of Inductor-generated code, and quantization also needs this feature. Since CPPWrap already supports fp32 and bf16 external calls, it should be able to support quantization naturally as long as the quantization external calls follow the same design as the fp32 and bf16 ones.
#### Memory-intensive operators
As for memory-intensive operators other than convolution and linear, we will rely on the Inductor C++ backend code-gen capability. Typically, there are three patterns we will see in quantization.
##### Dequant->op code gen
As we mentioned above, `decomposed dequantize_per_tensor` has been decomposed into primary operators of `to_fp32`, `sub`, and `mul` after this [PR-99131](https://github.com/pytorch/pytorch/pull/99131). These primary operators support code-gen and loop fusions with the following memory-intensive operator.
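In eager terms, the decomposed dequantize is roughly the following sketch (the real decomposition also carries quant_min/quant_max and dtype arguments):
```python
import torch

def dequantize_per_tensor_decomposed(x_u8, scale: float, zero_point: int):
    # to_fp32 -> sub -> mul: primary ops Inductor can fuse with the next elementwise op
    return (x_u8.to(torch.float32) - zero_point) * scale
```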
Take the `dequantize_per_tensor->relu` pattern as an example:
```
def fn(x):
tmp = torch.ops.quantized_decomposed.dequantize_per_tensor.tensor(
x,
scale=torch.tensor(0.1, dtype=torch.float),
zero_point=torch.tensor(1, dtype=torch.int64),
quant_min=0,
quant_max=255,
dtype=torch.uint8,
)
y = torch.relu(tmp)
return y
```
The generated code on an AVX-512 capable CPU will be:
```
cpp_fused_dequantize_per_tensor_lift_fresh_relu_0 = async_compile.cpp('''
#include "/tmp/torchinductor_root/mq/cmqzxwuyo7ryvun3egqos5jq5ak4fue7d2jbopbqs7pgpkhdpfh4.h"
extern "C" void kernel(const unsigned char* in_ptr0,
float* out_ptr0)
{
{
for(long i0=static_cast<long>(0L); i0<static_cast<long>(198144L); i0+=static_cast<long>(16L))
{
auto tmp0 = at::vec::load_uint8_as_float(in_ptr0 + static_cast<long>(i0));
auto tmp1 = (tmp0);
auto tmp2 = at::vec::Vectorized<float>(static_cast<float>(1.0));
auto tmp3 = tmp1 - tmp2;
auto tmp4 = at::vec::Vectorized<float>(static_cast<float>(0.10000000149011612));
auto tmp5 = tmp3 * tmp4;
auto tmp6 = at::vec::clamp_min(tmp5, decltype(tmp5)(0));
tmp6.store(out_ptr0 + static_cast<long>(i0));
}
#pragma omp simd simdlen(8)
for(long i0=static_cast<long>(198144L); i0<static_cast<long>(198147L); i0+=static_cast<long>(1L))
{
auto tmp0 = in_ptr0[static_cast<long>(i0)];
auto tmp1 = static_cast<float>(tmp0);
auto tmp2 = static_cast<float>(1.0);
auto tmp3 = tmp1 - tmp2;
auto tmp4 = static_cast<float>(0.10000000149011612);
auto tmp5 = decltype(tmp3)(tmp3 * tmp4);
auto tmp6 = tmp5 * (tmp5>0);
out_ptr0[static_cast<long>(i0)] = tmp6;
}
}
}
''')
```
##### op->quant code gen
As we mentioned above, the decomposed `quantize_per_tensor` has been decomposed into primary operators of `div`, `mul`, `add`, and `to_uint8` after this [PR-99131](https://github.com/pytorch/pytorch/pull/99131). These primary operators support code-gen and loop fusion with the preceding memory-intensive operator.
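In eager terms, the decomposed quantize is roughly the following sketch (again omitting some arguments the real decomposition carries):
```python
import torch

def quantize_per_tensor_decomposed(x_fp32, scale: float, zero_point: int, quant_min=0, quant_max=255):
    # div -> round -> add -> clamp -> to_uint8: primary ops fusable with the preceding op
    q = torch.round(x_fp32 / scale) + zero_point
    return torch.clamp(q, quant_min, quant_max).to(torch.uint8)
```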
Take the `relu->quantize_per_tensor` pattern as an example:
```
def fn(x):
tmp = torch.relu(x)
y = torch.ops.quantized_decomposed.quantize_per_tensor.tensor(
tmp,
scale=torch.tensor(0.1, dtype=torch.float),
zero_point=torch.tensor(1, dtype=torch.int64),
quant_min=0,
quant_max=255,
dtype=torch.uint8,
)
return y
```
The generated code on an AVX-512 capable CPU will be:
```
cpp_fused_lift_fresh_quantize_per_tensor_relu_0 = async_compile.cpp('''
#include "/tmp/torchinductor_root/mq/cmqzxwuyo7ryvun3egqos5jq5ak4fue7d2jbopbqs7pgpkhdpfh4.h"
extern "C" void kernel(const float* in_ptr0,
unsigned char* out_ptr0)
{
{
for(long i0=static_cast<long>(0L); i0<static_cast<long>(198144L); i0+=static_cast<long>(16L))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<long>(i0));
auto tmp1 = at::vec::clamp_min(tmp0, decltype(tmp0)(0));
auto tmp2 = at::vec::Vectorized<float>(static_cast<float>(0.10000000149011612));
auto tmp3 = tmp2.reciprocal();
auto tmp4 = at::vec::Vectorized<float>(static_cast<float>(1.0));
auto tmp5 = tmp3 * tmp4;
auto tmp6 = tmp1 * tmp5;
auto tmp7 = tmp6.round();
auto tmp8 = tmp7 + tmp4;
auto tmp9 = at::vec::Vectorized<float>(static_cast<float>(0.0));
auto tmp10 = at::vec::maximum(tmp8, tmp9);
auto tmp11 = at::vec::Vectorized<float>(static_cast<float>(255.0));
auto tmp12 = at::vec::minimum(tmp10, tmp11);
auto tmp13 = (tmp12);
at::vec::store_float_as_uint8(tmp13, out_ptr0 + static_cast<long>(i0));
}
#pragma omp simd simdlen(8)
for(long i0=static_cast<long>(198144L); i0<static_cast<long>(198147L); i0+=static_cast<long>(1L))
{
auto tmp0 = in_ptr0[static_cast<long>(i0)];
auto tmp1 = tmp0 * (tmp0>0);
auto tmp2 = static_cast<float>(0.10000000149011612);
auto tmp3 = 1 / tmp2;
auto tmp4 = static_cast<float>(1.0);
auto tmp5 = decltype(tmp3)(tmp3 * tmp4);
auto tmp6 = decltype(tmp1)(tmp1 * tmp5);
auto tmp7 = std::nearbyint(tmp6);
auto tmp8 = tmp7 + tmp4;
auto tmp9 = static_cast<float>(0.0);
auto tmp10 = max_propagate_nan(tmp8, tmp9);
auto tmp11 = static_cast<float>(255.0);
auto tmp12 = min_propagate_nan(tmp10, tmp11);
auto tmp13 = static_cast<unsigned char>(tmp12);
out_ptr0[static_cast<long>(i0)] = tmp13;
}
}
}
''')
```
##### Dequant->op->quant code gen
This case is a combination of the above two cases. Take `dequantize_per_tensor->relu->quantize_per_tensor` as an example:
```
def fn(x):
tmp = torch.ops.quantized_decomposed.dequantize_per_tensor.tensor(
x,
scale=torch.tensor(0.1, dtype=torch.float),
zero_point=torch.tensor(1, dtype=torch.int64),
quant_min=0,
quant_max=255,
dtype=torch.uint8,
)
tmp = torch.relu(tmp)
y = torch.ops.quantized_decomposed.quantize_per_tensor.tensor(
tmp,
scale=torch.tensor(0.1, dtype=torch.float),
zero_point=torch.tensor(1, dtype=torch.int64),
quant_min=0,
quant_max=255,
dtype=torch.uint8,
)
return y
```
The generated code should be:
```
cpp_fused_dequantize_per_tensor_lift_fresh_quantize_per_tensor_relu_0 = async_compile.cpp('''
#include "/tmp/torchinductor_root/mq/cmqzxwuyo7ryvun3egqos5jq5ak4fue7d2jbopbqs7pgpkhdpfh4.h"
extern "C" void kernel(const unsigned char* in_ptr0,
unsigned char* out_ptr0)
{
{
for(long i0=static_cast<long>(0L); i0<static_cast<long>(198144L); i0+=static_cast<long>(16L))
{
auto tmp0 = at::vec::load_uint8_as_float(in_ptr0 + static_cast<long>(i0));
auto tmp1 = (tmp0);
auto tmp2 = at::vec::Vectorized<float>(static_cast<float>(1.0));
auto tmp3 = tmp1 - tmp2;
auto tmp4 = at::vec::Vectorized<float>(static_cast<float>(0.10000000149011612));
auto tmp5 = tmp3 * tmp4;
auto tmp6 = at::vec::clamp_min(tmp5, decltype(tmp5)(0));
auto tmp7 = tmp4.reciprocal();
auto tmp8 = tmp7 * tmp2;
auto tmp9 = tmp6 * tmp8;
auto tmp10 = tmp9.round();
auto tmp11 = tmp10 + tmp2;
auto tmp12 = at::vec::Vectorized<float>(static_cast<float>(0.0));
auto tmp13 = at::vec::maximum(tmp11, tmp12);
auto tmp14 = at::vec::Vectorized<float>(static_cast<float>(255.0));
auto tmp15 = at::vec::minimum(tmp13, tmp14);
auto tmp16 = (tmp15);
at::vec::store_float_as_uint8(tmp16, out_ptr0 + static_cast<long>(i0));
}
#pragma omp simd simdlen(8)
for(long i0=static_cast<long>(198144L); i0<static_cast<long>(198147L); i0+=static_cast<long>(1L))
{
auto tmp0 = in_ptr0[static_cast<long>(i0)];
auto tmp1 = static_cast<float>(tmp0);
auto tmp2 = static_cast<float>(1.0);
auto tmp3 = tmp1 - tmp2;
auto tmp4 = static_cast<float>(0.10000000149011612);
auto tmp5 = decltype(tmp3)(tmp3 * tmp4);
auto tmp6 = tmp5 * (tmp5>0);
auto tmp7 = 1 / tmp4;
auto tmp8 = decltype(tmp7)(tmp7 * tmp2);
auto tmp9 = decltype(tmp6)(tmp6 * tmp8);
auto tmp10 = std::nearbyint(tmp9);
auto tmp11 = tmp10 + tmp2;
auto tmp12 = static_cast<float>(0.0);
auto tmp13 = max_propagate_nan(tmp11, tmp12);
auto tmp14 = static_cast<float>(255.0);
auto tmp15 = min_propagate_nan(tmp13, tmp14);
auto tmp16 = static_cast<unsigned char>(tmp15);
out_ptr0[static_cast<long>(i0)] = tmp16;
}
}
}
''')
```
## Additional context
This RFC focuses on x86 CPU devices. For other devices, a different quantizer must be added per device in the frontend; meanwhile, lowering the quantized graph via the Inductor compiler should be similar.
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
2,187 | 104,147 |
PyTorch2.0 ROCM LayerNorm HIP error: invalid configuration
|
module: rocm, triaged
|
### 🐛 Describe the bug
I am trying to inference a protein structure graph transformer model using PyTorch2 with rocm-5.4.3/5.5.1 on MI210 GPUs.
When I scale the batch size past 250 per GPU I get the error below. The error does not exist when we use PyTorch 1.13, where we can scale to a batch size of 1500 per GPU on the MI210s. However, it works with PyTorch 2 on our A100 GPUs, so it seems to be specific to PyTorch 2.0 on ROCm.
I have begun working with @srinivamd on the issue. He can discuss more about the current state of diagnosing and debugging the issue.
```python
Original Traceback (most recent call last):
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
output = module(*input, **kwargs)
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/var/local/proteins/ml/Negatron/Negatron/models/torch/oracle/stability_oracle.py", line 334, in forward
local_env_feats, aa_logits = self.backbone(
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/var/local/proteins/ml/Negatron/Negatron/models/torch/oracle/stability_oracle.py", line 220, in forward
x = layer(
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/var/local/proteins/ml/Negatron/Negatron/models/torch/oracle/transformer.py", line 271, in forward
attn_bias = self.distance_encoder(attn_bias_feat).permute(
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 190, in forward
return F.layer_norm(
File "/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/nn/functional.py", line 2515, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: HIP error: invalid configuration argument
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
```
### Versions
Collecting environment information...
/scratch/cluster/danny305/miniconda3/envs/amd/lib/python3.8/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
PyTorch version: 2.0.1+rocm5.4.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.4.22803-474e8620
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI210
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.4.22804
MIOpen runtime version: 2.19.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 3529.0520
CPU min MHz: 1500.0000
BogoMIPS: 4899.83
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] pytorch-triton-rocm==2.0.1
[pip3] torch==2.0.1+rocm5.4.2
[pip3] torchaudio==2.0.2+rocm5.4.2
[pip3] torchvision==0.15.2+rocm5.4.2
[conda] nomkl 3.0 0 anaconda
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch-triton-rocm 2.0.1 pypi_0 pypi
[conda] torch 2.0.1+rocm5.4.2 pypi_0 pypi
[conda] torchaudio 2.0.2+rocm5.4.2 pypi_0 pypi
[conda] torchvision 0.15.2+rocm5.4.2 pypi_0 pypi
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo
| 6 |
2,188 | 104,146 |
make_fx: torch.where scalar promotion burns in device
|
triaged, module: aotdispatch
|
```
import torch
from torch.fx.experimental.proxy_tensor import make_fx
def f(x):
return torch.where(x > 0.5, x, 1.0)
g = make_fx(f)(torch.rand(2, 3))
print(g)
```
yields
```
def forward(self, x_1):
gt = torch.ops.aten.gt.Scalar(x_1, 0.5)
scalar_tensor = torch.ops.aten.scalar_tensor.default(1.0, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'))
where = torch.ops.aten.where.self(gt, x_1, scalar_tensor); gt = x_1 = scalar_tensor = None
return where
```
Note the hard-coded cpu device, which we are getting as we trace the promotion of `1.0` from float -> tensor.
We just spent almost 2 weeks debugging IMAs in distributed training that were root-caused to this behavior during export :(. Now that we have root-caused it, it's trivial to detect device burn-in and ask people to change the model, but in the long run we'd like to solve this entirely. Any thoughts on how to do so?
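For reference, a sketch of the kind of check that can flag burn-in (an illustrative helper, not an existing PyTorch API):
```python
import torch

def find_burned_in_devices(gm: torch.fx.GraphModule):
    # Flag call_function nodes whose kwargs carry a concrete torch.device, e.g. the
    # scalar_tensor(..., device='cpu') that the float-to-tensor promotion produced above.
    offenders = []
    for node in gm.graph.nodes:
        if node.op == "call_function" and isinstance(node.kwargs.get("device"), torch.device):
            offenders.append((node.name, node.kwargs["device"]))
    return offenders

# find_burned_in_devices(g) on the graph above -> [('scalar_tensor', device(type='cpu'))]
```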
cc @ezyang, @bdhirsh
| 3 |
2,189 | 104,144 |
[ONNX] Support symbolic tracing without using external `FakeTensorMode` on public API
|
module: onnx, triaged, enhancement, release notes: onnx
|
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/pull/103865 assumes that if symbolic tracing is to be used, the user must provide the `FakeTensorMode` instance used to fakefy the model input and weight.
An alternative design would be to convert the model input and weights into fake tensors inside `torch.onnx.dynamo_export`, eliminating the need for an externally provided `fake_tensor`.
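A rough sketch of that alternative, with the conversion owned by the exporter instead of the caller (the helper below is illustrative, not the actual exporter internals):
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

def fakefy_inputs_internally(args):
    # The exporter could create its own FakeTensorMode and convert any real tensors
    # the user passed in, so no externally constructed mode is required.
    fake_mode = FakeTensorMode()
    fake_args = tuple(
        fake_mode.from_tensor(a) if isinstance(a, torch.Tensor) else a for a in args
    )
    return fake_mode, fake_args
```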
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
2,190 | 104,140 |
add fsdp checkpoint support for custom device
|
module: cpu, triaged, open source, Stale, ciflow/trunk, release notes: distributed (sharded), ciflow/inductor
|
Fixes https://github.com/pytorch/pytorch/issues/104390
Currently, distributed checkpointing has some hard-coded CUDA logic. We want to support checkpointing for custom devices (the privateuse1 backend), so we use the abstract device module to run those functions.
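A minimal sketch of the abstract device-module idea (assuming the device type string is known; the helper name is illustrative):
```python
import torch

def get_device_module(device_type: str):
    # "cuda" -> torch.cuda; a privateuse1 backend resolves to its registered module,
    # so checkpoint code can call e.g. current_device() without hard-coding CUDA.
    return getattr(torch, device_type)
```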
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 14 |
2,191 | 104,128 |
Python Crashes When Importing Torch With C API
|
needs reproduction, module: windows, triaged, module: third_party
|
### 🐛 Describe the bug
EDIT: Please read my comment to @malfet below. It isn't the case that Python is crashing. Rather, my C++ file seems to hang midway through importing torch when I call `PyImport_ImportModule`.
I'm trying to embed Python into a C++-based application using the [Python/C API][1]. If I don't include an embedded Python installation inside my project's folder, then the program uses my local Python install by default. Using the local install doesn't result in any errors and torch is able to import successfully. However, if I include a [Windows embeddable package (64-bit)][2] version of Python in my project's folder, then Python crashes after calling `PyImport_ImportModule("importsFile")` in my C++ file.
Inside of `importsFile.py`I have the following code:
```py
import numpy as np
import matplotlib
import torch
```
If I comment out `import torch`, then the other modules are able to import without error. However, if I leave `import torch` uncommented, then Python crashes.
I'm not sure if this is a bug on Python's side or PyTorch's side. Python only seems to crash when importing torch specifically, so it seems to me like it should be on PyTorch's side, but I may be wrong about this.
Is this a known issue?
How can I fix it?
[1]: https://docs.python.org/3.9/c-api/import.html
[2]: https://www.python.org/downloads/release/python-397/
### Versions
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Education
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti
Nvidia driver version: 535.98
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3701
DeviceID=CPU0
Family=107
L2CacheSize=6144
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3701
Name=AMD Ryzen 9 5900X 12-Core Processor
ProcessorType=3
Revision=8448
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchvision==0.15.2+cu118
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 18 |
2,192 | 104,119 |
Re-enable `test_typing`
|
module: typing, module: ci, triaged
|
### 🚀 The feature, motivation and pitch
Further to the discussion in #103376 and OH, we need some tests for static type checking with `reveal_type` on `Tensor` methods.
However, `test_typing` is currently in the blocklist and is broken, confirmed in https://github.com/pytorch/pytorch/issues/103376#issuecomment-1595010512 (cc @ezyang @malfet @rgommers @xuzhao9 @gramster @seemethere @pytorch/pytorch-dev-infra @huydhn).
My plan is to re-enable it, and then in #103376 I can add some more reveal tests.
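For context, the reveal tests are small snippets checked by mypy rather than executed, roughly like the following (the exact expected-type strings below are assumptions):
```python
import torch

t = torch.randn(2, 3)
reveal_type(t.shape)  # E: torch.Size  (reveal_type is a type-checker builtin, not run at runtime)
reveal_type(t.sum())  # E: Tensor
reveal_type(t > 0)    # E: Tensor
```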
### Alternatives
Create a new test for #103376, but it will include a lot of code duplication with the existing `test_typing`.
### Additional context
`test_typing` is added to the blocklist when the blocklist is introduced in #64246 c5d80e41b0271163aa1900c002dbf5d430c4b953. I'm not sure why it was blocked. If it's because it was broken, I can fix it (actually very easy); if it's because it was slow (can take 5~10 mins w/o cache), we will anyway need to spend that time for the reveal test; if it's other reasons, please tell me. (cc @malfet as the commit author).
| 2 |
2,193 | 104,113 |
Documentation building fails due to torchgen
|
module: build, module: docs, triaged, actionable
|
### 🐛 Describe the bug
I am trying to build the documentation locally, but i keep getting this error:
```
$ make html
```
output:
```
Traceback (most recent call last):
File "/home/osman/Documents/temp/pytorch/docs/source/scripts/build_opsets.py", line 74, in <module>
main()
File "/home/osman/Documents/temp/pytorch/docs/source/scripts/build_opsets.py", line 57, in main
aten_ops_list = get_aten()
File "/home/osman/Documents/temp/pytorch/docs/source/scripts/build_opsets.py", line 19, in get_aten
parsed_yaml = parse_native_yaml(NATIVE_FUNCTION_YAML_PATH, TAGS_YAML_PATH)
File "/home/osman/Documents/temp/pytorch/docs/.venv/lib/python3.10/site-packages/torchgen/gen.py", line 235, in parse_native_yaml
_GLOBAL_PARSE_NATIVE_YAML_CACHE[path] = parse_native_yaml_struct(
File "/home/osman/Documents/temp/pytorch/docs/.venv/lib/python3.10/site-packages/torchgen/gen.py", line 167, in parse_native_yaml_struct
error_check_native_functions(rs)
File "/home/osman/Documents/temp/pytorch/docs/.venv/lib/python3.10/site-packages/torchgen/gen.py", line 277, in error_check_native_functions
assert len(base_func_map[out_of_place_base_name]) > 0, (
AssertionError: resize_as_ is marked with tag: inplace_view. The codegen expects there to be a corresponding out-of-place view op with the name 'resize_as_' and matching schema, but it didn't find one.
make: *** [Makefile:25: opset] Error 1
```
None of the `make` targets I am interested in would build; they all gave the same error above.
I have installed the python requirements and katex inside a new virtual environment.
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 13.1.1 20230429
Clang version: 15.0.7
CMake version: version 3.26.4
Libc version: glibc-2.37
Python version: 3.10.11 (main, Jun 23 2023, 17:06:29) [GCC 13.1.1 20230429] (64-bit runtime)
Python platform: Linux-6.1.31-2-MANJARO-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce MX250
Nvidia driver version: 530.41.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
CPU family: 6
Model: 126
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 57%
CPU max MHz: 3900,0000
CPU min MHz: 400,0000
BogoMIPS: 2996,00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @malfet @seemethere @svekars @carljparker
| 7 |
2,194 | 104,107 |
Tensor to_sparse fails on large matrices
|
module: sparse, module: cuda, triaged
|
### 🐛 Describe the bug
```
>>> import torch
>>> torch.randn(46000, 46000, device='cuda:3').to_sparse()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0a0+936e930
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.111-14-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 54
On-line CPU(s) list: 0-53
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 54
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7662 64-Core Processor
Stepping: 0
CPU MHz: 1996.014
BogoMIPS: 3992.02
Virtualization: AMD-V
L1d cache: 3.4 MiB
L1i cache: 3.4 MiB
L2 cache: 27 MiB
L3 cache: 864 MiB
NUMA node0 CPU(s): 0-53
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_
tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch
_capabilities
Versions of relevant libraries:
[pip3] functorch==1.13.0a0+936e930
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.0a0+936e930
[pip3] torch-tensorrt==1.3.0a0
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0a0
[conda] Could not collect
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @ptrblck
| 8 |
2,195 | 104,106 |
batch size unexpectedly affects model inference on Mac M1
|
triaged, module: mps
|
### 🐛 Describe the bug
Batch size appears to unexpectedly affect the output of a model in inference mode when running on a MacBook Pro M1. This issue does not seem to occur elsewhere (e.g. on an older MacBook Pro with an Intel CPU).
I haven't found the exact crossover point that triggers this.
code to reproduce:
```python
import torch
from torchvision import models
# set up inceptionv3 model for feature extraction
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()
# create random data
input_data = torch.randn(128, 3, 299, 299)
# create large batch - all items
batch_128 = input_data
# create a couple of small batches
batch_1 = input_data[0:1]
batch_4 = input_data[0:4]
# just take first output of each and compare
output_128_0 = model(batch_128)[0]
output_1_0 = model(batch_1)[0]
output_4_0 = model(batch_4)[0]
# sanity check
assert torch.equal(batch_128[0], batch_1[0]), "data not equal... scenario set up badly"
# small batches should be the same
assert torch.equal(output_1_0, output_4_0), "output not equal between small batches"
# should produce same output? this breaks on macbook pro m1
assert torch.equal(
output_128_0, output_1_0
), "output not equal between small and large batches"
print("all fine")
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.4 (v3.10.4:9d38120e33, Mar 23 2022, 17:29:05) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-12.6.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[conda] nomkl 3.0 0
[conda] numpy 1.21.5 py39h42add53_3
[conda] numpy-base 1.21.5 py39hadd41eb_3
[conda] numpydoc 1.2 pyhd3eb1b0_0
```
and
```
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.4 (v3.10.4:9d38120e33, Mar 23 2022, 17:29:05) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-12.6.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.8.3.post1
[pip3] torch==1.13.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1
[conda] nomkl 3.0 0
[conda] numpy 1.21.5 py39h42add53_3
[conda] numpy-base 1.21.5 py39hadd41eb_3
[conda] numpydoc 1.2 pyhd3eb1b0_0
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
2,196 | 104,102 |
Inductor dynamic shapes output: NameError: name 's2' is not defined
|
good first issue, triaged, oncall: pt2, module: dynamic shapes, module: inductor
|
### 🐛 Describe the bug
Repro script:
```
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
import torch._inductor.inductor_prims
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
torch._inductor.config.fallback_random = True
torch._inductor.config.generate_intermediate_hooks = True
torch._inductor.config.triton.cudagraphs = True
isolate_fails_code_str = None
# torch version: 2.1.0a0+git2232cce
# torch cuda version: 12.0
# torch git version: 2232cce69c450db3163751e9238e1f34a6a2477e
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2023 NVIDIA Corporation
# Built on Fri_Jan__6_16:45:21_PST_2023
# Cuda compilation tools, release 12.0, V12.0.140
# Build cuda_12.0.r12.0/compiler.32267302_0
# GPU Hardware Info:
# NVIDIA PG509-210 : 8
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, arg5_1):
sym_size = torch.ops.aten.sym_size(arg5_1, 2); arg5_1 = None
mul_2 = 1024 * sym_size; sym_size = None
return (mul_2,)
def load_args(reader):
buf0 = reader.storage('d5d3fbeeaccb2bf4d8e4ba44de52bd663bfe4e41', 1572864, device=device(type='cuda', index=0), dtype_hint=torch.float16)
reader.tensor(buf0, (4, 256, 24, 32), dtype=torch.float16, is_leaf=True) # arg5_1
load_args._version = 0
mod = Repro()
if __name__ == '__main__':
from torch._dynamo.repro.after_aot import run_repro
run_repro(mod, load_args, accuracy=False, command='run', save_dir='/data/users/ezyang/b/pytorch/torch_compile_debug/run_2023_06_23_06_17_09_326738-pid_4066193/minifier/checkpoints', tracing_mode='real')
```
Run with `python repro.py run --tracing-mode=symbolic`
Fails with
```
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/repro.py", line 58, in <module>
run_repro(mod, load_args, accuracy=False, command='run', save_dir='/data/users/ezyang/b/pytorch/torch_compile_debug/run_2023_06_23_06_17_09_326738-pid_4066193/minifier/checkpoints', tracing_mode='real')
File "/data/users/ezyang/b/pytorch/torch/_dynamo/repro/after_aot.py", line 908, in run_repro
COMMAND_FNS[options.command](options, mod, load_args)
File "/data/users/ezyang/b/pytorch/torch/_dynamo/repro/after_aot.py", line 696, in repro_run
ref = compiled(args)
File "/data/users/ezyang/b/pytorch/torch/_inductor/codecache.py", line 313, in __call__
return self.get_current_callable()(inputs)
File "/data/users/ezyang/b/pytorch/torch/_inductor/compile_fx.py", line 510, in run
return old_compiled_artifact(new_inputs)
File "/data/users/ezyang/b/pytorch/torch/_inductor/cudagraph_trees.py", line 359, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, static_input_idxs, *args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_inductor/cudagraph_trees.py", line 383, in cudagraphify
return manager.add_function(
File "/data/users/ezyang/b/pytorch/torch/_inductor/cudagraph_trees.py", line 1853, in add_function
return fn, fn(inputs)
File "/data/users/ezyang/b/pytorch/torch/_inductor/cudagraph_trees.py", line 1673, in run
out = self._run(new_inputs, function_id)
File "/data/users/ezyang/b/pytorch/torch/_inductor/cudagraph_trees.py", line 1714, in _run
return self.run_eager(new_inputs, function_id)
File "/data/users/ezyang/b/pytorch/torch/_inductor/cudagraph_trees.py", line 1829, in run_eager
return node.run(new_inputs)
File "/data/users/ezyang/b/pytorch/torch/_inductor/cudagraph_trees.py", line 571, in run
out = self.wrapped_function.model(new_inputs)
File "/data/users/ezyang/b/pytorch/torch/_inductor/codecache.py", line 340, in _run_from_cache
return compiled_graph.compiled_artifact(inputs)
File "/tmp/torchinductor_ezyang/tz/ctzcpl3bpwtj5hhfuecve6mdgpbgkxmuazrhludfbutwb6oxbu6t.py", line 28, in call
return (1024*s2, )
NameError: name 's2' is not defined
```
Generated code is
```
from ctypes import c_void_p, c_long
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile
from torch import empty_strided, as_strided, device
from torch._inductor.codecache import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels
aten = torch.ops.aten
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
async_compile = AsyncCompile()
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, = args
args.clear()
assert_size_stride(arg0_1, (4, 256, 24, 32), (196608, 768, 32, 1))
return (1024*s2, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((4, 256, 24, 32), (196608, 768, 32, 1), device='cuda:0', dtype=torch.float16)
return print_performance(lambda: call([arg0_1]), times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.utils import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
```
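Presumably the wrapper needs to bind the free symbol from the runtime input before using it; a sketch of what the corrected `call` might look like:
```python
def call(args):
    arg0_1, = args
    args.clear()
    s2 = arg0_1.size(2)  # bind the symbolic dim instead of leaving `s2` undefined
    return (1024 * s2, )
```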
Repro was extracted from yolov3 although... it's not the actual bug I was trying to hit LOL.
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78
| 5 |
2,197 | 104,098 |
DISABLED test_graph_breaks (__main__.LoggingTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, rocm, mac
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_graph_breaks&suite=LoggingTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14493989183).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 18 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_graph_breaks`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_logging.py`
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78
| 27 |
2,198 | 104,095 |
(Possible) Memory leak on deleting a compiled model
|
high priority, triaged, has workaround, module: cuda graphs, oncall: pt2
|
### 🐛 Describe the bug
Even after deleting a compiled model, the allocated CUDA memory persists and is not freed.
It seems like it may be related to `nn.Sequential`, since I couldn't reproduce it without `nn.Sequential()`.
I wonder if there is something I am missing, or whether this is intended behavior?
### Error logs
```
tests/img2img/test_memory_leak.py test 0
[before compile ] alloc: 0.00244140625, reserved: 2.0, max reserved: 2.0
[after compile ] alloc: 57.00341796875, reserved: 92.0, max reserved: 92.0
[after delete] alloc: 57.00341796875, reserved: 92.0, max reserved: 92.0
test 1
[before compile ] alloc: 57.005859375, reserved: 92.0, max reserved: 92.0
[after compile ] alloc: 114.0068359375, reserved: 182.0, max reserved: 182.0
[after delete] alloc: 114.0068359375, reserved: 182.0, max reserved: 182.0
test 2
[before compile ] alloc: 114.00927734375, reserved: 182.0, max reserved: 182.0
[after compile ] alloc: 171.01025390625, reserved: 252.0, max reserved: 252.0
[after delete] alloc: 171.01025390625, reserved: 252.0, max reserved: 252.0
test 3
[before compile ] alloc: 171.0126953125, reserved: 252.0, max reserved: 252.0
[after compile ] alloc: 228.013671875, reserved: 342.0, max reserved: 342.0
[after delete] alloc: 228.013671875, reserved: 342.0, max reserved: 342.0
test 4
[before compile ] alloc: 228.01611328125, reserved: 342.0, max reserved: 342.0
[after compile ] alloc: 285.01708984375, reserved: 412.0, max reserved: 412.0
[after delete] alloc: 285.01708984375, reserved: 412.0, max reserved: 412.0
```
### Minified repro
```python
import torch
import torch.nn as nn
from time import sleep

def print_cuda_memory(name = "none"):
print(f"[{name}] alloc: {torch.cuda.memory_allocated() / 1024**2}, reserved: {torch.cuda.memory_reserved() / 1024**2}, max reserved: {torch.cuda.max_memory_reserved() / 1024 ** 2}")
def create_and_compile():
graph = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1)).cuda()
print_cuda_memory("before compile ")
graph = torch.compile(graph, mode="reduce-overhead")
graph(torch.randn((3, 3, 512, 512)).cuda())
print_cuda_memory("after compile ")
for i in range(5):
print(f"test {i}")
create_and_compile()
print_cuda_memory("after delete")
sleep(1)
```
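A possible workaround sketch (an unverified assumption, not a proper fix): explicitly reset dynamo's compilation state and empty the CUDA caching allocator once the compiled model goes out of scope:
```python
import gc
import torch
import torch._dynamo

def release_compiled_model_memory():
    torch._dynamo.reset()     # drop cached compiled artifacts / cudagraph state
    gc.collect()              # make sure the Python-side references are gone
    torch.cuda.empty_cache()  # return cached blocks to the driver
```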
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 6000
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2699.996
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 1.1 MiB
L1i cache: 1.1 MiB
L2 cache: 36 MiB
L3 cache: 49.5 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] mypy==1.3.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.0.1
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0.post1
[conda] numpy 1.23.5 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @mcarilli @msaroufim @wconstab @bdhirsh @anijain2305
| 8 |
2,199 | 104,093 |
RuntimeError: _ivalue_ INTERNAL ASSERT FAILED
|
oncall: jit
|
### 🐛 Describe the bug
A sudden error when using a scripted model in C++:
`RuntimeError: _ivalue_ INTERNAL ASSERT FAILED at "/home/loc/softwares/pytorch_v2.0.0/torch/csrc/jit/api/object.h":37, please report a bug to PyTorch. `
I built my torch from source "2.0.0a0+gitc263bd4 "
libtorch " libtorch-cxx11-abi-shared-with-deps-2.0.0+cu118"
cuda 11.8 on Ubuntu 22.04
Sample code:
```cpp
torch::Tensor example_input = torch::randn({1, 3, 224, 224});
// forward
auto desc = model_.forward({example_input}).toTensor();
std::cout << "des: " << desc.sizes() << std::endl;
// return descriptors
return desc;
```
Note:
I'm binding my class to Python and including:
```cpp
#include "torch/torch.h"
#include <torch/script.h>
#include <torch/extension.h>
```
### Versions
python=3.8.16
cuda=11.8
pytorch=2.0.0 (source)
torchvision=0.15.2
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,200 | 104,088 |
Regressions with torch.compile + amp + ddp with recent nightly builds
|
needs reproduction, oncall: distributed, module: cuda, triaged, module: ddp, oncall: pt2
|
Saw similar regressions with torch.compile + amp + ddp with nightly builds from 06/19/23 and 06/20/23. Wasn't seeing this issue before. This is on both 4x A100s (40GB) and 8xH100s
_Originally posted by @nanand2 in https://github.com/pytorch/pytorch/issues/101838#issuecomment-1600056539_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @ptrblck @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |