Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
2,901 | 99,037 |
AttributeError: type object 'torch._C._profiler.ProfilerActivity' has no attribute 'MPS'
|
triaged, oncall: profiler, module: mps
|
### 🚀 The feature, motivation and pitch
Add support for 'MPS' in Pytorch profiler
```
import torch
from torch.profiler import profile, record_function, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.MPS],
             record_shapes=True) as prof:
    for epoch in range(num_epochs):
        i = i + 1
```
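A hedged workaround sketch (not part of the original report): guard the activity list with `hasattr`, so profiling still runs on builds whose `ProfilerActivity` has no `MPS` member.
```python
import torch
from torch.profiler import profile, ProfilerActivity

# Fall back to CPU-only profiling when the MPS activity is unavailable,
# avoiding the AttributeError reported above.
activities = [ProfilerActivity.CPU]
if hasattr(ProfilerActivity, "MPS"):
    activities.append(ProfilerActivity.MPS)

with profile(activities=activities, record_shapes=True) as prof:
    torch.randn(64, 64) @ torch.randn(64, 64)

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```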
### Alternatives
_No response_
### Additional context
_No response_
cc @robieta @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98 @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
2,902 | 99,035 |
Issue on building from source: Remove -mfpu=neon option on MacOS with Apple silicon
|
module: build, triaged
|
### 🐛 Describe the bug
Get pytorch:
```
git clone -b v2.0.0 --recursive https://github.com/pytorch/pytorch
```
Install dependencies as described in the README.
I tried compiling with both GCC (real GCC, not an alias for Clang) and Clang; both complained about the -mfpu=neon option.
GCC:
```
gcc-12: error: unrecognized command-line option '-mfpu=neon'
```
Clang:
```
clang: warning: argument unused during compilation: '-mfpu=neon' [-Wunused-command-line-argument]
```
This option seems to be useless in this situation.
### Versions
pytorch 2.0.0
cc @malfet @seemethere
| 1 |
2,903 | 99,025 |
Is there a way to get the full call stack of pytorch from python to C/C++?
|
triaged
|
### 🚀 The feature, motivation and pitch
I can get the Python call stack of PyTorch with PyCG or another package, and the C/C++ call stack with perf, but how can I link them together?
PyTorch calls C/C++ functions/operators through dynamic dispatch, so it is hard to know which C/C++ functions/operators are called by a given PyTorch operator, e.g. the bmm operator.
Are there any tools that can profile the call stack or trace from PyTorch (top) down to the C/C++ operators/functions (bottom)?
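Not part of the original question, but a hedged sketch of one partial answer: `torch.profiler` with `with_stack=True` attaches the Python source stack to each dispatched ATen operator, which links Python frames to the dispatched C++ operator names (though not to arbitrary deeper C++ frames). The toy model and input are only for illustration.
```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(8, 8)
x = torch.randn(4, 8)

# with_stack=True records the Python call stack for every ATen op that is hit,
# so the exported trace shows which Python lines led to which dispatched operator.
with profile(activities=[ProfilerActivity.CPU], with_stack=True) as prof:
    model(x).sum().backward()

prof.export_chrome_trace("trace.json")  # inspect in chrome://tracing or Perfetto
print(prof.key_averages(group_by_stack_n=5).table(sort_by="cpu_time_total", row_limit=5))
```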
### Alternatives
_No response_
### Additional context
_No response_
| 4 |
2,904 | 99,023 |
Dtype changes while going from FX graph -> Torchscript
|
triaged, FX-TorchScript Compatibility, module: fx
|
### 🐛 Describe the bug
Python:
```
rand_idx = torch.randint(sy*sx, size=(hsy, wsx, 1), device=metric.device, dtype=torch.int64)
idx_buffer_view = torch.zeros(hsy, wsx, sy*sx, device=metric.device, dtype=torch.int64)
idx_buffer_view.scatter_(dim=2, index=rand_idx, src=-torch.ones_like(rand_idx, dtype=rand_idx.dtype))
```
The `FX graph` :
```
%randint : [#users=2] = call_function[target=torch.ops.aten.randint.default](args = (4, [32, 32, 1]), kwargs = {device: cpu, pin_memory: False})
%zeros : [#users=1] = call_function[target=torch.ops.aten.zeros.default](args = ([32, 32, 4],), kwargs = {dtype: torch.int64, device: cpu, pin_memory: False})
%ones_like : [#users=1] = call_function[target=torch.ops.aten.ones_like.default](args = (%randint,), kwargs = {dtype: torch.int64, pin_memory: False})
%neg : [#users=1] = call_function[target=torch.ops.aten.neg.default](args = (%ones_like,), kwargs = {})
%scatter_ : [#users=1] = call_function[target=torch.ops.aten.scatter_.src](args = (%zeros, 2, %randint, %neg), kwargs = {})
```
The `Torchscript` IR :
```
%836 = torch.aten.randint %int4, %835, %int4, %none_3, %cpu, %false : !torch.int, !torch.list<int>, !torch.int, !torch.none, !torch.Device, !torch.bool -> !torch.tensor
%838 = torch.aten.zeros %837, %int5, %none_3, %cpu, %false : !torch.list<int>, !torch.int, !torch.none, !torch.Device, !torch.bool -> !torch.tensor
%839 = torch.aten.ones_like %836, %int4, %none_3, %none_3, %false, %none_3 : !torch.tensor, !torch.int, !torch.none, !torch.none, !torch.bool, !torch.none -> !torch.tensor
%840 = torch.aten.neg %839 : !torch.tensor -> !torch.tensor
%841 = torch.aten.scatter_.src %838, %int2, %836, %840 : !torch.tensor, !torch.int, !torch.tensor, !torch.tensor -> !torch.tensor
```
`torch.aten.randint ` : 3rd argument is `dtype`, in this case it's `%int4` (int64)
`torch.aten.zeros` : 2nd argument is `dtype`, in this case it's `%int5`. (half)
`torch.aten.ones_like` : 2nd argument is `dtype`, in this case it's `%int4`. (int64)
The reason `torch.aten.zeros` ends up with dtype `fp16` despite `int64` in the Python code is that, when an FX graph is converted to TorchScript and imported into Torch-MLIR, a Python representation of the graph is included as a string parameter in the MLIR module. All the `torch.ops.aten.zeros` calls in this Python representation are set to `dtype = torch.float16` (that's a bug!).
You can observe that in the [Torchscript](https://drive.google.com/file/d/167bzYzDKfv6G1WR5Gnv0vAHDSUoGIVrN/view?usp=sharing) IR file.
The following is what you'd observe:
```
zeros = torch.ops.aten.zeros([32, 32, 4], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_1 = torch.ops.aten.zeros([2, 4096, 320], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_2 = torch.ops.aten.zeros([32, 32, 4], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_3 = torch.ops.aten.zeros([2, 4096, 320], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_4 = torch.ops.aten.zeros([32, 32, 4], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_5 = torch.ops.aten.zeros([2, 4096, 320], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_6 = torch.ops.aten.zeros([32, 32, 4], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_7 = torch.ops.aten.zeros([2, 4096, 320], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_8 = torch.ops.aten.zeros([32, 32, 4], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
zeros_9 = torch.ops.aten.zeros([2, 4096, 320], dtype = torch.float16, device = device(type='cpu'), pin_memory = False)
```
Also, here is the corresponding [FX graph](https://drive.google.com/file/d/1KQrOEtDzoUjf-H_oee_Un30QME05Lwfk/view?usp=sharing) file just for your reference.
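A hedged verification sketch (not from the original report; the helper `f`, its sizes, and the use of `make_fx` are assumptions for illustration): tracing a snippet like the one above shows that the FX side records `torch.int64` for the zeros call, which localizes the `float16` switch to the FX → TorchScript/Torch-MLIR conversion step.
```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def f(metric):
    # hsy, wsx, sy, sx chosen so that the shapes match the graph above
    hsy, wsx, sy, sx = 32, 32, 2, 2
    rand_idx = torch.randint(sy * sx, size=(hsy, wsx, 1), device=metric.device, dtype=torch.int64)
    idx_buffer_view = torch.zeros(hsy, wsx, sy * sx, device=metric.device, dtype=torch.int64)
    idx_buffer_view.scatter_(dim=2, index=rand_idx,
                             src=-torch.ones_like(rand_idx, dtype=rand_idx.dtype))
    return idx_buffer_view

gm = make_fx(f)(torch.zeros(1))
for node in gm.graph.nodes:
    if node.target == torch.ops.aten.zeros.default:
        print(node.kwargs.get("dtype"))  # prints torch.int64 on the FX side
```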
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230403+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.11.2 (main, Feb 8 2023, 14:49:25) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-gcp-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.0.76
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-lightning==2.0.1.post0
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230403+cu118
[pip3] torch-mlir==20230411.805
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.16.0.dev20230403+cu118
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @ezyang @SherlockNoMad @soumith @EikanWang @jgong5 @wenzhe-nrv
| 0 |
2,905 | 99,012 |
[BUG]Float32 attention mask not working with torch.autocast("cpu")
|
triaged, oncall: transformer/mha
|
### 🐛 Describe the bug
**Here is a minimal example of the bug, torch==2.0.0**
```
import torch
# torch==2.0.0, cuda=11.7
b = 1
n = 64
d = 256
query = torch.randn(n, b, d)
key = torch.randn(n, b, d)
value = torch.randn(n, b, d)
attn_mask = torch.zeros(n, n)
attention = torch.nn.MultiheadAttention(d, 8)
with torch.no_grad(), torch.autocast("cpu"):
output = attention(query, key, value, attn_mask=attn_mask, need_weights=False)
```
**RuntimeError: Expected attn_mask dtype to be bool or to match query dtype, but got attn_mask.dtype: float and query.dtype: c10::BFloat16 instead.**
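A hedged workaround sketch (not part of the original report): a boolean mask sidesteps the dtype check, since bool masks are accepted regardless of the autocast dtype, and a float mask of zeros is equivalent to an all-`False` bool mask.
```python
import torch

b, n, d = 1, 64, 256
query = torch.randn(n, b, d)
key = torch.randn(n, b, d)
value = torch.randn(n, b, d)
attn_mask = torch.zeros(n, n).to(torch.bool)  # all False: no positions are masked
attention = torch.nn.MultiheadAttention(d, 8)
with torch.no_grad(), torch.autocast("cpu"):
    output = attention(query, key, value, attn_mask=attn_mask, need_weights=False)
```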
### Versions
[pip3] numpy==1.23.4
[pip3] open-clip-torch==2.16.0
[pip3] torch==2.0.0
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] numpy 1.23.4 pypi_0 pypi
[conda] open-clip-torch 2.16.0 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 0 |
2,906 | 99,007 |
create_graph_input and add_grapharg should be combined into one function
|
triaged, module: dynamo
|
### 🐛 Describe the bug
@awgu came up with this: https://github.com/pytorch/pytorch/pull/98775#issuecomment-1503293308
It seems to me that the correct invariant is that they should always be called in lockstep. Maybe there is some funny business with constant source but that could be toggled with a kwarg. We should combine these two methods.
### Versions
master
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 3 |
2,907 | 98,978 |
[torch.compile] makes `linear(permute(input))` succeed for integer input in `torch.no_grad` context
|
triaged, module: inductor
|
### 🐛 Describe the bug
`torch.compile` makes `linear(permute(input))` succeed for integer input in `torch.no_grad` context
```py
import torch
import torch.nn as nn
torch.manual_seed(420)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(3, 3)
def forward(self, x):
x = self.fc1(x.permute(1, 2, 0))
return x
input_tensor = torch.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[10, 11, 12], [13, 14, 15], [16, 17, 18]], [[19, 20, 21], [22, 23, 24], [25, 26, 27]]])
func = Net().to('cpu')
with torch.no_grad():
jit_func = torch.compile(func)
print(jit_func(input_tensor))
#tensor([[[ 6.8708, -10.1139, -2.9715],
# [ 7.4253, -11.1465, -2.5976],
# [ 7.9799, -12.1791, -2.2237]],
# [[ 8.5344, -13.2118, -1.8498],
# [ 9.0889, -14.2444, -1.4759],
# [ 9.6435, -15.2770, -1.1020]],
# [[ 10.1980, -16.3097, -0.7281],
# [ 10.7526, -17.3423, -0.3542],
# [ 11.3071, -18.3750, 0.0197]]])
print(func(input_tensor))
# RuntimeError: expected scalar type Long but found Float
```
In the `torch.no_grad` context, `torch.compile` does some optimization that makes the call succeed even though the dtypes are mismatched.
But without `torch.no_grad`, `torch.compile` will just raise an exception
```py
import torch
import torch.nn as nn
torch.manual_seed(420)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(3, 3)
def forward(self, x):
x = self.fc1(x.permute(1, 2, 0))
return x
input_tensor = torch.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[10, 11, 12], [13, 14, 15], [16, 17, 18]], [[19, 20, 21], [22, 23, 24], [25, 26, 27]]])
func = Net().to('cpu')
jit_func = torch.compile(func)
print(jit_func(input_tensor))
# torch._dynamo.exc.TorchRuntimeError
```
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230404+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230404+cu118
[pip3] torchaudio==2.1.0.dev20230404+cu118
[pip3] torchvision==0.16.0.dev20230404+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+46672772b4 pypi_0 pypi
[conda] torch 2.1.0.dev20230404+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230404+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230404+cu118 pypi_0 pypi
```
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 1 |
2,908 | 98,977 |
[BE] Dedup the functorch skipOps mechanism and the common_method_invocations one
|
triaged, module: testing
|
This code was originally in functorch out-of-tree. Then, we [copy-pasted it into PyTorch](https://github.com/pytorch/pytorch/blob/1149ba5553dc3467afed7e1867dffc7065c742c7/torch/testing/_internal/common_methods_invocations.py#L20288-L20321). Since then, there have been a number of improvements to the functorch version, and functorch was upstreamed into PyTorch. However, we have not yet consolidated [the functorch version](https://github.com/pytorch/pytorch/blob/master/test/functorch/common_utils.py#L355) with the one in common_method_invocations.
We should consolidate the two, likely just by moving the functorch pieces into common_method_invocations.
| 0 |
2,909 | 98,976 |
Sparse Tensor: in-place operation on detached tensors no longer raised error
|
module: sparse, triaged
|
### 🐛 Describe the bug
According to the [documentation](https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html), in-place operations on detached tensors should raise an error. However, for sparse tensors they do not. This has implications for modifying the `.state_dict` (which detaches tensors by default) in place.
### Versions
[pip3] mypy==1.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchdata==0.6.0
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.15.0
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 2.0.0 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_3 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.0 py310_cu117 pytorch
[conda] torchdata 0.6.0 py310 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.15.0 py310 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.0 py310_cu117 pytorch
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 5 |
2,910 | 98,970 |
[torch.compile] `replace_fx`
|
triaged, module: inductor
|
### 🐛 Describe the bug
`torch.compile` replaces `dropout` with a different implementation for performance. However, the original `dropout` raises an exception if the input `dtype` is integer, whereas the compiled version accepts it and returns a value without any error.
Notably, I think the dropout value returned by the compiled version is wrong. Please correct me if I am wrong.
```py
import torch
import torch.nn as nn
torch.manual_seed(420)
class MyModel(torch.nn.Module):
def forward(self, x):
x = x * 2
x = torch.nn.functional.dropout(x, p=0.5)
x = torch.relu(x)
return x
example_inputs = torch.tensor([[1, 2, 3], [4, 5, 6]])
func = MyModel()
jit_func = torch.compile(func)
print(jit_func(example_inputs))
# tensor([[ 0, 0, 12],
# [16, 0, 0]])
print(func(example_inputs))
# RuntimeError: result type Float can't be cast to the desired output type Long
```
This is caused by `replace_fx` in `overrides.py`.
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230404+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230404+cu118
[pip3] torchaudio==2.1.0.dev20230404+cu118
[pip3] torchvision==0.16.0.dev20230404+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+46672772b4 pypi_0 pypi
[conda] torch 2.1.0.dev20230404+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230404+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230404+cu118 pypi_0 pypi
```
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 2 |
2,911 | 98,955 |
Please verify 1.14.0 ONNX release candidate on TestPyPI
|
module: onnx, triaged
|
### 🚀 The feature, motivation and pitch
Hi ONNX partner,
We have released TestPyPI packages of ONNX 1.14.0: https://test.pypi.org/project/onnx/1.14.0rc1/ (ONNX 1.14.0rc1 is the latest version number for testing now).
Please verify it and let us know about any problems. Thank you for your help!
### Alternatives
_No response_
### Additional context
_No response_
| 3 |
2,912 | 98,948 |
behaviour of `torch.tensor()` changes after editing `Tensor.__getitem__`
|
triaged, module: python frontend
|
### 🐛 Describe the bug
Bit of a weird one, not sure if this is something interesting but just in case:
```python
import torch
torch.tensor([torch.tensor(0)]) # works fine
torch.Tensor.__getitem__ = None
torch.tensor([torch.tensor(0)]) # fails
```
For some reason the second `torch.tensor([torch.tensor(0)])` fails, specifically because of the change to `__getitem__`. The error message is ```TypeError: len() of a 0-d tensor```
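A hedged side note (not part of the original report): when experimenting with this override, keeping a reference to the original method lets normal behavior be restored afterwards.
```python
import torch

orig_getitem = torch.Tensor.__getitem__
torch.Tensor.__getitem__ = None
try:
    torch.tensor([torch.tensor(0)])   # fails: TypeError: len() of a 0-d tensor
except TypeError as e:
    print(e)
finally:
    torch.Tensor.__getitem__ = orig_getitem

torch.tensor([torch.tensor(0)])       # expected to work again after restoring
```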
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.16 (main, Dec 7 2022, 01:11:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2199.998
BogoMIPS: 4399.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchdata==0.6.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @albanD
| 9 |
2,913 | 98,947 |
Add `torch.cat` support for torch native sparse tensors. (Need for PyG)
|
module: sparse, feature, triaged
|
### 🚀 The feature, motivation and pitch
Up until recently, PyG required torch-sparse, which provided the sparse tensor math needed for GNNs. PyG 2.3+ has the goal of dropping torch-sparse and the other legacy torch-* packages that PyG used to require. To drop torch-sparse, however, we are relying on upstream PyTorch native sparse tensors. One required piece of functionality is the ability to `torch.cat` them together, which is currently not supported:
This issue tracks the failure w/ reproduction steps: https://github.com/pytorch/pytorch/issues/98861
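A hedged sketch (an assumption for illustration, not the reproduction from the linked issue; the CSR layout choice in particular is a guess) that exercises `torch.cat` on PyTorch-native sparse tensors:
```python
import torch

a = torch.eye(3).to_sparse_csr()
b = torch.eye(3).to_sparse_csr()
try:
    print(torch.cat([a, b], dim=0))
except (RuntimeError, NotImplementedError) as e:
    print(f"torch.cat failed: {e}")
```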
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 10 |
2,914 | 98,942 |
[torch.fx] Upgrade on node info
|
triaged, module: fx
|
### 🚀 The feature, motivation and pitch
Torch.fx is a very useful tool for analysing and understanding deep models, but from an analysis perspective it would also be valuable to know the dimensions of the tensors that pass through each node, for example to understand where memory problems might arise.
What I'd like is for each node to carry two additional attributes, input_size and output_size, so that this information can be retrieved easily during an analysis with torch.fx.
I've seen something in torch.fx.passes.shape_prop that seems to analyze tensor shapes, but I don't see documentation on how to use it. The same goes for the Proxy: it holds some useful information, but there is no obvious way to access it.
Maybe a solution already exists, but I prefer to suggest one in case it doesn't.
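A hedged usage sketch of `torch.fx.passes.shape_prop.ShapeProp` (added here for illustration, not part of the original request; the toy module `M` is an assumption): after propagation, each node carries a `tensor_meta` entry with the shape and dtype of its output.
```python
import torch
import torch.fx
from torch.fx.passes.shape_prop import ShapeProp

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

gm = torch.fx.symbolic_trace(M())
ShapeProp(gm).propagate(torch.randn(2, 3))  # run with a sample input to record metadata

for node in gm.graph.nodes:
    meta = node.meta.get("tensor_meta")
    if meta is not None:
        print(node.name, tuple(meta.shape), meta.dtype)
```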
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @SherlockNoMad @soumith @EikanWang @jgong5 @wenzhe-nrv
| 2 |
2,915 | 98,939 |
torch.dist with minus norm returns tensor(0.), while with -inf can return result
|
module: distributions, triaged
|
### 🐛 Describe the bug
# with minus norm
```
import torch
arg_1_tensor = torch.rand([4], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
print(arg_1)
arg_2_tensor = torch.rand([4], dtype=torch.float32)
arg_2 = arg_2_tensor.clone()
print(arg_2)
arg_3 = -100
res = torch.dist(input=arg_1,other=arg_2,p=arg_3,)
print(res)
```
```
tensor([0.3692, 0.1006, 0.4169, 0.5297])
tensor([0.4667, 0.3731, 0.2566, 0.8941])
tensor(0.)
```
# with -inf norm
```
import torch
inf = float('inf')
arg_1_tensor = torch.rand([4], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
print(arg_1)
arg_2_tensor = torch.rand([4], dtype=torch.float32)
arg_2 = arg_2_tensor.clone()
print(arg_2)
arg_3 = -inf
res = torch.dist(input=arg_1,other=arg_2,p=arg_3,)
print(res)
```
```
tensor([0.2863, 0.5415, 0.4990, 0.6137])
tensor([0.1516, 0.4867, 0.1853, 0.8488])
tensor(0.0548)
```
I find this happens when norm is less than -40.
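A hedged numeric sketch (an explanation added here, not part of the original report) of why very negative `p` collapses to zero while `-inf` does not: `torch.dist(x, y, p)` computes `(sum_i |x_i - y_i|**p) ** (1/p)`, and for `p = -100` with element-wise differences below 1, `|diff|**p` overflows float32 to `inf`, whose `1/p`-th power is 0; `p = -inf` instead takes the minimum `|diff|` directly.
```python
import torch

d = torch.tensor([0.1], dtype=torch.float32)  # a typical element-wise difference
print(d ** -100)                              # tensor([inf]) -- float32 overflow
print((d ** -100).sum() ** (1.0 / -100))      # tensor(0.)
```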
### Versions
pytorch version: 2.0.0
cuda : 118
cc @fritzo @neerajprad @alicanb @nikitaved
| 2 |
2,916 | 98,937 |
TracingContext.get().frame_summary_stack doesn't produce full stack trace
|
triaged, module: dynamo
|
### 🐛 Describe the bug
If you are here because of an error message, comment on the issue to describe how you were affected, so we can help prioritize this issue.
TracingContext.get().frame_summary_stack doesn't report full backtraces; you'll only get frames from inside the region of code that dynamo traced through. In principle, we could also collect the regular backtrace from before callback entry and report that too.
@Chillee notes that it is good not to give too much information, and indeed the stack trace before calling into the model is probably not that useful. But if we have graph breaks inside the model, we may have lost useful context. Maybe only want the partial stack trace up to torch.compile? Not going to do it unless someone shouts.
### Versions
master
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 0 |
2,917 | 98,929 |
torch.sparse_csr_tensor() stops gradients
|
module: sparse, triaged
|
### 🐛 Describe the bug
I would expect the following code to produce a gradient w.r.t. `a`, but instead it stops the gradient flow at `torch.sparse_csr_tensor()`.
```python
import torch
a = torch.randn(3, requires_grad=True)
b = torch.sparse_csr_tensor(
torch.tensor([0, 2, 2, 3]),
torch.tensor([0, 1, 2]),
a,
)
print(b.grad_fn)
torch.sum(b).backward()
```
Accordingly, this is the output:
```
None
Traceback (most recent call last):
File "/path/csr_grad.py", line 58, in <module>
main_csr()
File "/path/csr_grad.py", line 36, in main_csr
torch.sum(b).backward()
File "/venv/lib64/python3.11/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/venv/lib64/python3.11/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
The same code works for COO tensors, but I need the performance benefits of CSR tensors.
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Fedora Linux 37 (KDE Plasma) (x86_64)
GCC version: (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.36
Python version: 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (64-bit runtime)
Python platform: Linux-6.2.7-200.fc37.x86_64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 SUPER
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.1
/usr/lib64/libcudnn_adv_infer.so.8.8.1
/usr/lib64/libcudnn_adv_train.so.8.8.1
/usr/lib64/libcudnn_cnn_infer.so.8.8.1
/usr/lib64/libcudnn_cnn_train.so.8.8.1
/usr/lib64/libcudnn_ops_infer.so.8.8.1
/usr/lib64/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 2700 Eight-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 67%
CPU max MHz: 3200.0000
CPU min MHz: 1550.0000
BogoMIPS: 6387.18
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 512 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.0
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 6 |
2,918 | 98,928 |
Changing module attributes doesn't retrigger compilation
|
high priority, triaged, bug, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
When compiling a method whose execution flow depends on a module attribute value, changing that attribute value outside of the method doesn't retrigger compilation. However, replacing the module attribute with `self.training` or a global variable behaves as expected and recompiles when the value changes.
The documentation seems to say that mutating attributes should be fully supported: https://pytorch.org/get-started/pytorch-2.0/#reading-and-updating-attributes
Repro script:
```python
import torch
from torch import nn
import torch._dynamo
import logging
torch._dynamo.config.log_level = logging.DEBUG
torch._dynamo.config.verbose = True
def check(m, attr: str) -> None:
inp = torch.ones(1)
compiled_value = m(inp)
eager_value = m._orig_mod(inp)
prefix = "✅" if (compiled_value == eager_value).all().item() else "❌"
print(f"{prefix} {attr}={getattr(m._orig_mod, attr)}: compiled={compiled_value}, eager: {eager_value}")
print("=== Foo attribute test ===")
class MyModuleFoo(nn.Module):
foo: bool
def __init__(self):
super().__init__()
self.foo = True
def forward(self, x: torch.Tensor) -> torch.Tensor:
if self.foo:
return x * 123
else:
return x * 0
m = torch.compile(MyModuleFoo())
check(m, "foo")
m._orig_mod.foo = False
check(m, "foo")
print("=== Training attribute test ===")
class MyModuleTraining(nn.Module):
def forward(self, x: torch.Tensor) -> torch.Tensor:
if self.training:
return x * 123
else:
return x * 0
m = torch.compile(MyModuleTraining())
check(m, "training")
m._orig_mod.training = False
check(m, "training")
```
Output:
```console
=== Foo attribute test ===
[2023-04-12 11:40:57,882] torch._dynamo.eval_frame: [DEBUG] skipping __init__ /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/contextlib.py
[2023-04-12 11:40:57,882] torch._dynamo.eval_frame: [DEBUG] skipping __enter__ /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/contextlib.py
[2023-04-12 11:40:57,883] torch._dynamo.eval_frame: [DEBUG] skipping __init__ /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/contextlib.py
[2023-04-12 11:40:57,883] torch._dynamo.eval_frame: [DEBUG] skipping __enter__ /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/contextlib.py
[2023-04-12 11:40:57,883] torch._dynamo.eval_frame: [DEBUG] skipping enable_dynamic /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py
[2023-04-12 11:40:57,899] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward
[2023-04-12 11:40:57,899] torch._dynamo.symbolic_convert: [DEBUG] TRACE starts_line test_compile_repro.py:27
[2023-04-12 11:40:57,899] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST self []
[2023-04-12 11:40:57,899] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR foo [NNModuleVariable()]
[2023-04-12 11:40:57,900] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_JUMP_IF_FALSE 14 [ConstantVariable(bool)]
[2023-04-12 11:40:57,900] torch._dynamo.symbolic_convert: [DEBUG] TRACE starts_line test_compile_repro.py:28
[2023-04-12 11:40:57,900] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x []
[2023-04-12 11:40:57,900] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST 123 [TensorVariable()]
[2023-04-12 11:40:57,901] torch._dynamo.symbolic_convert: [DEBUG] TRACE BINARY_MULTIPLY None [TensorVariable(), ConstantVariable(int)]
[2023-04-12 11:40:57,909] torch._dynamo.symbolic_convert: [DEBUG] TRACE RETURN_VALUE None [TensorVariable()]
[2023-04-12 11:40:57,909] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2023-04-12 11:40:57,909] torch._dynamo.symbolic_convert: [DEBUG] RETURN_VALUE triggered compile
[2023-04-12 11:40:57,909] torch._dynamo.output_graph: [DEBUG] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file test_compile_repro.py, line 28 in forward>])
[2023-04-12 11:40:57,910] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2023-04-12 11:40:59,805] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 0
[2023-04-12 11:40:59,973] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
[2023-04-12 11:40:59,975] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
[2023-04-12 11:40:59,980] torch._dynamo.eval_frame: [DEBUG] skipping _fn /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py
[2023-04-12 11:40:59,980] torch._dynamo.eval_frame: [DEBUG] skipping nothing /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py
[2023-04-12 11:40:59,981] torch._dynamo.eval_frame: [DEBUG] skipping __exit__ /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/contextlib.py
[2023-04-12 11:40:59,981] torch._dynamo.eval_frame: [DEBUG] skipping __exit__ /Volumes/Data/bin/miniconda3/envs/torch2/lib/python3.8/contextlib.py
✅ foo=True: compiled=tensor([123.]), eager: tensor([123.])
❌ foo=False: compiled=tensor([123.]), eager: tensor([0.])
=== Training attribute test ===
[2023-04-12 11:41:00,039] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward
[2023-04-12 11:41:00,039] torch._dynamo.symbolic_convert: [DEBUG] TRACE starts_line test_compile_repro.py:42
[2023-04-12 11:41:00,039] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST self []
[2023-04-12 11:41:00,039] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR training [NNModuleVariable()]
[2023-04-12 11:41:00,040] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_JUMP_IF_FALSE 14 [ConstantVariable(bool)]
[2023-04-12 11:41:00,040] torch._dynamo.symbolic_convert: [DEBUG] TRACE starts_line test_compile_repro.py:43
[2023-04-12 11:41:00,040] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x []
[2023-04-12 11:41:00,040] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST 123 [TensorVariable()]
[2023-04-12 11:41:00,041] torch._dynamo.symbolic_convert: [DEBUG] TRACE BINARY_MULTIPLY None [TensorVariable(), ConstantVariable(int)]
[2023-04-12 11:41:00,042] torch._dynamo.symbolic_convert: [DEBUG] TRACE RETURN_VALUE None [TensorVariable()]
[2023-04-12 11:41:00,043] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2023-04-12 11:41:00,043] torch._dynamo.symbolic_convert: [DEBUG] RETURN_VALUE triggered compile
[2023-04-12 11:41:00,043] torch._dynamo.output_graph: [DEBUG] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file test_compile_repro.py, line 43 in forward>])
[2023-04-12 11:41:00,044] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2023-04-12 11:41:00,053] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 1
[2023-04-12 11:41:00,061] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 1
[2023-04-12 11:41:00,061] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
✅ training=True: compiled=tensor([123.]), eager: tensor([123.])
[2023-04-12 11:41:00,064] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward
[2023-04-12 11:41:00,064] torch._dynamo.symbolic_convert: [DEBUG] TRACE starts_line test_compile_repro.py:42
[2023-04-12 11:41:00,064] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST self []
[2023-04-12 11:41:00,064] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR training [NNModuleVariable()]
[2023-04-12 11:41:00,064] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_JUMP_IF_FALSE 14 [ConstantVariable(bool)]
[2023-04-12 11:41:00,064] torch._dynamo.symbolic_convert: [DEBUG] TRACE starts_line test_compile_repro.py:45
[2023-04-12 11:41:00,064] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x []
[2023-04-12 11:41:00,065] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST 0 [TensorVariable()]
[2023-04-12 11:41:00,065] torch._dynamo.symbolic_convert: [DEBUG] TRACE BINARY_MULTIPLY None [TensorVariable(), ConstantVariable(int)]
[2023-04-12 11:41:00,065] torch._dynamo.symbolic_convert: [DEBUG] TRACE RETURN_VALUE None [TensorVariable()]
[2023-04-12 11:41:00,065] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2023-04-12 11:41:00,066] torch._dynamo.symbolic_convert: [DEBUG] RETURN_VALUE triggered compile
[2023-04-12 11:41:00,066] torch._dynamo.output_graph: [DEBUG] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file test_compile_repro.py, line 45 in forward>])
[2023-04-12 11:41:00,066] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2023-04-12 11:41:00,073] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 2
[2023-04-12 11:41:00,082] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 2
[2023-04-12 11:41:00,082] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
✅ training=False: compiled=tensor([0.]), eager: tensor([0.])
```
### Error logs
_No response_
### Minified repro
_No response_
### Versions
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.8)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.13 (default, Mar 28 2022, 06:16:26) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Versions of relevant libraries:
[pip3] flake8==3.9.2
[pip3] flake8-junit-report==2.1.0
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==2.0.0
[pip3] torchvision==0.15.1
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 9 |
2,919 | 98,926 |
add gradscaler on CPU
|
module: cpu, open source, module: half, module: amp (automated mixed precision), ciflow/trunk, ciflow/periodic
|
Just test gradscaler on CPU
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel
| 3 |
2,920 | 98,925 |
Request for deterministic support for reflection_pad2d_backward_cuda
|
module: cuda, triaged, enhancement, module: padding
|
### 🚀 The feature, motivation and pitch
Currently, the `reflection_pad2d_backward_cuda` operation used in neural network training cannot be computed deterministically on the GPU. This causes issues for users who have set `torch.use_deterministic_algorithms(True)` and require deterministic behavior in their training process.
I would like to request that deterministic support be added for `reflection_pad2d_backward_cuda` so that users can perform neural network training deterministically on the GPU. Thank you for your consideration.
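A hedged reproduction sketch (not included in the original request) of the error this feature request is about:
```python
import torch

torch.use_deterministic_algorithms(True)
x = torch.randn(1, 1, 4, 4, device="cuda", requires_grad=True)
y = torch.nn.functional.pad(x, (1, 1, 1, 1), mode="reflect")
# Expected (per the request above) to raise:
# RuntimeError: reflection_pad2d_backward_cuda does not have a deterministic implementation ...
y.sum().backward()
```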
### Alternatives
_No response_
### Additional context
_No response_
cc @ngimel
| 1 |
2,921 | 98,924 |
Integrate open device privateuse1 customized method registration
|
triaged, module: backend
|
### 🚀 The feature, motivation and pitch
Currently, if a user registers the privateuse1 backend, they may need to perform the following operations:
1. Call `torch._register_device_module` to register the device module.
2. Call `torch.serialization.register_package` to register the `_tag` and `_deserialize` methods customized by privateuse1 backend.
There may be more such methods; you need to register them in multiple places in the privateuse1 integration patch.
### Alternatives
Provide a unified registration entry point so that users only need to implement the corresponding methods, as in [#98920](https://github.com/pytorch/pytorch/pull/98920).
### Additional context
_No response_
| 1 |
2,922 | 98,921 |
Unable to load MultiStepLR with torch.load(weights_only=True)
|
module: serialization, triaged
|
### 🐛 Describe the bug
`MultiStepLR.state_dict()` contains an instance of `collections.Counter`, but `collections.Counter` is not included in the safelist of weights_only_unpickler.
So, errors occur when loading checkpoints depending on the class of the LR scheduler.
reproduction code
```python
import torch
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [10, 20])
# print(scheduler.state_dict())
torch.save({
"model_state_dict": model.state_dict(),
"optimizer_state_dict": optimizer.state_dict(),
"scheduler_state_dict": scheduler.state_dict(),
}, "./checkpoint.pth")
print("SAVE")
torch.load("./checkpoint.pth", weights_only=True)
print("LOAD")
```
output
```
SAVE
Traceback (most recent call last):
File "/home/nagadomi/dev/nunif/tmp/weights_only/bug_multistep.py", line 16, in <module>
torch.load("./checkpoint.pth", weights_only=True)
File "/home/nagadomi/dev/nunif/.venv/lib/python3.10/site-packages/torch/serialization.py", line 808, in load
raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
_pickle.UnpicklingError: Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution.Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported class collections.Counter
```
`scheduler.state_dict()`
```
{'milestones': Counter({10: 1, 20: 1}), 'gamma': 0.1, 'base_lrs': [0.001], 'last_epoch': 0, 'verbose': False, '_step_count': 1, '_get_lr_called_within_step': False, '_last_lr': [0.001]}
```
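A hedged workaround sketch (not part of the original report): round-trip the `Counter` through a plain dict, which the `weights_only` unpickler does accept.
```python
import collections
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [10, 20])

sd = scheduler.state_dict()
sd["milestones"] = dict(sd["milestones"])                  # Counter -> plain dict
torch.save({"scheduler_state_dict": sd}, "./checkpoint.pth")

ckpt = torch.load("./checkpoint.pth", weights_only=True)   # loads without the error
sd = ckpt["scheduler_state_dict"]
sd["milestones"] = collections.Counter(sd["milestones"])   # restore before loading
scheduler.load_state_dict(sd)
```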
### Versions
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-38-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] perlin-numpy==0.0.0
[pip3] torch==2.0.0
[pip3] torchaudio==0.13.1
[pip3] torchdata==0.6.0
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1
[conda] Could not collect
cc @mruberry
| 0 |
2,923 | 98,917 |
Change module to module_ in torch/csrc/api/include/torch/python.h
|
module: build, triaged
|
### 🚀 The feature, motivation and pitch
As of 2023-04-12 on the master branch, 'module' is used as a variable name in torch/csrc/api/include/torch/python.h; however, 'module' has become a keyword since C++20. Consider replacing `module` with `module_`?
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere
| 2 |
2,924 | 98,907 |
Move template code to header
|
module: cpp, triaged
|
### 🐛 Describe the bug
The template code https://github.com/pytorch/pytorch/blob/39fd7f945f292bea7af411946f75417966470359/torch/csrc/api/src/nn/modules/batchnorm.cpp#L17-L34 appears in the `.cpp` file.
This makes extending `BatchNormImplBase` with libtorch troublesome.
Unless the linked code is copied into the user's own source, the linker will not be able to find the implementations of `pretty_print`.
### Versions
latest master
cc @jbschlosser
| 0 |
2,925 | 98,904 |
Test failure: TestCommonCPU.test_python_ref__refs_abs_cpu_complex32
|
module: tests, triaged
|
### 🐛 Describe the bug
When I execute the following test case on s390x, it fails:
```
% python test/test_ops.py TestCommonCPU.test_python_ref__refs_abs_cpu_complex32
...
----------------------------------------------------------------------
Ran 1 test in 3.065s
FAILED (unexpected successes=1)
```
When I executed the same test on x86, it passed.
```
$ python test/test_ops.py TestCommonCPU.test_python_ref__refs_abs_cpu_complex32
...
x
----------------------------------------------------------------------
Ran 1 test in 0.920s
OK (expected failures=1)
```
So, this test suite expects one of the tests to produce different results (it is marked as an expected failure), but on s390x it unexpectedly passes. I am not sure yet why this test case is expected to generate different results.
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+gite3df6a7
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (s390x)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-s390x-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.1.0a0+gite3df6a7
[conda] Could not collect
```
cc @mruberry
| 1 |
2,926 | 98,888 |
Changes to TorchScript autodiff changing default behavior are no longer accepted
|
triage review, oncall: jit
|
TorchScript support is very limited currently, and changes to autodiff can (and did!) lead to very hard to diagnose bugs. To prevent these bugs, and in view of TorchScript not being actively developed, we no longer accept PRs that expand or change functionality of autodiff. If it's absolutely necessary to change autodiff for your backend, the changes should be behind a config flag and not enabled by default.
To reviewers: please don't accept PRs modifying TS autodiff.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
2,927 | 98,882 |
[PT2] AOTAutograd de-dups but skips de-dup guards for DDP
|
triaged, oncall: pt2
|
Run DDP with a shared buffer (different TorchDynamo `Source`):
<details>
<summary> Repro Script </summary>
```
"""
torchrun --standalone --nproc_per_node=1 test/dup_repro.py
TORCH_LOGS=aot,dynamo torchrun --standalone --nproc_per_node=1 test/dup_repro.py
"""
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.nn.parallel import DistributedDataParallel as DDP
USE_FSDP = False
print(f"USE_FSDP={USE_FSDP}")
os.environ["TORCHDYNAMO_PRINT_GUARDS"] = "1"
class BufModule(nn.Module):
def __init__(self) -> None:
super().__init__()
self.register_buffer(
"_buf", torch.randn((3,), requires_grad=False, device="cuda")
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return x + self._buf
class Model(nn.Module):
def __init__(self) -> None:
super().__init__()
# Define a parameter since DDP requires at least one
self._param = nn.Parameter(torch.randn((1,), device="cuda"))
self._buf_module = BufModule()
# Use same tensor but with different source
self.register_buffer("_buf", self._buf_module._buf)
def forward(self, x: torch.Tensor) -> torch.Tensor:
z = x + self._buf
z = self._buf_module(z)
z += self._param
return z
dist.init_process_group(backend="nccl")
gpu_id = int(os.environ["LOCAL_RANK"])
device = f"cuda:{gpu_id}"
torch.cuda.set_device(device)
model = Model()
if USE_FSDP:
model = FSDP(model, use_orig_params=True)
else:
model = DDP(model, device_ids=[dist.get_rank()])
model = torch.compile(model)
if USE_FSDP:
assert model._buf is model._buf_module._buf
else:
assert model.module._buf is model.module._buf_module._buf
inp = torch.randn((2, 3), device="cuda")
model(inp)
```
</details>
DDP forward graph:
```
====== Forward graph 0 ======
<eval_with_key>.7 class GraphModule(torch.nn.Module):
def forward(self, primals_1: f32[1], primals_2: f32[3], primals_3: f32[2, 3]):
# File: test/dup_repro.py:39, code: z = x + self._buf
add: f32[2, 3] = torch.ops.aten.add.Tensor(primals_3, primals_2); primals_3 = None
# File: test/dup_repro.py:27, code: return x + self._buf
add_1: f32[2, 3] = torch.ops.aten.add.Tensor(add, primals_2); add = primals_2 = None
# File: test/dup_repro.py:41, code: z += self._param
add_2: f32[2, 3] = torch.ops.aten.add.Tensor(add_1, primals_1); add_1 = primals_1 = None
return [add_2]
```
It looks like `model._buf` and `model._buf_module._buf` got de-duplicated since the forward graph only has `primals_2` as the size `[3]` tensor (`primals_1` is `model._param` and `primals_3` is the input tensor).
However, there is no de-dup guard:
```
GUARDS ___guarded_code.valid
and ___check_obj_id(L['self'], 140390721572928)
and L['self'].training == True
and not ___are_deterministic_algorithms_enabled()
and ___check_type_id(G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_hooks, 93969908371680)
and set(G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_hooks.keys()) == set()
and ___check_type_id(G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_hooks, 93969908371680)
and set(G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_hooks.keys()) == set()
and ___check_type_id(G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_pre_hooks, 93969908371680)
and set(G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_pre_hooks.keys()) == set()
and ___check_type_id(G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_pre_hooks, 93969908371680)
and set(G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_pre_hooks.keys()) == set()
and ___check_tensors(L['x'])
```
I was expecting a guard like:
```
L['self']._buf is L['self']._buf_module._buf
```
Let me know if I am not understanding correctly.
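For reference, a minimal sketch (not PyTorch internals, and assuming `named_buffers(remove_duplicate=False)` is available) of the aliasing relation that such a de-dup guard would need to re-check on later calls:
```python
import torch.nn as nn

def find_aliased_buffers(module: nn.Module):
    """Return (first_name, other_name) pairs of buffers that share the same tensor."""
    seen = {}      # id(tensor) -> first fully-qualified buffer name
    aliases = []
    for name, buf in module.named_buffers(remove_duplicate=False):
        if id(buf) in seen:
            aliases.append((seen[id(buf)], name))
        else:
            seen[id(buf)] = name
    return aliases
```
For the repro above, `find_aliased_buffers(Model())` should report `('_buf', '_buf_module._buf')`, which is exactly the relation the missing guard would assert.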
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 11 |
2,928 | 98,872 |
Expand component configurable logging system to C++
|
module: logging, triaged
|
The component configurable logging system introduced for #94788 needs to be expanded to control logs generated from C++
Some of the requirements for this are discussed here:
* https://github.com/pytorch/pytorch/issues/94788#issuecomment-1502519872
* https://github.com/pytorch/pytorch/issues/94788#issuecomment-1503989249
List of requirements:
* Logs generated in C++ should get piped out to the Python logging system.
* C++ should have access to all the logging component and artifact types
available in Python.
* From Python user's perspective, whether a log came from C++ or Python
should be opaque.
| 4 |
2,929 | 98,871 |
Document the user-facing API for the component-level logging system
|
module: docs, triaged
|
Document the component-level logging system that was added for #94788
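As a starting point, the docs could cover usage along these lines (a sketch only; the exact module path, keyword names, and `TORCH_LOGS` syntax should be confirmed against the final API):
```python
import logging
import torch._logging

# Enable DEBUG-level logs for the dynamo component and emit the graph_code artifact.
torch._logging.set_logs(dynamo=logging.DEBUG, graph_code=True)

# Roughly equivalent environment-variable form (set before launching Python):
#   TORCH_LOGS="+dynamo,graph_code" python train.py
```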
cc @svekars @carljparker
| 0 |
2,930 | 98,864 |
Support SPDA on non-CUDA backends
|
oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
Currently, [SDPA](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html#torch.nn.functional.scaled_dot_product_attention) supports the CUDA backend as outlined in the documentation tutorial. Requesting the op to become available for other backends, specifically `torch_xla`.
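For context, a minimal sketch of the call that this request wants to work on non-CUDA backends; the `'xla'` device mentioned in the comment is an assumption that requires `torch_xla`:
```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 128, 64)   # (batch, heads, seq_len, head_dim)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

out = F.scaled_dot_product_attention(q, k, v)  # works on CPU via the math fallback
# Desired: the same call with q, k, v moved to an 'xla' device.
```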
CC @JackCaoG @wconstab
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 2 |
2,931 | 98,863 |
Problem with instalation torch2 on a100+cu12.1
|
oncall: binaries, module: cuda, triaged
|
### 🐛 Describe the bug
I am trying to install torch 2 with cu118 on a cu121 machine.
Using
pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Trying
import torch
print(torch.__version__)
torch.zeros(2).cuda(0)
And got this:
CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch.
CUDA call was originally invoked at:
[' File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main\n return _run_code(code, main_globals, None,\n', ' File "/usr/lib/python3.8/runpy.py", line 87, in _run_code\n exec(code, run_globals)\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel_launcher.py", line 17, in <module>\n app.launch_new_instance()\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/traitlets/config/application.py", line 1043, in launch_instance\n app.start()\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel/kernelapp.py", line 725, in start\n self.io_loop.start()\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/tornado/platform/asyncio.py", line 215, in start\n self.asyncio_loop.run_forever()\n', ' File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever\n self._run_once()\n', ' File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once\n handle._run()\n', ' File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run\n self._context.run(self._callback, *self._args)\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 513, in dispatch_queue\n await self.process_one()\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 502, in process_one\n await dispatch(*args)\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 409, in dispatch_shell\n await result\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 729, in execute_request\n reply_content = await reply_content\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel/ipkernel.py", line 422, in do_execute\n res = shell.run_cell(\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/ipykernel/zmqshell.py", line 540, in run_cell\n return super().run_cell(*args, **kwargs)\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3006, in run_cell\n result = self._run_cell(\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3061, in _run_cell\n result = runner(coro)\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner\n coro.send(None)\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3266, in run_cell_async\n has_raised = await self.run_ast_nodes(code_ast.body, cell_name,\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3445, in run_ast_nodes\n if await self.run_code(code, result, async_=asy):\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3505, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)\n', ' File "/tmp/ipykernel_15415/129772968.py", line 1, in <module>\n import torch\n', ' File "<frozen importlib._bootstrap>", line 991, in _find_and_load\n', ' File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 671, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 848, in exec_module\n', ' File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/torch/__init__.py", line 1146, in <module>\n _C._initExtension(manager_path())\n', ' File "<frozen importlib._bootstrap>", line 991, in _find_and_load\n', ' 
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 671, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 848, in exec_module\n', ' File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 197, in <module>\n _lazy_call(_check_capability)\n', ' File "/home/fsuser/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 195, in _lazy_call\n _queued_calls.append((callable, traceback.format_stack()))\n']
### Versions
2.0.0+cu118
cc @seemethere @malfet @ngimel
| 4 |
2,932 | 98,861 |
Sparse Tensor not working for `torch.cat`
|
module: sparse, triaged
|
### 🐛 Describe the bug
cc: @rusty1s
These examples use `collate`, which calls `cat` on torch-native sparse tensors:
https://github.com/pyg-team/pytorch_geometric/blob/master/examples/egc.py
https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn2_ppi.py
https://github.com/pyg-team/pytorch_geometric/blob/master/examples/multi_gpu/distributed_batching.py
repro:
`cd /opt/pyg; pip uninstall -y torch-geometric torch-scatter torch-sparse torch-spline-conv torch-cluster; rm -rf pytorch_geometric; git clone -b fix-for-collate https://github.com/pyg-team/pytorch_geometric.git; cd /opt/pyg/pytorch_geometric; pip install .; python3 examples/egc.py; python3 examples/gcn2_ppi.py`
error:
```
Traceback (most recent call last):
File "examples/gcn2_ppi.py", line 93, in <module>
loss = train()
File "examples/gcn2_ppi.py", line 67, in train
for data in train_loader:
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 635, in __next__
data = self._next_data()
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 679, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 61, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.8/dist-packages/torch_geometric/loader/dataloader.py", line 20, in __call__
return Batch.from_data_list(batch, self.follow_batch,
File "/usr/local/lib/python3.8/dist-packages/torch_geometric/data/batch.py", line 76, in from_data_list
batch, slice_dict, inc_dict = collate(
File "/usr/local/lib/python3.8/dist-packages/torch_geometric/data/collate.py", line 85, in collate
value, slices, incs = _collate(attr, values, data_list, stores,
File "/usr/local/lib/python3.8/dist-packages/torch_geometric/data/collate.py", line 178, in _collate
value = torch.cat(values, dim=cat_dim)
RuntimeError: Sparse CSR tensors do not have is_contiguous
```
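A minimal sketch that isolates the failing operation outside of PyG's `collate` (assuming `torch.cat` has no sparse CSR path in this PyTorch version, so the call is expected to raise):
```python
import torch

a = torch.eye(3).to_sparse_csr()
b = torch.eye(3).to_sparse_csr()
try:
    torch.cat([a, b], dim=0)
except RuntimeError as e:
    print(e)  # e.g. "Sparse CSR tensors do not have is_contiguous"
```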
### Versions
root@979d4b259838:/opt/pyg/pytorch_geometric# python collect_env.py
Collecting environment information...
PyTorch version: 2.0.0a0+1767026
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA A40
GPU 3: NVIDIA A40
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7282 16-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1493.554
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5589.38
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 8 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] flake8==5.0.3
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.0.0a0+1767026
[pip3] torch_geometric==2.4.0
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchmetrics==0.9.3
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0a0
[pip3] triton==2.0.0
[pip3] tritonclient==2.29.0
[conda] Could not collect
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 6 |
2,933 | 98,860 |
Sharded Grad Scaler Issue Tracker
|
oncall: distributed, triaged, module: amp (automated mixed precision), module: fsdp
|
I recently unintentionally discovered that there exists a [sharded_grad_scaler.py](https://github.com/pytorch/pytorch/blob/4584851da5cad7f2e5f9fd5ed2245f3a06f8359e/torch/distributed/fsdp/sharded_grad_scaler.py#L4) which derives a lot from our [amp/grad_scaler.py](https://github.com/pytorch/pytorch/blob/e64ddd1ab9d46cfc921c19269969ffc5cd7d6f6c/torch/cuda/amp/grad_scaler.py#L195).
- [ ] A lot of the duplication seems unnecessary. ShardedGradScaler should reuse/call as much of GradScaler as possible instead of copying/pasting. This way, ShardedGradScaler is automatically enrolled in GradScaler bug fixes/improvements.
- [ ] It is uncertain whether ShardedGradScaler supports everything the current GradScaler supports in a consistent way, such as being able to call unscale separately from step, as in the gradient clipping use case (see the sketch after this list): https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler.unscale_
- [ ] ShardedGradScaler should be documented as a public API once it's ready. It would be good to link to it from the existing GradScaler page.
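For reference, the gradient-clipping pattern from the AMP docs that ShardedGradScaler would also need to support consistently (a sketch assuming `model`, `optimizer`, `loader`, and `loss_fn` are already defined):
```python
import torch

scaler = torch.cuda.amp.GradScaler()
for inputs, targets in loader:
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda"):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)  # unscale first so gradients are clipped in real units
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)      # skips the step if any gradient is inf/nan
    scaler.update()
```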
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 0 |
2,934 | 98,844 |
[PT2] Some errors with `cond` and `torch.compile`
|
triaged, oncall: pt2, module: functorch
|
I am not sure if these are intended to be supported use cases, but as a part of https://github.com/pytorch/pytorch/pull/98775, I experimented with `cond()`. This is not blocking any use case.
```
import torch
from functorch.experimental.control_flow import cond
class Module(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 3)

    def forward(self, pred, x):
        def true_fn(val):
            return self.linear(val) * torch.tensor(2)

        def false_fn(val):
            return self.linear(val) * torch.tensor(-1)

        return cond(pred, true_fn, false_fn, [x])
mod = Module()
mod = torch.compile(mod)
x = torch.randn([3, 3])
pred = torch.tensor(x[0][0].item() < 0)
real_result = mod.forward(pred, x)
```
raises
```
File "/fsx/users/andgu/work/pytorch/torch/_ops.py", line 236, in dispatch
assert final_key in self.py_kernels, f"{dispatch_key} -> {final_key}"
torch._dynamo.exc.BackendCompilerFailed: backend='debug_wrapper' raised:
AssertionError: DispatchKey.Functionalize -> DispatchKey.Functionalize
```
<details>
<summary> Full traceback </summary>
```
Traceback (most recent call last):
File "dynamo/test_cond.py", line 23, in <module>
real_result = mod.forward(pred, x)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/eval_frame.py", line 118, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/eval_frame.py", line 247, in _fn
return fn(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/eval_frame.py", line 394, in catch_errors
return callback(frame, cache_size, hooks)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/convert_frame.py", line 453, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/convert_frame.py", line 113, in _fn
return fn(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/convert_frame.py", line 296, in _convert_frame_assert
return _compile(
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/utils.py", line 169, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/convert_frame.py", line 361, in _compile
out_code = transform_code_object(code, transform)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/bytecode_transformation.py", line 683, in transform_code_object
transformations(instructions, code_options)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/convert_frame.py", line 348, in transform
tracer.run()
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/symbolic_convert.py", line 1892, in run
super().run()
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/symbolic_convert.py", line 611, in run
and self.step()
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/symbolic_convert.py", line 571, in step
getattr(self, inst.opname)(inst)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/symbolic_convert.py", line 1979, in RETURN_VALUE
self.output.compile_subgraph(
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/output_graph.py", line 630, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/output_graph.py", line 700, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/utils.py", line 169, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/output_graph.py", line 782, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/output_graph.py", line 778, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/debug_utils.py", line 1098, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/fsx/users/andgu/work/pytorch/torch/__init__.py", line 1530, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/fsx/users/andgu/work/pytorch/torch/_inductor/compile_fx.py", line 722, in compile_fx
return aot_autograd(
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/backends/common.py", line 62, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_functorch/aot_autograd.py", line 3093, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/fsx/users/andgu/work/pytorch/torch/_dynamo/utils.py", line 169, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_functorch/aot_autograd.py", line 2712, in create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/fsx/users/andgu/work/pytorch/torch/_functorch/aot_autograd.py", line 686, in inner
flat_f_outs = f(*flat_f_args)
File "/fsx/users/andgu/work/pytorch/torch/_functorch/aot_autograd.py", line 3017, in functional_call
out = Interpreter(mod).run(*args[params_len:], **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/fx/interpreter.py", line 137, in run
self.env[node] = self.run_node(node)
File "/fsx/users/andgu/work/pytorch/torch/fx/interpreter.py", line 179, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/fsx/users/andgu/work/pytorch/torch/fx/interpreter.py", line 251, in call_function
return target(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_ops.py", line 247, in __call__
return torch.overrides.handle_torch_function(
File "/fsx/users/andgu/work/pytorch/torch/overrides.py", line 1538, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_inductor/overrides.py", line 33, in __torch_function__
return replace_fn(func)(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_ops.py", line 252, in __call__
return self.dispatch(dispatch_key_set.highestPriorityTypeId(), *args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_ops.py", line 242, in dispatch
return kernel(*args, **kwargs)
File "/fsx/users/andgu/work/pytorch/functorch/experimental/_cond.py", line 122, in cond_autograd
return cond(pred, true_fn, false_fn, *operands)
File "/fsx/users/andgu/work/pytorch/torch/_ops.py", line 252, in __call__
return self.dispatch(dispatch_key_set.highestPriorityTypeId(), *args, **kwargs)
File "/fsx/users/andgu/work/pytorch/torch/_ops.py", line 236, in dispatch
assert final_key in self.py_kernels, f"{dispatch_key} -> {final_key}"
torch._dynamo.exc.BackendCompilerFailed: backend='debug_wrapper' raised:
AssertionError: DispatchKey.Functionalize -> DispatchKey.Functionalize
While executing %cond : [#users=1] = call_function[target=torch.ops.cond](args = (%l_pred_, %cond_true_0, %cond_false_0, [%l_x_]), kwargs = {})
Original traceback:
File "dynamo/test_cond.py", line 17, in forward
return cond(pred, true_fn, false_fn, [x])
```
</details>
```
import torch
from functorch.experimental.control_flow import cond
x = torch.randn((3,))
def f1(x1, x2):
    return x1 + x2

def f2(x1, x2):
    return x1 * x2

@torch.compile()
def f(z):
    return cond(z, f1, f2, [x, x])
f(torch.tensor(True))
```
raises the same error:
```
File "/fsx/users/andgu/work/pytorch/torch/_ops.py", line 236, in dispatch
assert final_key in self.py_kernels, f"{dispatch_key} -> {final_key}"
torch._dynamo.exc.BackendCompilerFailed: backend='debug_wrapper' raised:
AssertionError: DispatchKey.Functionalize -> DispatchKey.Functionalize
While executing %cond : [#users=1] = call_function[target=torch.ops.cond](args = (%l_z_, %cond_true_0, %cond_false_0, [%g_x_, %g_x_]), kwargs = {})
Original traceback:
File "dynamo/test_cond.py", line 18, in f
return cond(z, f1, f2, [x, x])
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 8 |
2,935 | 98,836 |
PyTorch's packaged libgomp causes significant performance penalties on CPU when used together with other Python packages
|
module: build, triaged, module: multithreading
|
### 🐛 Describe the bug
PyTorch's PyPI packages ship their own `libgomp-SOMEHASH.so`. Other packages like scikit-learn do the same. The problem is that, depending on the order in which your Python modules are loaded, PyTorch's OpenMP might be initialized with only a single thread.
This can be easily seen by running (I removed all non-related output):
```json
# python3 -m threadpoolctl -i torch sklearn
[
{
"user_api": "openmp",
"internal_api": "openmp",
"prefix": "libgomp",
"filepath": "/.../python3.8/site-packages/torch/lib/libgomp-a34b3233.so.1",
"version": null,
"num_threads": 12 # PyTorch 12 Threads
},
{
"user_api": "openmp",
"internal_api": "openmp",
"prefix": "libgomp",
"filepath": "/.../python3.8/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0",
"version": null,
"num_threads": 1 # SKlearn 1 Thread
}
]
```
and:
```json
# python3 -m threadpoolctl -i sklearn torch
[
{
"user_api": "openmp",
"internal_api": "openmp",
"prefix": "libgomp",
"filepath": "/.../python3.8/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0",
"version": null,
"num_threads": 24 # SKlearn 24 Threads
},
{
"user_api": "openmp",
"internal_api": "openmp",
"prefix": "libgomp",
"filepath": "/.../python3.8/site-packages/torch/lib/libgomp-a34b3233.so.1",
"version": null,
"num_threads": 1 # PyTorch 1 Thread
}
]
```
In the first case PyTorch gets all threads; in the second case scikit-learn gets all threads.
This minimal example shows the effect on the performance:
```python
import sklearn # remove or swap with 2nd line
import torch
import torchvision
from time import perf_counter_ns as timer
model = torchvision.models.resnet50()
model.eval()
data = torch.rand(64, 3, 224, 224)
start = timer()
with torch.no_grad():
    for i in range(5):
        model(data)
end = timer()
print(f'Total: {(end-start)/1000000.0}ms')
```
Result without `import sklearn` or by swapping the two import lines: `Total: 5020.870435ms`
And with `import sklearn`: `Total: 27399.992653ms`
Even if we manually set the number of threads correctly, there would still be a performance penalty when switching between PyTorch and scikit-learn, as the thread pools need to be swapped.
My current workaround is to remove all `libgomp-*.so` within my Python user site and replace them with symlinks to the system's `libgomp.so`. This makes scikit-learn and PyTorch use the same thread pool, which in my opinion is the desired behavior. Another solution would be to compile PyTorch from source.
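For reference, a sketch of that workaround in Python (the system `libgomp` path is an assumption and differs per distribution; this modifies installed packages, so use it with care):
```python
import glob
import os
import site

SYSTEM_LIBGOMP = "/usr/lib/x86_64-linux-gnu/libgomp.so.1"  # adjust for your system

for sp in site.getsitepackages() + [site.getusersitepackages()]:
    for bundled in glob.glob(os.path.join(sp, "**", "libgomp-*.so*"), recursive=True):
        os.remove(bundled)                   # drop the vendored copy
        os.symlink(SYSTEM_LIBGOMP, bundled)  # point it at the system libgomp
        print("relinked", bundled)
```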
I'm not sure why PyTorch ships its own `libgomp`. I'm guessing it's for compatibility with older systems that don't have `libgomp`, or that have an outdated/incompatible version. However, the current approach causes significant downsides when using PyTorch with other packages or user applications that are linked against the system's `libgomp`. So far I have identified `onnxruntime-openmp` and `scikit-learn` that do the same, but I assume there are many more.
I came up with multiple solutions:
1. A hacky solution would be to ensure that all packages use the identical `libgomp-SOMEHASH.so.SO_VERSION`, e.g., scikit-learn and onnxruntime use `libgomp-a34b3233.so.1.0.0` while PyTorch uses `libgomp-a34b3233.so.1`. This works because `libdl` only checks the file name. But that does not solve the fundamental problem of shipping your own `libgomp`, and the problem would remain when the user includes their own libraries linked against the system `libgomp`.
2. A proper solution would be to do something like the [intel-openmp](https://pypi.org/project/intel-openmp/) package, which provides a centralized way of accessing the libraries and can then easily be taken up by multiple Python packages without conflicts. Here, PyTorch, scikit-learn, etc. could just have this package as a common requirement and all load the same library.
As this is a cross project issue, I'm not sure what the best way is to coordinate with the other projects.
This issue is related to: #44282, #19764
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.17
Python version: 3.8.16 (default, Mar 17 2023, 07:42:34) [GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 11.4.120
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.0.1
[pip3] torch==2.0.0
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.1
[conda] Could not collect
```
cc @malfet @seemethere
| 7 |
2,936 | 98,835 |
[PT2.0] empty output shape causes Segmentation fault
|
triaged, bug, oncall: pt2, module: aotdispatch, module: inductor
|
### 🐛 Describe the bug
If the output tensor is initialized with `torch.empty(0)` and then passed through `torch.compile`, a segmentation fault is observed when allocating a tensor with an invalid size.
Use the sample code below to reproduce the issue:
```
import torch
def fn(x, y):
    torch.abs(x, out=y)
x = torch.rand((8, 8))
y = torch.empty(0)
compiled_fn = torch.compile(fn)
compiled_fn(x, y)
print(y)
```
### Error logs
```
Internal Error: Received signal - Segmentation fault
dmesg: read kernel buffer failed: Operation not permitted
Fatal Python error: Segmentation fault
Thread 0x00007fa475c0c700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007fa47540b700 (most recent call first):
File "/usr/lib/python3.8/selectors.py", line 415 in select
File "/usr/lib/python3.8/multiprocessing/connection.py", line 931 in wait
File "/usr/lib/python3.8/concurrent/futures/process.py", line 362 in _queue_management_worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007fa41ca8d700 (most recent call first):
File "/usr/lib/python3.8/threading.py", line 306 in wait
File "/usr/lib/python3.8/threading.py", line 558 in wait
File "/usr/local/lib/python3.8/dist-packages/tqdm/_monitor.py", line 60 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Current thread 0x00007fa4a6fd5dc0 (most recent call first):
File "/tmp/torchinductor_root/u4/cu4v5vey6e3iafqgfidpy6wayatqg6bhyuqbkarylgvhc2uvflyj.py", line 49 in call
File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 1247 in call_func_with_args
File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 1898 in runtime_wrapper
File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 1222 in g
File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 2819 in forward
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 209 in _fn
File "out_abs.py", line 17 in forward
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 209 in _fn
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 82 in forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501 in _call_impl
File "out_abs.py", line 25 in run_on_device
File "out_abs.py", line 32 in <module>
Segmentation fault (core dumped)
```
### Minified repro
```
import torch
def fn(x, y):
torch.abs(x, out=y)
x = torch.rand((8, 8))
y = torch.empty(0)
compiled_fn = torch.compile(fn)
compiled_fn(x, y)
print(y)
```
### Versions
Name: torch
Version: 2.0.0
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /home/jthakur/.pt_2_0/lib/python3.8/site-packages
Requires: filelock, jinja2, networkx, nvidia-cublas-cu11, nvidia-cuda-cupti-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cuda-runtime-cu11, nvidia-cudnn-cu11, nvidia-cufft-cu11, nvidia-curand-cu11, nvidia-cusolver-cu11, nvidia-cusparse-cu11, nvidia-nccl-cu11, nvidia-nvtx-cu11, sympy, triton, typing-extensions
Required-by: torchaudio, torchvision, triton
cc @ezyang @gchanan @zou3519 @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 5 |
2,937 | 98,827 |
[functorch] vmap_hessian_fc - fails under torch.compile
|
triaged, oncall: pt2, module: functorch
|
### 🐛 Describe the bug
Running vmap_hessian_fc from torchbench/userbench with following patch
```patch
diff --git a/userbenchmark/functorch/vmap_hessian_fc.py b/userbenchmark/functorch/vmap_hessian_fc.py
index ebfe7cf3..6ae546c0 100644
--- a/userbenchmark/functorch/vmap_hessian_fc.py
+++ b/userbenchmark/functorch/vmap_hessian_fc.py
@@ -1,6 +1,6 @@
import torch
import torch.nn as nn
from functorch import vmap, jacfwd, jacrev
from .util import BenchmarkCase
# batched hessians of fully connected layers is a popular quantity
@@ -8,12 +8,25 @@ from .util import BenchmarkCase
# This test case is from https://github.com/pytorch/functorch/issues/989
# We haven't been able to get the full model yet, so, this test case
# is going into the functorch userbenchmark instead of torchbenchmark.
+
+from torch._dynamo import allow_in_graph
+from functools import wraps
+
+def traceable(f):
+ f = allow_in_graph(f)
+
+ @wraps(f)
+ def wrapper(*args, **kwargs):
+ return f(*args, **kwargs)
+
+ return wrapper
+
class VmapHessianFC(BenchmarkCase):
def __init__(self):
- device = 'cuda'
+ device = 'cpu'
D1 = 2 # x, y
D2 = 3 # u, v, p
- B = 10000
+ B = 10
x = torch.randn(B, D1).to(device)
model = nn.Sequential(
@@ -43,9 +56,12 @@ class VmapHessianFC(BenchmarkCase):
out = self.model(x)
return out, out
- hessian, pred = vmap(
+ fn = vmap(
jacfwd(jacrev(predict, argnums=0, has_aux=True), argnums=0, has_aux=True),
in_dims=0,
- )(
+ )
+
+ fn = torch.compile(traceable(fn))
+ hessian, pred = fn(
self.x
)
```
Leads to failure:
```
RuntimeError: Failed running call_function <function VmapHessianFC.run.<locals>.predict at 0x7fa75ef8f040>(*(FakeTensor(FakeTensor(..., device='meta', size=(10, 2)), cpu),), **{}):
InferenceMode::is_enabled() && self.is_inference() INTERNAL ASSERT FAILED at "/home/kshiteej/Pytorch/pytorch_functorch/aten/src/ATen/native/VariableMethodStubs.cpp":67, please report a bug to PyTorch. Expected this method to only be reached in inference mode and when all the inputs are inference tensors. You should NOT call this method directly as native::_fw_primal. Please use the dispatcher, i.e., at::_fw_primal. Please file an issue if you come across this error otherwise.
(scroll up for backtrace)
```
### Versions
master
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @zou3519 @Chillee @samdow @janeyx99
| 1 |
2,938 | 98,825 |
[functorch] functorch_maml_omniglot - fails under torch.compile
|
triaged, oncall: pt2, module: functorch
|
### 🐛 Describe the bug
Running `functorch_maml_omniglot` from `torchbench` with the following patch:
```patch
diff --git a/torchbenchmark/models/functorch_maml_omniglot/__init__.py b/torchbenchmark/models/functorch_maml_omniglot/__init__.py
index faf16d73..430eaadf 100644
--- a/torchbenchmark/models/functorch_maml_omniglot/__init__.py
+++ b/torchbenchmark/models/functorch_maml_omniglot/__init__.py
@@ -10,6 +10,17 @@ from typing import Tuple
from ...util.model import BenchmarkModel
from torchbenchmark.tasks import OTHER
+from torch._dynamo import allow_in_graph
+from functools import wraps
+
+def traceable(f):
+ f = allow_in_graph(f)
+
+ @wraps(f)
+ def wrapper(*args, **kwargs):
+ return f(*args, **kwargs)
+
+ return wrapper
def loss_for_task(net, n_inner_iter, x_spt, y_spt, x_qry, y_qry):
params, buffers, fnet = net
@@ -66,7 +77,7 @@ class Model(BenchmarkModel):
self.model = net
root = str(Path(__file__).parent.parent)
- self.meta_inputs = torch.load(f'{root}/maml_omniglot/batch.pt')
+ self.meta_inputs = torch.load(f'{root}/functorch_maml_omniglot/batch.pt')
self.meta_inputs = tuple([torch.from_numpy(i).to(self.device) for i in self.meta_inputs])
self.example_inputs = (self.meta_inputs[0][0],)
@@ -90,7 +101,9 @@ class Model(BenchmarkModel):
# In parallel, trains one model per task. There is a support (x, y)
# for each task and a query (x, y) for each task.
compute_loss_for_task = functools.partial(loss_for_task, net, n_inner_iter)
- qry_losses, qry_accs = vmap(compute_loss_for_task)(x_spt, y_spt, x_qry, y_qry)
+ fn = vmap(compute_loss_for_task)
+ fn = torch.compile(traceable(fn))
+ qry_losses, qry_accs = fn(x_spt, y_spt, x_qry, y_qry)
# Compute the maml loss by summing together the returned losses.
qry_losses.sum().backward()
```
Leads to failure:
```
Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.convolution.default(*(FakeTensor(FakeTensor(..., device='meta', size=(160, 1, 28, 28)), cpu),
```
### Versions
master
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @zou3519 @Chillee @samdow @janeyx99
| 3 |
2,939 | 98,822 |
[functorch] torch.compile - functorch transforms Interaction
|
triaged, oncall: pt2, module: functorch
|
### 🐛 Describe the bug
This is an umbrella issue for all the issues related to functorch - torch.compile interaction.
Currently, functorch transforms can be compiled under torch.compile with an undocumented API. However, there are still a few issues that require resolution.
Example of compiling a transform:
```python
import torch
from torch.func import vmap  # missing import added for completeness (torch.func.vmap in PyTorch 2.0)
from torch._dynamo import allow_in_graph
from functools import wraps

def traceable(f):
    f = allow_in_graph(f)

    @wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)

    return wrapper

def fn(x):
    return vmap(torch.sin)(x)

opt_fn = torch.compile(traceable(fn))
opt_fn(torch.randn(3, 3))
```
torchbench issues:
- [ ] https://github.com/pytorch/pytorch/issues/98825
- [ ] https://github.com/pytorch/pytorch/issues/98827
user reported issues:
- [x] https://github.com/pytorch/pytorch/issues/97425
- [ ] https://github.com/pytorch/pytorch/issues/100105
- [ ] https://github.com/pytorch/pytorch/issues/100075
### Versions
master
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @zou3519 @Chillee @samdow @janeyx99
| 0 |
2,940 | 98,817 |
[FSDP] summon_full_params with_grad=True CPU offload can crash
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
It can crash if the grads are not None (i.e. optim.zero_grad(set_to_none=False) is called, or grad is not zeroed at all):
```
527 work = group._allgather_base(output_tensor, input_tensor)
528 RuntimeError: Tensors must be CUDA and dense
```
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,941 | 98,816 |
File-level retry enhancements
|
triaged, module: devx
|
### 🐛 Describe the bug
Here is a list of several potential enhancements to how file-level retry (https://github.com/pytorch/pytorch/pull/97506) works. The goal is to start the discussion, see which ones make sense, and keep track of their progress:
* [x] Correctly resume from the last running test when it times out. Here is an example of a `distributed/_tensor/test_dtensor_ops` failure at
https://github.com/pytorch/pytorch/actions/runs/4662747010/jobs/8253593870. The test timed out flakily on the first run and was retried accordingly (`Command took >30min, retrying (retries left=1)`). However, when the retry kicked in, it started again from the beginning because the test timed out rather than failed (`stepwise: no previously failed tests, not skipping`), so the second try also timed out. It makes sense to start from the last running test here instead.
* [ ] Ensure that each flaky test is retried once. Take the example where a test fails: the retry logic will run the file again starting at the failed test, and the number of remaining retries decreases from 1 to 0 (no more retries). Assuming the failed test is indeed flaky and now passes, the test file will continue. However, the edge case here is that if there is yet another flaky test further down the list, there is no retry left to handle it. A possible solution is to only decrease the number of retries if the same flaky test fails again (a sketch of this idea follows the list). Here is an example https://github.com/pytorch/pytorch/actions/runs/4660804309/jobs/8249750328 in which the first flaky test `TestForeachCUDA.test_binary_op__foreach_clamp_max_is_fastpath_True_cuda_float32` was retried successfully while the second flaky test `TestForeachCUDA.test_binary_op__foreach_mul_is_fastpath_True_cuda_bfloat16` was out of luck.
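A sketch of that idea (the `run_pytest_stepwise` helper is hypothetical and stands in for the existing per-file invocation, assumed to return the exit code and the name of the failing test, if any):
```python
def run_with_retries(test_file: str, max_retries_per_test: int = 1) -> int:
    failures = {}  # failed test name -> number of times it has failed so far
    while True:
        exit_code, failed_test = run_pytest_stepwise(test_file)  # hypothetical helper
        if exit_code == 0:
            return 0
        failures[failed_test] = failures.get(failed_test, 0) + 1
        if failures[failed_test] > max_retries_per_test:
            return exit_code  # the same test kept failing, so give up
        # A different flaky test does not consume this test's retry budget,
        # so each flaky test in the file gets at least one retry.
```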
Other issues:
* [ ] Integrate with ONNX tests https://github.com/pytorch/pytorch/issues/98626
### Versions
PyTorch CI
cc @ZainRizvi @kit1980 @clee2000
| 0 |
2,942 | 98,814 |
autocast does not work properly on embedding module
|
triaged, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
Hello,
I'm not sure whether it is intended, but autocast seems not working on embedding module.
below is the link of a colab notebook that reproduce the issue
https://colab.research.google.com/drive/1EoHFFH5CXvkwExQeyvsFIqMV9JqjMRfI?usp=sharing
```
import torch
embeddings = torch.nn.Embedding(3, 128).cuda()
keys = torch.tensor([0,1,2]).cuda()
with torch.autocast(device_type='cuda', dtype=torch.float16):
    print(embeddings(keys).dtype)
```
The output dtype is still float32 instead of float16.
I'm not sure whether this is supposed to be the case, but it causes the "index put requires the source and destination dtypes match" error in my code when I use AMP training.
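A possible workaround (a sketch, not an official recommendation) is to cast the embedding output to the active autocast dtype explicitly:
```python
import torch

embeddings = torch.nn.Embedding(3, 128).cuda()
keys = torch.tensor([0, 1, 2]).cuda()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    emb = embeddings(keys).to(torch.get_autocast_gpu_dtype())
    print(emb.dtype)  # torch.float16
```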
Thanks
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.16 (main, Dec 7 2022, 01:11:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
Stepping: 3
CPU MHz: 2000.150
BogoMIPS: 4000.30
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 1 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchdata==0.6.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 2 |
2,943 | 98,808 |
[FSDP] move up the first all gather
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
The first allgather in FSDP is currently launched right before computation for layer 1 begins, but it could actually begin much sooner:
1. Potentially overlap with `_to_kwargs` data movement
2. API for advanced users to kick off this all gather even outside of model forward pass, to overlap with other work in their training loop.
The API could look as follows:
```
def gather_first_fsdp_layer_params(self: FullyShardedDataParallel):
    handle = self._root_handle
    handle.unshard()  # kicks off the allgather on the unshard stream, but does not block
```
And can be used as follows in an example training loop:
```
while has_next(dataloader):
    batch = next(dataloader)
    # kick off FSDP allgather
    model.dense.gather_first_fsdp_layer_params()
    # overlap with next data transfer + sparse part of model
    dataloader.prefetch()
    model.sparse()
    model.dense()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,944 | 98,805 |
Discrepancy of supported Python versions between Get Started page and index of pre-built binaries for PIP installation
|
module: docs, triaged
|
### 📚 The doc issue
Sorry if this is not directly part of the contents under https://pytorch.org/docs/stable/index.html, but it concerns the "Get Started" page https://pytorch.org/get-started/locally/
In the page description of required Python versions, it shows "Currently, PyTorch on Windows only supports Python 3.7-3.9; Python 2.x is not supported."
However, if we choose pip as the package manager to install PyTorch, the index of pre-built binaries for the `torch` package does not actually contain `py37`. For example, under https://download.pytorch.org/whl/nightly/torch/, there is no item containing the field `py37`. By the way, Python 3.10 and 3.11 actually seem to be supported.
If we try the command suggested by the page with Python 3.7, in my case `pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118`, the installation is not going to work. The error message is:
```
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
```
### Suggest a potential alternative/fix
We may change the page description to match the actual support. Maybe the supported versions should be 3.8-3.11.
cc @svekars @carljparker
| 0 |
2,945 | 98,792 |
DataLoader doesn't accept non-cpu device for loading.
|
module: dataloader, triaged
|
### 🐛 Describe the bug
Not sure if this is intentional, but a DataLoader does not accept a non-CPU device even though the dataset's tensors live on that device.
[An example, from a few months ago, of the issue that arises when passing a `cuda` Generator to the DataLoader.](https://discuss.pytorch.org/t/runtimeerror-expected-a-cuda-device-type-for-generator-but-found-cpu/161463)
```python
import torch
from torch.utils.data import DataLoader, TensorDataset, RandomSampler
device = torch.device("cuda")
x, y = torch.tensor([1,2,3], device=device), torch.tensor([1,2,3], device=device)
dataset = TensorDataset(x,y)
next(iter(DataLoader(dataset, generator=torch.Generator(device)))) # RuntimeError: Expected a 'cpu' device type for generator but found 'cuda'
```
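A workaround sketch: keep the sampling generator on the CPU (it only produces indices) while the data itself stays on the GPU:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda")
x = torch.tensor([1, 2, 3], device=device)
y = torch.tensor([1, 2, 3], device=device)
loader = DataLoader(TensorDataset(x, y), shuffle=True,
                    generator=torch.Generator())  # CPU generator for index sampling
xb, yb = next(iter(loader))
print(xb.device)  # cuda:0, the batch tensors never left the GPU
```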
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.0 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.0 | packaged by conda-forge | (main, Jan 15 2023, 05:44:48) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.0-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.0
[pip3] torchvision==0.15.1
[conda] mkl 2022.2.1 h44ed08c_16952 conda-forge
[conda] numpy 1.24.2 py311ha9d2c9f_0 conda-forge
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
And for google colab:
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.11.3 (main, Apr 5 2023, 14:15:06) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2199.998
BogoMIPS: 4399.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.0.0
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @SsnL @VitalyFedyunin @ejguan @NivekT @dzhulgakov
| 2 |
2,946 | 98,768 |
[SPMD] DistCompiler graph optimization improvement
|
oncall: distributed, triaged
|
### 🚀 The feature, motivation and pitch
The graph optimization for DistCompiler is enabled via the stack of https://github.com/pytorch/pytorch/pull/98182. A lot of feedback and experience was provided during the development and code review of the stack. This issue is used to track the remaining issues and TODOs.
### List of TODOs
1. Instead of having another layer of Module, IterGraphModule should serve the purpose of lowering Inductor. Context: https://github.com/pytorch/pytorch/pull/98182/files#r1158786964
2. `graph_optimization_pass` should support a `run_before` argument.
3. `graph_optimization_pass` should support multiple runs of graph optimization.
4. Ensure that `_optimized_func` of `graph_optimization_pass` does not have conflicts.
5. Graph optimization passes should support symbolic shape. Context: https://github.com/pytorch/pytorch/pull/98285/files#r1161194034
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,947 | 98,728 |
[triton hash update] update the pinned triton hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/master/.github/workflows/_update-commit-hash.yml).
Update the pinned triton hash.
| 99 |
2,948 | 98,727 |
Pytorch member variable not working after converting to onnx format
|
module: onnx, triaged
|
### 🐛 Describe the bug
Hi team,
We're now investigating the export-to-ONNX feature and found that some update logic in the original PyTorch model does not work in the converted ONNX model. The PyTorch result keeps updating as expected, but the ONNX result stays the same.
```
# onnx (stays the same)
[array([[ 0.09353793, -0.06549314, -0.17803375, 0.07057121, -0.07197426,
-0.00245702, 0.09384082, -0.07102646, 0.00091066, -0.012063 ]],
dtype=float32)]
[array([[ 0.09353793, -0.06549314, -0.17803375, 0.07057121, -0.07197426,
-0.00245702, 0.09384082, -0.07102646, 0.00091066, -0.012063 ]],
dtype=float32)]
[array([[ 0.09353793, -0.06549314, -0.17803375, 0.07057121, -0.07197426,
-0.00245702, 0.09384082, -0.07102646, 0.00091066, -0.012063 ]],
dtype=float32)]
# pytorch result (keep updating)
tensor([[ 0.1028, -0.0641, -0.1713, 0.0673, -0.0882, -0.0108, 0.1027, -0.0583,
0.0012, -0.0174]], grad_fn=<DifferentiableGraphBackward>)
tensor([[ 0.0977, -0.0628, -0.1801, 0.0675, -0.0858, -0.0092, 0.1020, -0.0584,
0.0034, -0.0185]], grad_fn=<DifferentiableGraphBackward>)
tensor([[ 0.0987, -0.0620, -0.1770, 0.0681, -0.0860, -0.0084, 0.1019, -0.0604,
0.0033, -0.0192]], grad_fn=<DifferentiableGraphBackward>)
```
How should we deal with a scenario where we need to update a member variable like `self.last_hidden`?
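One common approach (a sketch, not the only option) is to make the recurrent state an explicit input and output of `forward()`, so the exported graph sees it as data flow instead of a Python-side attribute mutation that tracing cannot capture; the caller then threads the hidden state between onnxruntime calls:
```python
import torch
import torch.nn as nn

class StatelessRNN(nn.Module):
    def __init__(self, data_size, hidden_size, output_size):
        super().__init__()
        self.i2h = nn.Linear(data_size + hidden_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)

    def forward(self, data, last_hidden):
        hidden = self.i2h(torch.cat((data, last_hidden), 1))
        return self.h2o(hidden), hidden  # return the new state to the caller
```
When exporting, include `last_hidden` in the ONNX inputs and `hidden` in the outputs so the calling code can feed the returned state back in on the next call.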
### Reproduce code
```
import torch.nn as nn
import torch
class RNN(nn.Module):
# you can also accept arguments in your model constructor
def __init__(self, data_size, hidden_size, output_size):
super(RNN, self).__init__()
self.last_hidden = torch.zeros(1, hidden_size)
input_size = data_size + hidden_size
self.i2h = nn.Linear(input_size, hidden_size)
self.h2o = nn.Linear(hidden_size, output_size)
def forward(self, data):
input = torch.cat((data, self.last_hidden), 1)
hidden = self.i2h(input)
output = self.h2o(hidden)
self.last_hidden = hidden
return output
data_size = 50
hidden_size = 20
output_size = 10
rnn_model = RNN(data_size, hidden_size, output_size)
rnn_model.eval()
data = torch.zeros(1, data_size)
last_hidden = torch.zeros(1, hidden_size)
torch.jit.save(torch.jit.script(rnn_model), 'rnn_model.pt')
pytorch_rnn_model = torch.load('rnn_model.pt')
torch.onnx.export(
pytorch_rnn_model,
data,
'rnn_model.onnx',
opset_version=15,
input_names=('input',),
output_names=('output',),
dynamic_axes={
'input': {0: 'batch', 1: 'sequence'},
'output': {0: 'batch', 1: 'sequence'},
},
training=torch.onnx.TrainingMode.EVAL,
do_constant_folding=False,
verbose=True,
keep_initializers_as_inputs=True,
)
import onnxruntime
rnn_onnx_model = onnxruntime.InferenceSession('rnn_model.onnx', providers=['CPUExecutionProvider'])
input_name = rnn_onnx_model.get_inputs()[0].name
output_name = rnn_onnx_model.get_outputs()[0].name
print(rnn_onnx_model.run([output_name], {input_name: data.numpy()}))
print(rnn_onnx_model.run([output_name], {input_name: data.numpy()}))
print(rnn_onnx_model.run([output_name], {input_name: data.numpy()}))
print(pytorch_rnn_model.forward(data))
print(pytorch_rnn_model.forward(data))
print(pytorch_rnn_model.forward(data))
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.1 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.16 (default, Mar 1 2023, 21:19:10) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchvision==0.15.1
[conda] blas 1.0 mkl https://repo.anaconda.com/pkgs/main
[conda] mkl 2021.4.0 hecd8cb5_637 https://repo.anaconda.com/pkgs/main
[conda] mkl-service 2.4.0 py38h9ed2024_0 https://repo.anaconda.com/pkgs/main
[conda] mkl_fft 1.3.1 py38h4ab4a9b_0 https://repo.anaconda.com/pkgs/main
[conda] mkl_random 1.2.2 py38hb2f4e1b_0 https://repo.anaconda.com/pkgs/main
[conda] numpy 1.23.5 py38he696674_0 https://repo.anaconda.com/pkgs/main
[conda] numpy-base 1.23.5 py38h9cd3388_0 https://repo.anaconda.com/pkgs/main
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
```
| 11 |
2,949 | 98,724 |
Conflict between ``torch.func`` transformations and ``torch.jit.trace``
|
triaged, module: functorch
|
### 🐛 Describe the bug
```python
@torch.func.grad
@partial(torch.jit.trace, example_inputs=torch.ones([3]))
def f(a):
return torch.sum(a)
f(torch.ones([3]))
```
the above code works as expected, while
```python
@partial(torch.jit.trace, example_inputs=torch.ones([3]))
@torch.func.grad
def f(a):
return torch.sum(a)
f(torch.ones([3]))
```
raises a RuntimeError:
```bash
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/var/folders/_3/7wt5f7ss5tq2_bfcwr3gj9780000gn/T/ipykernel_92193/3316297889.py in <module>
1 @partial(torch.jit.trace, example_inputs=torch.ones([3]))
2 @torch.func.grad
----> 3 def f(a):
4 return torch.sum(a)
~/opt/anaconda3/envs/tf27/lib/python3.8/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_kwarg_inputs, _store_inputs)
857 )
858 else:
--> 859 traced = torch._C._create_function_from_trace(
860 name,
861 func,
~/opt/anaconda3/envs/tf27/lib/python3.8/site-packages/torch/_functorch/eager_transforms.py in wrapper(*args, **kwargs)
1378 @wraps(func)
1379 def wrapper(*args, **kwargs):
-> 1380 results = grad_and_value(func, argnums, has_aux=has_aux)(*args, **kwargs)
1381 if has_aux:
1382 grad, (_, aux) = results
~/opt/anaconda3/envs/tf27/lib/python3.8/site-packages/torch/_functorch/vmap.py in fn(*args, **kwargs)
37 def fn(*args, **kwargs):
38 with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 39 return f(*args, **kwargs)
40 return fn
41
~/opt/anaconda3/envs/tf27/lib/python3.8/site-packages/torch/_functorch/eager_transforms.py in wrapper(*args, **kwargs)
1265 # NB: need create_graph so that backward pass isn't run in no_grad mode
1266 flat_outputs = _as_tuple(output)
-> 1267 flat_grad_input = _autograd_grad(flat_outputs, flat_diff_args, create_graph=True)
1268 grad_input = tree_unflatten(flat_grad_input, spec)
1269
~/opt/anaconda3/envs/tf27/lib/python3.8/site-packages/torch/_functorch/eager_transforms.py in _autograd_grad(outputs, inputs, grad_outputs, retain_graph, create_graph)
111 if len(diff_outputs) == 0:
112 return tuple(torch.zeros_like(inp) for inp in inputs)
--> 113 grad_inputs = torch.autograd.grad(diff_outputs, inputs, grad_outputs,
114 retain_graph=retain_graph,
115 create_graph=create_graph,
~/opt/anaconda3/envs/tf27/lib/python3.8/site-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, is_grads_batched)
286
287 grad_outputs_ = _tensor_or_tensors_to_tuple(grad_outputs, len(t_outputs))
--> 288 grad_outputs_ = _make_grads(t_outputs, grad_outputs_, is_grads_batched=is_grads_batched)
289
290 if retain_graph is None:
~/opt/anaconda3/envs/tf27/lib/python3.8/site-packages/torch/autograd/__init__.py in _make_grads(outputs, grads, is_grads_batched)
85 elif grad is None:
86 if out.requires_grad:
---> 87 if out.numel() != 1:
88 raise RuntimeError("grad can be implicitly created only for scalar outputs")
89 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 11.3.1 (x86_64)
GCC version: Could not collect
Clang version: 12.0.5 (clang-1205.0.22.9)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.0 (default, Nov 6 2019, 15:49:01) [Clang 4.0.1 (tags/RELEASE_401/final)] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchvision==0.11.1
[conda] Could not collect
```
## Possible related issues:
https://github.com/pytorch/pytorch/issues/96041
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 2 |
2,950 | 98,707 |
Ubuntu 22.04 LTS issue <built-in function load_binary> returned NULL without setting an exception
|
module: rocm, triaged, oncall: pt2
|
### 🐛 Describe the bug
Greetings,
I was directed to this repository as I am encountering an issue with PyTorch. Specifically, I am experiencing an error loading triton when attempting to run the software with Stable Diffusion and the Dreambooth add-on on a newly installed Kubuntu 22.04 operating system with an AMD CPU, an AMD GPU (6950XT), and ROCm 5.4.2 (installed via the official deb package).
Regrettably, I do not possess a high level of proficiency in Python, and as such, I can only describe the steps that I have taken on a freshly installed machine.
The system is set up by cloning the repository https://github.com/AUTOMATIC1111/stable-diffusion-webui and installing the ROCm build of torch via `pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.4.2` in a venv, plus the Dreambooth extension. No further modifications are made. Training the model stops with the following error message:
```
Steps: 0%| | 0/600 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/krim/GIT/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/ui_functions.py", line 727, in start_training
result = main(class_gen_method=class_gen_method)
File "/home/krim/GIT/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1371, in main
return inner_loop()
File "/home/krim/GIT/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py", line 119, in decorator
return function(batch_size, grad_size, prof, *args, **kwargs)
File "/home/krim/GIT/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1169, in inner_loop
noise_pred = unet(
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/utils/operations.py", line 495, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
return func(*args, **kwargs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 556, in forward
t_emb = self.time_proj(timesteps)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/diffusers/models/embeddings.py", line 222, in forward
def forward(self, timesteps):
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2819, in forward
return compiled_fn(full_args)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1222, in g
return f(*args)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1898, in runtime_wrapper
all_outs = call_func_with_args(
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1247, in call_func_with_args
out = normalize_as_list(f(args))
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 248, in run
return model(new_inputs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 265, in run
compiled_fn = cudagraphify_impl(model, new_inputs, static_input_idxs)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 320, in cudagraphify_impl
model(list(static_inputs))
File "/tmp/torchinductor_krim/xq/cxqqlwutwzuplluoktmemj63w363iojzltjm3s3avxbjioaget6s.py", line 113, in call
triton__0.run(arg0_1, buf0, buf1, 160, grid=grid(160), stream=stream0)
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_inductor/triton_ops/autotune.py", line 190, in run
result = launcher(
File "<string>", line 6, in launcher
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/triton/compiler.py", line 1944, in __getattribute__
self._init_handles()
File "/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/triton/compiler.py", line 1930, in _init_handles
mod, func, n_regs, n_spills = hip_utils.load_binary(self.metadata["name"], self.asm["hsaco_path"], self.shared, device)
SystemError: <built-in function load_binary> returned NULL without setting an exception
```
See https://github.com/d8ahazard/sd_dreambooth_extension/issues/1174
On Reddit, I stumbled upon only one other individual who reported facing the same issue. Unfortunately, they did not provide any insights on how to even begin debugging this issue.
Are there any suggestions or ideas that anyone could offer on how to start debugging this?
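As a first debugging step, it might help to run a minimal `torch.compile` repro outside the webui/Dreambooth stack to see whether the triton/ROCm path is broken in general; the snippet below is just a sketch to run from the same venv (the actual failing kernel is of course different):
```python
import torch

def f(x):
    return torch.sin(x) + x * 2

compiled = torch.compile(f)
x = torch.randn(1024, device="cuda")  # ROCm devices are exposed through the "cuda" device type
print(compiled(x).sum())
```
If this already fails with the same `load_binary` error, the problem is in the triton/ROCm setup rather than in Dreambooth itself.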
### Versions
```
Collecting environment information...
/home/krim/GIT/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
PyTorch version: 2.0.0+rocm5.4.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.4.22803-474e8620
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-6.2.10-060210-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 6950 XT
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.4.22803
MIOpen runtime version: 2.19.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4950,1948
CPU min MHz: 2200,0000
BogoMIPS: 7386.11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] lion-pytorch==0.0.7
[pip3] numpy==1.23.3
[pip3] open-clip-torch==2.7.0
[pip3] pytorch-lightning==1.9.4
[pip3] pytorch-triton-rocm==2.0.1
[pip3] torch==2.0.0+rocm5.4.2
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.1+rocm5.4.2
[conda] Could not collect
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 13 |
2,951 | 98,695 |
Torchscript: Name Mangling prevents Type Refinement
|
oncall: jit
|
### 🐛 Describe the bug
Name mangling in Torchscript prevents type refinement. See the following script for an example:
```python
from typing import Tuple
import torch
from torch import nn
from torch import Tensor
class A(nn.Module):
def __init__(self, s: int):
super().__init__()
self.linear = nn.Linear(int(s**2), int(s**2))
def forward(self, x: Tensor):
return x
class Sequence(nn.Module):
def __init__(self, n_modules: int):
super().__init__()
self.mods = nn.ModuleList([])
for i in range(n_modules):
self.mods.append(A(i + 1))
def forward(self, x):
tmp : Tuple[A] = (self.mods[0], self.mods[1])
for mod in self.mods:
assert isinstance(mod, A)
x = mod(x)
return x
if __name__ == "__main__":
seq = Sequence(3)
seqs = torch.jit.script(seq)
```
The script fails with the following error:
```
Traceback (most recent call last):
File "problem.py", line 34, in <module>
seqs = torch.jit.script(seq)
File "/home/schuetze/.local/lib/python3.8/site-packages/torch/jit/_script.py", line 1284, in script
return torch.jit._recursive.create_script_module(
File "/home/schuetze/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 480, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/schuetze/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 546, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/home/schuetze/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 397, in create_meth
ods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
Variable 'tmp' is annotated with type Tuple[__torch__.A] but is being assigned to a value of type Tuple[__torch__.A, __torch__.___torch_mangle_1.A]:
File "problem.py", line 26
def forward(self, x):
tmp : Tuple[A] = (self.mods[0], self.mods[1])
~~~ <--- HERE
for mod in self.mods:
assert isinstance(mod, A)
```
The types of the different elements are mangled names, and therefore I cannot do type refinment. Is there a way for me around this problem?
_For context (in case it matters):_
I need to use type refinement because I need to help the compiler with some overload resolution. In the real example, there is not just class A but also class B, and I need to tell the compiler that some elements of an nn.ModuleList have the same type and what that type is.
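One possible direction (a sketch based on TorchScript module interfaces, reusing the class `A` from the snippet above; I have not verified it against the real A/B model) is to annotate the values with a `torch.jit.interface` that captures the shared `forward` signature, so the annotation no longer has to name the mangled concrete classes:
```python
from typing import Tuple

import torch
from torch import nn, Tensor

@torch.jit.interface
class AInterface(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        pass

class Sequence(nn.Module):
    def __init__(self, n_modules: int):
        super().__init__()
        # `A` is the module class defined in the snippet above.
        self.mods = nn.ModuleList([A(i + 1) for i in range(n_modules)])

    def forward(self, x):
        # Annotate with the interface instead of the concrete (mangled) classes;
        # tmp only mirrors the annotation from the original repro.
        tmp: Tuple[AInterface, AInterface] = (self.mods[0], self.mods[1])
        for mod in self.mods:
            x = mod(x)
        return x
```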
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.18.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 78
Model name: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
Stepping: 3
CPU MHz: 2800.000
CPU max MHz: 3400,0000
CPU min MHz: 400,0000
BogoMIPS: 5599.85
Virtualization: VT-x
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 512 KiB
L3 cache: 4 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] functorch==1.13.1
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.2
[pip3] numpy-financial==1.0.0
[pip3] onnx2torch==1.5.6
[pip3] pytorch-lightning==1.5.4
[pip3] pytorch3d==0.7.3
[pip3] torch==2.0.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchdata==0.6.0
[pip3] torchmetrics==0.9.3
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[pip3] tritonclient==2.27.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,952 | 98,690 |
Linking ResNeXt PyTorch Hub in Pipeline docs
|
oncall: distributed
|
### 📚 The doc issue
The ResNeXt model is mentioned for the first time without a link reference to PyTorch Hub.
see https://pytorch.org/docs/stable/pipeline.html#skip-connections
The main reason is to show the differentiation between the ResNet and ResNeXt models, especially since the ResNet topic itself is excluded from the Pipeline docs. Since ResNeXt is the next generation of the well-known and more popular ResNet, properly introducing ResNeXt at its first mention is a necessity.
### Suggest a potential alternative/fix
Provide a link to the PyTorch Hub ResNeXt page in the Pipeline docs.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,953 | 98,678 |
DISABLED test_gradgrad_nn_GroupNorm_cuda_float64 (__main__.TestModuleCUDA)
|
module: nn, triaged, skipped
|
Platforms: linux
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_modules.py%3A%3ATestModuleCUDA%3A%3Atest_gradgrad_nn_GroupNorm_cuda_float64)).
This test is failing on slow gradcheck, pending a fix https://github.com/pytorch/pytorch/pull/98424#issuecomment-1499858018
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 4 |
2,954 | 98,677 |
DISABLED test_grad_nn_GroupNorm_cuda_float64 (__main__.TestModuleCUDA)
|
module: nn, triaged, skipped
|
Platforms: linux
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_modules.py%3A%3ATestModuleCUDA%3A%3Atest_grad_nn_GroupNorm_cuda_float64)).
This test is failing on slow gradcheck, pending a fix https://github.com/pytorch/pytorch/pull/98424#issuecomment-1499858018
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
2,955 | 98,675 |
torch.matmul with batched CSR matrix
|
module: sparse, triaged
|
### 🐛 Describe the bug
```python
import torch
device = torch.device("cuda:0")
a = torch.tensor([
[
[1.0, 0.0],
[2.0, 1.0]
],
[
[0.1, 0.1],
[0.0, 2.0],
]
]).to_sparse_csr().to(device)
b = torch.randn(2, 2).to(device)
print(torch.matmul(a, b))
```
This leads to `RuntimeError: Sparse CSR tensors do not have strides`, so it seems like this is not implemented yet. However, it is unclear to me why this is a problem with striding.
```
Traceback (most recent call last):
File "/<path>/sparse_missing_support.py", line 24, in <module>
main()
File "/<path>/sparse_missing_support.py", line 19, in main
print(torch.matmul(a, b.to_sparse_csr()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Sparse CSR tensors do not have strides
```
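A possible workaround sketch in the meantime, assuming the batch dimension is small and that 2-D CSR x dense matmul is available in your build (with `a` kept in dense form as `a_dense`, and `b` and `device` as in the snippet above):
```python
# Perform the batched product one 2-D slice at a time.
out = torch.stack([
    torch.matmul(a_dense[i].to_sparse_csr(), b)  # 2-D CSR @ dense
    for i in range(a_dense.shape[0])
])
print(out)
```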
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Fedora Linux 37 (KDE Plasma) (x86_64)
GCC version: (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.36
Python version: 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (64-bit runtime)
Python platform: Linux-6.2.7-200.fc37.x86_64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 SUPER
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.1
/usr/lib64/libcudnn_adv_infer.so.8.8.1
/usr/lib64/libcudnn_adv_train.so.8.8.1
/usr/lib64/libcudnn_cnn_infer.so.8.8.1
/usr/lib64/libcudnn_cnn_train.so.8.8.1
/usr/lib64/libcudnn_ops_infer.so.8.8.1
/usr/lib64/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 2700 Eight-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 67%
CPU max MHz: 3200.0000
CPU min MHz: 1550.0000
BogoMIPS: 6387.18
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 512 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.0
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 7 |
2,956 | 98,673 |
[ux] Non-blocking tensor constructors
|
triaged, enhancement, has workaround, module: tensor creation
|
### 🚀 The feature, motivation and pitch
It appears that regular `torch.tensor`/`torch.as_tensor` constructors incur blocking behavior when placing the result in CUDA memory: https://github.com/pytorch/vision/issues/7504 https://github.com/pytorch/vision/pull/7506
So @nlgranger had to replace `torch.tensor(..., device = my_cuda_device)` by `torch.tensor(...).to(device = my_cuda_device, non_blocking = True)`
I think it would be more idiomatic/cleaner to allow a `non_blocking` argument directly on `torch.tensor`/`torch.as_tensor`.
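For reference, a minimal sketch of the current workaround versus the proposed constructor argument (the latter is hypothetical, of course):
```python
import torch

data = [1.0, 2.0, 3.0]
dev = torch.device("cuda")

# Today: build on CPU, pin the memory, then copy asynchronously.
x = torch.as_tensor(data).pin_memory().to(dev, non_blocking=True)

# Proposed (hypothetical) API:
# x = torch.as_tensor(data, device=dev, non_blocking=True)
```
Note that the copy only truly overlaps when the source is in pinned memory, which the proposed argument could handle internally.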
### Alternatives
_No response_
### Additional context
_No response_
cc @gchanan @mruberry
| 0 |
2,957 | 98,668 |
Cannot use `checkpoint_sequential` with `torch.compile`
|
triaged, oncall: pt2
|
## Issue description
I have a number of classes that derive directly from `nn.Sequential`. When I `torch.compile` models containing these classes and attempt to use them in conjunction with `checkpoint_sequential`, execution immediately aborts with a TypeError exception (at line 352 in torch/utils/checkpoint.py):
```
segment_size = len(functions) // segments
TypeError: object of type 'OptimizedModule' has no len()
```
I could imagine that perhaps the two are not meant to interoperate, but it's not clear whether that's the case.
I have seen this with both the stable 2.0.0 CUDA 11.7 release and the nightly CUDA 11.8 release (system info below).
## Code example
I have managed to reproduce the issue in a minimal way with this code snippet:
```python
import torch
import torch.nn as nn
import torch.utils.checkpoint as ckpt
import collections
class Sequence(nn.Sequential):
def __init__(self) -> None:
builder = collections.OrderedDict()
builder['linear_1'] = nn.Linear(32, 32)
builder['linear_2'] = nn.Linear(32, 32)
super(Sequence, self).__init__(builder)
m = Sequence()
n = torch.compile(m)
x = torch.randn(32)
y = ckpt.checkpoint_sequential(n, segments=2, input=x, use_reentrant=False)
```
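For what it's worth, a workaround sketch that avoids the error (it relies on the private `_orig_mod` attribute of `OptimizedModule`, which is an assumption about the current wrapper rather than a documented API):
```python
# Checkpoint the underlying (uncompiled) nn.Sequential instead of the compiled
# wrapper, since checkpoint_sequential needs len()/slicing on the module.
y = ckpt.checkpoint_sequential(n._orig_mod, segments=2, input=x, use_reentrant=False)
```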
## System Info
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230406+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.2.1 20230201
Clang version: 15.0.7
CMake version: Could not collect
Libc version: glibc-2.37
Python version: 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (64-bit runtime)
Python platform: Linux-6.2.9-arch1-1-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 530.41.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 35%
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230406+cu118
[pip3] torchaudio==2.1.0.dev20230406+cu118
[pip3] torchdata==0.7.0.dev20230406
[pip3] torchvision==0.16.0.dev20230406+cu118
[conda] Could not collect
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
2,958 | 98,618 |
DISABLED test_transpose_with_norm (__main__.CPUReproTests)
|
triaged, skipped, module: inductor
|
Platforms: linux
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_cpu_repro.py%3A%3ACPUReproTests%3A%3Atest_transpose_with_norm)).
Since https://github.com/pytorch/pytorch/pull/97841 re-enabled it, this test has been failing consistently, e.g. https://hud.pytorch.org/pytorch/pytorch/commit/46d765c15e702e2e2bc64b2948fba1f8845c4cda
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 1 |
2,959 | 98,617 |
Add test/distributed/test_c10d_mpi.py
|
oncall: distributed, triaged
|
> We have one config where MPI is available and we build the wheels with MPI support.
https://github.com/pytorch/pytorch/pull/98545#issuecomment-1500440581
We don't have any tests for MPI as we do for gloo (test_c10d_gloo) or nccl (test_c10d_nccl). As @malfet mentions, since we have CI support for MPI, we should at least create a basic test_c10d_mpi that performs basic distributed operations (e.g. init_process_group) to prevent future regressions.
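A minimal sketch of what such a test could exercise (assumptions: an MPI-enabled build, launched with something like `mpirun -n 2`; the real test would presumably reuse the common distributed test harness instead):
```python
import torch
import torch.distributed as dist

def main():
    # The MPI backend picks up rank and world size from the MPI launcher.
    dist.init_process_group("mpi")
    rank, world = dist.get_rank(), dist.get_world_size()
    t = torch.full((4,), float(rank))
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    assert torch.equal(t, torch.full((4,), float(sum(range(world)))))
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```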
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @kwen2501 @awgu
| 0 |
2,960 | 98,600 |
Wrong illustration in README.md
|
module: docs, triaged
|
### 📚 The doc issue
The last frame of the illustration is wrong:

### Suggest a potential alternative/fix
The correct illustration should look like this:

cc @svekars @carljparker
| 1 |
2,961 | 98,587 |
Cannot use AT_CUDA_DRIVER_CHECK from user code
|
module: build, module: cpp-extensions, module: internals, module: cuda, triaged
|
### 🐛 Describe the bug
In my C++ extension that links against PyTorch I cannot use the AT_CUDA_DRIVER_CHECK macro to check the return code of a call to the CUDA driver API, because that macro makes use of the at::cuda::NVRTC struct which is defined in the ATen/cuda/nvrtc_stub/ATenNVRTC.h header, which doesn't get installed/shipped by PyTorch!
### Versions
I'm using PyTorch 2.0.0 installed from the official conda channels (build py3.9_cuda11.7_cudnn8.5.0_0).
cc @malfet @seemethere @zou3519 @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ngimel
| 2 |
2,962 | 98,566 |
`F.interpolate` and `F.grid_sample` - documentation error and bug
|
module: docs, triaged
|
### 🐛 Describe the bug
`F.interpolate` and `F.grid_sample` are closely related as documented [here](https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html) and [here](https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html#torch.nn.functional.grid_sample) respectively.
I ran into two separate issues with them:
1. The documentation is wrong and states the exact opposite behavior/use case for the `align_corners` flag of the function,
2. In the volumetric case, there is a bug that causes resolution-independent sampling with trilinear interpolation not to work at all.
To demonstrate both, I created minimal reproducible test cases that should be ready to use or adapt if wanted:
```
#!/usr/bin/env python3
import unittest
import torch
from torch.nn import functional as F
class TestGridSampleAndInterpolate(unittest.TestCase):
"""
Reproducer for grid sample and interpolation behavior.
"""
def test_2d(self):
"""
Demonstrate functionality in the 2D case.
Everything works correctly (as far as I can tell), but the behavior
is opposite to the documentation as stated on the website.
"""
torch.random.manual_seed(42)
# We create a tensor with minimal size to demonstrate.
# It must be a 4D tensor (N, C, H, W) since we need at least two
# spatial dimensions for the `grid_sample` function later on.
test_tensor = torch.rand((1, 1, 2, 2))
# We use the interpolate function as documented. For this behavior it does
# not matter whether we use the `scale_factor` or `size` parameters since
# we aim for a non-fractional upscaling anyways.
# First, we demonstrate that the described behavior in the documentation that
# states that this causes sampling coordinates to be "referring to the corner
# points of the input’s corner pixels, making the sampling more resolution
# agnostic". It should make it completely resolution agnostic...
test_tensor_scaled = F.interpolate(
test_tensor, scale_factor=2.0, mode="bilinear", align_corners=False
)
# Now we generate 100 random sample point coordinates.
# They have to be scaled into the target coordinate system ranging from -1 to 1.
coords_xy = torch.rand(100, 2) * 2.0 - 1.0
# We now sample the grid in both resolution using the `grid_sample` function
# as described in the documentation.
sample_results_original = F.grid_sample(
test_tensor.expand((100, 1, 2, 2)),
coords_xy[:, None, None, :],
mode="bilinear",
align_corners=False,
)
# We do the same with the upscaled grid.
sample_results_scaled = F.grid_sample(
test_tensor_scaled.expand((100, 1, 4, 4)),
coords_xy[:, None, None, :],
mode="bilinear",
align_corners=False,
)
# And the results _should_ match - but they don't by a large margin.
self.assertGreater(
(sample_results_original - sample_results_scaled).abs().max(), 1e-1
)
# However, if we do the exact opposite of what's stated in the documentation
# and use `align_corners=True` everything works as expected.
test_tensor_scaled = F.interpolate(
test_tensor, scale_factor=2.0, mode="bilinear", align_corners=True
)
sample_results_original = F.grid_sample(
test_tensor.expand((100, 1, 2, 2)),
coords_xy[:, None, None, :],
mode="bilinear",
align_corners=True,
)
# We do the same with the upscaled grid.
sample_results_scaled = F.grid_sample(
test_tensor_scaled.expand((100, 1, 4, 4)),
coords_xy[:, None, None, :],
mode="bilinear",
align_corners=True,
)
self.assertTrue(torch.allclose(sample_results_original, sample_results_scaled))
def test_3d(self):
"""
Demonstrate functionality in the 3D case.
Now comes the really interesting part: this seems not to work in the 3D
case at all (except potentially in the degenerate case where the volume
is composed of equal slices, making it bilinear interpolation - I have
not tested that sufficiently much, though). This is very important for
correct volume upsampling.
"""
torch.random.manual_seed(42)
# We create a tensor with minimal size to demonstrate.
# It must be a 5D tensor (N, C, H, W, D) since we need at least three
# spatial dimensions for the `grid_sample` function later on.
test_tensor = torch.rand((1, 1, 2, 2, 2))
# We again generate 100 random sample point coordinates.
# They have to be scaled into the target coordinate system ranging from -1 to 1.
coords_xyz = torch.rand(100, 3) * 2.0 - 1.0
# We use the interpolate function as documented. For this behavior it does
# not matter whether we use the `scale_factor` or `size` parameters since
# we aim for a non-fractional upscaling anyways.
# Here, we look at both modes (`align_corners=True` and `align_corners=False`)
# and show that they both do not work as intended and advertised.
for align_corners in [True, False]:
test_tensor_scaled = F.interpolate(
test_tensor,
scale_factor=2.0,
mode="trilinear",
align_corners=align_corners,
)
# We now sample the grid in both resolution using the `grid_sample` function
# as described in the documentation.
sample_results_original = F.grid_sample(
test_tensor.expand((100, 1, 2, 2, 2)),
coords_xyz[:, None, None, None, :],
# Using 'bilinear' here since the documentation mentions that
# it is the correct mode for actually trilinear interpolation
# in the volumetric case.
mode="bilinear",
align_corners=align_corners,
)
# We do the same with the upscaled grid.
sample_results_scaled = F.grid_sample(
test_tensor_scaled.expand((100, 1, 4, 4, 4)),
coords_xyz[:, None, None, None, :],
mode="bilinear",
align_corners=False,
)
# And the results _should_ match at least in one of the two cases - here
# they never do.
self.assertGreater(
(sample_results_original - sample_results_scaled).abs().max(), 1e-1
)
if __name__ == "__main__":
unittest.main()
```
All these tests are consistently passing for me on PyTorch 1.13.1 (since I currently swapped conditions for the tests to show the _undesired_ instead of the desired behavior).
As shown in the 2D case, there is no reason that these interpolations / sampling operations should not be exact, the maximum error in 2D using `align_corners=True` usually has errors <1e-8.
**Is this an edge case?** Of course I did not start out trying this on 2x2(x2)-sized volumes or images; I just found it to be a fairly minimal reproducer. The same effects occur on larger images/volumes as well, across the entire image/volume.
It would be great if you could have a look at this!
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.35
Python version: 3.10.9 (main, Dec 16 2022, 10:01:32) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 527.27
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3995WX 64-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
BogoMIPS: 5389.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip rdpid
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.9.4
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.14.1
[conda] Could not collect
cc @svekars @carljparker
| 4 |
2,963 | 98,561 |
Tracker - Failing models in the torch.compile dashboard
|
triaged, oncall: pt2
|
# Torchbench models
- [ ] @bdhirsh - Inductor(default) fails for `hf_Longformer` - Repro at https://github.com/pytorch/pytorch/issues/100067
Cmd - `python benchmarks/dynamo/torchbench.py --accuracy --training --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --only=hf_Longformer`
Possible fix - https://github.com/pytorch/pytorch/pull/100115
- [x] @wconstab - https://github.com/pytorch/pytorch/issues/103385
- [x] @williamwen42 - `hf_T5_base` is failing
- [x] @yanboliang - detectron2_maskrcnn - They dont show up in dashboard because they are skipped - https://github.com/pytorch/pytorch/issues/99665
# TIMM models
- [x] @eellison - Inductor (w/ cudagraphs) OOM for `cait_m36_384`
# Huggingface models
- [ ] (**Up for Grabs**) - Inductor (default) accuracy failure with `AlbertForQuestionAnswering` - Can't repro on AWS machine
Next steps - Repro this on GCP machine, check the offending tensors, increase tolerance if needed.
- [x] @eellison - Inductor (w/ cudagraphs) OOM for `DebertaV2ForQuestionAnswering`
## Dynamic shapes (NOT POPULATED YET)
- [ ] @ezyang - Inductor (dynamic) w/eval fails for `hf_BigBird`
Cmd - `python benchmarks/dynamo/torchbench.py --accuracy --inference --amp --backend inductor --dynamic-shapes --dynamic-batch-only --disable-cudagraphs --device cuda --only=hf_BigBird`
~~~
2023-04-24T18:32:37.8937866Z expr = pexpr(V.graph.sizevars.simplify(self.shape))
2023-04-24T18:32:37.8938395Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 292, in doprint
2023-04-24T18:32:37.8938757Z return self._str(self._print(expr))
2023-04-24T18:32:37.8939235Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print
2023-04-24T18:32:37.8939711Z return printmethod(expr, **kwargs)
2023-04-24T18:32:37.8940289Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 191, in _print_Pow
2023-04-24T18:32:37.8940643Z assert exp.is_integer
2023-04-24T18:32:37.8941042Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
2023-04-24T18:32:37.8941357Z AssertionError:
~~~
- [ ] @bdhirsh - Inductor(default) w/ training fails
Cmd - `python benchmarks/dynamo/torchbench.py --accuracy --training --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --only=hf_BigBird`
Error
~~~
2023-04-24T19:31:30.2966005Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 567, in bigbird_block_sparse_attention
2023-04-24T19:31:30.2966675Z np.random.seed(seed)
2023-04-24T19:31:30.2967733Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 569, in <resume in bigbird_block_sparse_attention>
2023-04-24T19:31:30.2968585Z rand_attn = [
2023-04-24T19:31:30.2969590Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 591, in <resume in bigbird_block_sparse_attention>
2023-04-24T19:31:30.2970293Z rand_attn = np.stack(rand_attn, axis=0)
2023-04-24T19:31:30.2971020Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 592, in <resume in bigbird_block_sparse_attention>
2023-04-24T19:31:30.2971769Z rand_attn = torch.tensor(rand_attn, device=query_layer.device, dtype=torch.long)
2023-04-24T19:31:30.2972585Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 629, in <resume in bigbird_block_sparse_attention>
2023-04-24T19:31:30.2972999Z first_context_layer.unsqueeze_(2)
2023-04-24T19:31:30.2974281Z RuntimeError: Output 0 of CompiledFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
~~~
- [ ] @anijain2305 + @ezyang - Inductor (w/ dynamic) fails for `convit_base` for sympy error
~~~
2023-04-24T18:14:43.2692442Z cuda train convit_base WARNING:common:fp64 golden ref were not generated for convit_base. Setting accuracy check to cosine
2023-04-24T18:14:58.0197194Z [2023-04-24 18:14:58,017] torch.fx.experimental.symbolic_shapes: [WARNING] 13.0: RecursionError in sympy.solve(floor(s0**0.5) - 14, s0)
2023-04-24T18:15:01.5186560Z ERROR:common:backend='inductor' raised:
2023-04-24T18:15:01.5187148Z CppCompileError: C++ compile error
2023-04-24T18:15:01.5187414Z
2023-04-24T18:15:01.5187542Z Command:
2023-04-24T18:15:01.5190876Z g++ /tmp/torchinductor_jenkins/za/czarprmgsfhybui3toxkkq3vz6vil3qetzl7exfphbvz2kjjt4tr.cpp -shared -fPIC -Wall -std=c++17 -Wno-unused-variable -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/envs/py_3.10/include/python3.10 -lgomp -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -D C10_USING_CUSTOM_GENERATED_MACROS -o /tmp/torchinductor_jenkins/za/czarprmgsfhybui3toxkkq3vz6vil3qetzl7exfphbvz2kjjt4tr.so
2023-04-24T18:15:01.5192575Z
2023-04-24T18:15:01.5192698Z Output:
2023-04-24T18:15:01.5193959Z /tmp/torchinductor_jenkins/za/czarprmgsfhybui3toxkkq3vz6vil3qetzl7exfphbvz2kjjt4tr.cpp: In function ‘void kernel(float*, long int*, long int*, long int)’:
2023-04-24T18:15:01.5195123Z /tmp/torchinductor_jenkins/za/czarprmgsfhybui3toxkkq3vz6vil3qetzl7exfphbvz2kjjt4tr.cpp:42:72: error: invalid operands of types ‘long int’ and ‘double’ to binary ‘operator%’
2023-04-24T18:15:01.5196985Z auto tmp0 = out_ptr1[static_cast<long>((((i0 / 1L) % (std::floor(std::sqrt(ks0))))*(std::floor(std::sqrt(ks0)))) + ((i1 / 1L) % (std::floor(std::sqrt(ks0)))))];
2023-04-24T18:15:01.5197557Z ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-04-24T18:15:01.5198643Z /tmp/torchinductor_jenkins/za/czarprmgsfhybui3toxkkq3vz6vil3qetzl7exfphbvz2kjjt4tr.cpp:42:147: error: invalid operands of types ‘long int’ and ‘double’ to binary ‘operator%’
2023-04-24T18:15:01.5202118Z auto tmp0 = out_ptr1[static_cast<long>((((i0 / 1L) % (std::floor(std::sqrt(ks0))))*(std::floor(std::sqrt(ks0)))) + ((i1 / 1L) % (std::floor(std::sqrt(ks0)))))];
~~~
Next steps
* @anijain2305 - Use this opportunity to test minifier with dynamic shapes
* @ezyang to fix/assign the owner to fix the issue
Completed
-------
- [x] Inductor (default) accuracy flakiness for `sebotnet33ts_256` - Fixed by @anijain2305 https://github.com/pytorch/pytorch/pull/99851
- [x] Inductor (default) w/eval fails for `hf_BigBird` - Fixed by @anijain2305
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
2,964 | 98,557 |
torch.jit.script codegen warning with cuda and vmap
|
oncall: jit, triaged, module: functorch
|
### 🐛 Describe the bug
I'm getting the following warning, which hints at suboptimal speed and doesn't look like it should happen at all.
```
... site-packages\torch\_functorch\vmap.py:619: UserWarning: FALLBACK path has been taken inside: torch::jit::fuser::cuda::runCudaFusionGroup. This is an indication that codegen Failed for some reason.
To debug try disable codegen fallback path via setting the env variable `export PYTORCH_NVFUSER_DISABLE=fallback`
(Triggered internally at C:\cb\pytorch_1000000000000\work\third_party\nvfuser\csrc\manager.cpp:340.)
batched_outputs = func(*batched_inputs, **kwargs)
```
This can be reproduced with the following code:
```python
import torch
from torch import vmap
import torch.jit
@torch.jit.script
def test(params):
x0 = params[0]
y = torch.arange(0, 64, dtype=torch.float32, device=params.device)
return torch.cos(x0)*y
params = torch.zeros((200, 4), dtype=torch.float32, device='cuda')
torch.vmap(test, chunk_size=100)(params)
```
It seems to occur only when erf is passed an array instead of a scalar.
I tested this both on PyTorch 2.0 and on the nightly build (versions collected below), and on Windows and Ubuntu (see the collected environments).
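Following the warning's own suggestion, here is a sketch of the repro with the fallback disabled so the underlying codegen error is raised instead of silently falling back (the assumption being that setting the variable before importing torch is equivalent to the suggested `export PYTORCH_NVFUSER_DISABLE=fallback`):
```python
import os
os.environ["PYTORCH_NVFUSER_DISABLE"] = "fallback"

import torch

@torch.jit.script
def test(params):
    x0 = params[0]
    y = torch.arange(0, 64, dtype=torch.float32, device=params.device)
    return torch.cos(x0) * y

params = torch.zeros((200, 4), dtype=torch.float32, device="cuda")
torch.vmap(test, chunk_size=100)(params)
```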
### Versions
[windows_env.txt](https://github.com/pytorch/pytorch/files/11175006/windows_env.txt)
[linux_env.txt](https://github.com/pytorch/pytorch/files/11175007/linux_env.txt)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 3 |
2,965 | 98,542 |
Training runs 50% slower when using 2 GPUs comparing to 1
|
oncall: distributed
|
### 🐛 Describe the bug
Hi,
I'm trying to train an InternImage model and I'm running into a weird issue using either Torchrun or torch.distributed.launch
I'm using the PyTorch Nvidia docker 22.04 with CUDA 11.6.2 and PyTorch 1.12.
When I'm training the model using "python train.py ..." the model runs well. However, when I try to take advantage of my 2 GPUs, I'm running into problems.
I can see that both GPUs are running, since their temperatures and memory usage go up, but while training the model with 1 GPU takes me 2.5 days, with the 2 GPUs the model ETA is > 3.5 days.
I don't care that much about the time, but the main problem is that the training crashes after 1000 iterations, when it's saving the first checkpoint. None of these issues is observed when I use 1 GPU.
I'll appreciate your help troubleshooting this problem.
I'm using WSL2, with Nvidia docker 22.04, and I have two RTX3090 GPUs
This is the error I'm getting:
2023-04-06 21:18:27,943 - mmseg - INFO - Saving checkpoint at 1000 iterations
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 19682 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 19681) of binary: /opt/conda/bin/python
Traceback (most recent call last):
File "/opt/conda/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==1.12.0a0+bd13bc6', 'console_scripts', 'torchrun')())
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 761, in main
run(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
elastic_launch(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
======================================================
train.py FAILED
------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-04-06_21:18:35
host : f56b97aeeef1
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 19681)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 19681
======================================================
root@f56b97aeeef1:/workspace/InternImage/segmentation# /opt/conda/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 38 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
/opt/conda/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 38 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
### Versions
Collecting environment information...
PyTorch version: 1.12.0a0+bd13bc6
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 531.18
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz
Stepping: 7
CPU MHz: 3695.996
BogoMIPS: 7391.99
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 320 KiB
L1i cache: 320 KiB
L2 cache: 10 MiB
L3 cache: 19.3 MiB
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_vnni flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+bd13bc6
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.13.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h1d589f8_2 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+bd13bc6 pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.13.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,966 | 98,541 |
Memory corruption using torch.ops.* to access re-registered operator
|
module: internals, triaged
|
### 🐛 Describe the bug
We register an operator (foo::sum), call torch.ops.foo.sum, de-register it (by deleting the Library object), re-register it, and then call torch.ops.foo.sum again. This causes memory corruption (and leads to segfaults).
Repro: Run the following under an asan build.
```python
import torch
from torch.testing._internal.common_utils import TestCase
class TestTesting(TestCase):
def test_AAA(self) -> None:
from torch.library import Library
my_lib1 = Library("foo", "DEF")
my_lib1.define("sum(Tensor self) -> Tensor")
@torch.library.impl(my_lib1, "sum", "CPU")
def my_sum(*args, **kwargs):
return args[0]
x = torch.tensor([1, 2])
self.assertEqual(torch.ops.foo.sum(x), x)
import sys
assert sys.getrefcount(my_lib1) == 2
del my_lib1
my_lib1 = Library("foo", "DEF")
my_lib1.define("sum(Tensor self) -> Tensor")
@torch.library.impl(my_lib1, "sum", "CPU")
def my_sum(*args, **kwargs):
return args[0]
x = torch.tensor([1, 2])
self.assertEqual(torch.ops.foo.sum(x), x)
```
Likely related to https://github.com/pytorch/pytorch/issues/98537, unsure if same exact issue.
### Versions
master
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 0 |
2,967 | 98,537 |
Segfault when using torch.ops.* to access de-registered op
|
module: crash, triaged, module: dispatch, module: library
|
### 🐛 Describe the bug
We register an operator (foo::sum), call torch.ops.foo.sum, de-register it (by deleting the Library object), and then call torch.ops.foo.sum again. This segfaults.
```python
import torch
from torch.library import Library
my_lib1 = Library("foo", "DEF")
my_lib1.define("sum(Tensor self) -> Tensor")
@torch.library.impl(my_lib1, "sum", "CPU")
def my_sum(*args, **kwargs):
return args[0] * 2
x = torch.tensor([1, 2])
torch.ops.foo.sum(x)
import sys
assert sys.getrefcount(my_lib1) == 2
del my_lib1
x = torch.tensor([1, 2])
torch.ops.foo.sum(x)
```
It should not segfault
### Versions
master
cc @anjali411
| 3 |
2,968 | 98,533 |
Dynamo compiled graph gets overwritten by eager in a data dependent branch when False branch is empty
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
I have encountered an unexpected dynamo capture behavior related to a data-dependent branch. While the result of the code execution is correct, the way it executes is unexpected -- an already compiled graph gets silently dropped and execution falls back to eager when an empty False branch is run.
It is best illustrated using an example. The example is a simplified version of running HuggingFace Whisper model.
```
import torch
import torch._dynamo as dynamo
global_dict = {
3: 3
}
def f(x, y):
# begin graph 0
x_len = x.shape[-1]
update_idx = global_dict.get(x_len, None)
# end graph 0
if update_idx is not None:
# begin graph 1
y[update_idx] += 1
# end graph 1
return y
class DebugCompiler:
def __init__(self):
self.count = 0
def __call__(self, gm, example_inputs):
id = self.count
print(f'compiling graph {id}')
gm.graph.print_tabular()
def run(*args, **kwargs):
print(f'running graph {id}')
return gm.forward(*args, **kwargs)
self.count += 1
return run
f = dynamo.optimize(DebugCompiler(), dynamic=True)(f)
def test(x_sizes):
y = torch.zeros(5)
for size_of_x in x_sizes:
x = torch.tensor(range(size_of_x))
y = f(x, y)
print(y)
# Expected output:
# ```
# compiling graph 0
# running graph 0
# compiling graph 1
# running graph 1
# running graph 0
# running graph 1
# ```
test([3, 3])
# Expected output:
# ```
# running graph 0
# running graph 1
# running graph 0
# running graph 0
# running graph 1
# running graph 0
# running graph 1
# running graph 0
# running graph 1
# ```
# Actual output:
# ```
# running graph 0
# running graph 1
# running graph 0
# running graph 0
# running graph 0
# running graph 0
# ```
test([3, 4, 3, 3, 3])
```
There are two subgraphs in `f`: `graph0` checks the shape of `x` and compares it against a dictionary, and `graph1` updates `y`. The execution of `graph1` depends on the output of the check in `graph0`.
The unexpected behavior is that once the "False" branch in `f` is triggered, `graph1` is never used again. The code for updating `y` silently runs in eager mode. This is shown in the 2nd test case: for the first `3`, graph 1 is used, then the input `4` triggers the False branch, and afterwards even running with `3` does not trigger the compiled graph anymore. The output `y` for each run is still correct, however, which means the code for updating `y` runs in eager mode instead of using `graph1`.
On the other hand, if there is tensor computation in the False branch, this problem does not happen.
```
def f(x, y):
# begin graph 0
x_len = x.shape[-1]
update_idx = global_dict.get(x_len, None)
# end graph 0
if update_idx is not None:
# begin graph 1
y[update_idx] += 1
# end graph 1
else:
# begin graph 2
y[update_idx] -= 1
# end graph 2
return y
```
### Versions
torch==2.1.0.dev20230331+cu117
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 8 |
2,969 | 98,515 |
torch.cond should work with expressions involving SymInt
|
triaged, module: functorch
|
### 🐛 Describe the bug
If x.size(0) is an unbacked symint (because x came from nonzero), it is useful to say `torch.cond(x.size(0) > 5, do_true, do_false)`.
I believe @tugsbayasgalan will indirectly finish this off with https://github.com/pytorch/pytorch/pull/98453 because with symbool, we can then do `scalar_to_tensor(x.size(0) > 5)` and then pass that as a tensor argument to torch.cond.
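For concreteness, a minimal sketch of the desired usage; it does not work today, and the branch functions plus the operand-tuple calling convention (taken from the current control-flow API) are illustrative assumptions:
```python
import torch
from functorch.experimental.control_flow import cond  # cond's import path as of the 2.0/2.1 nightlies

def do_true(y):
    return y.sin()

def do_false(y):
    return y.cos()

def f(mask, y):
    x = mask.nonzero()  # x.size(0) is an unbacked SymInt under tracing
    # desired: branch on an expression involving the unbacked SymInt
    return cond(x.size(0) > 5, do_true, do_false, (y,))
```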
### Versions
master
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 1 |
2,970 | 98,503 |
Power VSX vectorization support disabled
|
module: build, triaged, module: regression, module: POWER
|
### 🐛 Describe the bug
During a CMake cleanup, VSX vectorization support for POWER was disabled as well:
https://github.com/pytorch/pytorch/commit/847dbb8684f9d9bbf59cae629d07bff3ede0c4a2#diff-12e8125164bbfc7556b1781a8ed516e333cc0bf058acb7197f7415be44606c72L1729
ZVECTOR support for s390x was removed as well, but it's not part of this issue, it's already being worked on separately.
For ZVECTOR it takes more than just one CMake line to properly restore vectorization support, so I assume the same might be true for VSX.
### Versions
pytorch master branch
cc @malfet @seemethere
| 2 |
2,971 | 98,499 |
`torch.nn.utils.rnn.unpad_sequence` modifies arguments in-place
|
module: docs, module: rnn, triaged
|
### 🐛 Describe the bug
`torch.nn.utils.rnn.unpad_sequence` has the unexpected side effect of transposing the input tensor in-place. This should either be documented or, even better, fixed so that the function call does not modify the input data.
```python
import torch
x = torch.randn(4,2)
# x.shape == (4, 2)
torch.nn.utils.rnn.unpad_sequence(x, lengths=torch.tensor([4, 4]))
# x.shape == (2, 4)
print(x.shape)
```
Yields `torch.Size([2, 4])`
The problem is the `x.transpose_(0, 1)` when `batch_first=False` [in this line of code](https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/rnn.py#L440).
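If the fix route is taken, a minimal sketch of a non-mutating variant of that branch (parameter name as in the linked source; the rest of the function stays unchanged):
```python
# inside torch.nn.utils.rnn.unpad_sequence
if not batch_first:
    padded_sequences = padded_sequences.transpose(0, 1)  # out-of-place: the caller's tensor is left untouched
```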
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
Stepping: 10
CPU MHz: 1900.000
CPU max MHz: 3600.0000
CPU min MHz: 400.0000
BogoMIPS: 3799.90
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 6 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==0.990
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] torch==1.13.1
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
cc @svekars @carljparker @zou3519
| 0 |
2,972 | 98,498 |
Higher order derivatives not working when setting compute device to `torch.device("mps")`
|
module: autograd, triaged, module: mps
|
### 🐛 Describe the bug
I have a code example that performs fast updates, similar to MAML, to do inner updates and then computes the meta-loss to backpropagate through the optimization trajectory. When computing gradients in the fast updates, we can set `create_graph=True` to enable second-order derivatives when calling backward on the meta-loss. When using `torch.device("mps")`, it throws an error that the derivative for `aten::linear_backward` is not implemented. It works fine when you set `create_graph=False` in the inner updates, but then it doesn't compute the higher-order derivatives. I don't get the error when using `torch.device("cpu")` or `torch.device("cuda")`. Here is the code to reproduce the error:
```python
import torch
import torch.nn as nn
from copy import deepcopy
device = torch.device("mps")
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))
model.to(device)
# Initial fast parameters
fast_params_0 = {n: deepcopy(p) for (n, p) in model.named_parameters()}
# First inner update
x = torch.randn(10, 10, device=device)
y = torch.randn(10, 1, device=device)
logits_0 = torch.func.functional_call(model, fast_params_0, x)
loss = nn.MSELoss()(logits_0, y)
grads_0 = torch.autograd.grad(loss, fast_params_0.values(),
create_graph=True,
retain_graph=True)
# Compute fast parameters after the first inner update
fast_params_1 = {n: p - 0.1 * g for ((n, p), g) in zip(fast_params_0.items(), list(grads_0))}
# Compute meta-loss and backprop through the optimization trajectory
x = torch.randn(10, 10, device=device)
y = torch.randn(10, 1, device=device)
logits_1 = torch.func.functional_call(model, fast_params_1, x)
met_loss = nn.MSELoss()(logits_1, y)
met_loss.backward()
```
And, the error I get:
```
RuntimeError: derivative for aten::linear_backward is not implemented
```
*I get the same error for any layer type.
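For what it's worth, the failure seems to reduce to double backward through a single linear layer on MPS. A smaller sketch (untested beyond this setup) that should hit the same error, possibly already at the `grad()` call:
```python
import torch

lin = torch.nn.Linear(10, 5).to("mps")
x = torch.randn(3, 10, device="mps")
(g,) = torch.autograd.grad(lin(x).sum(), lin.weight, create_graph=True)
g.sum().backward()  # expected: RuntimeError: derivative for aten::linear_backward is not implemented
```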
### Versions
```
[pip3] numpy==1.23.5
[pip3] pytorchcv==0.0.67
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.0
[conda] numpy 1.23.5 py39h1398885_0
[conda] numpy-base 1.23.5 py39h90707a3_0
[conda] pytorch 2.0.0 py3.9_0 pytorch
[conda] pytorchcv 0.0.67 pypi_0 pypi
[conda] torchaudio 2.0.0 py39_cpu pytorch
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.15.0 py39_cpu pytorch
```
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @kulinseth @malfet @DenisVieriu97 @razarmehr @abhudev
| 9 |
2,973 | 98,497 |
[onnx] Unsupported: ONNX export of convolution for kernel of unknown shape
|
module: onnx, triaged
|
### 🐛 Describe the bug
I encountered this error when converting a PyTorch model to ONNX.
I am trying to convolve with specific weights, using groups. I narrowed the problem down to the piece of code shown below.
```python
import torch
import torch.nn as nn
class Filter(nn.Module):
def __init__(self):
super().__init__()
self.resample_filter = torch.rand(4,4)
def forward(self, x):
x = torch.nn.functional.pad(x, [1, 1, 1, 1]) # If this line is commented out, it works.
weight = self.resample_filter[None, None].repeat([x.shape[1] , 1] + [1] * self.resample_filter.ndim)
x = torch.nn.functional.conv2d(input=x, padding=1, weight=weight, groups=x.shape[1] )
return x
x = torch.rand((1, 3, 256, 256))
f = Filter()
y = f(x)
torch.onnx.export(f, x, "test-filter.onnx", opset_version=15)
```
Observed results - error message:
```
Traceback (most recent call last):
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/soham/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/soham/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/soham/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/soham/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/soham/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/soham/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/soham/casablanca/rotation3d/test_modules.py", line 173, in <module>
torch.onnx.export(f, x, "test-filter.onnx", opset_version=15)
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/__init__.py", line 305, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/utils.py", line 118, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/utils.py", line 719, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/utils.py", line 503, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type,
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/utils.py", line 232, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/__init__.py", line 354, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/utils.py", line 1061, in _run_symbolic_function
return symbolic_fn(g, *inputs, **attrs)
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/symbolic_helper.py", line 172, in wrapper
return fn(g, *args, **kwargs)
File "/home/soham/miniconda3/envs/rotation3d/lib/python3.10/site-packages/torch/onnx/symbolic_opset9.py", line 1301, in _convolution
raise RuntimeError("Unsupported: ONNX export of convolution for kernel "
RuntimeError: Unsupported: ONNX export of convolution for kernel of unknown shape.
```
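A possible workaround sketch: deriving the repeat count from `x.shape[1]` after the pad makes the kernel shape dynamic to the exporter, so fixing the channel count as a plain Python int may keep the weight shape static. The `channels` constructor argument below is a hypothetical addition, not part of the original model:
```python
import torch
import torch.nn as nn

class Filter(nn.Module):
    def __init__(self, channels: int = 3):  # hypothetical: channel count fixed at construction time
        super().__init__()
        self.channels = channels
        self.resample_filter = torch.rand(4, 4)

    def forward(self, x):
        x = torch.nn.functional.pad(x, [1, 1, 1, 1])
        # repeat with constant ints so the exporter sees a kernel of known shape
        weight = self.resample_filter[None, None].repeat(self.channels, 1, 1, 1)
        return torch.nn.functional.conv2d(input=x, padding=1, weight=weight, groups=self.channels)
```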
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-38-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 4700,0000
CPU min MHz: 400,0000
BogoMIPS: 5376.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11,5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.11.0
[pip3] torch-fidelity==0.3.0
[pip3] torchmetrics==0.11.3
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 1.11.0 py3.10_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchmetrics 0.11.3 pypi_0 pypi
[conda] torchvision 0.12.0 py310_cu113 pytorch
| 6 |
2,974 | 98,495 |
Strided to batch BSR/BSC conversion fails when the number of zeros per block varies while the number of blocks per batch is constant
|
module: sparse, triaged
|
## Issue description
As in the title.
## Code example
```python
>>> torch.tensor([[[1, 2]], [[3, 4]]]).to_sparse_bsr((1, 1))
tensor(crow_indices=tensor([[0, 2],
[0, 2]]),
col_indices=tensor([[0, 1],
[0, 1]]),
values=tensor([[[[1]],
[[2]]],
[[[3]],
[[4]]]]), size=(2, 1, 2), nnz=2,
layout=torch.sparse_bsr)
>>> torch.tensor([[[1, 2]], [[0, 4]]]).to_sparse_bsr((1, 1))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expect the same number of specified elements per batch.
>>> torch.tensor([[[1, 0]], [[0, 4]]]).to_sparse_bsr((1, 1))
tensor(crow_indices=tensor([[0, 1],
[0, 1]]),
col_indices=tensor([[0],
[1]]),
values=tensor([[[[1]]],
[[[4]]]]), size=(2, 1, 2), nnz=1,
layout=torch.sparse_bsr)
```
Notice that in the failing conversion example, the number of zeros in the first block is 0 and in the second block it is 1.
Apparently, the check logic in
https://github.com/pytorch/pytorch/blob/ccc27bc361f2fa5043534b8f898922ffd0ca9340/aten/src/ATen/native/TensorConversions.cpp#L95-L98
is flawed for BSR and BSC conversion cases.
## System Info
- PyTorch version: master
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
| 3 |
2,975 | 98,487 |
torch.fx.GraphModule inside custom backend has `training` attribute always set to `True` regardless of the user settings
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Calling `eval()` and `train()` on either the original `torch.nn.Module` or the `OptimizedModule` returned by `torch.compile` has no effect on the `training` attribute of the `torch.fx.GraphModule` that is passed to the custom backend function.
### Error logs
_No response_
### Minified repro
```
import torch
def my_custom_backend(gm, example_inputs):
print(gm.training)
return gm.forward
class MockModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
if self.training:
return x + 2
else:
return x + 3
mod = MockModule()
optimized_mod = torch.compile(mod, backend=my_custom_backend)
mod.eval()
optimized_mod.eval()
print(optimized_mod(torch.zeros(10)))
print(optimized_mod(torch.zeros(10)))
mod.train()
optimized_mod.train()
print(optimized_mod(torch.zeros(10)))
print(optimized_mod(torch.zeros(10)))
mod.eval()
optimized_mod.eval()
print(optimized_mod(torch.zeros(10)))
print(optimized_mod(torch.zeros(10)))
```
Result:
```
True
tensor([3., 3., 3., 3., 3., 3., 3., 3., 3., 3.])
tensor([3., 3., 3., 3., 3., 3., 3., 3., 3., 3.])
True
tensor([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.])
tensor([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.])
tensor([3., 3., 3., 3., 3., 3., 3., 3., 3., 3.])
tensor([3., 3., 3., 3., 3., 3., 3., 3., 3., 3.])
```
### Versions
Torch 2.0
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
2,976 | 98,486 |
Options are not forwarded to the custom backend
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Based on the description of [torch.compile](https://pytorch.org/docs/stable/generated/torch.compile.html), options are passed to the backend. Unfortunately, they are only passed to the inductor backend. Currently, the backend function has the following contract `(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]) -> Callable`, and there is no way to get options from the registered backend function.
Am I missing something? Is there a way to get options from the custom backend function? If not, is it possible to add that possibility or update the description in the documentation?
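For reference, a workaround sketch: since arbitrary callables are accepted as backends, a custom backend can capture its own options through a closure or `functools.partial` instead of relying on `options=` (which today only reaches inductor). The option name below is illustrative:
```python
import functools
import torch

def my_backend(gm, example_inputs, *, my_option=None):
    print("my_option =", my_option)  # arrives via the partial, not via torch.compile(options=...)
    return gm.forward

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model, backend=functools.partial(my_backend, my_option=42))
compiled(torch.randn(2, 4))
```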
### Error logs
_No response_
### Minified repro
#torch version 2.0
### Versions
#torch version 2.0
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
2,977 | 98,484 |
Improvements to FSDP debuggability
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
There are a couple pain points which make FSDP harder to debug:
- In some cases, post-backward hooks don't fire, leaving some layers without gradients and causing training convergence issues. We could add logging in a debug mode for this, but we need to think of a more comprehensive solution to identify this issue.
- On lazy init, we should iterate through the original params and check that shared params are in the same FSDP instance.
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,978 | 98,481 |
Bring CudaPluggableAllocator to feature parity with the Native Allocator
|
module: internals, module: cuda, triaged, module: CUDACachingAllocator
|
### 🚀 The feature, motivation and pitch
I have tried a few times to add Unified Memory support to PyTorch, so as to leverage as many resources of my computer as possible while running training and inference alike, but to no avail, so I somewhat abandoned my fork. After I heard about the pluggable allocator mechanism, I tried it with [RAPIDS rmm](https://github.com/rapidsai/rmm) and [stable diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui), but it gave errors such as "CudaPluggableAllocator does not yet support CacheInfo", preventing me from running operations requiring more than 5.5 GB of memory efficiently. I would therefore like to request that CudaPluggableAllocator get all the features which the native allocator has.
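For context, roughly how the rmm-backed allocator gets wired in on my side; this is a sketch from memory, so the exact `rmm` API names are an assumption (see rmm's documentation):
```python
import rmm
import torch

# managed_memory=True asks rmm for a CUDA Unified (managed) memory pool
rmm.reinitialize(pool_allocator=True, managed_memory=True)
torch.cuda.memory.change_current_allocator(rmm.rmm_torch_allocator)
```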
### Alternatives
Implement CUDA and ROCm Unified Memory support, and provide users an easy way to use it, similar to how one can switch between the native allocator and cudaMallocAsync with an environment variable (`PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync`).
### Additional context
_No response_
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ngimel
| 6 |
2,979 | 98,467 |
tacotron2 times out
|
triaged, oncall: pt2, module: inductor
|
Repro:
```
python benchmarks/dynamo/torchbench.py --accuracy --inference --amp --backend inductor --disable-cudagraphs --device cuda --only tacotron2
```
Ctrl+C gives this stack trace, which looks like a problem in the fuser heuristic:
```
File "/fsx/users/binbao/conda/envs/release/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 636, in __init__
self.fuse_nodes()
File "/fsx/users/binbao/conda/envs/release/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 817, in fuse_nodes
self.fuse_nodes_once()
File "/fsx/users/binbao/conda/envs/release/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 833, in fuse_nodes_once
if self.can_fuse(node1, node2) and not self.will_fusion_create_cycle(
File "/fsx/users/binbao/conda/envs/release/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 907, in will_fusion_create_cycle
return any(check(self.name_to_fused_node[n]) for n in combined_predecessors)
File "/fsx/users/binbao/conda/envs/release/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 907, in <genexpr>
return any(check(self.name_to_fused_node[n]) for n in combined_predecessors)
File "/fsx/users/binbao/conda/envs/release/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 896, in check
return bool(combined_names & node.recursive_predecessors) or any(
KeyboardInterrupt
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10
| 1 |
2,980 | 98,465 |
Need better error message when a merge is cancelled because of a timeout
|
module: ci, triaged
|
As suggested by @malfet, the error message needs improvements.
See this as an example: https://github.com/pytorch/pytorch/pull/98201#issuecomment-1497216298
The merge was cancelled because macOS jobs timed out - see https://github.com/pytorch/pytorch/issues/98362
The error message on the PR is just "The merge job was canceled. If you believe this is a mistake,then you can re trigger it through pytorch-bot", which is not very informative.
Should be something like this instead:
{timeout//60} hours have passed, but {len(jobs_pending)} job(s) are still running; the first few of them are ...
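Concretely, as an f-string (variable names are illustrative):
```python
msg = (
    f"{timeout_minutes // 60} hours have passed, but {len(pending_jobs)} job(s) are still running; "
    f"the first few of them are: {', '.join(pending_jobs[:5])}"
)
```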
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
2,981 | 98,459 |
Failing HAVE_XXX_REGEX tests while building PyTorch
|
module: build, triaged
|
### 🐛 Describe the bug
Dear PyTorch team,
# Subject
I am not able to build from the `master` branch because the regex tests fail:
```
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- compiled but failed to run
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- compiled but failed to run
CMake Error at third_party/benchmark/CMakeLists.txt:304 (message):
Failed to determine the source files for the regular expression backend
```
# Environment
OS: Linux Fedora 37
CC: clang
CXX: clang++
LLVM: version 12
# Understanding
Those test `HAVE_XXXX_REGEX` are performed for several third parties:
- benchmark
- onnx
- protobuf
- QNNPACK
- XNNPACK
These `HAVE_XXXX_REGEX` checks are defined by the CMake function `cxx_feature_check` in
`pytorch/third_party/benchmark/cmake/CXXFeatureCheck.cmake` (around line 20);
the source code of the function is [here](https://github.com/google/benchmark/blob/main/cmake/CXXFeatureCheck.cmake).
To my understanding, the function will try to (i) compile and (ii) run the code from these files:
```
pytorch/third_party/benchmark/cmake/gnu_posix_regex.cpp
pytorch/third_party/benchmark/cmake/posix_regex.cpp
pytorch/third_party/benchmark/cmake/std_regex.cpp
pytorch/third_party/onnx/third_party/benchmark/cmake/gnu_posix_regex.cpp
pytorch/third_party/onnx/third_party/benchmark/cmake/posix_regex.cpp
pytorch/third_party/onnx/third_party/benchmark/cmake/std_regex.cpp
pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark/cmake/gnu_posix_regex.cpp
pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark/cmake/posix_regex.cpp
pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark/cmake/std_regex.cpp
pytorch/third_party/protobuf/third_party/benchmark/cmake/gnu_posix_regex.cpp
pytorch/third_party/protobuf/third_party/benchmark/cmake/posix_regex.cpp
pytorch/third_party/protobuf/third_party/benchmark/cmake/std_regex.cpp
```
So I tried to build those files with clang++ and run them
```bash
$ for f in $(find pytorch/ -name '*regex*.cpp');do echo "Test ==> $(grep -Po '(?<=pytorch/third_party/)[[:alnum:]\-]+' <<< $f) ---- $(basename $f)"; clang++ $f; ./a.out; echo $?; rm -f ./a.out; done
Test ==> benchmark ---- gnu_posix_regex.cpp
pytorch/third_party/benchmark/cmake/gnu_posix_regex.cpp:1:10: fatal error: 'gnuregex.h' file not found
#include <gnuregex.h>
^~~~~~~~~~~~
1 error generated.
bash: ./a.out: Aucun fichier ou dossier de ce type
127
Test ==> benchmark ---- posix_regex.cpp
0
Test ==> benchmark ---- std_regex.cpp
0
Test ==> onnx ---- gnu_posix_regex.cpp
pytorch/third_party/onnx/third_party/benchmark/cmake/gnu_posix_regex.cpp:1:10: fatal error: 'gnuregex.h' file not found
#include <gnuregex.h>
^~~~~~~~~~~~
1 error generated.
bash: ./a.out: Aucun fichier ou dossier de ce type
127
Test ==> onnx ---- posix_regex.cpp
0
Test ==> onnx ---- std_regex.cpp
0
Test ==> onnx-tensorrt ---- gnu_posix_regex.cpp
pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark/cmake/gnu_posix_regex.cpp:1:10: fatal error: 'gnuregex.h' file not found
#include <gnuregex.h>
^~~~~~~~~~~~
1 error generated.
bash: ./a.out: Aucun fichier ou dossier de ce type
127
Test ==> onnx-tensorrt ---- posix_regex.cpp
0
Test ==> onnx-tensorrt ---- std_regex.cpp
0
Test ==> protobuf ---- gnu_posix_regex.cpp
pytorch/third_party/protobuf/third_party/benchmark/cmake/gnu_posix_regex.cpp:1:10: fatal error: 'gnuregex.h' file not found
#include <gnuregex.h>
^~~~~~~~~~~~
1 error generated.
bash: ./a.out: Aucun fichier ou dossier de ce type
127
Test ==> protobuf ---- posix_regex.cpp
0
Test ==> protobuf ---- std_regex.cpp
0
```
So this homemade test highlights that `posix_regex` and `std_regex` exit with success.
Why does CMake report that they fail to run?
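One way to dig into the discrepancy (a hedged sketch; the probe directory and binary name below are illustrative): `try_run` builds the probes with the project's configured compiler and linker flags, unlike the plain `clang++ file.cpp` check above, so keeping CMake's probe artifacts and running them directly shows how they actually fail.
```bash
# re-run the configure step keeping try_compile/try_run artifacts
cmake --debug-trycompile <same arguments as the failing configure>
# the kept probe project and binary land here (binary name is illustrative)
ls CMakeFiles/CMakeTmp/
./CMakeFiles/CMakeTmp/cmTC_12345; echo "exit: $?"
```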
# How to reproduce the error
```
$ podman run -it --rm --name f37 fedora:37 bash
# dnf install -y rocm-comgr-devel rocm-device-libs boost-devel cmake blis-devel libstdc++-devel python3-setuptools python3-pyyaml ninja-build git
# git clone https://github.com/pytorch/pytorch
# cd pytorch
# export CC="clang"
# export CXX="clang++"
# export LDSHARED="clang --shared"
# export LDFLAGS="-stdlib=libstdc++"
# export CFLAGS="-fsanitize=address -fno-sanitize-recover=all -shared-libasan -pthread"
# export CXX_FLAGS="-shared-libasan -pthread"
# export CPLUS_INCLUDE_PATH="/usr/include/c++/12/:${CPLUS_INCLUDE_PATH}"
# export ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer
# USE_CUDA=0 USE_OPENMP=0 BUILD_CAFFE2_OPS=0 USE_DISTRIBUTED=0 DEBUG=1 \
python setup.py develop
...
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- compiled but failed to run
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- compiled but failed to run
CMake Error at third_party/benchmark/CMakeLists.txt:304 (message):
Failed to determine the source files for the regular expression backend
```
Thanks for your help
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Fedora Linux 37 (Workstation Edition) (x86_64)
GCC version: (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
Clang version: 15.0.7 (Fedora 15.0.7-1.fc37)
CMake version: version 3.26.1
Libc version: glibc-2.36
Python version: 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (64-bit runtime)
Python platform: Linux-6.1.14-200.fc37.x86_64-x86_64-with-glibc2.36
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture : x86_64
Mode(s) opératoire(s) des processeurs : 32-bit, 64-bit
Tailles des adresses: 48 bits physical, 48 bits virtual
Boutisme : Little Endian
Processeur(s) : 12
Liste de processeur(s) en ligne : 0-11
Identifiant constructeur : AuthenticAMD
Nom de modèle : AMD Ryzen 5 5600X 6-Core Processor
Famille de processeur : 25
Modèle : 33
Thread(s) par cœur : 2
Cœur(s) par socket : 6
Socket(s) : 1
Révision : 0
Accroissement de fréquence : activé
multiplication des MHz du/des CPU(s) : 80%
Vitesse maximale du processeur en MHz : 4650,2920
Vitesse minimale du processeur en MHz : 2200,0000
BogoMIPS : 7399,98
Drapeaux : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualisation : AMD-V
Cache L1d : 192 KiB (6 instances)
Cache L1i : 192 KiB (6 instances)
Cache L2 : 3 MiB (6 instances)
Cache L3 : 32 MiB (1 instance)
Nœud(s) NUMA : 1
Nœud NUMA 0 de processeur(s) : 0-11
Vulnérabilité Itlb multihit : Not affected
Vulnérabilité L1tf : Not affected
Vulnérabilité Mds : Not affected
Vulnérabilité Meltdown : Not affected
Vulnérabilité Mmio stale data : Not affected
Vulnérabilité Retbleed : Not affected
Vulnérabilité Spec store bypass : Mitigation; Speculative Store Bypass disabled via prctl
Vulnérabilité Spectre v1 : Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnérabilité Spectre v2 : Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnérabilité Srbds : Not affected
Vulnérabilité Tsx async abort : Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.2
[conda] Could not collect
```
cc @malfet @seemethere
| 1 |
2,982 | 98,456 |
README could use link to governance
|
high priority, module: docs, triaged
|
### 📚 The doc issue
README.md does not have link to governance https://github.com/pytorch/pytorch/blob/master/docs/source/community/governance.rst
~~indicate different governance structure, different maintainers, etc..~~
~~### Suggest a potential alternative/fix~~
### Suggest updating the readme to point to the governance documentation ~~.. modify the current readme contents to be a thank you for Emeritus efforts section.~~
updated issue to reflect the readme maintainer list is now in sync .. and could benefit from a link to governance structure docs
cc @ezyang @gchanan @zou3519 @svekars @carljparker
| 3 |
2,983 | 98,441 |
Torch Compile is slightly slower than eager mode.
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
When running some models on Torch, I have noticed that the torch.compile mode is slightly slower than the eager mode.
It may or may not be related to this issue : https://github.com/pytorch/pytorch/issues/98102
One example is microsoft/deberta-v3-base.
To reproduce:
Go to the folder `transformers/examples/pytorch/language-modeling/` and run:
eager mode:
`python run_mlm.py --model_name_or_path microsoft/deberta-v3-base --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --num_train_epochs 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --do_eval --overwrite_output_dir --output_dir ./outputs/ --seed 1137 --fp16 --report_to none --max_train_samples 1000 `
torch.compile:
`python run_mlm.py --model_name_or_path microsoft/deberta-v3-base --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --num_train_epochs 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --do_eval --overwrite_output_dir --output_dir ./outputs/ --seed 1137 --fp16 --report_to none --max_train_samples 1000 --torch_compile`
Results:
Metric | Eager | TorchCompile
-- | -- | --
Avg of 2nd half | 72.44162 ms | 102.73143 ms
Train loss | 5.995 | 5.9397
Train runtime | 0:03:09.17 | 0:04:38.75
Train samples | 1000 | 1000
Train samples per second | 5.286 | 3.587
Train steps per second | 5.286 | 3.587
Eval accuracy | 0.3637 | 0.3657
Eval loss | 4.8822 | 4.8525
Eval runtime | 0:00:10.11 | 0:00:32.71
Eval samples | 230 | 230
Eval samples per second | 22.746 | 7.031
Eval steps per second | 22.746 | 7.031
Perplexity | 131.92 | 128.0628
<!--EndFragment-->
</body>
</html>
Ran on a Single Tesla V100 16GB GPU.
### Versions
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+46672772b4 pypi_0 pypi
[conda] torch 2.1.0.dev20230404+cu117 pypi_0 pypi
[conda] torch-ort 1.14.0 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230313+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230313+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 5 |
2,984 | 98,434 |
assert callable(unaltered_fn)
|
high priority, triaged, oncall: pt2
|
### 🐛 Describe the bug
This is a bug generated from https://github.com/pytorch/pytorch/issues/97078
To reproduce, check out transformers and patch (I tested on a515d0a77c769954ac2f0151a2a99c04d8d6cf95)
```
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 2eb081af7..886df74c1 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1572,8 +1572,7 @@ class Trainer:
# torch.compile() needs to be called after wrapping the model with FSDP or DDP
# to ensure that it accounts for the graph breaks required by those wrappers
- if self.args.torch_compile:
- model = torch.compile(model, backend=self.args.torch_compile_backend, mode=self.args.torch_compile_mode)
+ model = torch.compile(model, backend=self.args.torch_compile_backend, mode=self.args.torch_compile_mode, dynamic=True)
return model
```
Then, run
```
pytest tests/trainer/test_trainer.py --tb=native -k test_adafactor_lr_none
```
It fails with
```
______________________________________________ TrainerIntegrationPrerunTest.test_adafactor_lr_none ______________________________________________
Traceback (most recent call last):
File "/data/users/ezyang/a/transformers/tests/trainer/test_trainer.py", line 465, in setUp
trainer.train()
File "/data/users/ezyang/a/transformers/src/transformers/trainer.py", line 1658, in train
return inner_training_loop(
File "/data/users/ezyang/a/transformers/src/transformers/trainer.py", line 1745, in _inner_training_loop
model = self._wrap_model(self.model_wrapped)
File "/data/users/ezyang/a/transformers/src/transformers/trainer.py", line 1575, in _wrap_model
model = torch.compile(model, backend=self.args.torch_compile_backend, mode=self.args.torch_compile_mode, dynamic=True)
File "/data/users/ezyang/a/pytorch/torch/__init__.py", line 1600, in compile
return torch._dynamo.optimize(backend=backend, nopython=fullgraph, dynamic=dynamic, disable=disable)(model)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 499, in optimize
return _optimize_catch_errors(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 401, in _optimize_catch_errors
return OptimizeContext(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 330, in __init__
compiler_fn = innermost_fn(callback)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 149, in innermost_fn
assert callable(unaltered_fn)
AssertionError
```
It's possible that HF is misusing the torch.compile API (there seems to be some sort of repeated wrapping going on), but even if that's true, it shouldn't fail with an assertion error.
cc @gchanan @zou3519 @soumith @msaroufim @wconstab @ngimel @bdhirsh @stas00
### Versions
master
| 2 |
2,985 | 98,422 |
[FX] Symbolic trace over `torch.Tensor.${fn}` APIs
|
oncall: fx
|
### 🐛 Describe the bug
`torch.fx.symbolic_trace` seems to not support usages like `torch.Tensor.xxx`.
```python
import torch
def f(x):
return torch.Tensor.flatten(x)
# return x.flatten() # works
# return torch.Tensor.flip(x, dims=[0]) # fails too
torch.fx.symbolic_trace(f) # fails
# torch.compile(f) # works
```
While such usages are uncommon, I wonder if it is possible to support this style, since it is at least used in unit tests (e.g., `test_binary_ufuncs.py`). Or is there a way to magically treat `torch.Tensor.${fn}(x, *args)` as `x.${fn}(*args)`?
### Versions
<details><summary><i>Environments :: Click to expand.</i></summary>
<div>
```python
"""
Collecting environment information...
PyTorch version: 2.1.0.dev20230403+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.2
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
Stepping: 5
CPU MHz: 3800.000
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 7599.80
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230403+cu118
[pip3] torchaudio==2.1.0.dev20230403+cu118
[pip3] torchvision==0.16.0.dev20230403+cu118
[conda] numpy 1.22.3 pypi_0 pypi
[conda] pytorch-triton 2.1.0+46672772b4 pypi_0 pypi
[conda] torch 2.1.0.dev20230403+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230403+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230403+cu118 pypi_0 pypi
"""
```
</div>
</details>
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 0 |
2,986 | 98,419 |
Support backward hook optimizers in FSDP
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
I'm currently optimizing the [Lightning reference implementation of LLaMA](https://github.com/Lightning-AI/lit-llama) (7B), although the following will be generally applicable to any LLM with high memory pressure. The default configuration (24GB model sharded across 4x40GB A100s) is just on the cusp of being able to run (between weights, two AdamW states, and gradients for the shard, **logical** GPU memory caps out around 27GB, although I don't think that captures comms buffers), and the profile shows clear signs of allocator thrashing.
After a bit of hacking I came up with this monstrosity:
```
def add_optimizer_hooks(
model,
optimizers: Dict[torch.nn.Parameter, torch.optim.Optimizer], # Per-parameter optimizers
):
"""Ugly FSDP analog to torch.distributed.optim._apply_optimizer_in_backward
FSDP changes acc_grad every step, so we need to apply this before *each* `backward()`
call, unlike the normal recipe where we only apply it once.
"""
param_handles = torch.distributed.fsdp._traversal_utils._get_fsdp_handles(model)
assert set(model.parameters()) == {i.flat_param for i in param_handles} == set(optimizers.keys())
# We need to use the post backward stream so updates apply gradients are accumulated
stream = torch.distributed.fsdp._common_utils._get_module_fsdp_state(model)._streams["post_backward"]
for h in param_handles:
# We're going to call this early, so if we don't override to a no-op FSDP proper will call it again and assert fail.
h.prepare_gradient_for_optim = lambda: None
p = h.flat_param
assert hasattr(p, "_post_backward_hook_state")
fsdp_acc_grad, _ = p._post_backward_hook_state
def _opt_hook(optimizer, p, h, *_unused):
assert p._post_backward_called
with torch.cuda.stream(stream):
# Use the class to get at `prepare_gradient_for_optim`
h.__class__.prepare_gradient_for_optim(h)
assert p.grad is not None
optimizer.step()
optimizer.zero_grad(set_to_none=True) # Cool that this is now the default
assert p.grad is None
fsdp_acc_grad.register_hook(functools.partial(_opt_hook, optimizers[p], p, h))
```
<img width="350" alt="Screenshot 2023-04-05 at 8 43 23 AM" src="https://user-images.githubusercontent.com/13089297/230133300-0865ec63-45e5-416c-aa66-091c16c3ef3e.png"> <img width="339" alt="Screenshot 2023-04-05 at 8 43 47 AM" src="https://user-images.githubusercontent.com/13089297/230133388-ab6303bb-a2fa-4c22-baff-a1f075bf5a68.png">
More importantly, that's enough to get out of the high-contention regime and decreases the step time by close to an order of magnitude. But given how much FSDP internal state I had to crack open to get things running (and I'm sure I missed plenty...), it's really only suitable as a PoC.
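For contrast, a sketch of the "normal recipe" the hack above mimics outside of FSDP; this is a private API as of 2.0, with the signature assumed from `torch.distributed.optim`:
```python
import torch
from torch.distributed.optim import _apply_optimizer_in_backward

model = torch.nn.Linear(8, 8)
_apply_optimizer_in_backward(
    optimizer_class=torch.optim.SGD,
    params=model.parameters(),
    optimizer_kwargs={"lr": 0.1},
)
model(torch.randn(2, 8)).sum().backward()  # optimizer.step() runs inside backward via per-param hooks
```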
CC @rohan-varma @albanD @zdevito
### Alternatives
I know there's been more general discussion of creating optimizers on the fly so there might be a better alternative to the big list of single Tensor optimizers.
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 4 |
2,987 | 98,416 |
Backwards graph is labeled incorrectly when dynamic=True
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
Run `TORCH_COMPILE_DEBUG=1 python tt.py` with
```
#!/usr/bin/env python3
import time
import torch
import torch._dynamo as dynamo
import torchvision.models as models
model = models.alexnet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
compiled_model = torch.compile(model, dynamic=True)
x = torch.randn(16, 3, 224, 224)
optimizer.zero_grad()
epoches=10
count = []
for epoch in range(epoches):
start = time.time()
#out = model(x)
out = compiled_model(x)
out.sum().backward()
optimizer.step()
end = time.time()
count.append(end - start)
print(f"Epoch {epoch}/{epoches} time: {end - start}")
print(f"Epoch avg time: {sum(count)/len(count)}")
```
but really any training script will work. Inspect the inductor directory:
```
$ ls torch_compile_debug/run_2023_04_05_07_17_04_323873-pid_27994/torchinductor/
aot_model___0_debug.log model__0_forward_1.0 model__0_inference_2.1
```
The backwards graph is incorrectly reported as an inference graph.
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh @Chillee
### Versions
master
| 2 |
2,988 | 98,414 |
PyTorch 1.12, high failure rate for test_optim/test_nadam
|
module: optimizer, triaged
|
### 🐛 Describe the bug
Similar to https://github.com/pytorch/pytorch/issues/63079 but this is for test_nadam under test_optim instead.
I've done some initial investigations on other nodes (8xT4,4xV100,4xA100) and other CUDA &/ PyTorch versions. I'll try to collect what worked and what didn't in separate comments. If I remember correctly it is only for combination PyTorch-1.12.x CUDA-11.7.0 on A40s I've observed the test failure.
## To Reproduce
1. Build PyTorch 1.12.1 from source with GCC 11.3.0 with CUDA 11.7.0
2. Run `python test_optim.py -k test_nadam`
ensuing error message:
```
/dev/shm/PyTorch/1.12.1/foss-2022a-CUDA-11.7.0/pytorch-v1.12.1/build/lib.linux-x86_64-cpython-310/torch/testing/_internal/common_cuda.py:19: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
CUDA11OrLater = torch.version.cuda and LooseVersion(torch.version.cuda) >= "11.0"
/apps/Arch/software/Python/3.10.4-GCCcore-11.3.0/lib/python3.10/site-packages/setuptools/_distutils/version.py:351: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
other = LooseVersion(other)
F
======================================================================
FAIL: test_nadam (__main__.TestOptim)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/cephyr/NOBACKUP/priv/c3-staff/vikren/build/EasyBuild/PyTorch/test_optim.py", line 671, in test_nadam
self._test_basic_cases(
File "/cephyr/NOBACKUP/priv/c3-staff/vikren/build/EasyBuild/PyTorch/test_optim.py", line 259, in _test_basic_cases
self._test_state_dict(
File "/cephyr/NOBACKUP/priv/c3-staff/vikren/build/EasyBuild/PyTorch/test_optim.py", line 241, in _test_state_dict
self.assertEqual(bias, bias_cuda)
File "/dev/shm/PyTorch/1.12.1/foss-2022a-CUDA-11.7.0/pytorch-v1.12.1/build/lib.linux-x86_64-cpython-310/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
assert_equal(
File "/dev/shm/PyTorch/1.12.1/foss-2022a-CUDA-11.7.0/pytorch-v1.12.1/build/lib.linux-x86_64-cpython-310/torch/testing/_comparison.py", line 1095, in assert_equal
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 10 (10.0%)
Greatest absolute difference: 1.609325408935547e-05 at index (1,) (up to 1e-05 allowed)
Greatest relative difference: 1.477008233390933e-05 at index (1,) (up to 1.3e-06 allowed)
----------------------------------------------------------------------
Ran 1 test in 5.343s
FAILED (failures=1)
```
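For anyone triaging, here is a standalone sketch (not the actual `test_optim` code, and assuming a CUDA device is available) of the CPU-vs-CUDA NAdam comparison that the failing assert exercises:
```python
# Run the same NAdam updates on CPU and CUDA and compare with the test's rough tolerances.
import torch

torch.manual_seed(0)
w_cpu = torch.randn(10, requires_grad=True)
w_cuda = w_cpu.detach().clone().cuda().requires_grad_(True)
opt_cpu = torch.optim.NAdam([w_cpu], lr=1e-2)
opt_cuda = torch.optim.NAdam([w_cuda], lr=1e-2)
for _ in range(10):
    g = torch.randn(10)
    w_cpu.grad = g.clone()
    w_cuda.grad = g.cuda()
    opt_cpu.step()
    opt_cuda.step()
# atol=1e-5 / rtol=1.3e-6 mirror the thresholds reported in the failure above.
print(torch.allclose(w_cpu.detach(), w_cuda.detach().cpu(), atol=1e-5, rtol=1.3e-6))
```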
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Rocky Linux 8.6 (Green Obsidian) (x86_64)
GCC version: (GCC) 11.3.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.28
Python version: 3.10.4 (main, Aug 14 2022, 22:57:54) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA A40
GPU 3: NVIDIA A40
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[conda] Could not collect
```
cc @vincentqb @jbschlosser @albanD @janeyx99
| 13 |
2,989 | 98,413 |
TORCH_COMPILE_DEBUG and TORCH_LOGS interact badly
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Here's an example log: https://gist.github.com/ezyang/6e2904c8ecbd863eefcbee7456ada544 for this run:
```
TORCH_COMPILE_DEBUG=1 PYTHONUNBUFFERED=1 WANDB_DISABLED=true TORCH_LOGS=dynamo,inductor,guards CUDA_VISIBLE_DEVICES=3 PYTHONPATH=src pp python examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --do_train --source_lang en --target_lang de --source_prefix 'translate English to German: ' --dataset_name stas/wmt14-en-de-pre-processed --output_dir /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 --max_train_samples 1000 --overwrite_output_dir --seed 1137 --per_device_eval_batch_size 1 --fp16 --torch_compile 2>&1 | tee comp.log
```
Some things to note:
1. Despite not asking for it, I'm still getting inductor DEBUG logs printed to stderr:
```
[2023-04-05 06:43:18,087] torch._inductor.codegen.triton.__schedule: [DEBUG] Schedule:
```
My intention for TORCH_COMPILE_DEBUG was to get the directory dump; I think it shouldn't interact with console output
2. It keeps repeatedly printing this:
```
04/05/2023 06:43:27 - WARNING - torch._logging._internal - Using TORCH_LOGS environment variable for log settings, ignoring call to set_logs
04/05/2023 06:43:27 - WARNING - torch._logging._internal - Using TORCH_LOGS environment variable for log settings, ignoring call to set_logs
04/05/2023 06:43:27 - WARNING - torch._logging._internal - Using TORCH_LOGS environment variable for log settings, ignoring call to set_logs
```
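For context, a hedged sketch of the in-process call the warning refers to; when `TORCH_LOGS` is set in the environment, calls like this are ignored, which appears to be what keeps re-triggering the message:
```python
# Programmatic equivalent of TORCH_LOGS=dynamo,inductor,guards (ignored when the env var is set).
import logging
import torch._logging

torch._logging.set_logs(dynamo=logging.DEBUG, inductor=logging.DEBUG, guards=True)
```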
### Versions
master
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
2,990 | 98,409 |
`torch.Tensor.layout` is not documented
|
module: docs, triaged, module: python frontend
|
### 🐛 Describe the bug
This looks like a public attribute, so it should have been documented, but it is not:
```
% python -c "import torch;print(torch.Tensor.layout.__doc__)"
None
```
I wonder if this is intentional. If not, we should add the documentation.
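For reference, a quick illustrative check (not a fix) of which of a few related Tensor attributes currently ship docstrings:
```python
# Print whether each attribute carries a docstring; `layout` is the one reported missing here.
import torch

for name in ("layout", "device", "dtype"):
    attr = getattr(torch.Tensor, name)
    print(f"torch.Tensor.{name}:", "documented" if attr.__doc__ else "undocumented")
```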
### Versions
2.0/nightly
cc @svekars @carljparker @albanD
| 1 |
2,991 | 98,406 |
Contribute to the privateuse1 backend.
|
module: internals, triaged, module: backend
|
### 🚀 The feature, motivation and pitch
This issue is used to discuss how to improve the PrivateUse1 backend to make it easier for third-party hardware manufacturers to integrate with PyTorch.
With the popularity of PyTorch and the evolution of compute-acceleration hardware, the strong coupling between PyTorch and CUDA has become a serious problem, so a complete PrivateUse1 backend is what third-party hardware manufacturers need. After all, we can't add more enumerated types to DeviceType unless we are a big company like Apple or Intel (just kidding🫡).
I will summarize each feature in this issue.
Please join us, thank you.
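For newcomers to the thread, a hedged sketch of the existing Python-side entry point (assuming a C++ extension has already registered its kernels under the PrivateUse1 dispatch key; "my_device" is a placeholder name):
```python
# Illustration only: rename the PrivateUse1 backend so tensors can target it by name.
import torch

torch.utils.rename_privateuse1_backend("my_device")
x = torch.empty(2, 2, device="my_device")  # dispatches to the extension's PrivateUse1 kernels
```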
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 8 |
2,992 | 98,386 |
[PTD][Checkpoint] Enable single_file_per_rank for fsspec storage read/write
|
oncall: distributed, triaged
|
### 🚀 The feature, motivation and pitch
With our current setup, single_file_per_rank is not supported for the fsspec StorageWriter and StorageReader. This means we can only write a single file per tensor/blob, which significantly hurts performance.
We need to support single_file_per_rank in fsspec and add the option back.
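As a reference point (not the fsspec writer itself), the local filesystem writer already exposes the option being requested here:
```python
# The local FileSystemWriter accepts single_file_per_rank; the ask is to restore
# the equivalent knob for the fsspec-backed StorageWriter/StorageReader.
import torch.distributed.checkpoint as dcp

writer = dcp.FileSystemWriter("/tmp/ckpt", single_file_per_rank=True)
```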
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,993 | 98,361 |
pip doesn't install the right version of pytorch when torchtext is involved
|
oncall: binaries
|
### 🐛 Describe the bug
I encountered some weird installation problems while installing the nightly version.
```bash
pip3 install --force-reinstall --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu118
```
This is fine.
```bash
pip3 install --force-reinstall --pre torch torchtext torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu118
```
This installs torch 2.0.0 instead of the nightly version.
I’m sure it worked a few days ago.
### Versions
PyTorch version: 2.1.0.dev20230404+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
CMake version: version 3.26.1
Libc version: glibc-2.31
Versions of relevant libraries:
[pip3] torch==2.1.0.dev20230404+cu118
[pip3] torchaudio==2.1.0.dev20230404+cu118
[pip3] torchdata==0.6.0
[pip3] torchtext==0.15.1
[pip3] torchvision==0.16.0.dev20230404+cu118
[pip3] triton==2.1.0
cc @seemethere @malfet
| 6 |
2,994 | 98,355 |
Intermittent failure of mobilenet_v3_large
|
triaged, module: flaky-tests, oncall: pt2
|
Repro:
```
python benchmarks/dynamo/torchbench.py --training --accuracy --device cuda --inductor --amp --only mobilenet_v3_large
```
See more details at https://github.com/pytorch/pytorch/pull/98314
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
2,995 | 98,338 |
[functorch] [vmap] tests fail when `_set_vmap_fallback_enabled(False)`.
|
triaged, module: functorch
|
### 🐛 Describe the bug
With the following patch
```patch
diff --git a/test/functorch/test_vmap.py b/test/functorch/test_vmap.py
index a00612fdf5..f63651d9d4 100644
--- a/test/functorch/test_vmap.py
+++ b/test/functorch/test_vmap.py
@@ -58,6 +58,8 @@ from torch._functorch.make_functional import functional_init_with_buffers
from torch.testing._internal.autograd_function_db import autograd_function_db
from torch._functorch.vmap import restore_vmap
+torch._C._functorch._set_vmap_fallback_enabled(False)
+
FALLBACK_REGEX = 'There is a performance drop'
```
A lot of tests fail
<details>
<summary> Failed Tests </summary>
```
============================================================================ short test summary info ============================================================================
FAILED test/functorch/test_vmap.py::TestVmapAPI::test_fallback_does_not_warn_by_default - RuntimeError: aten::_test_functorch_fallback hit the vmap fallback which is currentl...
FAILED test/functorch/test_vmap.py::TestVmapAPI::test_fallback_warning - RuntimeError: aten::_test_functorch_fallback hit the vmap fallback which is currently disabled
FAILED test/functorch/test_vmap.py::TestVmapAPI::test_fallback_zero_dim - AssertionError: "The fallback path does not support vmap over dims of size 0" does not match "aten::...
FAILED test/functorch/test_vmap.py::TestVmapAPI::test_inplace_fallback_nary_same_levels - RuntimeError: aten::atan2_ hit the vmap fallback which is currently disabled
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_linalg_eigh_cpu - RuntimeError: aten::linalg_eigh hit the vmap fallback which is currently disabled
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive__segment_reduce_lengths_cpu_float32 - RuntimeError: aten::segment_reduce hit the vmap fal...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive__segment_reduce_offsets_cpu_float32 - RuntimeError: aten::segment_reduce hit the vmap fal...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive__upsample_bilinear2d_aa_cpu_float32 - RuntimeError: aten::_upsample_bilinear2d_aa.vec hit...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_abs_cpu_float32 - RuntimeError: aten::absolute_ hit the vmap fallback which is currently ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_acos_cpu_float32 - RuntimeError: aten::arccos_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_acosh_cpu_float32 - RuntimeError: aten::arccosh_ hit the vmap fallback which is currently...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_addbmm_cpu_float32 - RuntimeError: aten::addbmm_ hit the vmap fallback which is currently...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_addmm_cpu_float32 - RuntimeError: aten::addmm_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_addmm_decomposed_cpu_float32 - RuntimeError: aten::addmm_ hit the vmap fallback which is ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_addmv_cpu_float32 - RuntimeError: aten::addmv_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_addr_cpu_float32 - RuntimeError: aten::addr_ hit the vmap fallback which is currently dis...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_argwhere_cpu_float32 - RuntimeError: aten::argwhere hit the vmap fallback which is curren...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_asin_cpu_float32 - RuntimeError: aten::arcsin_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_asinh_cpu_float32 - RuntimeError: aten::arcsinh_ hit the vmap fallback which is currently...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_atan2_cpu_float32 - RuntimeError: aten::atan2_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_atan_cpu_float32 - RuntimeError: aten::arctan_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_atanh_cpu_float32 - RuntimeError: aten::arctanh_ hit the vmap fallback which is currently...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_baddbmm_cpu_float32 - RuntimeError: aten::baddbmm_ hit the vmap fallback which is current...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_bincount_cpu_int64 - RuntimeError: aten::bincount hit the vmap fallback which is currentl...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_bucketize_cpu_float32 - RuntimeError: aten::bucketize.Tensor hit the vmap fallback which ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_chalf_cpu_float32 - RuntimeError: aten::chalf hit the vmap fallback which is currently di...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_conj_physical_cpu_float32 - RuntimeError: aten::conj_physical_ hit the vmap fallback whic...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_count_nonzero_cpu_float32 - RuntimeError: aten::count_nonzero hit the vmap fallback which...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_cumprod_cpu_float32 - RuntimeError: aten::cumprod_ hit the vmap fallback which is current...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_cumsum_cpu_float32 - RuntimeError: aten::cumsum_ hit the vmap fallback which is currently...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_diagflat_cpu_float32 - RuntimeError: aten::diagflat hit the vmap fallback which is curren...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_div_floor_rounding_cpu_float32 - RuntimeError: aten::div_.Tensor_mode hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_div_trunc_rounding_cpu_float32 - RuntimeError: aten::div_.Tensor_mode hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_fft_ihfft2_cpu_float32 - RuntimeError: aten::fft_ihfft2 hit the vmap fallback which is cu...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_fft_ihfftn_cpu_float32 - RuntimeError: aten::fft_ihfftn hit the vmap fallback which is cu...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_fill_cpu_float32 - RuntimeError: aten::fill.Scalar hit the vmap fallback which is current...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_floor_divide_cpu_float32 - RuntimeError: aten::floor_divide_.Tensor hit the vmap fallback...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_fmod_cpu_float32 - RuntimeError: aten::fmod_.Tensor hit the vmap fallback which is curren...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_gcd_cpu_int64 - RuntimeError: aten::gcd_ hit the vmap fallback which is currently disabled
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_heaviside_cpu_float32 - RuntimeError: aten::heaviside_ hit the vmap fallback which is cur...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_histc_cpu_float32 - RuntimeError: aten::histc hit the vmap fallback which is currently di...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_histogram_cpu_float32 - RuntimeError: aten::histogram.bin_ct hit the vmap fallback which ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_hypot_cpu_float32 - RuntimeError: aten::hypot_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_igamma_cpu_float32 - RuntimeError: aten::igamma_ hit the vmap fallback which is currently...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_igammac_cpu_float32 - RuntimeError: aten::igammac_ hit the vmap fallback which is current...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_index_add_cpu_float32 - RuntimeError: aten::index_add_ hit the vmap fallback which is cur...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_index_copy_cpu_float32 - RuntimeError: aten::index_copy_ hit the vmap fallback which is c...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_index_reduce_cpu_float32 - RuntimeError: aten::index_reduce hit the vmap fallback which i...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_isclose_cpu_float32 - RuntimeError: aten::isclose hit the vmap fallback which is currentl...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_isin_cpu_float32 - RuntimeError: aten::isin.Tensor_Tensor hit the vmap fallback which is ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_istft_cpu_complex64 - RuntimeError: aten::istft hit the vmap fallback which is currently ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_lcm_cpu_int64 - RuntimeError: aten::lcm_ hit the vmap fallback which is currently disabled
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_ldexp_cpu_float32 - RuntimeError: aten::ldexp_ hit the vmap fallback which is currently d...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_lerp_cpu_float32 - RuntimeError: aten::lerp_.Scalar hit the vmap fallback which is curren...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_linalg_ldl_solve_cpu_float32 - RuntimeError: aten::linalg_ldl_solve hit the vmap fallback...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_linalg_lu_cpu_float32 - RuntimeError: aten::linalg_lu hit the vmap fallback which is curr...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_linalg_tensorsolve_cpu_float32 - RuntimeError: aten::linalg_tensorsolve hit the vmap fall...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_lu_solve_cpu_float32 - RuntimeError: aten::lu_solve hit the vmap fallback which is curren...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_lu_unpack_cpu_float32 - RuntimeError: aten::lu_unpack hit the vmap fallback which is curr...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_masked_fill_cpu_float32 - RuntimeError: aten::masked_fill.Tensor hit the vmap fallback wh...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_masked_scatter_cpu_float32 - RuntimeError: aten::masked_scatter hit the vmap fallback whi...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_matrix_exp_cpu_float32 - RuntimeError: aten::matrix_exp hit the vmap fallback which is cu...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nanquantile_cpu_float32 - RuntimeError: aten::nanquantile.scalar hit the vmap fallback wh...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_native_dropout_backward_cpu_float32 - RuntimeError: aten::native_dropout_backward hit the...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_neg_cpu_float32 - RuntimeError: aten::negative_ hit the vmap fallback which is currently ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nextafter_cpu_float32 - RuntimeError: aten::nextafter_ hit the vmap fallback which is cur...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_bilinear_cpu_float32 - RuntimeError: aten::bilinear hit the vmap fallback w...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_ctc_loss_cpu_float32 - RuntimeError: aten::ctc_loss.Tensor hit the vmap fal...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_huber_loss_cpu_float32 - RuntimeError: aten::huber_loss hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_kl_div_cpu_float32 - RuntimeError: aten::kl_div hit the vmap fallback which...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_margin_ranking_loss_cpu_float32 - RuntimeError: aten::margin_ranking_loss h...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_pool1d_cpu_float32 - RuntimeError: aten::max_pool1d hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_pool3d_cpu_float32 - RuntimeError: aten::max_pool3d_with_indices hit th...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_unpool1d_cpu_float32 - RuntimeError: aten::max_unpool2d hit the vmap fa...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_unpool1d_grad_cpu_float32 - RuntimeError: aten::max_unpool2d hit the vm...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_unpool2d_cpu_float32 - RuntimeError: aten::max_unpool2d hit the vmap fa...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_unpool2d_grad_cpu_float32 - RuntimeError: aten::max_unpool2d hit the vm...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_unpool3d_cpu_float32 - RuntimeError: aten::max_unpool3d hit the vmap fa...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_max_unpool3d_grad_cpu_float32 - RuntimeError: aten::max_unpool3d hit the vm...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_multi_margin_loss_cpu_float32 - RuntimeError: aten::multi_margin_loss hit t...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_multilabel_margin_loss_cpu_float32 - RuntimeError: aten::multilabel_margin_...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_pdist_cpu_float32 - RuntimeError: aten::pdist hit the vmap fallback which i...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_smooth_l1_loss_cpu_float32 - RuntimeError: aten::smooth_l1_loss hit the vma...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_soft_margin_loss_cpu_float32 - RuntimeError: aten::soft_margin_loss hit the...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_nn_functional_triplet_margin_loss_cpu_float32 - RuntimeError: aten::triplet_margin_loss h...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_ormqr_cpu_float32 - RuntimeError: aten::ormqr hit the vmap fallback which is currently di...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_polygamma_polygamma_n_0_cpu_float32 - RuntimeError: aten::polygamma_ hit the vmap fallbac...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_polygamma_polygamma_n_1_cpu_float32 - RuntimeError: aten::polygamma_ hit the vmap fallbac...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_polygamma_polygamma_n_2_cpu_float32 - RuntimeError: aten::polygamma_ hit the vmap fallbac...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_polygamma_polygamma_n_3_cpu_float32 - RuntimeError: aten::polygamma_ hit the vmap fallbac...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_polygamma_polygamma_n_4_cpu_float32 - RuntimeError: aten::polygamma_ hit the vmap fallbac...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_pow_cpu_float32 - RuntimeError: aten::pow_.Tensor hit the vmap fallback which is currentl...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_put_cpu_float32 - RuntimeError: aten::put hit the vmap fallback which is currently disabled
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_quantile_cpu_float32 - RuntimeError: aten::quantile.scalar hit the vmap fallback which is...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_remainder_cpu_float32 - RuntimeError: aten::remainder_.Tensor hit the vmap fallback which...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_renorm_cpu_float32 - RuntimeError: aten::renorm hit the vmap fallback which is currently ...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_scatter_add_cpu_float32 - RuntimeError: aten::scatter_add_ hit the vmap fallback which is...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_scatter_cpu_float32 - RuntimeError: aten::scatter_.src hit the vmap fallback which is cur...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_scatter_reduce_amax_cpu_float32 - RuntimeError: aten::scatter_reduce.two hit the vmap fal...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_scatter_reduce_amin_cpu_float32 - RuntimeError: aten::scatter_reduce.two hit the vmap fal...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_scatter_reduce_mean_cpu_float32 - RuntimeError: aten::scatter_reduce.two hit the vmap fal...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_scatter_reduce_prod_cpu_float32 - RuntimeError: aten::scatter_reduce.two hit the vmap fal...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_scatter_reduce_sum_cpu_float32 - RuntimeError: aten::scatter_reduce.two hit the vmap fall...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_airy_ai_cpu_float32 - RuntimeError: aten::special_airy_ai hit the vmap fallback w...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_bessel_j0_cpu_float32 - RuntimeError: aten::special_bessel_j0 hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_bessel_j1_cpu_float32 - RuntimeError: aten::special_bessel_j1 hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_bessel_y0_cpu_float32 - RuntimeError: aten::special_bessel_y0 hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_bessel_y1_cpu_float32 - RuntimeError: aten::special_bessel_y1 hit the vmap fallba...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_chebyshev_polynomial_t_cpu_float32 - RuntimeError: aten::special_chebyshev_polyno...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_chebyshev_polynomial_u_cpu_float32 - RuntimeError: aten::special_chebyshev_polyno...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_hermite_polynomial_h_cpu_float32 - RuntimeError: aten::special_hermite_polynomial...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_hermite_polynomial_he_cpu_float32 - RuntimeError: aten::special_hermite_polynomia...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_laguerre_polynomial_l_cpu_float32 - RuntimeError: aten::special_laguerre_polynomi...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_log_ndtr_cpu_float32 - RuntimeError: aten::special_log_ndtr hit the vmap fallback...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_modified_bessel_i0_cpu_float32 - RuntimeError: aten::special_modified_bessel_i0 h...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_modified_bessel_i1_cpu_float32 - RuntimeError: aten::special_modified_bessel_i1 h...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_modified_bessel_k0_cpu_float32 - RuntimeError: aten::special_modified_bessel_k0 h...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_modified_bessel_k1_cpu_float32 - RuntimeError: aten::special_modified_bessel_k1 h...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_scaled_modified_bessel_k0_cpu_float32 - RuntimeError: aten::special_scaled_modifi...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_scaled_modified_bessel_k1_cpu_float32 - RuntimeError: aten::special_scaled_modifi...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_special_spherical_bessel_j0_cpu_float32 - RuntimeError: aten::special_spherical_bessel_j0...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_square_cpu_float32 - RuntimeError: aten::square_ hit the vmap fallback which is currently...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_stft_cpu_float32 - RuntimeError: aten::stft hit the vmap fallback which is currently disa...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_sub_cpu_float32 - RuntimeError: aten::subtract_.Tensor hit the vmap fallback which is cur...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_take_cpu_float32 - RuntimeError: aten::take hit the vmap fallback which is currently disa...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_trunc_cpu_float32 - RuntimeError: aten::fix_ hit the vmap fallback which is currently dis...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_exhaustive_xlogy_cpu_float32 - RuntimeError: aten::xlogy_.Tensor hit the vmap fallback which is curr...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_linalg_failure_1D_input_linalg_eigh_cpu_float32 - AssertionError: "dimension" does not match "aten::...
FAILED test/functorch/test_vmap.py::TestVmapOperatorsOpInfoCPU::test_vmap_linalg_failure_1D_input_linalg_lu_cpu_float32 - AssertionError: "dimension" does not match "aten::li...
==================================================== 129 failed, 1581 passed, 77 skipped, 226 xfailed in 1513.32s (0:25:13) ===================================================
```
</details>
More context: https://github.com/pytorch/pytorch/pull/98328#discussion_r1157628714
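For convenience, a standalone sketch of one of the failing patterns (this assumes the private `_set_vmap_fallback_enabled` toggle behaves as in the patch above, and that `aten::atan2_` indeed has no batching rule, as the failure list suggests):
```python
# With the fallback disabled, an op lacking a batching rule should raise
# "aten::atan2_ hit the vmap fallback which is currently disabled" instead of warning.
import torch
from torch.func import vmap

torch._C._functorch._set_vmap_fallback_enabled(False)
x = torch.randn(3, 4)
y = torch.randn(3, 4)
vmap(lambda a, b: a.clone().atan2_(b))(x, y)
```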
### Versions
master
cc @zou3519 @Chillee @samdow @soumith @janeyx99
| 0 |
2,996 | 98,330 |
[cpu] Fix div with rounding_mode="floor" when division overflows
|
module: cpu, open source, release notes: python_frontend, topic: bug fixes
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #98330
* #98329
Fixes #77742
Sleef_fmod returns NaN when the division overflows, but we should be
returning inf here. So let's just use the direct division result
whenever it's nonfinite.
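A hedged repro sketch of the overflow case this change targets (values chosen so the true quotient overflows float32; see #77742):
```python
# Floor division whose true quotient overflows float32 should give inf, not nan.
import torch

a = torch.tensor([3.4e38], dtype=torch.float32)
b = torch.tensor([1.0e-10], dtype=torch.float32)
print(torch.div(a, b, rounding_mode="floor"))  # expected inf; the bug produced nan on CPU
```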
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 3 |
2,997 | 98,296 |
"We don't have an op for aten::bitwise_and but it isn't a special case." when exporting NMS operation as ONNX.
|
oncall: jit, module: onnx
|
### 🐛 Describe the bug
Hi! I'm trying to use a detection model from ultralytics in ONNX format, but realized it does not have Non-Max Suppression. I checked whether I could use PyTorch to easily generate the corresponding post-processing ONNX, but it fails with a "please report a bug to PyTorch" message, so here I am :-)
```python
import torch
from torch import nn
from ultralytics.yolo.utils.ops import non_max_suppression
from ultralytics.yolo.v8.detect import DetectionPredictor
class PostProcessingModule(nn.Module, DetectionPredictor):
def forward(self, yolo_results, iou_threshold, score_threshold):
return non_max_suppression(
yolo_results, iou_threshold, score_threshold
)
if __name__ == '__main__':
yolo_results = torch.rand([1, 14, 1000]).type(torch.float32)
iou_threshold = 0.5
score_threshold = 0.5
t_model = PostProcessingModule()
torch.onnx.export(
t_model,
(yolo_results, iou_threshold, score_threshold),
"NMS_after.onnx",
input_names=["yolo_results", "iou_threshold", "score_threshold"],
output_names=["yolo_results_filtered"],
)
```
Here's the full error message:
> RuntimeError : 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1678402412426/work/torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::bitwise_and but it isn't a special case. Argument types: Tensor, bool,
>
> Candidates:
> aten::bitwise_and.Tensor(Tensor self, Tensor other) -> Tensor
> aten::bitwise_and.Scalar(Tensor self, Scalar other) -> Tensor
> aten::bitwise_and.Scalar_Tensor(Scalar self, Tensor other) -> Tensor
> aten::bitwise_and.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)
> aten::bitwise_and.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> Tensor(a!)
> aten::bitwise_and.Scalar_Tensor_out(Scalar self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4500.0000
CPU min MHz: 800.0000
BogoMIPS: 5199.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] numpydoc==1.5.0
[pip3] pytorch-lightning==2.0.0
[pip3] torch==2.0.0
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl conda-forge
[conda] mkl 2023.0.0 h6d00ec8_25399
[conda] numpy 1.24.2 py39h7360e5f_0 conda-forge
[conda] numpydoc 1.5.0 pyhd8ed1ab_0 conda-forge
[conda] pytorch 2.0.0 py3.9_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_3 pytorch
[conda] pytorch-lightning 2.0.0 pyhd8ed1ab_1 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.11.4 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 2.0.0 py39 pytorch
[conda] torchvision 0.15.0 py39_cu118 pytorch
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
2,998 | 98,291 |
Make BetterTransformer implementation non-blocking
|
oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
I am using the `optimum` integration for `BetterTransformer` with AMP.
Here is what I get without `BetterTransformer`:


The key point here is that `_process_doc_contents` is a CPU-intensive preprocessing function and `repad` is GPU-blocking. Notice how we seemingly spend no time in `transformers` code.
Now after I introduce `BetterTransformer` I get the following picture:


Suddenly, we start spending a lot of time in `transformers` code, which tells me some operation is GPU-blocking there. Furthermore, my guess is confirmed by the drop in GPU saturation and increase in total time by ~30s (the time needed to preprocess the inputs).
Why is this issue not in the `transformers` repo? As far as I can tell from the profiler's output, the execution was blocked by this function:

**Software:**
Ubuntu 18.04.6 LTS
```
torch==2.0.0
transformers==4.27.4
optimum==1.7.3
```
**Hardware:**
NVIDIA GeForce RTX 2060
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03 Driver Version: 510.108.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 32% 38C P8 9W / 160W | 5MiB / 6144MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2541 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
```
Sorry for not providing a reproducible example. I will work on it later since for now I am not sure how to implement it.
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 0 |
2,999 | 98,286 |
When using DDP with a custom loss function, the process gets stuck when the batch size changes during training.
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
For some reasons, I need to discard part of the data in the collate_fn of the dataloader, which makes my batch size change. My program gets stuck in the loss function when the batch size changes. It doesn't report an error; it just stays stuck there and never proceeds to the next step.
You can run the code below and you will get output similar to mine.
You can observe that when the batch size changes, rank 0 gets stuck at the line
`p_dist, n_dist = self.compute_triplet_dist(dist, p_mask, n_mask)`
because rank 0 prints "loss 1" but not "loss 2".
Since it doesn't report an error, I don't know what happened. I tried to trace it with a debugger, but I am not familiar with multi-process debugging; every time I reach the stuck position, the debugger stops working.
```python
import os
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn, Tensor, optim, distributed
from torch.cuda.amp import autocast, GradScaler
from torch.nn import Conv2d, LeakyReLU, MaxPool2d
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp
from typing import Dict, Tuple
class ToyMoulde(nn.Module):
def __init__(self) -> None:
super().__init__()
self.layer1 = nn.Sequential(
Conv2d(1, 32, (5, 5), padding=2, bias=False),
LeakyReLU(inplace=True),
MaxPool2d(kernel_size=2, stride=2),
Conv2d(32, 64, (3, 3), padding=1, bias=False),
LeakyReLU(inplace=True),
MaxPool2d(kernel_size=2, stride=2),
Conv2d(64, 128, (3, 3), padding=1, bias=False),
LeakyReLU(inplace=True),
Conv2d(128, 128, (3, 3), padding=1, bias=False),
LeakyReLU(inplace=True),
)
self.linear1 = nn.Linear(128, 1024)
def forward(self, input_):
x = self.layer1(input_)
x = torch.max(x, dim=-1)[0]
x = torch.max(x, dim=-1, keepdim=True)[0]
x = x.permute(0, 2, 1).contiguous()
x = self.linear1(x)
x = x.permute(0, 2, 1).contiguous()
return x
def parallel_network(network: nn.Module) -> DDP:
distributed.barrier()
network_ = torch.nn.SyncBatchNorm.convert_sync_batchnorm(network)
network_ = DDP(
network_,
device_ids=[distributed.get_rank()],
output_device=distributed.get_rank(),
)
return network_
def generateInput(batch_size):
x = torch.rand(batch_size, 1, 64, 64)
y = torch.rand(batch_size) * 8
return (x, y)
def main(local_rank):
if distributed.is_nccl_available():
backend = "nccl"
else:
backend = "gloo"
print(f"backend is {backend}")
distributed.init_process_group(
backend=backend,
init_method="env://",
world_size=torch.cuda.device_count(),
rank=local_rank,
)
distributed.barrier()
DistInfo.init()
torch.cuda.set_device(local_rank)
device = torch.cuda.current_device()
network = ToyMoulde()
network.to(device)
network = parallel_network(network)
scaler = GradScaler(enabled=True)
optimizer = optim.Adam(network.parameters(), lr=1.0e-4)
Loss = TripletLoss(0.2)
for train_iter in range(10):
distributed.barrier()
print("------------------------", end="")
if train_iter < 1:
samples, lables = generateInput(64)
else:
samples, lables = generateInput(64 - local_rank * 5)
samples = samples.to(device)
lables = lables.to(device).long()
with autocast(enabled=True):
predict = network(samples)
print(f"\n{DistInfo.local_rank} rank, {train_iter} iter\n", end="")
distributed.barrier()
loss = Loss(predict, lables)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
if __name__ == "__main__":
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "8888"
mp.spawn(
main,
nprocs=torch.cuda.device_count(),
)
class TripletLoss(nn.Module):
def __init__(self, margin):
super().__init__()
self.dist_info = DistInfo()
self.margin = margin
self.metric_func = euclidean
def forward(self, feature_: Tensor, label_: Tensor) -> Tuple[Tensor, Dict]:
feature = cat_all_gather(feature_)
label = cat_all_gather(label_)
feature = feature.permute(2, 0, 1).contiguous()
dist: Tensor = self.metric_func(feature, feature)
p_mask, n_mask = self.get_mask(label)
print(f"{DistInfo.local_rank} rank, loss 1")
p_dist, n_dist = self.compute_triplet_dist(dist, p_mask, n_mask)
print(f"{DistInfo.local_rank} rank, loss 2")
triplet_loss = F.relu(p_dist - n_dist + self.margin)
batch_loss, loss_num = self.compute_batch_loss(triplet_loss)
return batch_loss.mean()
@staticmethod
def get_mask(label: Tensor) -> Tuple[torch.BoolTensor, torch.BoolTensor]:
row_label: Tensor = label.unsqueeze(0)
col_label: Tensor = label.unsqueeze(1)
p_mask: torch.BoolTensor = row_label == col_label
n_mask: torch.BoolTensor = ~p_mask
return p_mask, n_mask
@staticmethod
def compute_triplet_dist(
dist: Tensor, p_mask: torch.BoolTensor, n_mask: torch.BoolTensor
):
pmask: Tensor = p_mask.byte()
nmask: Tensor = n_mask.byte()
pmask = pmask.fill_diagonal_(0)
pmask = pmask.unsqueeze(2)
nmask = nmask.unsqueeze(1)
triplet = pmask * nmask
a_idx, p_idx, n_idx = torch.where(triplet)
p_dist = dist[:, a_idx, p_idx]
n_dist = dist[:, a_idx, n_idx]
return p_dist, n_dist
@staticmethod
def compute_batch_loss(triplet_loss):
eps = 1.0e-9
loss_sum = triplet_loss.sum(-1)
loss_num = (triplet_loss != 0).sum(-1).float()
batch_loss = loss_sum / (loss_num + eps)
batch_loss[loss_num == 0] = 0
return batch_loss, loss_num
def cat_all_gather(input_: torch.Tensor):
if not isinstance(input_, torch.Tensor):
raise TypeError(f"input must is a torch.Tensor, but input is {type(input_)}")
if DistInfo.world_size == 1:
return input_
gather_list = [torch.empty_like(input_) for _ in range(DistInfo.world_size)]
distributed.all_gather(gather_list, input_)
gather_list[DistInfo.local_rank] = input_
output = torch.cat(gather_list, 0).contiguous()
return output
class DistInfo:
is_parallel = False
local_rank = 0
world_size = 1
def __init__(self):
if distributed.is_initialized():
DistInfo.is_parallel = True
DistInfo.local_rank = distributed.get_rank()
DistInfo.world_size = distributed.get_world_size()
@classmethod
def init(cls):
if distributed.is_initialized():
cls.is_parallel = True
cls.local_rank = distributed.get_rank()
cls.world_size = distributed.get_world_size()
def euclidean(probe: torch.Tensor, gallery: torch.Tensor) -> torch.Tensor:
x2 = torch.sum(probe**2, -1).unsqueeze(-1)
y2 = torch.sum(gallery**2, -1).unsqueeze(-2)
inner = probe.matmul(gallery.transpose(-1, -2))
dist = x2 + y2 - 2 * inner
dist = torch.sqrt(F.relu(dist))
return dist
```
```
backend is nccl
backend is nccl
------------------------------------------------
0 rank, 0 iter
1 rank, 0 iter
1 rank, loss 1
0 rank, loss 1
1 rank, loss 2
0 rank, loss 2
------------------------------------------------
0 rank, 1 iter
1 rank, 1 iter
0 rank, loss 1
1 rank, loss 1
1 rank, loss 2
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.8 (main, Nov 4 2022, 13:48:29) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.89.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 10
Socket(s): 2
Stepping: 7
CPU max MHz: 3200.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (20 instances)
L1i cache: 640 KiB (20 instances)
L2 cache: 20 MiB (20 instances)
L3 cache: 27.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-9
NUMA node1 CPU(s): 10-19
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.4
[pip3] torch==1.13.0+cu117
[pip3] torchaudio==0.13.0+cu117
[pip3] torchvision==0.14.0+cu117
[conda] numpy 1.23.4 pypi_0 pypi
[conda] torch 1.13.0+cu117 pypi_0 pypi
[conda] torchaudio 0.13.0+cu117 pypi_0 pypi
[conda] torchvision 0.14.0+cu117 pypi_0 pypi
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
3,000 | 98,273 |
[Inductor] [CPU] Huggingface model BartForCausalLM & MBartForCausalLM & OPTForCausalLM & PLBartForCausalLM performance regression > 10% on 2023-04-02 nightly release
|
triaged, oncall: pt2, module: inductor, module: cpu inductor
|
### 🐛 Describe the bug
Compared with 2023-03-29, there is a performance regression for the Hugging Face models **BartForCausalLM, MBartForCausalLM, OPTForCausalLM, and PLBartForCausalLM** on the [TorchInductor CPU Performance Dashboard](https://github.com/pytorch/pytorch/issues/93531#issuecomment-1495275117) for the 2023-04-02 nightly, as below:
| | 2023-04-02 | | | | 2023-03-29 | | | | Result Comp | | |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| model | batch_size | speedup | inductor | eager | batch_size | speedup | inductor | eager | speedup ratio | eager ratio | inductor ratio |
| BartForCausalLM | 1 | 0.8682 | 3.6489689 | 3.168034799 | 1 | 1.0814 | 2.9063651 | 3.142943219 | 0.8 | 0.99 | 0.8
| MBartForCausalLM | 1 | 0.8722 | 3.6418691 | 3.176438229 | 1 | 1.08 | 2.9201856 | 3.153800448 | 0.81 | 0.99 | 0.8
| OPTForCausalLM | 1 | 0.7925 | 7.1054617 | 5.631078397 | 1 | 1.1534 | 4.8903451 | 5.640524038 | 0.69 | 1 | 0.69
| PLBartForCausalLM | 1 | 0.8884 | 1.4350074 | 1.274860574 | 1 | 1.1012 | 1.1629979 | 1.280693287 | 0.81 | 1 | 0.81
2023-04-02 nightly release SW information:
SW | Nightly commit | Master/Main commit
-- | -- | --
Pytorch|[5775e1c1](https://github.com/pytorch/pytorch/commit/5775e1c1)|[7fcff01](https://github.com/pytorch/pytorch/commit/7fcff01)
Torchbench|/|[83a316df](https://github.com/pytorch/benchmark/commit/83a316df)
torchaudio|[375e751](https://github.com/pytorch/audio/commit/375e751)|[a8f4e97](https://github.com/pytorch/audio/commit/a8f4e97)
torchtext|[9749082](https://github.com/pytorch/text/commit/9749082)| [46e7eef](https://github.com/pytorch/text/commit/46e7eef)
torchvision|[8d15ca7](https://github.com/pytorch/vision/commit/8d15ca7)|[98c5815](https://github.com/pytorch/vision/commit/98c5815)
torchdata|[b3048d5](https://github.com/pytorch/data/commit/b3048d5)|[f1283eb](https://github.com/pytorch/data/commit/f1283eb)
dynamo_benchmarks|[1238ae3](https://github.com/pytorch/pytorch/commit/1238ae3)|/
2023-03-29 nightly release SW information:
SW | Nightly commit | Master/Main commit
-- | -- | --
Pytorch|[f1f0a4f](https://github.com/pytorch/pytorch/commit/f1f0a4f)|[91166ef](https://github.com/pytorch/pytorch/commit/91166ef)
Torchbench|/|[83a316df](https://github.com/pytorch/benchmark/commit/83a316df)
torchaudio|[375e751](https://github.com/pytorch/audio/commit/375e751)|[a8f4e97](https://github.com/pytorch/audio/commit/a8f4e97)
torchtext|[9749082](https://github.com/pytorch/text/commit/9749082)| [46e7eef](https://github.com/pytorch/text/commit/46e7eef)
torchvision|[8d15ca7](https://github.com/pytorch/vision/commit/8d15ca7)|[98c5815](https://github.com/pytorch/vision/commit/98c5815)
torchdata|[b3048d5](https://github.com/pytorch/data/commit/b3048d5)|[f1283eb](https://github.com/pytorch/data/commit/f1283eb)
dynamo_benchmarks|[1238ae3](https://github.com/pytorch/pytorch/commit/1238ae3)|/
Graph dump by cosim:
### Versions
Minified repro:
```
python -m torch.backends.xeon.run_cpu --core_list 0 --ncores_per_instance 1 benchmarks/dynamo/huggingface.py --performance --float32 -dcpu -n50 --inductor --no-skip --dashboard --only BartForCausalLM --cold_start_latency --batch_size 1 --threads 1
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 0 |