Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
1,901 | 105,665 |
Running Llama 2 on Apple Silicon GPUs - missing MPS types and operators
|
triaged, module: mps
|
### 🚀 The feature, motivation and pitch
I have attempted to run Llama 2 on M-series (M1/M2) Mac GPUs here: https://github.com/Samyak2/llama-mps
## Current status
The model loads correctly but inference fails because:
- [ ] The `ComplexFloat` dtype is not supported in MPS yet (Closest existing issue I found: https://github.com/pytorch/pytorch/issues/78044)
- [ ] The `aten::view_as_complex` operator is not supported in MPS yet (https://github.com/pytorch/pytorch/issues/77764)
- [ ] The `aten::polar.out` operator is not supported in MPS yet. This can be worked around by setting `PYTORCH_ENABLE_MPS_FALLBACK=1` which runs the operator on CPU instead. For full performance, this operator would need to be supported too.
There may be more operators and types that need to be supported. I have not dug further since it crashes due to `ComplexFloat` not being supported.
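For reference, a minimal sketch of the failing pieces, separate from the full Llama code (this assumes an Apple Silicon machine with the MPS backend available; each snippet fails or falls back today):
```python
import torch

mps = torch.device("mps")

# aten::polar.out has no MPS kernel; with PYTORCH_ENABLE_MPS_FALLBACK=1 it
# runs on the CPU instead (the first warning in the logs below).
freqs = torch.arange(8.0, device=mps)
freqs_cis = torch.polar(torch.ones_like(freqs), freqs)  # complex64 result

# aten::view_as_complex is a view op with no MPS implementation, so it cannot
# fall back to the CPU (the second warning in the logs below).
x = torch.randn(4, 2, device=mps)
xc = torch.view_as_complex(x)

# ComplexFloat itself is not supported by the MPS backend, so arithmetic on
# complex tensors raises the c10::TypeError shown in the logs below.
```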
### Alternatives
There have been forks of Llama to make it work on CPU instead. Examples: https://github.com/b0kch01/llama-cpu
These will leave a lot of performance on the table though.
### Additional context
Failure logs for context (from https://github.com/Samyak2/llama-mps):
```
<redacted>/llama/llama/model.py:55: UserWarning: The operator 'aten::polar.out' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
freqs_cis = torch.polar(torch.ones_like(freqs), freqs) # complex64
Loaded in 11.68 seconds
<redacted>/llama/llama/model.py:72: UserWarning: 0The operator aten::view_as_complex appears to be a view operator, but it has no implementation for the backend "mps:0". View operators don't support falling back to run on the CPU, since the tensor's storage cannot be shared across devices. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/CPUFallback.cpp:181.)
xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2))
<redacted>/llama/llama/model.py:73: UserWarning: 0The operator aten::view_as_complex appears to be a view operator, but it has no implementation for the backend "mps:0". View operators don't support falling back to run on the CPU, since the tensor's storage cannot be shared across devices. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/CPUFallback.cpp:181.)
xk_ = torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2))
libc++abi: terminating due to uncaught exception of type c10::TypeError: Trying to convert ComplexFloat to the MPS backend but it does not have support for that dtype.
Exception raised from getMPSScalarType at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/OperationUtils.mm:91 (most recent call first):
frame #0: at::native::mps::getMPSScalarType(c10::ScalarType) + 180 (0x116dc5954 in libtorch_cpu.dylib)
frame #1: invocation function for block in at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 108 (0x116de0814 in libtorch_cpu.dylib)
frame #2: invocation function for block in at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, at::native::mps::MPSCachedGraph* () block_pointer) + 216 (0x116ddb8d4 in libtorch_cpu.dylib)
frame #3: _dispatch_client_callout + 20 (0x185114400 in libdispatch.dylib)
frame #4: _dispatch_lane_barrier_sync_invoke_and_complete + 56 (0x18512397c in libdispatch.dylib)
frame #5: at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, at::native::mps::MPSCachedGraph* () block_pointer) + 160 (0x116dc99e0 in libtorch_cpu.dylib)
frame #6: at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 2352 (0x116ddf898 in libtorch_cpu.dylib)
frame #7: at::native::structured_mul_out_mps::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&) + 128 (0x116de33f0 in libtorch_cpu.dylib)
frame #8: at::(anonymous namespace)::wrapper_MPS_mul_Tensor(at::Tensor const&, at::Tensor const&) + 140 (0x11457fea8 in libtorch_cpu.dylib)
frame #9: at::_ops::mul_Tensor::call(at::Tensor const&, at::Tensor const&) + 284 (0x1133bd898 in libtorch_cpu.dylib)
frame #10: torch::autograd::THPVariable_mul(_object*, _object*, _object*) + 396 (0x10726c2dc in libtorch_python.dylib)
frame #11: _object* torch::autograd::TypeError_to_NotImplemented_<&torch::autograd::THPVariable_mul(_object*, _object*, _object*)>(_object*, _object*, _object*) + 12 (0x1071c8330 in libtorch_python.dylib)
<omitting python frames>
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 8 |
1,902 | 105,664 |
[LTC] Fix type inference for native_layer_norm_backward
|
triaged, open source, Stale
|
### Description
Fix a bug in the `compute_shape_native_layer_norm_backward` function.
| 8 |
1,903 | 105,655 |
Pytorch - cpu only & caffe2 build failing
|
module: build, caffe2, triaged
|
## PyTorch build failing always
[PyTorch CPU-only build from source failing with Caffe2 enabled]
I was trying to build Caffe2 within the PyTorch directory but couldn't find a way to build it; the build always fails. Can somebody tell me what to do and how to do it?
Below are the set of commands I was using:
- How you installed PyTorch (conda, pip, source):
```
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git checkout tags/v2.0.1
```
- Build command used :
```
export USE_CAFFE2=1
export USE_CUDA=0
export USE_MKLDNN=1
export BUILD_CAFFE2_OPS=1
export BUILD_CAFFE2=1
export USE_OPENMP=1
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
export _GLIBCXX_USE_CXX11_ABI=1
BUILD_CAFFE2=ON BUILD_CAFFE2_OPS=ON USE_MKLDNN=ON python setup.py install
```
## System Info
```
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 3.8.0-2ubuntu1 (tags/RELEASE_380/final)
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 1227.239
CPU max MHz: 2300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4589.03
Virtualization: VT-x
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 5 MiB
L3 cache: 50 MiB
NUMA node0 CPU(s): 0-39
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] mkl 2023.2.0 pypi_0 pypi
[conda] mkl-include 2023.2.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.25.0 py310h5f9d8c6_0
[conda] numpy-base 1.25.0 py310hb5e798b_0
[conda] pytorch 2.0.1 py3.10_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.0.2 py310_cpu pytorch
[conda] torchvision 0.15.2 py310_cpu pytorch
```
cc @malfet @seemethere
| 1 |
1,904 | 105,648 |
add Half support for interpolate operators on CPU
|
module: cpu, triaged, open source, ciflow/trunk, ciflow/mps
|
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 7 |
1,905 | 105,644 |
Tensor subclass is not preserved during backward with gradient checkpointing
|
module: checkpoint, triaged, module: __torch_function__, tensor subclass
|
### 🐛 Describe the bug
When creating a custom `torch.Tensor` subclass, the subclass information is properly preserved during forward. However, when using the gradient checkpointing feature, the subclass information is not kept after any calculation during backward.
```python
import torch
from torch import nn
from torch.utils import checkpoint

class MyTensor(torch.Tensor):
    pass

class Module(nn.Linear):
    def forward(self, x):
        print('layer input type:', type(x))
        x = MyTensor(x)
        y = super().forward(x)
        print('layer output type:', type(y))
        return y

def main():
    x = MyTensor(torch.randn(1, 1))
    m1 = nn.Linear(1, 1)
    m2 = Module(1, 1)
    print('forward')
    z = checkpoint.checkpoint(m2, m1(x))
    print('output type:', type(z))
    print('backward')
    z.backward()

if __name__ == '__main__':
    main()
```
output:
```
forward
layer input type: <class '__main__.MyTensor'>
layer output type: <class '__main__.MyTensor'>
output type: <class '__main__.MyTensor'>
backward
layer input type: <class 'torch.Tensor'>
layer output type: <class 'torch.Tensor'>
```
### Versions
```
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:17) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA A40
GPU 3: NVIDIA A40
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
Stepping: 4
CPU max MHz: 3200.0000
CPU min MHz: 1000.0000
BogoMIPS: 4400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (28 instances)
L1i cache: 896 KiB (28 instances)
L2 cache: 28 MiB (28 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] pytorch-lightning==2.0.5
[pip3] torch==2.0.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.25.1 py311h64a7726_0 conda-forge
[conda] pytorch 2.0.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 2.0.5 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.11.4 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 2.0.0 py311 pytorch
[conda] torchvision 0.15.2 py311_cu118 pytorch
```
cc @hameerabbasi @rgommers @peterbell10 @ezyang @msaroufim @albanD
| 3 |
1,906 | 105,641 |
Turn indexing with a scalar tensor into a copy instead of a view and avoid a D2H synchronization.
|
module: bc-breaking, triaged, module: numpy, module: advanced indexing, topic: bc breaking
|
### 🚀 The feature, motivation and pitch
Today, this triggers a cuda synchronization:
```
import torch
torch.set_default_device('cuda')
def f(x, y):
return x[y]
inps = (torch.randn(5), torch.tensor(0))
torch.cuda.set_sync_debug_mode(2)
f(*inps)
```
The reason is that when the index tensor is a 0-dim value, instead of launching a gather kernel, we move the tensor to the host and do a slice instead (https://github.com/pytorch/pytorch/pull/105518/files#diff-2574bfb0ffa78d685fb7bd2ebc0c64b1a5f87dd55ec74ae67b41b31adc566020L466).
We could just fix this, but unfortunately, this does change the semantics. In particular, now, this operation would create a copy instead of a view, which could cause issues for downstream in-place operations.
I think these are bad semantics, for 3 reasons:
1. Cuda synchronizations are very bad in general. They're slow, prevent the use of many different features (streams, cudagraphs, don't play well with collectives, etc.), and should strongly be avoided. This, however, is a very implicit coercion we're doing. It's not obvious at all that if the tensor is 3-dim/2-dim/1-dim it doesn't do a sync, but if the tensor is 0-dim it does do a sync. In addition, it makes this much harder to trace and compile/not composite compliant in general.
2. Moreover, it's *not* consistent!!
Why should `x[torch.tensor(0)]` return a view but `x[torch.tensor([0])]` return a copy? Why should the first one do a synchronization while the second one doesn't?
To drive this point home further, we also diverge from Numpy semantics here.
```
import numpy as np
x = np.ones(5)
y = np.array(1)
z = x[y]
z += 1
print(x)
>>> array([1., 1., 1., 1., 1.])
```
3. It's actually *slower* than just doing the index operation on GPUs! Benchmarking `x[torch.tensor(0)]` vs. `x[torch.tensor([0])]`, we see that the first takes `35 us` per iteration while the second one takes `8 us`.
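For reference, a rough timing sketch of that comparison (an assumed harness, not the one used for the numbers above; results will vary by GPU):
```python
import torch

torch.set_default_device('cuda')
x = torch.randn(5)
idx_scalar = torch.tensor(0)    # 0-dim index: currently a view + D2H sync
idx_vector = torch.tensor([0])  # 1-dim index: gather kernel, no sync

def bench_us(fn, iters=1000):
    # Time with CUDA events and convert to microseconds per iteration.
    start, end = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return end.elapsed_time(start) * 1000 / iters

print('x[torch.tensor(0)]  :', bench_us(lambda: x[idx_scalar]), 'us/iter')
print('x[torch.tensor([0])]:', bench_us(lambda: x[idx_vector]), 'us/iter')
```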
PS: I've also done a brief survey of use cases with this pattern I could find (https://github.com/search?q=%2F%28%5Cw%2B%29%5C%5Btorch.tensor%5C%28%2F+language%3APython&type=code), and I couldn't find many use cases of this code path at all.
cc: @ezyang @zou3519 @ngimel
cc @ezyang @gchanan @mruberry @rgommers
| 10 |
1,907 | 105,640 |
Add z3-solver as dependency to dynamo tests
|
fb-exported, Stale, topic: not user facing, module: dynamo
|
Test Plan: sandcastle
Reviewed By: malfet, huydhn
Differential Revision: D47438456
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 4 |
1,908 | 105,637 |
[MPS] Add mps support for max unpool2d
|
triaged, open source, release notes: mps, ciflow/mps
|
Fixes one of the missing ops listed in #77764
Adds support for max_unpool2d forward & backward on the mps backend.
Since I don't think this op is natively supported in MPS, I've added an MSL kernel that mirrors the max_unpool2d cuda kernel.
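A minimal usage sketch exercising the new path (assuming an MPS-capable machine and this PR's forward/backward support):
```python
import torch
import torch.nn.functional as F

device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.randn(1, 1, 4, 4, device=device, requires_grad=True)

# max_pool2d with return_indices=True produces the indices max_unpool2d needs.
pooled, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
unpooled = F.max_unpool2d(pooled, indices, kernel_size=2)  # runs natively on MPS with this change
unpooled.sum().backward()                                  # backward kernel is also added

print(unpooled.shape, x.grad.shape)  # torch.Size([1, 1, 4, 4]) for both
```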
| 5 |
1,909 | 105,636 |
Syntax error when compiling Megatron-LM models.
|
triaged, ezyang's list, oncall: pt2
|
### 🐛 Describe the bug
Sorry, I haven't reproduced this bug with a simple demo.
Use `@torch.compile` to annotate the `CoreAttention.forward` method in this file https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py, then training a simple model reproduces this bug.
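For reference, the decoration pattern looks like the toy below (a self-contained stand-in for illustration only; it does not reproduce the failure, which so far only shows up with the real Megatron-LM `CoreAttention`):
```python
import torch
import torch.nn as nn

class CoreAttention(nn.Module):
    """Toy stand-in; the real class lives in Megatron-LM's megatron/model/transformer.py."""
    @torch.compile  # the decorator applied to CoreAttention.forward in the real file
    def forward(self, q, k, v):
        scores = torch.matmul(q, k.transpose(-1, -2)) / q.shape[-1] ** 0.5
        return torch.matmul(scores.softmax(dim=-1), v)

attn = CoreAttention()
q = k = v = torch.randn(2, 4, 8, 16)
print(attn(q, k, v).shape)
```
The actual failure when compiling the real module is the guard-generation traceback below: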
```text
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 364, in _compile
check_fn = CheckFunctionManager(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/guards.py", line 548, in __init__
self.check_fn = self.compile_check_fn(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/guards.py", line 645, in compile_check_fn
exec(py_code, global_builder.scope, out)
File "<string>", line 2
return lambda name,self,dtype,___stack0,required_len,tensor_shape,___odict_getitem(self,**___kwargs_ignored: ___guarded_code.valid and ___check_type_id(name, 93889009977280) and name == 'mpu' and ___check_type_id(self, 93889097820512) and str(dtype) == 'torch.bfloat16' and ___check_type_id(self.buffer, 93889009989056) and ___check_type_id(required_len, 93889009992480) and required_len == 134217728 and ___check_type_id(tensor_shape, 93889009978272) and len(tensor_shape) == 3 and ___check_type_id(tensor_shape[0], 93889009992480) and ___check_type_id(tensor_shape[1], 93889009992480) and ___check_type_id(tensor_shape[2], 93889009992480) and tensor_shape == (32, 2048, 2048) and tensor_shape[0] == 32 and tensor_shape[1] == 2048 and tensor_shape[2] == 2048 and ___check_tensors(___stack0, ___odict_getitem(self.buffer, ('mpu', torch.bfloat16)))
^
SyntaxError: invalid syntax
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.19.91-012.ali4000.alios7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10
GPU 1: NVIDIA A10
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
BogoMIPS: 5799.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.0.0
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchdata==0.6.0
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
1,910 | 105,635 |
FSDP with gradient checkpointing leads to redundant allgathers during backward
|
triaged, module: fsdp
|
### 🐛 Describe the bug
While training Hugging Face Llama 13B with FSDP (full shard) and gradient checkpointing enabled on a single node, I observed that the backward pass has two allgathers per layer. Compared to non-checkpointed training, this additional allgather also affects the overlap between the reduce-scatter and gradient computation. I think ideally only one allgather is needed to gather the weights per FSDP module.
Trace (brown is reduce scatter, blue is allgather): *(trace screenshot omitted)*
```python3
# imports for the snippet (assumed to match the training script)
import functools
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.FULL_SHARD,
    sync_module_states=True,
    mixed_precision=mixed_precision,
    auto_wrap_policy=functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={LlamaDecoderLayer}),
    limit_all_gathers=True,
    device_id=dev,
    param_init_fn=param_init_fn,
)
```
### Versions
I'm using pytorch nightly: `2.1.0.dev20230709+cu121`
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 6 |
1,911 | 105,634 |
[inductor] unexpected dynamic shape error encountered in TritonTemplate
|
triaged, ezyang's list, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
`TritonTemplate` cannot deal with some symbolic variables right now (to be supported in #105295).
But these symbolic variables still show up even when `dynamic=False` is used.
```python
import torch
@torch.compile(mode='max-autotune', dynamic=False)
def func(inp, mat1, mat2):
    res = torch.addmm(inp, mat1, mat2)
    return res
inp = torch.randn(128, device='cuda')
mat1 = torch.randn(16, 64, device='cuda')
mat2 = torch.randn(64, 128, device='cuda')
res = func(inp, mat1, mat2)
print('res', res)
# change size of mat1
mat1 = torch.randn(32, 64, device='cuda')
res = func(inp, mat1, mat2)
print('res', res)
```
error message:
```
Traceback (most recent call last):
File "/home/constroy/projects/model-zoo/HuggingFace/debug_bmm.py", line 17, in <module>
res = func(inp, mat1, mat2)
File "/home/constroy/projects/pytorch/torch/_dynamo/eval_frame.py", line 306, in _fn
return fn(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_dynamo/eval_frame.py", line 466, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/constroy/projects/pytorch/torch/_dynamo/convert_frame.py", line 545, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/home/constroy/projects/pytorch/torch/_dynamo/convert_frame.py", line 128, in _fn
return fn(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_dynamo/convert_frame.py", line 364, in _convert_frame_assert
return _compile(
File "/home/constroy/projects/pytorch/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_dynamo/convert_frame.py", line 434, in _compile
out_code = transform_code_object(code, transform)
File "/home/constroy/projects/pytorch/torch/_dynamo/bytecode_transformation.py", line 1002, in transform_code_object
transformations(instructions, code_options)
File "/home/constroy/projects/pytorch/torch/_dynamo/convert_frame.py", line 419, in transform
tracer.run()
File "/home/constroy/projects/pytorch/torch/_dynamo/symbolic_convert.py", line 2068, in run
super().run()
File "/home/constroy/projects/pytorch/torch/_dynamo/symbolic_convert.py", line 727, in run
and self.step()
File "/home/constroy/projects/pytorch/torch/_dynamo/symbolic_convert.py", line 687, in step
getattr(self, inst.opname)(inst)
File "/home/constroy/projects/pytorch/torch/_dynamo/symbolic_convert.py", line 2156, in RETURN_VALUE
self.output.compile_subgraph(
File "/home/constroy/projects/pytorch/torch/_dynamo/output_graph.py", line 791, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/constroy/projects/pytorch/torch/_dynamo/output_graph.py", line 915, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/constroy/projects/pytorch/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_dynamo/output_graph.py", line 971, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/constroy/projects/pytorch/torch/_dynamo/output_graph.py", line 967, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/constroy/projects/pytorch/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/constroy/projects/pytorch/torch/__init__.py", line 1549, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/constroy/projects/pytorch/torch/_inductor/compile_fx.py", line 861, in compile_fx
return compile_fx(
File "/home/constroy/projects/pytorch/torch/_inductor/compile_fx.py", line 1045, in compile_fx
return aot_autograd(
File "/home/constroy/projects/pytorch/torch/_dynamo/backends/common.py", line 55, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/home/constroy/projects/pytorch/torch/_functorch/aot_autograd.py", line 3755, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/home/constroy/projects/pytorch/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_functorch/aot_autograd.py", line 3294, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/constroy/projects/pytorch/torch/_functorch/aot_autograd.py", line 2098, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/constroy/projects/pytorch/torch/_functorch/aot_autograd.py", line 2278, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/constroy/projects/pytorch/torch/_functorch/aot_autograd.py", line 1552, in aot_dispatch_base
compiled_fw = compiler(fw_module, flat_args)
File "/home/constroy/projects/pytorch/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_inductor/compile_fx.py", line 987, in fw_compiler_base
return inner_compile(
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/constroy/projects/pytorch/torch/_dynamo/repro/after_aot.py", line 80, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/constroy/projects/pytorch/torch/_inductor/debug.py", line 220, in inner
return fn(*args, **kwargs)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/constroy/projects/pytorch/torch/_inductor/compile_fx.py", line 48, in newFunction
return old_func(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_inductor/compile_fx.py", line 303, in compile_fx_inner
compiled_graph: CompiledFxGraph = fx_codegen_and_compile(
File "/home/constroy/projects/pytorch/torch/_inductor/compile_fx.py", line 502, in fx_codegen_and_compile
graph.run(*example_inputs)
File "/home/constroy/projects/pytorch/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_inductor/graph.py", line 419, in run
return super().run(*args)
File "/home/constroy/projects/pytorch/torch/fx/interpreter.py", line 138, in run
self.env[node] = self.run_node(node)
File "/home/constroy/projects/pytorch/torch/_inductor/graph.py", line 675, in run_node
result = super().run_node(n)
File "/home/constroy/projects/pytorch/torch/fx/interpreter.py", line 195, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/constroy/projects/pytorch/torch/_inductor/graph.py", line 566, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/constroy/projects/pytorch/torch/_inductor/graph.py", line 563, in call_function
out = lowerings[target](*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_inductor/lowering.py", line 275, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/constroy/projects/pytorch/torch/_inductor/kernel/mm.py", line 180, in tuned_addmm
mm_template.maybe_append_choice(
File "/home/constroy/projects/pytorch/torch/_inductor/select_algorithm.py", line 375, in maybe_append_choice
self.generate(
File "/home/constroy/projects/pytorch/torch/_inductor/select_algorithm.py", line 470, in generate
assert list(call_args) == expected_args, (call_args, expected_args)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: AssertionError: (['arg0_1', 'arg2_1', 'arg3_1', 'buf_out', s0], ['arg0_1', 'arg2_1', 'arg3_1', 'buf_out'])
target: aten.addmm.default
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cuda', torch.float32, size=[128], stride=[1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg2_1', layout=FixedLayout('cuda', torch.float32, size=[s0, 64], stride=[64, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg3_1', layout=FixedLayout('cuda', torch.float32, size=[64, 128], stride=[128, 1]))
))
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git631ab5d
Is debug build: False
CUDA used to build PyTorch: 11.5
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 515.105.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
Stepping: 1
CPU max MHz: 2400.0000
CPU min MHz: 1200.0000
BogoMIPS: 4800.10
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 896 KiB (28 instances)
L1i cache: 896 KiB (28 instances)
L2 cache: 7 MiB (28 instances)
L3 cache: 70 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+3c400e7818
[pip3] torch==2.1.0a0+git631ab5d
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 8 |
1,912 | 105,632 |
torch.nn.TransformerDecoderLayer lacks parameter validation check
|
oncall: transformer/mha
|
### 🐛 Describe the bug
### Description:
torch.nn.TransformerDecoderLayer lacks a parameter validation check; when invalid values are given, they are used in subsequent computations, leading to errors such as division by zero.
### Examples:
input:
```
p = torch.nn.TransformerDecoderLayer(d_model=10, nhead=0)
```
error_message:
```
Traceback (most recent call last):
File "D:\PythonProjects\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-32-6f43c243aac4>", line 1, in <module>
p = torch.nn.TransformerDecoderLayer(d_model=10, nhead=0)
File "D:\PythonProjects\venv\lib\site-packages\torch\nn\modules\transformer.py", line 653, in __init__
self.self_attn = MultiheadAttention(d_model, nhead, dropout=dropout, batch_first=batch_first,
File "D:\PythonProjects\venv\lib\site-packages\torch\nn\modules\activation.py", line 968, in __init__
self.head_dim = embed_dim // num_heads
ZeroDivisionError: integer division or modulo by zero
```
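A sketch of the kind of up-front check one might expect (a hypothetical wrapper for illustration; not current PyTorch behavior):
```python
import torch

def make_decoder_layer(d_model: int, nhead: int, **kwargs):
    # Hypothetical validation that would surface the problem early with a clear message.
    if nhead <= 0:
        raise ValueError(f"nhead must be a positive integer, got {nhead}")
    if d_model % nhead != 0:
        raise ValueError(f"d_model ({d_model}) must be divisible by nhead ({nhead})")
    return torch.nn.TransformerDecoderLayer(d_model=d_model, nhead=nhead, **kwargs)

# make_decoder_layer(d_model=10, nhead=0)  # raises ValueError instead of ZeroDivisionError
```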
### Versions
PS D:\PythonProjects\venv\Lib\site-packages\torch\utils> python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 家庭中文版
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU
Nvidia driver version: 532.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2200
DeviceID=CPU0
Family=207
L2CacheSize=16384
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2200
Name=13th Gen Intel(R) Core(TM) i9-13900HX
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] torchvision==0.15.2+cu117
[conda] Could not collect
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 2 |
1,913 | 105,629 |
F.pad will accept 0 and negative values as parameter
|
module: nn, module: error checking, triaged, module: padding
|
### 🐛 Describe the bug
When using the padding interfaces (ZeroPad, ConstantPad, ReflectionPad, ReplicationPad, whether 1d, 2d, or 3d), setting padding to 0 results in the input tensor being returned as-is.
##### An example for zero padding here:
run:
```
import torch
import torch.nn as nn

p = nn.ConstantPad2d(padding=0, value=1.0)
x = torch.randn(3, 3, 3, 3)
print(p(x).shape)
```
output:
```
torch.Size([3, 3, 3, 3])
```
The behavior of 0 padding is the same across these four padding layers.
Also, when padding is set to a negative value, the interface behaves as a "**narrowing**" operation on the input tensor.
##### An example for negative padding here:
run:
```
p = nn.ConstantPad2d(padding=-1, value=1.0)
x = torch.randn(3,3,3,3)
print(p(x).shape)
```
output:
```
torch.Size([3, 3, 1, 1])
```
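To illustrate, the negative padding behaves like a slice of the interior (a small sketch mirroring the observed behavior rather than documented semantics):
```python
import torch
import torch.nn as nn

x = torch.randn(3, 3, 3, 3)
padded = nn.ConstantPad2d(padding=-1, value=1.0)(x)
sliced = x[..., 1:-1, 1:-1]  # drop one element from each side of the last two dims

print(padded.shape, sliced.shape)   # torch.Size([3, 3, 1, 1]) for both
print(torch.equal(padded, sliced))  # True: the negative padding just narrows the input
```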
Furthermore, when the input tensor is too small for this "narrowing" operation to be applied, it leads to an error deeper in the stack.
As below:
run:
```
p_f = nn.ConstantPad2d(padding=-5, value=1.0)
x = torch.randn(3,3,3,3)
print(p_f(x).shape)
```
error message:
```
Traceback (most recent call last):
File "D:\PythonProjects\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-12-5f158586f13d>", line 1, in <module>
print(p_f(x).shape)
File "D:\PythonProjects\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\PythonProjects\venv\lib\site-packages\torch\nn\modules\padding.py", line 25, in forward
return F.pad(input, self.padding, 'constant', self.value)
RuntimeError: narrow(): length must be non-negative.
```
This behavior appears in several interfaces (the 4 Pad layers and their 1d~3d variants) because they all call F.pad, and F.pad does not reject padding=0 or negative values.
### Versions
PS D:\PythonProjects\venv\Lib\site-packages\torch\utils> python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 家庭中文版
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU
Nvidia driver version: 532.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2200
DeviceID=CPU0
Family=207
L2CacheSize=16384
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2200
Name=13th Gen Intel(R) Core(TM) i9-13900HX
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] torchvision==0.15.2+cu117
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| 0 |
1,914 | 105,626 |
Extend the device verification of the RPC module on the Python side
|
triaged, open source, Stale, release notes: distributed (rpc), topic: not user facing
|
#103829
The current RPC module only supports CPU and CUDA tensors; we hope to extend this module to support third-party device tensors.
Currently, only the code on the Python side is extended, and the existing CUDA code logic is not affected.
| 3 |
1,915 | 105,623 |
[ONNX] fix `test_fx_op_consistency.py` test failure when running on torch built with cuda
|
module: onnx, triaged
|
Step to repro
`pytest test/onnx/test_fx_op_consistency.py -k test_output_match_full_like_cpu_float32`
raises
`AssertionError: The values for attribute 'device' do not match: cuda:0 != cpu.`
cc @justinchuby
| 3 |
1,916 | 105,600 |
Enable Mypy checking for scheduler.py
|
topic: not user facing, module: inductor, ciflow/inductor
|
ATT, add type annotations and type assertions to pass Mypy checks.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 5 |
1,917 | 105,597 |
Out of bounds error with `nn.MultiMarginLoss`
|
low priority, triaged, hackathon, oncall: pt2
|
### 🐛 Describe the bug
As I was working on the inductor hackathon (https://github.com/pytorch/pytorch/issues/105558),
I went to `_inductor/lowering` and commented out `# make_fallback(aten.multi_margin_loss)`.
I then had this code snippet
```python
import torch
torch.set_default_device("cuda")
@torch.compile
def f(a, b):
    a = a.cos()
    # b = b.sin()
    b = b.long()
    loss = torch.nn.MultiMarginLoss()
    return loss(a, b)
f(torch.randn(1), torch.randn(1))
```
This gave the error below. It runs in eager mode, and if you change the call to `f(torch.randn(10), torch.randn(1))` it passes more frequently; you might need to run the script more than once.
cc @ezyang @wconstab @bdhirsh @anijain2305 @Chillee
### Error logs
```
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [0,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [1,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [2,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [3,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [4,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [5,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [6,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [7,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [8,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [9,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [10,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [11,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [12,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [13,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [14,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [15,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [16,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [17,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [18,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [19,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [20,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [21,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [22,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [23,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [24,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [25,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [26,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [27,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [28,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [29,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [30,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
<frozen importlib._bootstrap_external>:883: _call_with_frames_removed: block: [0,0,0], thread: [31,0,0] Assertion `index out of bounds: 0 <= tmp1 < 1` failed.
```
### Minified repro
n
### Versions
n
| 1 |
1,918 | 105,596 |
Add sdpa op prototype
|
Stale, release notes: fx, module: inductor, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105596
* #105518
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel @anijain2305
| 3 |
1,919 | 105,592 |
Change default autograd fallback mode to "Warn"
|
Stale, ciflow/trunk, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack):
* __->__ #105592
* #105845
* #105660
We changed the default autograd fallback mode to "Nothing" in #105505
because it was breaking internal tests.
We have fixed the problems in #105587, so this PR changes the fallback
back to "Warn".
Test Plan:
- internal tests
| 2 |
1,920 | 105,590 |
[Inductor] Add support for NEON ISA in the Inductor C++ backend
|
module: cpu, triaged, open source, module: inductor, ciflow/inductor
|
Fixes #104729
As suggested in the [blog](https://dev-discuss.pytorch.org/t/torchinductor-update-5-cpu-backend-backend-performance-update-and-deep-dive-on-key-optimizations/1117#:~:text=It%20can%20be,sub%2Dclasses.), I subclassed the `VecISA` class and implemented a NEON version of the `vec_reduce_all()` function, to go along with the existing AVX2 and AVX512 versions. Any operation that calls `vec_reduce_all()` will also take the NEON path and benefit from its vectorization.
`vec_reduce_all()` is invoked by Softmax and other operations such as norms. Using the fast path results in roughly 30% time savings for Softmax compared to the previously taken slow path.
| | Slow path | Fast path (NEON intrinsics) |
| -- | -- | -- |
| Softmax (100 passes, 1024 dimension) | 623.706ms | 452.011ms |
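For reference, an assumed timing harness along these lines could reproduce the comparison above (not the exact benchmark used):
```python
import time
import torch

x = torch.randn(1024)
softmax = torch.compile(lambda t: torch.softmax(t, dim=-1))
softmax(x)  # warm-up so compilation cost is excluded

start = time.perf_counter()
for _ in range(100):
    softmax(x)
elapsed_ms = (time.perf_counter() - start) * 1e3
print(f"100 softmax passes over a 1024-dim tensor: {elapsed_ms:.3f} ms")
```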
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel @malfet
| 8 |
1,921 | 105,582 |
RFC: Integrating oneDNN Graph Compiler into Inductor C++/OpenMP Backend for Enhanced Graph Fusion and Performance
|
triaged, oncall: pt2, module: inductor
|
### 🚀 The feature, motivation and pitch
Integrating oneDNN Graph Compiler into Inductor C++ Backend enables enhanced pattern fusion and performance for CPU.
<img src="https://github.com/pytorch/pytorch/assets/19395079/47275ec4-f7ea-425b-9656-97a6a33b1844" alt="design" width="500"/>
### Motivation
Recent developments on the Inductor C++ backend have demonstrated promising performance on DL inference workloads with CPU, thanks to optimizations like Conv/GEMM + post-op fusions and vectorization (see [this](https://dev-discuss.pytorch.org/t/Inductor-update-4-cpu-backend-started-to-show-promising-performance-boost/874) and [this](https://dev-discuss.pytorch.org/t/torchinductor-update-5-cpu-backend-backend-performance-update-and-deep-dive-on-key-optimizations/1117)).
[oneDNN Graph API](https://spec.oneapi.io/onednn-graph/latest/introduction.html) (codename LLGA) extents oneDNN with a high-level graph API. It goes beyond Conv/GEMM post-op fusions and supports [aggressive fusion patterns](http://oneapi-src.github.io/oneDNN/dev_guide_graph_fusion_patterns.html#aggressive-fusion-patterns) such as MultiheadAttention, MLP blocks, and more (with its [graph compiler backend](http://oneapi-src.github.io/oneDNN/dev_guide_graph_compiler.html)). Other features include [low precision](http://oneapi-src.github.io/oneDNN/dev_guide_graph_low_precision.html). Since PyTorch 1.12, this API has been added in [TorchScript JIT fuser path](https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html#use-onednn-graph-with-torchscript-for-inference) showing promising performance [#49444](https://github.com/pytorch/pytorch/issues/49444).
Integrating the oneDNN Graph Compiler with Inductor C++ backend offers further performance enhancements. Additionally, adopting the oneDNN Graph API simplifies design and development.
### Plan
Our long-term goal is to use oneDNN Graph fusion by default, replacing post-op fusions. Starting as an experimental feature, we have implemented an `onednn_graph_fusion` pass in the Inductor post-grad passes, enabled by an `inductor.cpp` config option. We hope it will eventually become the default option after validating that there is no performance regression relative to the Inductor C++ backend.
We start with CPU inference for the float32 data type. In the future, we will add support for other PyTorch 2 features, such as training, quantization, dynamic shapes, BF16, etc.
### Implementation
#### Inductor Post-grad pass
We introduce an `onednn_graph_fusion` pass that directly takes the FX graph from AOTAutograd as input. The FX graph is used to construct an LLGA Graph by lowering aten/prims IR to LLGA IR and by lowering `FakeTensor` to LLGA `LogicalTensor`. LLGA identifies fusion opportunities in the Graph and returns a list of LLGA `partition`s, each representing a set of fused operations.
To enable the desired fusion, we rewrite the FX graph by fusing nodes within each LLGA partition into a call_function node. The target of the call_function node points to the corresponding LLGA kernel, which represents the compiled partitions. Other aten ops in the FX graph are executed by the Inductor C++ backend.
#### Other graph rewrites
Some graph rewrites are implemented to match current oneDNN fusion patterns and simplify the LLGA graph. For example, the LLGA op `MatMul` supports any number of dimensions while the `aten.bmm` op only supports 3D inputs, so we rewrite the FX graph to remove `ND -> 3D` and `3D -> ND` transformations before and after a call to `aten.bmm`. These graph rewrites will eventually be supported in the oneDNN Graph code, but currently remain part of our implementation.
#### Additional context
To implement the backend in Python, we plan to add python binding for oneDNN graph API inside `jit/codegen/onednn` temporarily.
### User interface:
```
torch.compile(options={"cpp.onednn_graph": True})
```
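A minimal end-to-end sketch of that interface (the `cpp.onednn_graph` option is the experimental flag proposed in this RFC, not a released `torch.compile` option):
```python
import torch
import torch.nn as nn

# Toy float32 CPU inference model; MLP-style blocks are among the patterns
# oneDNN Graph can fuse aggressively.
model = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64)).eval()
compiled = torch.compile(model, options={"cpp.onednn_graph": True})

with torch.no_grad():
    print(compiled(torch.randn(8, 64)).shape)
```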
### Preliminary performance
OneDNN Graph provides significant speedups on models that contain advanced fusion patterns, such as transformer models, with potentially larger gains as the batch size increases. As the fusion-pattern coverage of oneDNN Graph increases, we expect speedups in more and more models. Currently, the oneDNN Graph pass passes accuracy checks for 56/68 models in the torchbench suite.
The following performance results are from a 32-core, single node test on Sapphire Rapids
| Benchmark Set (Geometric Mean) | Inductor Speedup | Inductor + oneDNN Graph Speedup |
| -- | -- | -- |
| Torchbench | 1.29x | 1.13x |
| Torchbench (BS=32) | 1.35x | 1.28x |
| Hugging Face in Torchbench (10 models) | 1.21x | 1.38x |
| Model | Batch Size | Inductor Speedup | Inductor + oneDNN Graph Speedup |
| -- | -- | -- | -- |
| Resnet50 | 32 | 1.72x | 1.81x |
| hf_GPT2_large | 1 | 1.29x | 1.40x |
| hf_GPT2_large | 4 | 1.14x | 1.29x |
| hf_GPT2_large | 32 | 1.22x | 1.47x |
| hf_T5_large | 1 | 1.22x | 2.03x |
<details><summary>Performance Details</summary>
<p>
Model | Batch Size | Accuracy (oneDNN) | | oneDNN 3.2 | oneDNN+GC 3.2 | Inductor
-- | -- | -- | -- | -- | -- | --
BERT_pytorch | 2 | pass | | 0.81 | 0.82 | 1.50
Background_Matting | 1 | pass | | 0.90 | 0.90 | 1.13
LearningToPaint | 96 | pass | | 1.21 | 1.22 | 1.27
Super_SloMo | 6 | pass | | 1.11 | 1.11 | 1.17
alexnet | 128 | pass | | 1.31 | 0.94 | 1.36
attention_is_all_you_need_pytorch | 32 | fail_accuracy | | 0.82 | 0.16 | 0.91
basic_gnn_edgecnn | 1 | pass | | 1.60 | 1.58 | 1.66
basic_gnn_gcn | 1 | pass | | 0.37 | 0.37 | 0.49
basic_gnn_gin | 1 | pass | | 0.55 | 0.56 | 0.53
basic_gnn_sage | 1 | pass | | 0.36 | 0.08 | 0.33
cm3leon_generate | 0 | infra_error | | | |
dcgan | 256 | pass | | 1.28 | 1.27 | 1.31
densenet121 | 64 | pass | | 0.96 | 0.97 | 1.66
detectron2_fcos_r_50_fpn | 1 | pass | | 0.96 | 0.95 | 1.07
dlrm | 2048 | pass | | 0.90 | 0.96 | 1.17
doctr_det_predictor | 1 | pass | | 1.34 | 1.35 | 1.40
doctr_reco_predictor | 1 | pass | | 1.50 | 1.48 | 2.06
drq | 1 | pass | | 0.61 | 0.09 | 0.99
fastNLP_Bert | 1 | pass | | 1.24 | 0.08 | 1.27
functorch_dp_cifar10 | 64 | pass | | 1.15 | 1.02 | 1.03
hf_Albert | 1 | pass | | 1.19 | 0.06 | 1.29
hf_Bart | 1 | pass | | 1.01 | 0.03 | 1.16
hf_Bert | 1 | pass | | 1.27 | 0.02 | 1.25
hf_Bert_large | 1 | pass | | 2.00 | 0.04 | 1.66
hf_BigBird | 1 | fail_accuracy | | 0.94 | 0.94 | 1.31
hf_DistilBert | 1 | pass | | 1.33 | 0.05 | 1.30
hf_GPT2 | 1 | pass | | 1.12 | 1.10 | 1.01
hf_GPT2_large | 1 | pass_due_to_skip | | 1.40 | 1.36 | 1.29
hf_Longformer | 0 | fail_to_run | | | |
hf_Reformer | 1 | fail_accuracy | | 0.79 | 0.26 | 0.92
hf_T5 | 1 | pass | | 1.30 | 1.30 | 1.01
hf_T5_base | 1 | pass | | 1.49 | 1.49 | 1.01
hf_T5_generate | 1 | fail_to_run | | | |
hf_T5_large | 1 | pass_due_to_skip | | 2.03 | 2.09 | 1.22
lennard_jones | 1000 | pass | | 0.65 | 0.66 | 1.26
llama | 32 | pass | | 0.61 | 0.61 | 0.49
mnasnet1_0 | 32 | pass | | 1.73 | 1.83 | 2.27
mobilenet_v2 | 16 | pass | | 1.77 | 1.76 | 2.40
mobilenet_v2_quantized_qat | 0 | fail_to_run | | | |
mobilenet_v3_large | 32 | pass | | 2.63 | 2.67 | 3.10
nanogpt_generate | 0 | fail_to_run | | | |
nvidia_deeprecommender | 256 | fail_accuracy | | 0.85 | 0.85 | 1.07
opacus_cifar10 | 64 | pass | | 0.75 | 0.76 | 0.84
phlippe_densenet | 128 | pass | | 0.71 | 0.72 | 1.80
phlippe_resnet | 128 | pass | | 1.39 | 1.41 | 1.82
pytorch_CycleGAN_and_pix2pix | 1 | pass | | 0.90 | 0.91 | 1.18
pytorch_stargan | 16 | pass | | 0.96 | 0.97 | 0.97
pytorch_unet | 1 | pass | | 1.13 | 1.14 | 1.06
resnet152 | 32 | pass | | 1.44 | 1.45 | 1.52
resnet18 | 8 | pass | | 1.60 | 1.53 | 1.75
resnet50 | 32 | pass | | 1.81 | 1.82 | 1.72
resnet50_quantized_qat | 0 | fail_to_run | | | |
resnext50_32x4d | 8 | pass | | 1.61 | 1.61 | 1.47
sam | 0 | infra_error | | | |
shufflenet_v2_x1_0 | 64 | pass | | 1.63 | 1.72 | 2.14
soft_actor_critic | 256 | pass | | 1.16 | 0.04 | 2.00
speech_transformer | 1 | pass | | 1.01 | 0.06 | 1.00
squeezenet1_1 | 16 | pass | | 1.39 | 1.39 | 2.61
timm_efficientnet | 64 | pass | | 1.40 | 1.67 | 2.23
timm_nfnet | 128 | pass | | 1.32 | 1.33 | 1.27
timm_regnet | 32 | pass | | 1.35 | 1.36 | 1.55
timm_resnest | 32 | pass | | 1.49 | 1.51 | 1.80
timm_vision_transformer | 32 | pass | | 1.16 | 1.15 | 1.29
timm_vision_transformer_large | 32 | pass_due_to_skip | | 1.15 | 1.19 | 1.16
timm_vovnet | 32 | pass | | 1.12 | 1.13 | 1.43
vgg16 | 4 | pass | | 1.50 | 0.70 | 1.42
vision_maskrcnn | 1 | fail_accuracy | | 1.26 | 1.04 | 1.29
yolov3 | 8 | pass | | 1.42 | 1.41 | 1.53
**Geometric Mean Speedup:** | | 56/68 | | 1.13 | 0.65 | 1.29
</p>
</details>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 5 |
1,922 | 105,572 |
Add color-coding to fx graph readable printouts :)
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,923 | 105,570 |
Using scans
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,924 | 105,569 |
Lowering topk to reductions and pointwise when k is small
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,925 | 105,568 |
Move Inductor-specific decompositions to general decomposition registrations.
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,926 | 105,567 |
replication_pad1d
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
1,927 | 105,566 |
Reflection_pad1d
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,928 | 105,562 |
aten.multilabel_margin_loss_backward
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,929 | 105,561 |
aten._cdist_backward
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,930 | 105,560 |
aten._trilinear
|
triaged, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,931 | 105,556 |
aten._cdist_forward
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,932 | 105,555 |
Avoid calling AOTAutograd from AOTInductor, since Export has already done that
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
1,933 | 105,554 |
[easy] Add an option to force recompilation
|
triaged, hackathon, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,934 | 105,548 |
torch.sparse.sampled_addmm doesn't compute gradients for 3D tensors
|
module: sparse, triaged
|
### 🐛 Describe the bug
Hi, I hope you don't mind me raising so many issues 😄
`torch.sparse.sampled_addmm` works fine in the forward phase for both 2D and 3D tensors. However, in the backward pass it fails for 3D tensors and throws a cryptic error message.
```
import torch
B, N, D, p = 4, 100, 30, 0.01
M1 = torch.randn(B, N, D).cuda().requires_grad_(True)
M2 = torch.randn(B, D, N).cuda()
mask = torch.bernoulli(p * torch.ones((N, N))).to_sparse_csr().cuda()
out = torch.sparse.sampled_addmm(mask, M1[0], M2[0])
out.to_dense()[0, 0].backward() # works
out = torch.sparse.sampled_addmm(mask, M1, M2)
out.to_dense()[0, 0, 0].backward() # doesn't work
```
I receive the following error:
```
RuntimeError: crow_indices is supposed to be a vector, but got 2 dimensional tensor.
```
### Versions
PyTorch version: 2.0.1+cu117
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 1 |
1,935 | 105,547 |
[MPS] Lerp tensor implementation
|
triaged, open source, release notes: mps, ciflow/mps
|
Related to #105470
I am working on improving lerp tensor implementation in this pull request.
| 7 |
1,936 | 105,539 |
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::index_add_. Please file us an issue on GitHub so that we can prioritize its implementation.
|
triaged, actionable, module: vmap, module: functorch
|
### 🚀 The feature, motivation and pitch
I'm working on a system that requires in-place updates to a state inside of a vmap (think the Performer model with multiple heads and sparse updates/matmul). This used to not work; as of 2.0.1 it does, but I get this warning:
```
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::index_add_. Please file us an issue on GitHub so that we can prioritize its implementation.
```
This occurred on an M2 MacBook; I'll test later whether it also occurs on a CUDA device. It would be nice to have indexed in-place updates fully supported in a future update.
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
1,937 | 105,535 |
[POC] DynamicTensor
|
release notes: fx
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105535
DynamicTensor is a regular CPU/CUDA tensor, but where some dimensions are treated symbolically. If you have an (s0, 2) dynamic tensor, it is intended to vary dynamically over s0. This size propagates over operations, and always behaves uniformly, even if s0 = 0, 1 (so, for example, you cannot broadcast (s0, 2) with (5, 2), even when s0 == 1).
More discussion at https://docs.google.com/document/d/1Dge173HVbXnTysnvp8716mi_0BlEpJZbmhXbI_-WdqI/edit#heading=h.ey3z8v3k3z06
The implementation strategy is to subclass FakeTensor into DynamicTensor, but then embed a real backing tensor which also gets operated on. The implementation does some naughty things; I'm putting the POC out there to get some comments.
TODO: When a DynamicTensor has no more symbolic dimensions, decay it into a regular tensor.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| 4 |
1,938 | 105,534 |
test_torchinductor_opinfo tracker
|
triaged, hackathon, oncall: pt2
|
https://github.com/pytorch/pytorch/blob/main/test/inductor/test_torchinductor_opinfo.py
### Single sample failures
- [x] "__getitem__": {b8, f16, f32, f64, i32, i64},
- [x] "__rdiv__": {b8, f16, f32, f64, i32, i64},
- [x] "addr": {f16},
- [x] "allclose": {f16, f32, f64},
- [x] "angle": {f32, f64},
- [x] "argwhere": {b8, f16, f32, f64, i32, i64},
- [ ] ("as_strided", "partial_views"): {b8, f16, f32, f64, i32, i64},
- [x] "baddbmm": {f16},
- [ ] "bernoulli": {f16, f32, f64},
- [x] "bincount": {i32, i64},
- [x] "bucketize": {b8, f16, f32, f64, i32, i64}, likely related to @davidberard98 / @aakhundov 's PRs
- [ ] "cholesky": {f32, f64},
- [x] "combinations": {b8, f16, f32, f64, i32, i64},
- [x] "corrcoef": {f16, f32, f64, i32, i64},
- [x] "cov": {f16, f32, f64, i32, i64},
- [x] "equal": {b8, f16, f32, f64, i32, i64},
- [x] "index_reduce": {f16, f32, f64}, @davidberard98
- [x] "istft": {f32, f64},
# Unsupported: data dependent operator: aten._local_scalar_dense.default
- [x] "item": {b8, f16, f32, f64, i32, i64}, https://github.com/pytorch/pytorch/pull/105480
- [x] "linalg.eig": {f32, f64},
- [x] "linalg.eigh": {f32, f64},
- [x] "linalg.eigvals": {f32, f64},
- [x] "linalg.eigvalsh": {f32, f64},
- [x] "linalg.householder_product": {f32, f64},
- [x] "linalg.lstsq": {f32, f64},
- [x] ("linalg.lstsq", "grad_oriented"): {f32, f64},
- [ ] "masked_scatter": {f16, f32, f64}, (@int3)
- [x] "masked_select": {b8, f16, f32, f64, i32, i64},
- [x] ("max", "reduction_with_dim"): {b8}, https://github.com/pytorch/pytorch/pull/109264
- [x] ("min", "reduction_with_dim"): {b8}, https://github.com/pytorch/pytorch/pull/109264
- [ ] "multinomial": {f16, f32, f64}, (@int3) -- needs test RNG issues to be fixed first
- [x] "nn.functional.adaptive_avg_pool2d": {f16},
- [x] "nn.functional.ctc_loss": {f32, f64},
- [x] "nn.functional.grid_sample": {f16},
- [x] "grid_sampler_2d": {f16},
- [x] "nn.functional.gaussian_nll_loss": {f16, f32, f64},
- [x] "nn.functional.one_hot": {i64},
- [ ] "nn.functional.rrelu": {f16, f32, f64}, (@masnesral)
- [ ] "nn.functional.triplet_margin_with_distance_loss": {f16, f32, f64, i32, i64},
- [x] "nonzero": {b8, f16, f32, f64, i32, i64},
- [ ] "normal": {f16, f32, f64},
- [ ] "normal", "number_mean": {f16, f32, f64},
- [x] "polar": {f32, f64},
- [ ] "rand_like": {f16, f32, f64},
- [ ] "randint_like": {f16, f32, f64, i32, i64},
- [ ] "randint": {f16, f32, f64, i32, i64},
- [ ] "randn_like": {f16, f32, f64},
- [x] "repeat_interleave": {b8, f16, f32, f64, i32, i64}, data-dependent output shape
- [x] ("round", "decimals_3"): {f16}, Internal upcast in inductor causes different results
- [x] ("scatter_reduce", "prod"): {f16, f32, f64}, -> see index_reduce, same issue
- [x] ("_segment_reduce", "lengths"): {f16, f32, f64}, https://github.com/pytorch/pytorch/pull/109359
- [ ] "sparse.sampled_addmm": {f32, f64},
- [x] ("std_mean", "unbiased"): {f16}, https://github.com/pytorch/pytorch/pull/109081
- [x] "stft": {f32, f64},
- [x] "tensor_split": {b8, f16, f32, f64, i32, i64},
- [ ] "to_sparse": {f16, f32, f64},
- [ ] "_upsample_bilinear2d_aa": {f16, f32, f64},
# AssertionError: Tensor-likes are not close!
- [ ] "atanh": {f32},
- [ ] "cauchy": {f16, f32, f64},
- [ ] "exponential": {f16, f32, f64},
- [ ] "geometric": {f16, f32, f64, i32, i64},
("normal", "in_place"): {f16, f32, f64},
- [ ] "log_normal": {f16, f32, f64},
- [x] "nanquantile": {f32, f64}, may be fixed by #109172
- [ ] "uniform": {f16, f32, f64},
- [x] "unique": {b8, f16, f32, f64, i32, i64},
- [x] "unique_consecutive": {b8, f16, f32, f64, i32, i64},
- [ ] "nn.functional.triplet_margin_loss": {f16},
- [x] "pca_lowrank": {f32, f64},
- [x] "svd_lowrank": {f32, f64},
- [x] "svd": {f32, f64},
# AssertionError: Scalars are not close!
- [x] "nn.functional.soft_margin_loss": {f16},
- [x] "fft.fft": {b8, f16, f32, f64, i32, i64},
- [x] "fft.fft2": {b8, f16, f32, f64, i32, i64},
- [x] "fft.fftn": {b8, f16, f32, f64, i32, i64},
- [x] "fft.hfft": {b8, f16, f32, f64, i32, i64},
- [x] "fft.hfft2": {b8, f16, f32, f64, i32, i64},
- [x] "fft.hfftn": {b8, f16, f32, f64, i32, i64},
- [x] "fft.ifft": {f16, f32, f64, b8, i32, i64},
- [x] "fft.ifft2": {b8, f16, f32, f64, i32, i64},
- [x] "fft.ifftn": {b8, f16, f32, f64, i32, i64},
- [x] "fft.ihfft": {f16, f32, f64, b8, i32, i64},
- [ ] "fft.ihfft2": {f16, f32, f64, b8, i32, i64},
- [ ] "fft.ihfftn": {f16, f32, f64, b8, i32, i64},
- [x] "fft.irfft": {b8, f16, f32, f64, i32, i64},
- [x] "fft.irfft2": {b8, f16, f32, f64, i32, i64},
- [x] "fft.irfftn": {b8, f16, f32, f64, i32, i64},
- [x] "fft.rfft": {f16, f32, f64, b8, i32, i64},
- [x] "fft.rfft2": {b8, f16, f32, f64, i32, i64},
- [x] "fft.rfftn": {b8, f16, f32, f64, i32, i64},
# These return complex tensors
- [x] "cdouble": {b8, i32, i64, f16, f32, f64},
- [x] "cfloat": {b8, i32, i64, f16, f32, f64},
- [x] "chalf": {b8, i32, i64, f16, f32, f64},
- [x] "complex": {f16, f32, f64},
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,939 | 105,532 |
tts_angular: fail_to_run, torch._dynamo.exc.Unsupported: call_method NNModuleVariable() flatten_parameters [] {}
|
triaged, oncall: pt2
|
Repro:
```
python benchmarks/dynamo/torchbench.py --accuracy --inference --bfloat16 --export-aot-inductor --disable-cudagraphs --device cuda --only tts_angular
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,940 | 105,530 |
convit_base: AssertionError: Mutating module attribute rel_indices during export.
|
triaged, oncall: pt2
|
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
1,941 | 105,529 |
Efficient BMM for sparse-dense tensors
|
module: sparse, triaged, topic: new features
|
### 🚀 The feature, motivation and pitch
Hi,
I want to perform a sparse-dense BMM and compute gradients for the sparse matrix. Is there an operation in torch which does it efficiently? According to [this table](https://pytorch.org/docs/stable/sparse.html#supported-operations), `torch.bmm` computes only the dense gradients. Also, it's limited to the COO layout.
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 2 |
1,942 | 105,528 |
DISABLED test_conv_with_as_strided_dynamic_shapes_cuda (__main__.DynamicShapesCudaTests)
|
module: rocm, triaged, module: flaky-tests, skipped
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_conv_with_as_strided_dynamic_shapes_cuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15120975791).
Over the past 72 hours, it has flakily failed in 6 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_conv_with_as_strided_dynamic_shapes_cuda`
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/inductor/test_torchinductor_dynamic_shapes.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 5 |
1,943 | 105,526 |
torch.onnx.export error
|
module: onnx, triaged
|
### 🐛 Describe the bug
While running this code:
```python
input_s = torch.zeros((1, 500), dtype=torch.long).to('cuda')
input_names = ["x"] # specify the name of the input tensor
output_names = ["out"] # specify the name of the output tensor
dynamic_axes = {"x":{0: "batch_size"}, "out":{0: "batch_size"}}
torch.onnx.export(model, input_s, 'model_test.onnx', input_names=input_names, output_names=output_names)
```
The following problem occurs. Is this a common warning, or an error? Do I need to change anything?
### Versions
UserWarning: The exported ONNX model failed ONNX shape inference.The model will not be executable by the ONNX Runtime.If this is unintended and you believe there is a bug,please report an issue at https://github.com/pytorch/pytorch/issues.Error reported by strict ONNX shape inference: [ShapeInferenceError] Shape inference error(s): (op_type:MaxPool, node name: /MaxPool): [ShapeInferenceError] Attribute strides has incorrect size
(op_type:MaxPool, node name: /MaxPool_1): [ShapeInferenceError] Attribute strides has incorrect size
(op_type:MaxPool, node name: /MaxPool_2): [ShapeInferenceError] Attribute strides has incorrect size
(Triggered internally at ../torch/csrc/jit/serialization/export.cpp:1407.)
_C._check_onnx_proto(proto)
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
| 1 |
1,944 | 105,520 |
[ONNX] Exporting the operator 'aten::exponential' to opset version 13 is not supported
|
module: onnx, triaged
|
### 🐛 Describe the bug
'aten::exponential' does not seem to be supported by ONNX export. One approach I know of is to create a custom symbolic function and register it, but it doesn't work.
### Versions
--2023-07-19 14:32:17-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21653 (21K) [text/plain]
Saving to: ‘collect_env.py’
collect_env.py 100%[===================================================================>] 21.15K --.-KB/s in 0.001s
2023-07-19 14:32:18 (22.8 MB/s) - ‘collect_env.py’ saved [21653/21653]
| 3 |
1,945 | 105,519 |
aten.bernoulli.p is missing in core aten IR opset but does not get decomposed
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
`aten.bernoulli.p` should either be decomposed or should be in core IR opset.
```python
import torch
from torch._functorch.aot_autograd import aot_module_simplified
from torch._decomp import core_aten_decompositions
decompositions = core_aten_decompositions()
def toy_backend(gm, sample_inputs):
def my_compiler(gm, sample_inputs):
gm.print_readable()
return gm
# Invoke AOTAutograd
return aot_module_simplified(
gm,
sample_inputs,
decompositions=decompositions,
fw_compiler=my_compiler
)
def run(input):
return torch.bernoulli(input, 0.5)
input = torch.randn(8, 32)
out = run(input)
print("EAGER OK")
fn = torch.compile(backend=toy_backend)(run)
out = fn(input)
```
produces the following graph
```
EAGER OK
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: f32[8, 32]):
# File: bug_bernoull.py:26, code: return torch.bernoulli(input, 0.5)
bernoulli: f32[8, 32] = torch.ops.aten.bernoulli.p(arg0_1, 0.5); arg0_1 = None
return (bernoulli,)
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230718+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 12
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2194.843
BogoMIPS: 4389.68
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 429 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.1.0.dev20230718+cpu
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,946 | 105,518 |
Avoid synchronization when using scalar tensor as index
|
Stale, module: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #105596
* __->__ #105518
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 4 |
1,947 | 105,515 |
[ONNX] FX produce valid node names in models
|
module: onnx, triaged
|
Currently we see names like `LayerNorm(L__self___embed_layer_norm)_14` for functions and `weight.output.1` for tensors. We need to make them valid C variable names to follow the ONNX standard: https://onnx.ai/onnx/repo-docs/IR.html#names-within-a-graph
> All names MUST adhere to [C90 identifier syntax rules](https://en.cppreference.com/w/c/language/identifier).
> Names of nodes, inputs, outputs, initializers, and attributes are organized into several namespaces. Within a namespace, each name MUST be unique for each given graph.
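For illustration, a minimal sketch of the kind of sanitization this implies (not the exporter's actual logic):
```python
import re

def to_c90_identifier(name: str, used: set) -> str:
    # Keep only letters, digits and underscores; identifiers must not start
    # with a digit. Uniqueness is enforced per graph namespace.
    candidate = re.sub(r"[^0-9A-Za-z_]", "_", name)
    if not candidate or candidate[0].isdigit():
        candidate = "_" + candidate
    unique, suffix = candidate, 1
    while unique in used:
        unique = f"{candidate}_{suffix}"
        suffix += 1
    used.add(unique)
    return unique

used_names: set = set()
print(to_c90_identifier("LayerNorm(L__self___embed_layer_norm)_14", used_names))
print(to_c90_identifier("weight.output.1", used_names))
```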
cc @BowenBao
| 4 |
1,948 | 105,508 |
[export] Serialize SymFloat
|
fb-exported, ciflow/inductor, module: export, release notes: export
|
Test Plan: CI
Differential Revision: D47571561
| 4 |
1,949 | 105,499 |
[FSDP] Revisit mixed-precision casting logic
|
triaged, module: fsdp
|
**Overview**
We may want to revisit FSDP's behavior for casting forward input tensors and buffers. Our mixed precision API has options `param_dtype` ,`reduce_dtype`, `buffer_dtype`, `cast_forward_inputs `, and `cast_root_forward_inputs`:
https://github.com/pytorch/pytorch/blob/abc1cadddba00b7412240b56314d9592d7ad29c7/torch/distributed/fsdp/api.py#L211-L216
In addition, we allow using full precision in eval, configured by an environment variable:
https://github.com/pytorch/pytorch/blob/abc1cadddba00b7412240b56314d9592d7ad29c7/torch/distributed/fsdp/flat_param.py#L482-L484
After https://github.com/pytorch/pytorch/pull/104408, every FSDP state has either 0 or 1 handles (see https://github.com/pytorch/pytorch/pull/104488). Note that `any(...)` in Python returns `True` if the `...` is empty. This means that currently:
- For the case where `cast_root_forward_inputs=True` and the root does not have a handle, the `(module.training or not state._use_full_prec_in_eval)` is there to represent the `(not self._fully_sharded_module.training and self._use_full_prec_in_eval)` check, which cannot happen without a handle.
- [ ] We should be able to simplify this to `should_cast_forward_inputs = (module.training or not state._use_full_prec_in_eval) and state.mixed_precision.cast_root_forward_inputs` (removing the check on `not handle._force_full_precision`). However, this creates some asymmetry with the condition for `cast_forward_inputs=True` (`_pre_forward()`).
- For the case where `cast_forward_inputs=True`, the `len(state._handles) > 0` check avoids the need for something similar. If the FSDP instance does not manage any parameters (and hence has no handle), it never casts its forward inputs.
- [ ] Unlike `cast_root_forward_inputs=True`, if the module does not manage any parameters, then this will not cast. We should document this behavior explicitly.
- [ ] If the root does not have a handle, then it always casts buffers to full precision. This may not be the desired behavior, and we may want to add a `state._force_full_precision` clause as part of this check.
**Code Pointers**
`cast_root_forward_inputs=True` (`_root_pre_forward()`):
https://github.com/pytorch/pytorch/blob/91ab32e4b1f9e601cd42b7e9887b93a444c99dfb/torch/distributed/fsdp/_runtime_utils.py#L655-L658
`cast_forward_inputs=True` (`_pre_forward()`):
https://github.com/pytorch/pytorch/blob/91ab32e4b1f9e601cd42b7e9887b93a444c99dfb/torch/distributed/fsdp/_runtime_utils.py#L461-L463
(`_root_pre_forward()`):
https://github.com/pytorch/pytorch/blob/91ab32e4b1f9e601cd42b7e9887b93a444c99dfb/torch/distributed/fsdp/_runtime_utils.py#L587-L589
https://github.com/pytorch/pytorch/blob/abc1cadddba00b7412240b56314d9592d7ad29c7/torch/distributed/fsdp/flat_param.py#L2469-L2478
---
cc @zhaojuanmao @mrshenli @rohan-varma
| 0 |
1,950 | 105,488 |
torch.save throws an error when the path uses mixed separators on Windows
|
module: windows, triaged
|
### 🐛 Describe the bug
Using some combinations of \ and / as the path separator throws an exception. Windows prefers \ as the path separator, but also accepts /. torch.save throws an exception for paths that should be valid Windows paths.
Reproducible using the following example
``` python
import os
import torch
data = [{'1': 1}]
torch.save(data, "H:/a\\a.ckpt")
```
replacing the path can lead to a few different results:
`torch.save(data, "H:\\a\\a.ckpt")` -> works
`torch.save(data, "H:/a/a.ckpt")` -> works
`torch.save(data, "H:/a\\a.ckpt")` -> Throws "Parent directory H: does not exist."
`torch.save(data, "H:\\a/a.ckpt")` -> works
But now it gets really strange:
`torch.save(data, "H:/a.ckpt")` -> Throws "Parent directory H: does not exist.". The file is created, but remains empty
`torch.save(data, "H:\\a.ckpt")` -> Throws "Parent directory H: does not exist.". The file is created, but remains empty
Stack trace
```
...
File "H:\stable-diffusion\one-trainer\scripts\debug.py", line 17, in main
torch.save(data, "H:\\a.ckpt")
File "H:\stable-diffusion\one-trainer\venv\lib\site-packages\torch\serialization.py", line 440, in save
with _open_zipfile_writer(f) as opened_zipfile:
File "H:\stable-diffusion\one-trainer\venv\lib\site-packages\torch\serialization.py", line 315, in _open_zipfile_writer
return container(name_or_buffer)
File "H:\stable-diffusion\one-trainer\venv\lib\site-packages\torch\serialization.py", line 288, in __init__
super().__init__(torch._C.PyTorchFileWriter(str(name)))
RuntimeError: Parent directory H: does not exist.
```
### Versions
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000
Nvidia driver version: 535.98
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3600
DeviceID=CPU0
Family=205
L2CacheSize=1536
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3600
Name=Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.0.1+cu118
[pip3] torchmetrics==1.0.1
[pip3] torchvision==0.15.2+cu118
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 5 |
1,951 | 105,485 |
Specifying `FakeTensorMode` for Custom Backends
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
When specifying the `FakeTensorMode` to be used for custom backend implementations, the utility [`fake_tensor_unsupported`](https://github.com/pytorch/pytorch/blob/6ca3d7e1a245934279b784eb6eef5a13cfd5755e/torch/_dynamo/backends/common.py#L85-L97) is very useful for indicating that _no_ fake tensors should be allowed, but I could not find similar utilities for specifying custom fake modes, for instance `FakeTensorMode(allow_non_fake_inputs=True)`.
An attempt was made to set the fake mode directly in the tracing context upon entry into the backend function (see Minified repro), however this causes an error upon completion of the compilation. What is the recommended way to set the `FakeTensorMode` for a custom backend?
### Error logs
```python
File "~/demo.py", line 167, in compile
return torch_compile(
File "~/demo.py", line 188, in torch_compile
model(*torch_inputs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1505, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1514, in _call_impl
return forward_call(*args, **kwargs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 294, in _fn
return fn(*args, **kwargs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1505, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1514, in _call_impl
return forward_call(*args, **kwargs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 447, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 531, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 127, in _fn
return fn(*args, **kwargs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert
return _compile(
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _compile
raise InternalTorchDynamoError(str(e)).with_traceback(e.__traceback__) from None
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 478, in _compile
check_fn = CheckFunctionManager(
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 871, in __init__
guard.create(local_builder, global_builder)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_guards.py", line 215, in create
return self.create_fn(self.source.select(local_builder, global_builder), self)
File "~/python_virtual_environments/torch_trt_venv/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 548, in SHAPE_ENV
guards = output_graph.shape_env.produce_guards(
torch._dynamo.exc.InternalTorchDynamoError: 'NoneType' object has no attribute 'produce_guards'
```
### Minified repro
Below is a demo of how the `FakeTensorMode` was set in the backend.
```python
fake_mode = FakeTensorMode(allow_non_fake_inputs=True)
@torch._dynamo.register_backend(name="custom_backend")
def my_custom_backend(
gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs
):
from torch._guards import TracingContext
TracingContext.get().fake_mode = fake_mode
return aot_module_simplified(
gm,
sample_inputs,
fw_compiler=make_boxed_compiler(backend_impl),
)
```
### Versions
**Relevant Versions**
```bash
torch == 2.1.0.dev20230703+cu121
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 15 |
1,952 | 105,483 |
[OpInfo] index.Tensor
|
triaged, module: testing
|
Create opinfo for index.Tensor. E.g. https://github.com/microsoft/onnxscript/pull/883
| 0 |
1,953 | 105,471 |
[benchmark] Rename the count field FunctionCount
|
oncall: profiler
|
The `count` field in `FunctionCount`, defined in `torch/utils/benchmark/utils/valgrind_wrapper/timer_interface.py`, shadows the built-in `count` member of tuple. Consider renaming it.
> Hmm, do you mind renaming it to something else? How about `counter`? or `invocation_count` (though strictly speaking out of scope of this PR)
_Originally posted by @malfet in https://github.com/pytorch/pytorch/pull/105424#discussion_r1266832538_
cc @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
| 0 |
1,954 | 105,465 |
[proposal] Bit ops: e.g. setbit/getbit/togglebit/byteswap + introduce well-standardized unsigned dtypes (uint16, uint32, uint64)
|
feature, triaged, needs research, module: python frontend
|
### 🐛 Describe the bug
Some functions are reversible/invertible (e.g. https://pytorch.org/docs/stable/generated/torch.nn.SiLU.html) if we record one more bit of information: whether the input is larger than some cut-off value.
Of course one can save a bitmask for backward. Another funny/hacky way to do the same would be to record this information in the LSB mantissa bit. Currently there exists a single such op: torch.copysign_ (which is also not very flexible, as it can't accept BoolTensors now).
According to https://stackoverflow.com/a/47990/445810, forcing a certain bit can be done by (but with a lot of allocations):
```python
import torch
torch.manual_seed(1)
a = torch.rand(4, 3, dtype = torch.float32)
sign = a.ge(0.5).to(torch.int32)
a_, a_repr = a.clone(), a.view(torch.int32)
which_bit = 0
a_repr ^= (-sign ^ a_repr) & (1 << which_bit)
# for lsb : a_repr ^= ((-sign ^ a_repr) & 1)
print(a, a_, a == a_)
```
It would be good to maybe support more of these bitops as native ops. Ideally, Inductor would generate efficient implementations of these ops and fuse them with the rest of the computation, but a clear API would be great.
Also, it might be good to support torch.uint32 for well-defined bit ops. I think for int32 some bitop results are not well defined in C++, so at least for bit manipulations being able to clearly express uint32 might be useful. Existing issue: https://github.com/pytorch/pytorch/issues/58734
Related: https://github.com/pytorch/pytorch/issues/32867 on supporting BitTensor natively (and especially as outcome for boolean ops like torch.ge)
### Versions
N/A
cc @albanD
| 2 |
1,955 | 105,464 |
[ONNX] Support Fake Tensor Mode on new Dynamo based ONNX exporter
|
module: onnx, triaged, enhancement, release notes: onnx
|
### 🐛 Describe the bug
This task is an umbrella for all tasks related to exporting a model to ONNX using the new PyTorch Dynamo API with Fake Tensor support.
Idea for alternative API design: https://github.com/pytorch/pytorch/issues/104144
### Versions
PyTorch main branch
```[tasklist]
### Tasks
- [ ] Revisit serialization of models with Fake Tensor support
- [ ] https://github.com/pytorch/pytorch/issues/105467
- [ ] Support large model export without special hardware (e.g. A100)
- [x] Create public API for ONNX export with Fake Tensor support
- [ ] Address mix of fake and real tensor as reported by https://github.com/pytorch/pytorch/issues/105077
- [ ] https://github.com/pytorch/pytorch/issues/105490
- [ ] https://github.com/pytorch/pytorch/issues/105751
- [ ] Support Fake Mode with dynamic shapes natively
- [ ] https://github.com/pytorch/pytorch/issues/106412
```
| 0 |
1,956 | 105,460 |
Specify version
|
module: docs, triaged
|
### 📚 The doc issue
In many cases, I know that some functionality was only introduced in a recent PyTorch version. E.g. the use of `torch.device` as a context manager, or the function `torch.set_default_device`. I would expect that the documentation mentions in what version this was introduced, but this information is lacking.
### Suggest a potential alternative/fix
For every function, specify since what version it is available.
Also, similarly, if new arguments are added to a function, you could specify since what version the argument is available.
cc @svekars @carljparker
| 0 |
1,957 | 105,459 |
Adding documentation an diagram on code base
|
module: cpu, triaged, module: mkldnn, open source, module: amp (automated mixed precision), NNC, ciflow/trunk, release notes: quantization, topic: not user facing, ciflow/mps, module: inductor, module: dynamo, ciflow/inductor, module: export
|
Fixes #104962
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @mcarilli @ptrblck @leslie-fang-intel @EikanWang @voznesenskym @penguinwu @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 9 |
1,958 | 105,457 |
Top level Glossary for users (not contributers)
|
module: docs, triaged
|
### 📚 The doc issue
Related to [https://github.com/pytorch/pytorch/issues/104623](https://github.com/pytorch/pytorch/issues/104623), I propose a new top-level glossary that defines concepts a PyTorch developer is interested in - the rest of the documentation could then link to the standard definitions. It should start with definitions of:
- PyTorch
- the associated various repos
- the various packages
For example, as a newbie, I would have liked clear definitions of:
- frontend/backend
- device
- tables of capabilities for different backends
- what is pickleable and what is not.
I am happy to start this off with a starter set for people to review, then add to.
I know we have GLOSSARY.md - but this seems to be contributor-focused. I am not sure users need to know about the dispatcher, but maybe I am wrong.
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker
| 4 |
1,959 | 105,454 |
torch.onnx.export failed: torch.onnx.errors.SymbolicValueError: Unsupported: ONNX export of convolution for kernel of unknown shape
|
module: onnx, triaged
|
### 🐛 Describe the bug
torch.onnx.export fails with a custom autograd function.
If `quant` is returned directly in `symbolic`, torch.onnx.export succeeds.
```py
import torch
class FakeQuantizeFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, x, scale, axis):
return (x / scale).round().clamp(-127, 127) * scale
@staticmethod
def symbolic(g, x, scale, axis):
zero_point = g.op("Constant", value_t=torch.zeros(1, dtype=torch.int32))
quant = g.op("Horizon::QuantizeLinear", x, scale, zero_point, axis_i=axis).setType(x.type())
# return quant
dequant = g.op("Horizon::DeQuantizeLinear", quant, scale, zero_point, axis_i=axis).setType(x.type())
return dequant
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
def forward(self, x):
weight = torch.randn(1, 3, 1, 1)
weight = FakeQuantizeFunction.apply(weight, torch.ones(1), 0)
return torch.nn.functional.conv2d(input=x, weight=weight, bias=None)
with torch.no_grad():
net = Net()
net.eval()
x = torch.zeros(1, 3, 10, 10)
print(net(x))
onnx = torch.onnx.export(net, x, "test.onnx", verbose=True)
```
full traceback:
```sh
Torch IR graph at exception: graph(%0 : Float(1, 3, 10, 10, strides=[300, 100, 10, 1], requires_grad=0, device=cpu)):
%100 : int[] = prim::Constant[value=[1, 3, 1, 1]]()
%53 : NoneType = prim::Constant(), scope: __main__.Net::
%55 : Device = prim::Constant[value="cpu"](), scope: __main__.Net:: # test.py:23:0
%107 : Bool(device=cpu) = prim::Constant[value={0}](), scope: __main__.Net::
%weight.3 : Float(1, 3, 1, 1, strides=[3, 1, 1, 1], requires_grad=0, device=cpu) = aten::randn(%100, %53, %53, %55, %107), scope: __main__.Net:: # test.py:23:0
%102 : Float(1, strides=[1], requires_grad=0, device=cpu) = prim::Constant[value={1}]()
%65 : Float(1, 3, 1, 1, strides=[3, 1, 1, 1], requires_grad=0, device=cpu) = ^FakeQuantizeFunction[inplace=0, module="__main__"](0)(%weight.3, %102), scope: __main__.Net:: # /home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/autograd/function.py:506:0
block0(%weight : Float(1, 3, 1, 1, strides=[3, 1, 1, 1], requires_grad=0, device=cpu), %93 : Float(1, strides=[1], requires_grad=0, device=cpu)):
%94 : Float(1, 3, 1, 1, strides=[3, 1, 1, 1], requires_grad=0, device=cpu) = aten::div(%weight, %93), scope: __main__.Net:: # test.py:7:0
%95 : Float(1, 3, 1, 1, strides=[3, 1, 1, 1], requires_grad=0, device=cpu) = aten::round(%94), scope: __main__.Net:: # test.py:7:0
%108 : Long(device=cpu) = prim::Constant[value={-127}](), scope: __main__.Net::
%109 : Long(device=cpu) = prim::Constant[value={127}](), scope: __main__.Net::
%98 : Float(1, 3, 1, 1, strides=[3, 1, 1, 1], requires_grad=0, device=cpu) = aten::clamp(%95, %108, %109), scope: __main__.Net:: # test.py:7:0
%99 : Float(1, 3, 1, 1, strides=[3, 1, 1, 1], requires_grad=0, device=cpu) = aten::mul(%98, %93), scope: __main__.Net:: # test.py:7:0
-> (%99)
%103 : int[] = prim::Constant[value=[1, 1]]()
%104 : int[] = prim::Constant[value=[0, 0]]()
%110 : Long(device=cpu) = prim::Constant[value={1}](), scope: __main__.Net::
%111 : Bool(device=cpu) = prim::Constant[value={1}](), scope: __main__.Net::
%91 : Float(1, 1, 10, 10, strides=[100, 100, 10, 1], requires_grad=0, device=cpu) = aten::_convolution(%0, %65, %53, %103, %104, %103, %107, %104, %110, %107, %107, %111, %111), scope: __main__.Net:: # test.py:25:0
return (%91)
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "test.py", line 32, in <module>
onnx = torch.onnx.export(net, x, "test.onnx", verbose=True)
File "/home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
File "/home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/onnx/utils.py", line 665, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/onnx/utils.py", line 1891, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py", line 306, in wrapper
return fn(g, *args, **kwargs)
File "/home/users/yushu.gao/miniconda3/envs/torch20/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py", line 2451, in _convolution
raise errors.SymbolicValueError(
torch.onnx.errors.SymbolicValueError: Unsupported: ONNX export of convolution for kernel of unknown shape. [Caused by the value '0 defined in (%0 : Float(1, 3, 10, 10, strides=[300, 100, 10, 1], requires_grad=0, device=cpu) = prim::Param()
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'prim::Param'.]
Inputs:
Empty
Outputs:
#0: 0 defined in (%0 : Float(1, 3, 10, 10, strides=[300, 100, 10, 1], requires_grad=0, device=cpu) = prim::Param()
) (type 'Tensor')
```
### Versions
```sh
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.17
Python version: 3.8.17 (default, Jul 5 2023, 21:04:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.76
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3599.853
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==3.9.2
[pip3] flake8-polyfill==1.0.2
[pip3] horizon-plugin-pytorch==1.8.1.dev20230712+cu117.torch201.29fc3
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.0.1+cu117
[pip3] torchvision==0.15.2+cu117
[pip3] triton==2.0.0
[conda] horizon-plugin-pytorch 1.8.1.dev20230712+cu117.torch201.29fc3 dev_0 <develop>
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.0.1+cu117 pypi_0 pypi
[conda] torchvision 0.15.2+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
| 0 |
1,960 | 105,448 |
Will nn.unfold support non-4D-tensor input in future version?
|
module: nn, triaged, enhancement, actionable
|
### 🚀 The feature, motivation and pitch
For quite a few versions there has been the warning "Currently, only 4-D input tensors (batched image-like tensors) are supported". This operation is frequently needed, but a non-native implementation is slow. Perhaps you could consider adding support for "arbitrary spatial dimensions", as the docs say, in an upcoming version? Thanks!
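In the meantime, a workaround sketch with `Tensor.unfold` (which already works for arbitrary dimensions): note the channel/patch ordering may differ from nn.Unfold's exact convention, so this is illustrative rather than a drop-in replacement.
```python
import torch

# Extract k x k x k patches from a 5-D (N, C, D, H, W) tensor and flatten
# them into an nn.Unfold-style (N, C * k**3, L) layout.
x = torch.randn(2, 3, 8, 8, 8)
k, s = 2, 2
patches = x.unfold(2, k, s).unfold(3, k, s).unfold(4, k, s)   # (N, C, D', H', W', k, k, k)
N, C = x.shape[:2]
patches = patches.contiguous().view(N, C, -1, k ** 3)         # (N, C, L, k**3)
patches = patches.permute(0, 1, 3, 2).reshape(N, C * k ** 3, -1)
print(patches.shape)  # torch.Size([2, 24, 64])
```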
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
1,961 | 105,447 |
DISABLED test_cross_entropy_large_tensor_reduction_none_cuda (__main__.TestNNDeviceTypeCUDA)
|
module: nn, module: rocm, triaged, module: flaky-tests, skipped
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_cross_entropy_large_tensor_reduction_none_cuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15112704347).
Over the past 72 hours, it has flakily failed in 6 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_cross_entropy_large_tensor_reduction_none_cuda`
Test file path: `test_nn.py`
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
1,962 | 105,445 |
Silent Error of torch.fx.symbolic_trace when forward hooks are registered
|
triaged, module: fx
|
### 🐛 Describe the bug
When forward hooks are registered on the module, `torch.fx.symbolic_trace` silently ignores the computation they perform.
```python
from torch.nn.utils import spectral_norm
from torch import fx
from torch import nn
m = spectral_norm(nn.Linear(20, 40))
m.weight.data.zero_()
m.weight.data += 500
import torch
input = torch.ones(32, 20)
fx_model = torch.fx.symbolic_trace(m)
output1 = m(input)
output2 = fx_model(input)
print((output1 - output2).abs().max().item()) # 9999.2939453125
```
This is because `torch.fx.symbolic_trace` ignores forward hooks, as discussed in https://github.com/pytorch/vision/issues/5193 .
However, we should at least report an error or warning in such a case.
The fix is simple: raise an error or warning when hooks are detected.
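A minimal sketch of what such a guard could look like (a wrapper here, though the check would naturally live inside the tracer):
```python
import warnings
import torch.fx as fx
import torch.nn as nn

def symbolic_trace_checked(root: nn.Module, **kwargs) -> fx.GraphModule:
    # Warn if any submodule carries forward hooks or pre-hooks, since
    # symbolic_trace will silently ignore them.
    for name, mod in root.named_modules():
        if mod._forward_hooks or mod._forward_pre_hooks:
            warnings.warn(
                f"Module '{name or type(mod).__name__}' has forward hooks that "
                "torch.fx.symbolic_trace will not capture; the traced graph may "
                "differ from eager execution."
            )
    return fx.symbolic_trace(root, **kwargs)
```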
### Versions
It affects `torch.fx` in all versions.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 1 |
1,963 | 105,442 |
`vmap` causes unpredictable behavior when combined with `autocast`
|
triaged, module: vmap, module: amp (automated mixed precision), module: functorch
|
### 🐛 Describe the bug
Hi, when I use `autocast` I get numerical differences in the output of basic PyTorch operators (in the attached example, `nn.Linear`) depending on whether or not `vmap` is used. In the following minimal code example, we calculate the sum of the output of a linear layer. The summed output differs between standard PyTorch and vmap when autocast is enabled, but matches when autocast is disabled (in the "no autocast" scenario we use `bfloat16` values directly). Is this expected behavior or a bug?
Minimal code example:
```
import torch as ch
import copy
def make_functional_with_buffers(mod):
params_dict = dict(mod.named_parameters())
params_names = tuple(params_dict.keys())
params_values = tuple(params_dict.values())
stateless_mod = copy.deepcopy(mod)
stateless_mod.to('cuda')
def fmodel(new_params_values, x: ch.Tensor):
new_params_dict = {name: value for name, value in zip(params_names, new_params_values)}
return ch.func.functional_call(stateless_mod, (new_params_dict,), (x,))
return fmodel, params_values, params_dict
def test(enable_autocast=True):
ch.manual_seed(25)
x = ch.rand(1, 1024, 768, dtype=ch.bfloat16, device='cuda')
if enable_autocast:
linear_dtype = ch.float32
else:
linear_dtype = ch.bfloat16
Wqkv = ch.nn.Linear(768, 2304).to(device='cuda', dtype=linear_dtype)
Wqkv.weight.data = ch.randn(2304, 768, dtype=linear_dtype, device='cuda' )
Wqkv.bias.data = ch.randn(2304, dtype=linear_dtype, device='cuda')
fmodel, params_values, _ = make_functional_with_buffers(Wqkv)
def output_function(fmodel, weights, x_input):
outp = fmodel(weights, x_input)
return outp.sum()
vmap_output_fn = ch.func.vmap(output_function, in_dims=(None, None, 0))
print('With autocast?', enable_autocast)
with ch.cuda.amp.autocast(dtype=ch.bfloat16, enabled=enable_autocast):
print('> functorch:', vmap_output_fn(fmodel, params_values, x[None, ...]))
with ch.cuda.amp.autocast(dtype=ch.bfloat16, enabled=enable_autocast):
qkv = Wqkv(x)
print('> standard pytorch:', qkv.sum())
if __name__ == '__main__':
test(enable_autocast=True)
print('\n')
test(enable_autocast=False)
```
which outputs:
```
With autocast? True
> functorch: tensor([173219.2500], device='cuda:0', grad_fn=<SumBackward1>)
> standard pytorch: tensor(173103.8125, device='cuda:0', grad_fn=<SumBackward0>)
With autocast? False
> functorch: tensor([173056.], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SumBackward1>)
> standard pytorch: tensor(173056., device='cuda:0', dtype=torch.bfloat16, grad_fn=<SumBackward0>)
```
Please let me know if I can provide any more information that helps!
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230713+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
GPU 8: NVIDIA A100-PCIE-40GB
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5600.11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==2.1.0+3c400e7818
[pip3] torch==2.1.0.dev20230713+cu118
[pip3] torch-optimizer==0.3.0
[pip3] torchaudio==2.1.0.dev20230713+cu118
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.15.2
[pip3] torchvision==0.16.0.dev20230713+cu118
[pip3] triton==2.0.0
[pip3] triton-pre-mlir==2.0.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.4 pypi_0 pypi
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+3c400e7818 pypi_0 pypi
[conda] torch 2.1.0.dev20230713+cu118 pypi_0 pypi
[conda] torch-optimizer 0.3.0 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230713+cu118 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.16.0.dev20230713+cu118 pypi_0 pypi
[conda] triton-pre-mlir 2.0.0 pypi_0 pypi
cc @zou3519 @mcarilli @ptrblck @leslie-fang-intel @jgong5 @Chillee @samdow @kshitij12345 @janeyx99
| 1 |
1,964 | 105,382 |
Need support and testing for Adam optimizer for MPS
|
high priority, module: optimizer, triaged, enhancement, module: mps
|
### 🚀 The feature, motivation and pitch
Environment on a Mac M2:
```
Python3.10
torch 2.1.0.dev20230717
torchaudio 2.1.0.dev20230717
torchvision 0.15.2a0
```
I want to use the Adam optimizer to train my model and got the following error:
```
NotImplementedError: The operator 'aten::lerp.Scalar_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS
```
When I set `PYTORCH_ENABLE_MPS_FALLBACK=1`, training is considerably slower.
I'm testing the [tiny ViT model](https://github.com/UdbhavPrasad072300/Transformer-Implementations/blob/main/transformer_package/models/transformer.py#L406) with the MNIST dataset.
The details are as follows:
M2 chip takes about **2.4 minutes** on CPU with Adam for one epoch.
M2 chip takes about **2.0 minutes** on GPU with Adam for one epoch (with `PYTORCH_ENABLE_MPS_FALLBACK=1`).
M2 chip takes about **30 seconds** on GPU with SGD for one epoch.
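For reference, a minimal sketch of the failing step (the actual model is the tiny ViT linked above; this toy version only exists to show where the error is raised):
```python
import torch
import torch.nn as nn

device = torch.device("mps")          # assumes an Apple-silicon build with MPS available
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

loss = model(torch.randn(8, 10, device=device)).sum()
loss.backward()
optimizer.step()  # raises NotImplementedError for aten::lerp.Scalar_out (as above)
                  # unless PYTORCH_ENABLE_MPS_FALLBACK=1 is set
```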
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @kulinseth @malfet @DenisVieriu97 @razarmehr @abhudev @ezyang @gchanan @zou3519
| 10 |
1,965 | 105,379 |
FSDP loading with a partial state triggers KeyError
|
triaged, module: fsdp
|
### 🐛 Describe the bug
In fine-tuning cases, you might want to save a subset of your model to reduce the size of your checkpoints. This is particularly important when techniques such as LoRA are used with very large models.
The suggested way to do this is to filter the keys of the model's `state_dict`.
However, this seems to break FSDP loading:
```python
import os
import torch.cuda
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.distributed.checkpoint import FileSystemReader, load_state_dict, FileSystemWriter, save_state_dict
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.api import StateDictType
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.l1 = nn.Linear(100, 50, bias=False)
self.l2 = nn.Linear(50, 1, bias=False)
def work(rank):
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "1234"
dist.init_process_group("nccl", world_size=1, rank=rank)
torch.cuda.set_device(rank)
device = torch.device("cuda", rank)
model = MyModel().to(device)
model = FSDP(model)
path = "tmp/pytorch_debug_sharded"
with FSDP.state_dict_type(module=model, state_dict_type=StateDictType.SHARDED_STATE_DICT):
sd = model.state_dict()
print(list(sd))
# Trim off some layers
del sd["l2.weight"]
writer = FileSystemWriter(path=path, single_file_per_rank=True)
save_state_dict(sd, writer)
reader = FileSystemReader(path=path)
with FSDP.state_dict_type(module=model, state_dict_type=StateDictType.SHARDED_STATE_DICT):
holder_state = model.state_dict()
load_state_dict(holder_state, reader)
model.load_state_dict(holder_state)
print("good!")
def run():
mp.spawn(work, nprocs=1)
if __name__ == "__main__":
run()
```
```python
Process SpawnProcess-1:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/carmocca/git/lightning/kk.py", line 44, in work
load_state_dict(holder_state, reader)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 111, in load_state_dict
central_plan = distW.reduce_scatter("plan", local_step, global_step)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 200, in reduce_scatter
raise result
torch.distributed.checkpoint.api.CheckpointException: CheckpointException ranks:dict_keys([0])
Traceback (most recent call last): (RANK 0)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 173, in reduce_scatter
local_data = map_fun()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 101, in local_step
local_plan = planner.create_local_plan()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py", line 199, in create_local_plan
return create_default_local_load_plan(self.state_dict, self.metadata)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py", line 255, in create_default_local_load_plan
md = metadata.state_dict_metadata[fqn]
KeyError: 'l2.weight'
Traceback (most recent call last):
File "/home/carmocca/git/lightning/kk.py", line 55, in <module>
run()
File "/home/carmocca/git/lightning/kk.py", line 51, in run
mp.spawn(work, nprocs=1)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 149, in join
raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with exit code 1
```
A related feature request of mine is https://github.com/pytorch/pytorch/issues/103136 where I asked if FSDP could be made to work if the model didn't include all layers in the `state_dict`
### Versions
```sh
torch 2.1.0.dev20230616+cu118
```
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 1 |
1,966 | 105,365 |
Quadric Layer
|
feature, module: nn, triaged, needs research
|
### 🚀 The feature, motivation and pitch
Introducing a layer with second-order (quadric) hypersurface separability, which significantly reduces model size at the same performance, even before utilizing sparsity/quantization. This approach can be used everywhere as a drop-in replacement for a Linear layer, but with substantially reduced size.
This paradigm is based on my research:
https://www.researchgate.net/publication/221582251_Using_Quadratic_Perceptrons_to_Reduce_Interconnection_Density_in_Multilayer_Neural_Networks
There is also other research on higher-order neurons in the field, although it came later as far as I know.
I have further explained the paradigm in my GitHub repo : https://github.com/diro5t/deep_quadric_learning
In this repo there are further examples of reducing model size in concrete applications, for a single quadric neuron as well as for quadric layers, demonstrated on the MNIST dataset in PyTorch and in tinygrad.
The proposed implementation can be seen in my fork https://github.com/diro5t/pytorch in the torch.nn.modules.linear.py.
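For readers who just want the gist, here is a rough illustrative sketch of a quadric layer (my simplified rendering with a diagonal quadratic term, not the actual implementation in the fork above):
```python
import torch
import torch.nn as nn

class QuadricLinearSketch(nn.Module):
    """Illustrative only: a linear layer with an extra per-input quadratic term,
    giving each output unit a second-order (quadric) decision surface."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.quadratic = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # w·x + v·x² + b  (diagonal quadratic form, so only n extra weights per unit)
        return x @ self.linear.T + (x * x) @ self.quadratic.T + self.bias
```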
This feature is also on the PyTorch 2.1 feature list https://docs.google.com/spreadsheets/d/1TzGkWuUMF1yTe88adz1dt2mzbIsZLd3PBasy588VWgk/edit#gid=2032684683
### Alternatives
_No response_
### Additional context
comment regarding this feature:
@[dirk.roeckmann@fivetroop.com](mailto:dirk.roeckmann@fivetroop.com): Please first create an issue (feature request, here: https://github.com/pytorch/pytorch/issues/new/choose) against pytorch/pytorch with this feature description. This issue needs to be accepted by the pytorch maintainers in order to be considered.
cc @[albandes@meta.com](mailto:albandes@meta.com) @[nshulga@meta.com](mailto:nshulga@meta.com)
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 6 |
1,967 | 105,363 |
[pytorch][codev] add current test finding logic to find_matching_merge_rule
|
fb-exported, Stale, topic: not user facing
|
Summary:
Currently we enable skipping pytorch ci tests from codev, which is analogous to "pytorchbot merge -f". I think it'd be reasonable to add functionality analogous to "pytorchbot merge -i", where we only skip currently failing tests. The easiest way to do this is to have a boolean flag to trigger in find_matching_merge_rule and put the logic to find the currently failing tests there.
#FACEBOOK
D47485107 is the diff that adds a pytorch ci skipping label.
Test Plan: testing for trymerge should cover this change as it's just a refactor.
Differential Revision: D47530806
| 4 |
1,968 | 105,358 |
Set dir for aot_inductor output files
|
triaged, open source, fb-exported, module: inductor, ciflow/inductor
|
Summary: Generate a random hash as the directory name for aot_inductor; all aot_inductor output files should be written into this hashed directory. This enables the merge net predictor file package.
Differential Revision: D47487758
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 5 |
1,969 | 105,349 |
torch.onnx.export does not support divisor_override in AvgPool2d
|
module: onnx, triaged
|
```
import torch
# export works fine if divisor_override=None, but fails when divisor_override is integer value
model = torch.nn.AvgPool2d(kernel_size=5, stride=2, padding=1, divisor_override=3)
model.eval()
rand_inp = torch.randn(1, 3, 8, 8)
torch.onnx.export(model, rand_inp, "AvgPool2dModel.onnx", verbose=True)
```
The above code fails during export with the error “torch/onnx/symbolic_helper.py:243: UserWarning: ONNX export failed on avg_pool2d because divisor_override not supported”.
Please add support for export of AvgPool2d with an integer divisor_override value.
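For reference, `divisor_override` simply replaces the averaging divisor (normally the kernel area) with the given integer; a quick numerical check of that relationship (no padding, so the kernel area is a constant 4):
```python
import torch

x = torch.randn(1, 3, 8, 8)
pool_override = torch.nn.AvgPool2d(kernel_size=2, stride=2, divisor_override=3)
pool_default = torch.nn.AvgPool2d(kernel_size=2, stride=2)

# default divides each window sum by the kernel area (4); divisor_override divides by 3
assert torch.allclose(pool_override(x), pool_default(x) * 4 / 3)
```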
### Versions
Torch version: 1.9.1
| 4 |
1,970 | 105,348 |
FSDP Full Shard compatibility with BF16 AMP
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
Hi all,
Per conversation with @Chillee, I am opening this issue.
I was wondering if there were any potential compatibility issues when using FSDP Full Shard in conjunction with BF16 AMP during training?
I do understand that different Mixed Precision configurations can be selected through:
```python
precision = MixedPrecision(
param_dtype=torch.float32,
# Gradient communication precision.
reduce_dtype=torch.bfloat16,
# Buffer precision.
buffer_dtype=torch.bfloat16,
)
```
But I am not sure if these predefined configurations will directly conflict with BF16 AMP:
```python
with autocast(dtype=torch.bfloat16):
```
Or if they are even compatible at all?
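To make the question concrete, this is roughly the combination I mean (toy module, assuming the process group is already initialized; I'm not claiming this is the supported pattern):
```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

# assumes torch.distributed.init_process_group(...) has already been called
precision = MixedPrecision(
    param_dtype=torch.float32,
    reduce_dtype=torch.bfloat16,
    buffer_dtype=torch.bfloat16,
)
model = FSDP(nn.Linear(16, 16).cuda(), mixed_precision=precision)

with autocast(dtype=torch.bfloat16):
    loss = model(torch.randn(4, 16, device="cuda")).sum()
loss.backward()
```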
I greatly appreciate your help.
Thank you,
Enrico
### Versions
PyTorch - Stable (2.0.1) CUDA 11.8
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 6 |
1,971 | 105,338 |
[ONNX] Refactor `test_fx_op_consistency.py`
|
module: onnx, triaged, onnx-triaged
|
`test_fx_op_consistency.py` references `ops_test.py` in onnx-script. However, the onnx-script tests have been refactored to include all dtype tests and prims tests. A corresponding refactor of `test_fx_op_consistency.py` is needed.
```[tasklist]
### Tasks
- [ ] Use OpInfo rtol/atol
- [ ] Add coverage
- [ ] Prims
```
| 0 |
1,972 | 105,335 |
Enable SLEEF on ARM
|
module: build, triaged, module: sleef, module: arm, topic: improvements
|
### 🚀 The feature, motivation and pitch
SLEEF implementations of [functions](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cpu/vec/vec256/vec256_float_neon.h#L391-L410) (like exp, erf, etc) are much faster than their corresponding STD implementations. Currently, on Intel, the SLEEF implementation is the [default](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cpu/vec/vec256/vec256_float.h) one for many of these functions, but not on ARM.
On ARM, it is controlled by the flag `AT_BUILD_ARM_VEC256_WITH_SLEEF`. It seems like this flag is controlled by the `USE_SLEEF_FOR_ARM_VEC256` [option](https://github.com/pytorch/pytorch/blob/e5f5bcf6d4ec022558caf4d0611d928497394a88/CMakeLists.txt#L278) in CMakeLists. But setting that option to ON and building PyTorch did not set the `AT_BUILD_ARM_VEC256_WITH_SLEEF` flag, and the SLEEF code was still not being executed.
I'd like to understand what the correct way to enable SLEEF on ARM is, and whether there is any issue with enabling it by default (as on Intel).
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere
| 4 |
1,973 | 105,332 |
DISABLED test_super_resolution_cuda (__main__.TestModels)
|
oncall: jit, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_super_resolution_cuda&suite=TestModels) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15097403578).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_super_resolution_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 136 |
1,974 | 105,329 |
Softmax doesn't support sparse tensors with the CSR layout
|
module: sparse, triaged, topic: new features
|
### 🚀 The feature, motivation and pitch
Hi,
both `torch.softmax` and `torch.sparse.softmax` don't support the CSR format.
Namely, when I try to apply either softmax to a `torch.sparse_csr` tensor, I receive the following error:
``RuntimeError: unsupported tensor layout: SparseCsr``
Would it be possible to start supporting this format?
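A minimal repro sketch (a small dense matrix converted to CSR):
```python
import torch

x = torch.tensor([[0.0, 1.0, 0.0],
                  [2.0, 0.0, 3.0]]).to_sparse_csr()
torch.sparse.softmax(x, dim=1)  # RuntimeError: unsupported tensor layout: SparseCsr
# torch.softmax(x, dim=1) fails the same way
```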
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 17 |
1,975 | 105,328 |
TorchInductor Hack-a-Day on July 19th
|
triaged, tracker
|
### Better Engineering
- [x] #105571
- [ ] #105572
### Unit test
- [ ] skipIfTorchInductor tracker: https://github.com/pytorch/pytorch/issues/102207
- [ ] #105534
### AOTInductor
- [x] #105552
- [x] #105553
- [ ] #105554
- [ ] Support rand fallback, https://github.com/pytorch/pytorch/issues/103415
- [ ] #105555
### Decompositions (write decomp, verify that Inductor has good performance on them and fuses as expected):
- [ ] #105556
- [x] #105557
- [x] #105558
- [x] #105559
- [ ] #105560
- [ ] #105561
- [ ] #105562
- [x] #105563
- [x] #105564
- [x] #105565
- [ ] #105566
- [ ] #105567
- [ ] #105568
### Decomposition optimizations
- [ ] Lowering matmuls to pointwise operators when they’re small or bandwidth bound (https://github.com/pytorch/pytorch/issues/103313)
- [ ] #105569
### Lowering:
- [ ] #105570
- [ ] aten.grid_sampler_2d_backward (investigate using decomposition for backward as well)
- [ ] aten.avg_pool3d
### Performance:
- [ ] Improve concat fusion with matmuls, https://github.com/pytorch/pytorch/issues/102804
- [ ] Do smarter layout planning with concatenate, https://github.com/pytorch/pytorch/issues/102805
- [ ] Do concatenate copies with a foreach kernel if applicable, https://github.com/pytorch/pytorch/issues/103475
- [ ] Pattern-match operators into foreach kernels
- [ ] Stop zero-ing out non-differentiable outputs in AOTAutograd, https://github.com/pytorch/pytorch/issues/104272
### Compilation time
- [ ] `will_fusion_create_cycle` takes a long time , https://github.com/pytorch/pytorch/issues/98467
| 0 |
1,976 | 105,326 |
Can't vmap over torch.tensor constructor
|
triaged, module: functorch
|
### 🐛 Describe the bug
Initially reported by @mra-h over [here](https://github.com/pytorch/pytorch/issues/102109#issuecomment-1634596330)
```py
import torch

def skew_matrix(w):
    # build the 3x3 skew-symmetric matrix of a 3-vector
    return torch.tensor([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])
points = torch.rand((1024, 3))
fn = torch.vmap(skew_matrix, in_dims=(0))
fn(points)
```
### Versions
main
cc @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
1,977 | 105,325 |
Padded tensor subclass
|
feature, triaged, module: nestedtensor, tensor subclass
|
### 🐛 Describe the bug
Suppose you have a network which operates on dynamically sized inputs / has data dependent dynamism internally. Our default policy is to represent such a tensor as compactly as possible (e.g., with no padding) to minimize storage and FLOPs needed to operate on it.
However, in some situations, it could be profitable / cost free to pad out the tensor:
* If you are CUDA graphing with dynamic shapes and you know your maximum size, padding in the outermost (e.g., batch) dimension is effectively free, because the CUDA graph will require you to maintain memory equivalent to the maximum memory usage for your dynamic shapes. In fact, it is better than free, because ensuring you always allocate the same amount of memory every iteration ensures that you will use the same allocations; the allocator otherwise can make bad decisions in the name of "saving" memory (e.g., if you previously allocated a tensor out of a 10MB block, but this time you only need 5MB because you halved your sequence length, instead of serving the allocation out of the 10MB, it might allocate an *extra* 5MB to "save" the 10MB for later (even though it will never be used!)
* Increasing the size of tensors can improve the performance on kernels. @Chillee has a good explainer about this phenomenon in matmuls at https://twitter.com/cHHillee/status/1630274804795445248 In fact, @msaroufim and @Chillee tried to add this optimization directly to PyTorch but the post facto layout change was a bit hard to implement. Doing the layout change "early" with a tensor subclass should be easier to implement (albeit less automatic.) These improvements generalize beyond matmuls, although mostly for making sure your sizes are divisible by something nice. Fully automatic size increases here are a little difficult to do, because you have to know that later you're going to do a matmul, and you also have to know that you aren't losing all your gains from non-contiguous kernels. However, if you have a net where one of the input dimensions is dynamic, you can choose to bucket to reduce the number of CUDA graphs you need. That being said, if the dynamic dimension is batch size (or even sequence length but you have embeddings on the inner dimensions so there's no padding problems), you aren't going to get kernel perf wins.
Padded batch tensor for dynamic batch size is probably the easiest to implement to start, because you can use symbolic shape propagation rules to propagate batch dim and ensure they're properly padded (I can't think of a good way to make vmap do this.) Annoyance is avoiding wasted FLOPs by "adjusting down" the logical size.
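As a concrete strawman for the padded-batch-dimension case, here is roughly the bookkeeping such a subclass would wrap (the bucket sizes and helper are made up for illustration):
```python
import torch

BUCKETS = (8, 16, 32, 64)  # hypothetical bucket sizes for the batch dimension

def pad_batch(x: torch.Tensor) -> tuple[torch.Tensor, int]:
    """Pad dim 0 up to the next bucket; return the padded tensor and the logical batch size."""
    logical = x.shape[0]
    target = next(b for b in BUCKETS if b >= logical)  # assumes logical <= max(BUCKETS)
    padded = torch.zeros(target, *x.shape[1:], dtype=x.dtype, device=x.device)
    padded[:logical] = x
    return padded, logical

x = torch.randn(11, 128)
padded, logical = pad_batch(x)   # padded.shape == (16, 128)
out = padded @ torch.randn(128, 64)
out = out[:logical]              # "adjust down" to the logical size afterwards
```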
Related: https://github.com/pytorch/pytorch/issues/65156
Related: nested tensor
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @msaroufim @albanD
### Versions
main
| 9 |
1,978 | 105,322 |
DeadKernel when training GNN for Cora on MPS
|
triaged, module: mps
|
> With torch ver: 2.0.1 MPS is faster, using the original toy example.
> CPU: 16.08941249999998
> MPS: 3.2959765830000833
Hey I'm also using PyTorch 2.0
My config: Apple M2, 16 GB RAM
I'm trying to train a simple GNN on the Cora dataset (which is < 1 MB) in a Jupyter notebook. When my device is CPU, it runs quickly. When I use MPS, I get a dead kernel. Am I missing something while using MPS?
_Originally posted by @narenq7 in https://github.com/pytorch/pytorch/issues/77799#issuecomment-1635749861_
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
1,979 | 105,319 |
Implementation of torch.sparse.sampled_baddmm
|
module: sparse, triaged, topic: new features
|
### 🚀 The feature, motivation and pitch
Hi,
I would like to perform a batch matrix-matrix product with a per-sample mask.
It's similar to [torch.sparse.sampled_addmm](https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html); the only difference is that `input` would be a (b, m, n) sparse tensor in the CSR format, unless we could instead provide the masks as a list of b (m, n) tensors.
It might be blocked by https://github.com/pytorch/pytorch/issues/104193 though.
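For illustration, the semantics I'm after, written as a per-sample loop over the existing 2-D `torch.sparse.sampled_addmm` (shapes and masks are just for the example):
```python
import torch

b, m, k, n = 4, 8, 16, 8
mat1 = torch.randn(b, m, k)
mat2 = torch.randn(b, k, n)
# one (m, n) CSR mask per sample (same pattern here just to keep the example short);
# note that on some PyTorch versions sampled_addmm is CUDA-only
masks = [torch.eye(m, n).to_sparse_csr() for _ in range(b)]

# what a batched sampled_baddmm would compute in a single fused call
out = torch.stack([
    torch.sparse.sampled_addmm(masks[i], mat1[i], mat2[i]).to_dense()
    for i in range(b)
])
```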
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 13 |
1,980 | 105,318 |
[docs] torch.sigmoid to make clear equivalence relations to other sigmoid functions
|
module: docs, triaged, actionable
|
### 📚 The doc issue
Currently https://pytorch.org/docs/stable/generated/torch.sigmoid.html only points to https://pytorch.org/docs/stable/special.html?highlight=expit#torch.special.expit
Both do not link to https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html?highlight=sigmoid#torch.nn.Sigmoid or https://pytorch.org/docs/stable/generated/torch.Tensor.sigmoid.html?highlight=sigmoid#torch.Tensor.sigmoid
https://pytorch.org/docs/stable/special.html?highlight=expit#torch.special.expit (and other special functions) deserve their own `.html` pages, by the way (like all other functions). expit should state explicitly whether it is just an alias for torch.sigmoid and torch.nn.functional.sigmoid.
Ideally all these function variants should just say "See [some single variant]" (maybe torch.sigmoid? or torch.nn.Sigmoid, since it currently contains the function plot).
Currently there exist 4 functional variants (not counting quantized variants) and 1 module variant findable in the docs (see the quick check below):
- torch.sigmoid
- torch.Tensor.sigmoid
- torch.nn.functional.sigmoid
- torch.special.expit
- torch.nn.Sigmoid
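For what it's worth, a quick check that the variants above all compute the same values:
```python
import torch

x = torch.linspace(-3, 3, 7)
ref = torch.sigmoid(x)
assert torch.allclose(ref, torch.special.expit(x))
assert torch.allclose(ref, torch.nn.functional.sigmoid(x))
assert torch.allclose(ref, x.sigmoid())
assert torch.allclose(ref, torch.nn.Sigmoid()(x))
```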
As usual, search results also contain duplicates and the snippets are quite bad (it even finds something like `[FXE0004:fx-pass-convert-neg-to-sigmoid]` - I would say that Sphinx search should not index code examples/source code itself, only text) :(
<img width="305" alt="image" src="https://github.com/pytorch/pytorch/assets/1041752/adc1ff18-1ebf-45b4-be62-1ac6577d6b87">
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker
| 9 |
1,981 | 105,313 |
Failed to convert model that has LeakyReLU to ONNX
|
module: onnx, triaged
|
### 🐛 Describe the bug
I want to convert this model to ONNX.
```
class VGG(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(3, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.LeakyReLU(),
nn.Conv2d(64, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.LeakyReLU(),
nn.MaxPool2d(2, 2),
)
self.conv2 = nn.Sequential(
nn.Conv2d(64, 128, 3, stride=1, padding=1),
nn.BatchNorm2d(128),
nn.LeakyReLU(),
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.BatchNorm2d(128),
nn.LeakyReLU(),
nn.MaxPool2d(2, 2),
)
self.conv3 = nn.Sequential(
nn.Conv2d(128, 256, 3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.LeakyReLU(),
nn.Conv2d(256, 256, 3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.LeakyReLU(),
nn.Conv2d(256, 256, 3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.LeakyReLU(),
nn.MaxPool2d(2, 2),
)
self.conv4 = nn.Sequential(
nn.Conv2d(256, 512, 3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.LeakyReLU(),
nn.Conv2d(512, 512, 3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.LeakyReLU(),
nn.Conv2d(512, 512, 3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.LeakyReLU(),
nn.MaxPool2d(2, 2),
)
self.conv5 = nn.Sequential(
nn.Conv2d(512, 512, 3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.LeakyReLU(),
nn.Conv2d(512, 512, 3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.LeakyReLU(),
nn.Conv2d(512, 512, 3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.LeakyReLU(),
nn.MaxPool2d(2, 2),
)
self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
self.fc = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096), nn.LeakyReLU(True), nn.Linear(4096, 512)
)
def forward(self, x):
x = x.float()
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = self.conv4(x)
x = self.conv5(x)
x = self.avgpool(x)
x = x.view(-1, 512 * 7 * 7)
x = self.fc(x)
return x
```
The command I used:
```
torch.onnx.export(model, # model being run
x, # model input (or a tuple for multiple inputs)
"face_comparison.onnx", # where to save the model (can be a file or file-like object)
export_params=False, # store the trained parameter weights inside the model file
opset_version=16, # the ONNX version to export the model to
#do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
)
```
And I got this:
```
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::leaky_relu but it isn't a special case. Argument types: Tensor, bool,
Candidates:
aten::leaky_relu(Tensor self, Scalar negative_slope=0.01) -> Tensor
aten::leaky_relu.out(Tensor self, Scalar negative_slope=0.01, *, Tensor(a!) out) -> Tensor(a!)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.109+-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2199.998
BogoMIPS: 4399.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
```
| 0 |
1,982 | 105,311 |
Batching rule not implemented for aten::unsafe_chunk
|
triaged, actionable, module: vmap, module: functorch
|
### 🐛 Describe the bug
An LSTM combined with a batch Jacobian (via vmap) does not seem to support batched derivative computation with the new version of PyTorch.
When the output of an LSTM layer is used in a derivative computation (e.g. torch.func.jacrev), the forward step of the training process raises an error caused by the vmap function.
torch version: 2.1.0.dev20230716 (the previous version of PyTorch cannot be used due to some corrections added after https://github.com/pytorch/pytorch/issues/99413)
Here is a simple example to reproduce the error:
```python
import torch
from torch import nn
import torch.optim as optim
class MyNet(nn.Module) :
def __init__(self,n_input,n_layers,hiddden_size):
super(MyNet,self).__init__()
self.n_input = n_input
self.n_layers = n_layers
self.hidden_size = hiddden_size
# Layers
self.rnn = nn.LSTM(input_size = self.n_input, hidden_size = self.hidden_size,num_layers = self.n_layers)
self.layer_out = nn.Linear(self.hidden_size,1)
self.time_trace = torch.func.vmap(torch.trace)
def forward(self,x) :
self.jac = torch.func.vmap(torch.func.jacrev(self.RNN_forward))
der_out = torch.diagonal(self.jac(x),dim1=1,dim2=3)[:,0].transpose(2,3).transpose(1,2)
return der_out
def RNN_forward(self,x) :
output,_ = self.rnn(self.time_trace(x)[:,None])
return self.layer_out(output)
if __name__ == "__main__" :
batch = 10
seq_len = 12
Net = MyNet(1,4,20)
x = torch.rand(batch,seq_len,3,3)
x.requires_grad = True
y_truth = torch.rand(batch,seq_len,3,3)
y_pred = Net(x)
criterion = nn.MSELoss()
optimizer = optim.Adam(Net.parameters(), lr = 1e-3)
optimizer.zero_grad()
loss = criterion(y_pred,y_truth)
loss.backward()
optimizer.step()
```
The error :
```
/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/nn/modules/rnn.py:835: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::mkldnn_rnn_layer. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /opt/conda/conda-bld/pytorch_1689491447879/work/aten/src/ATen/functorch/BatchedFallback.cpp:82.)
result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
Traceback (most recent call last):
File "/home/npistenon/Documents/Multifidelity/Visco_elasticity_nonlinear/Issue/Pb_DerTorch3.py", line 36, in <module>
y_pred = Net(x)
^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/Multifidelity/Visco_elasticity_nonlinear/Issue/Pb_DerTorch3.py", line 20, in forward
der_out = torch.diagonal(self.jac(x),dim1=1,dim2=3)[:,0].transpose(2,3).transpose(1,2)
^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 436, in wrapped
return _flat_vmap(
^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 621, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 602, in wrapper_fn
flat_jacobians_per_input = compute_jacobian_stacked()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 533, in compute_jacobian_stacked
chunked_result = vmap(vjp_fn)(basis)
^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 436, in wrapped
return _flat_vmap(
^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 621, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 329, in wrapper
result = _autograd_grad(flat_primals_out, flat_diff_primals, flat_cotangents,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 117, in _autograd_grad
grad_inputs = torch.autograd.grad(diff_outputs, inputs, grad_outputs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/npistenon/Documents/anaconda3/envs/torchtest/lib/python3.11/site-packages/torch/autograd/__init__.py", line 319, in grad
result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Batching rule not implemented for aten::unsafe_chunk. We could not generate a fallback.
```
### Versions
PyTorch version: 2.1.0.dev20230716
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-11855M CPU @ 3.20GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 1
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0.dev20230716
[pip3] torchaudio==2.1.0.dev20230716
[pip3] torchvision==0.16.0.dev20230716
[conda] blas 1.0 mkl
[conda] brotlipy 0.7.0 py311h9bf148f_1002 pytorch-nightly
[conda] cffi 1.15.1 py311h9bf148f_3 pytorch-nightly
[conda] cpuonly 2.0 0 pytorch-nightly
[conda] cryptography 38.0.4 py311h46ebde7_0 pytorch-nightly
[conda] filelock 3.9.0 py311_0 pytorch-nightly
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py311h9bf148f_0 pytorch-nightly
[conda] mkl_fft 1.3.1 py311hc796f24_0 pytorch-nightly
[conda] mkl_random 1.2.2 py311hbba84a0_0 pytorch-nightly
[conda] mpmath 1.2.1 py311_0 pytorch-nightly
[conda] numpy 1.24.3 py311hc206e33_0
[conda] numpy-base 1.24.3 py311hfd5febd_0
[conda] pillow 9.3.0 py311h3fd9d12_2 pytorch-nightly
[conda] pysocks 1.7.1 py311_0 pytorch-nightly
[conda] pytorch 2.1.0.dev20230716 py3.11_cpu_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cpu pytorch-nightly
[conda] requests 2.28.1 py311_0 pytorch-nightly
[conda] torchaudio 2.1.0.dev20230716 py311_cpu pytorch-nightly
[conda] torchvision 0.16.0.dev20230716 py311_cpu pytorch-nightly
[conda] urllib3 1.26.14 py311_0 pytorch-nightly
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 1 |
1,983 | 105,304 |
Backward pass with sparse parameters results in error "Sparse division requires a scalar or zero-dim dense tensor divisor"
|
module: sparse, module: loss, module: optimizer, triaged
|
### 🐛 Describe the bug
I'm working on a simple model where the parameters are sparse and the inputs are dense. Let's take a simple regression network and define the parameters as sparse tensors as follows:
```python
class SparseLinear(nn.Module):
def __init__(self, in_features, out_features, rand=True, rate=0.1):
super().__init__()
w = torch.FloatTensor(in_features, out_features)
b = torch.zeros(out_features)
if rand:
w = torch.empty(in_features, out_features)
nn.init.sparse_(w, sparsity=rate, std=0.01)
w = w.to_sparse()
self.weight = nn.Parameter(data=w, requires_grad=True)
self.bias = nn.Parameter(data=b, requires_grad=True)
def forward(self, x):
return torch.sparse.addmm(self.bias, self.weight.T, x.T)
```
If we instantiate a network with an input layer of size 90, a single hidden layer with 100 features, and an output layer of size 1 (the YearPredictionMSD dataset is used here):
```python
class Regression(nn.Module):
def __init__(self, input_features:int, hidden_features:int, sparse:dict, output_features=1):
super().__init__()
self.nonlinearity = lambda x: F.relu(x, inplace=True)
self.fc1 = SparseLinear(input_features, hidden_features, rate=sparse['rate'], rand=sparse['random'])
self.fc2 = SparseLinear(hidden_features, output_features, rate=sparse['rate'], rand=sparse['random'])
def forward(self, x):
x = self.nonlinearity(self.fc1(x))
x = self.fc2(x)
return x
```
Using `nn.MSELoss()` as the loss, an exception occurs during the backward pass:
`Exception has occurred: RuntimeError
Sparse division requires a scalar or zero-dim dense tensor divisor (got shape [1, 1] for divisor)`
Not really sure what this is referring to. There was an [issue](https://discuss.pytorch.org/t/bug-in-backprop-with-sparse-tensors/175835) submitted where the target tensor is sparse (in my case, inputs and targets are dense and the weights are sparse); the NLL loss was used there, and the suggestion was to re-write the loss with its own backward pass to avoid a division by 0.
However, in my case with the MSE loss, this is not an issue (I've also tested it with a custom loss and extended `torch.autograd.Function` with a backward pass, but the same error still occurs).
Is there any possibility of a workaround to have the loss propagated through sparse weights? Should I be using hooks?
Also, any suggestions on how to trace where the error is occurring? The only message I get is at the `loss.backward()` call, without any indication of where within the backward pass the error occurs.
Any suggestion would be helpful!
UPDATE: when the L2 regularization is removed, this issue is no longer encountered; possibly the backward of `w.norm` does not support sparse parameters. However, a new error is encountered:
```Exception has occurred: NotImplementedError
Could not run 'aten::_foreach_mul_.Scalar' with arguments from the 'SparseCUDA' backend. This could be because the
operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build).
If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions.
'aten::_foreach_mul_.Scalar' is only available for these backends: [CPU, CUDA, BackendSelect, Python,
FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU,
AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE,
AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3,
AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode,
FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/build/aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
CUDA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/build/aten/src/ATen/RegisterCUDA.cpp:43986 [kernel]
BackendSelect: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/build/aten/src/ATen/RegisterFunctionalization_1.cpp:23013 [kernel]
Named: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradCPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradCUDA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradHIP: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradXLA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMPS: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradIPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradXPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradHPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradVE: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradLazy: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMeta: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMTIA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse1: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse2: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse3: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradNestedTensor: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
Tracer: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/TraceType_3.cpp:14198 [kernel]
AutocastCPU: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
File "/home/zeinab/scripts/Regression/train.py", line 192, in train_single_fold
optimizer.step()
File "/home/zeinab/scripts/Regression/train.py", line 312, in <module>
train_single_fold(network, data_loader, test_loader, optimizers=optimizers, full_log=full_log,
NotImplementedError: Could not run 'aten::_foreach_mul_.Scalar' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_foreach_mul_.Scalar' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/build/aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
CUDA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/build/aten/src/ATen/RegisterCUDA.cpp:43986 [kernel]
BackendSelect: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/build/aten/src/ATen/RegisterFunctionalization_1.cpp:23013 [kernel]
Named: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradCPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradCUDA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradHIP: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradXLA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMPS: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradIPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradXPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradHPU: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradVE: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradLazy: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMeta: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMTIA: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse1: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse2: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse3: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradNestedTensor: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
Tracer: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/torch/csrc/autograd/generated/TraceType_3.cpp:14198 [kernel]
AutocastCPU: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4)
Clang version: 15.0.7 (Red Hat 15.0.7-2.el9)
CMake version: version 3.20.2
Libc version: glibc-2.34
Python version: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-282.el9.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.78.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11700K @ 3.60GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.24.3 py38h14f4228_0
[conda] numpy-base 1.24.3 py38h31eccc5_0
[conda] pytorch 2.0.1 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py38_cu118 pytorch
[conda] torchtriton 2.0.0 py38 pytorch
[conda] torchvision 0.15.2 py38_cu118 pytorch
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 2 |
1,984 | 105,299 |
Support ONNX opset 20 to export GELU to one single op
|
module: onnx, triaged
|
### 🚀 The feature, motivation and pitch
ONNX added GELU (and its tanh approximation) as a new op in opset 20: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Gelu
It would be great to upgrade the PyTorch ONNX exporter to support it; model visualization and GELU pattern matching would become a lot easier.
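For illustration, a rough sketch of the intended usage once the exporter supports opset 20 (the file name and module are just placeholders; today the exporter decomposes GELU into several smaller ops instead of emitting a single `Gelu` node):
```python
import torch

model = torch.nn.GELU(approximate="tanh")
dummy_input = torch.randn(2, 8)

# Hypothetical once opset 20 is supported: the exported graph would contain a
# single Gelu node (approximate="tanh") instead of an erf/tanh decomposition.
torch.onnx.export(model, (dummy_input,), "gelu.onnx", opset_version=20)
```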
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,985 | 105,290 |
Torch.compile Error: RuntimeError: aten::_conj() Expected a value of type 'Tensor' for argument 'self' but instead found type 'complex'.
|
triaged, module: complex, module: functionalization, oncall: pt2
|
### 🐛 Describe the bug
Training code:
```python
def train():
    manual_seed(args.seed)
    torch.backends.cudnn.benchmark = True

    with open(args.model_path+'/config.yaml') as f:
        config = ConfigDict(yaml.load(f, Loader=yaml.FullLoader))
    config.training.num_steps = args.num_steps

    trainset = MSSDatasets(config, args.data_root)

    train_loader = DataLoader(
        trainset,
        batch_size=config.training.batch_size,
        shuffle=True,
        num_workers=args.num_workers,
        pin_memory=args.pin_memory
    )

    model = TFC_TDF_net(config)
    model = torch.compile(model)
    model.train()

    device_ids = args.device_ids
    if type(device_ids)==int:
        device = torch.device(f'cuda:{device_ids}')
        model = model.to(device)
    else:
        device = torch.device(f'cuda:{device_ids[0]}')
        model = nn.DataParallel(model, device_ids=device_ids).to(device)

    optimizer = Adam(model.parameters(), lr=config.training.lr)

    print('Train Loop')
    scaler = GradScaler()
    for batch in tqdm(train_loader):
        y = batch.to(device)
        x = y.sum(1)  # mixture
        if config.training.target_instrument is not None:
            i = config.training.instruments.index(config.training.target_instrument)
            y = y[:,i]
        with torch.cuda.amp.autocast():
            y_ = model(x)
            loss = nn.MSELoss()(y_, y)
        scaler.scale(loss).backward()
        if config.training.grad_clip:
            nn.utils.clip_grad_norm_(model.parameters(), config.training.grad_clip)
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
        state_dict = model.state_dict() if type(device_ids)==int else model.module.state_dict()
        torch.save(state_dict, args.model_path+'/ckpt')

if __name__ == "__main__":
    train()
```
Model code
```
> class STFT:
> def __init__(self, config):
> self.n_fft = config.n_fft
> self.hop_length = config.hop_length
> self.window = torch.hann_window(window_length=self.n_fft, periodic=True)
> self.dim_f = config.dim_f
>
> def __call__(self, x):
> window = self.window.to(x.device)
> batch_dims = x.shape[:-2]
> c, t = x.shape[-2:]
> x = x.reshape([-1, t])
> x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop_length, window=window, center=True, return_complex=False)
> x = x.permute([0,3,1,2])
> x = x.reshape([*batch_dims,c,2,-1,x.shape[-1]]).reshape([*batch_dims,c*2,-1,x.shape[-1]])
> return x[...,:self.dim_f,:]
>
> def inverse(self, x):
> window = self.window.to(x.device)
> batch_dims = x.shape[:-3]
> c,f,t = x.shape[-3:]
> n = self.n_fft//2+1
> f_pad = torch.zeros([*batch_dims,c,n-f,t]).to(x.device)
> x = torch.cat([x, f_pad], -2)
> x = x.reshape([*batch_dims,c//2,2,n,t]).reshape([-1,2,n,t])
> x = x.permute([0,2,3,1])
> x = x[...,0] + x[...,1] * 1.j
> x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop_length, window=window, center=True)
> x = x.reshape([*batch_dims,2,-1])
> return x
>
>
> def get_norm(norm_type):
> def norm(c, norm_type):
> if norm_type=='BatchNorm':
> return nn.BatchNorm2d(c)
> elif norm_type=='InstanceNorm':
> return nn.InstanceNorm2d(c, affine=True)
> elif 'GroupNorm' in norm_type:
> g = int(norm_type.replace('GroupNorm', ''))
> return nn.GroupNorm(num_groups=g, num_channels=c)
> else:
> return nn.Identity()
> return partial(norm, norm_type=norm_type)
>
>
> def get_act(act_type):
> if act_type=='gelu':
> return nn.GELU()
> elif act_type=='relu':
> return nn.ReLU()
> elif act_type[:3]=='elu':
> alpha = float(act_type.replace('elu', ''))
> return nn.ELU(alpha)
> else:
> raise Exception
>
>
> class Upscale(nn.Module):
> def __init__(self, in_c, out_c, scale, norm, act):
> super().__init__()
> self.conv = nn.Sequential(
> norm(in_c),
> act,
> nn.ConvTranspose2d(in_channels=in_c, out_channels=out_c, kernel_size=scale, stride=scale, bias=False)
> )
>
> def forward(self, x):
> return self.conv(x)
>
>
> class Downscale(nn.Module):
> def __init__(self, in_c, out_c, scale, norm, act):
> super().__init__()
> self.conv = nn.Sequential(
> norm(in_c),
> act,
> nn.Conv2d(in_channels=in_c, out_channels=out_c, kernel_size=scale, stride=scale, bias=False)
> )
>
> def forward(self, x):
> return self.conv(x)
>
>
> class TFC_TDF(nn.Module):
> def __init__(self, in_c, c, l, f, bn, norm, act):
> super().__init__()
>
> self.blocks = nn.ModuleList()
> for i in range(l):
> block = nn.Module()
>
> block.tfc1 = nn.Sequential(
> norm(in_c),
> act,
> nn.Conv2d(in_c, c, 3, 1, 1, bias=False),
> )
> block.tdf = nn.Sequential(
> norm(c),
> act,
> nn.Linear(f, f//bn, bias=False),
> norm(c),
> act,
> nn.Linear(f//bn, f, bias=False),
> )
> block.tfc2 = nn.Sequential(
> norm(c),
> act,
> nn.Conv2d(c, c, 3, 1, 1, bias=False),
> )
> block.shortcut = nn.Conv2d(in_c, c, 1, 1, 0, bias=False)
>
> self.blocks.append(block)
> in_c = c
>
> def forward(self, x):
> for block in self.blocks:
> s = block.shortcut(x)
> x = block.tfc1(x)
> x = x + block.tdf(x)
> x = block.tfc2(x)
> x = x + s
> return x
>
>
> class TFC_TDF_net(nn.Module):
> def __init__(self, config):
> super().__init__()
> self.config = config
>
> norm = get_norm(norm_type=config.model.norm)
> act = get_act(act_type=config.model.act)
>
> self.num_target_instruments = 1 if config.training.target_instrument else len(config.training.instruments)
> self.num_subbands = config.model.num_subbands
>
> dim_c = self.num_subbands * config.audio.num_channels * 2
> n = config.model.num_scales
> scale = config.model.scale
> l = config.model.num_blocks_per_scale
> c = config.model.num_channels
> g = config.model.growth
> bn = config.model.bottleneck_factor
> f = config.audio.dim_f // self.num_subbands
>
> self.first_conv = nn.Conv2d(dim_c, c, 1, 1, 0, bias=False)
>
> self.encoder_blocks = nn.ModuleList()
> for i in range(n):
> block = nn.Module()
> block.tfc_tdf = TFC_TDF(c, c, l, f, bn, norm, act)
> block.downscale = Downscale(c, c+g, scale, norm, act)
> f = f//scale[1]
> c += g
> self.encoder_blocks.append(block)
>
> self.bottleneck_block = TFC_TDF(c, c, l, f, bn, norm, act)
>
> self.decoder_blocks = nn.ModuleList()
> for i in range(n):
> block = nn.Module()
> block.upscale = Upscale(c, c-g, scale, norm, act)
> f = f*scale[1]
> c -= g
> block.tfc_tdf = TFC_TDF(2*c, c, l, f, bn, norm, act)
> self.decoder_blocks.append(block)
>
> self.final_conv = nn.Sequential(
> nn.Conv2d(c + dim_c, c, 1, 1, 0, bias=False),
> act,
> nn.Conv2d(c, self.num_target_instruments * dim_c, 1, 1, 0, bias=False)
> )
>
> self.stft = STFT(config.audio)
>
> def cac2cws(self, x):
> k = self.num_subbands
> b,c,f,t = x.shape
> x = x.reshape(b,c,k,f//k,t)
> x = x.reshape(b,c*k,f//k,t)
> return x
>
> def cws2cac(self, x):
> k = self.num_subbands
> b,c,f,t = x.shape
> x = x.reshape(b,c//k,k,f,t)
> x = x.reshape(b,c//k,f*k,t)
> return x
>
> def forward(self, x):
>
> x = self.stft(x)
>
> mix = x = self.cac2cws(x)
>
> first_conv_out = x = self.first_conv(x)
>
> x = x.transpose(-1,-2)
>
> encoder_outputs = []
> for block in self.encoder_blocks:
> x = block.tfc_tdf(x)
> encoder_outputs.append(x)
> x = block.downscale(x)
>
> x = self.bottleneck_block(x)
>
> for block in self.decoder_blocks:
> x = block.upscale(x)
> x = torch.cat([x, encoder_outputs.pop()], 1)
> x = block.tfc_tdf(x)
>
> x = x.transpose(-1,-2)
>
> x = x * first_conv_out # reduce artifacts
>
> x = self.final_conv(torch.cat([mix, x], 1))
>
> x = self.cws2cac(x)
>
> if self.num_target_instruments > 1:
> b,c,f,t = x.shape
> x = x.reshape(b,self.num_target_instruments,-1,f,t)
>
> x = self.stft.inverse(x)
>
> return x
```
### Error logs
```
0% 0/1000000 [00:00<?, ?it/s][2023-07-16 14:24:31,474] torch._inductor.utils: [WARNING] DeviceCopy in input program
0% 0/1000000 [01:07<?, ?it/s]
Traceback (most recent call last):
File "/content/sdx23/my_submission/src/train.py", line 120, in <module>
train()
File "/content/sdx23/my_submission/src/train.py", line 91, in train
out = model(x)
^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 183, in forward
return self.module(*inputs[0], **module_kwargs[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 294, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/sdx23/my_submission/src/tfc_tdf_v3.py", line 196, in forward
def forward(self, x):
File "/content/sdx23/my_submission/src/tfc_tdf_v3.py", line 198, in <resume in forward>
x = self.stft(x)
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 447, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 535, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 128, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 364, in _convert_frame_assert
return _compile(
^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 434, in _compile
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1002, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 419, in transform
tracer.run()
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2068, in run
super().run()
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 727, in run
and self.step()
^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 687, in step
getattr(self, inst.opname)(inst)
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 441, in wrapper
self.output.compile_subgraph(self, reason=reason)
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 815, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/usr/local/envs/mdx-net/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 915, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 971, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 967, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/__init__.py", line 1548, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1045, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 55, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 3750, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 179, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 3289, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 2098, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 2278, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 2686, in aot_dispatch_autograd
fx_g = aot_dispatch_autograd_graph(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 2663, in aot_dispatch_autograd_graph
fx_g = create_functionalized_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1399, in create_functionalized_graph
fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 809, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_compile.py", line 24, in inner
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 294, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 468, in dispatch_trace
graph = tracer.trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 294, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 817, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 684, in flatten_fn
tree_out = root_fn(*tree_args)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 485, in wrapped
out = f(*tensors)
^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1388, in joint_helper
return functionalized_f_helper(primals, tangents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1341, in functionalized_f_helper
f_outs = fn(*f_args)
^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1312, in inner_fn_with_anomaly
return inner_fn(*args)
^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1295, in inner_fn
backward_out = torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/autograd/__init__.py", line 319, in grad
result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 555, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 580, in inner_torch_dispatch
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 361, in proxy_call
out = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/mdx-net/lib/python3.11/site-packages/torch/_ops.py", line 437, in __call__
return self._op(*args, **kwargs or {})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: aten::_conj() Expected a value of type 'Tensor' for argument 'self' but instead found type 'complex'.
Position: 0
Value: 1j
Declaration: aten::_conj(Tensor(a) self) -> Tensor(a)
Cast error details: Unable to cast 1j to Tensor
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Minified repro
_No response_
### Versions
```
# packages in environment at /usr/local/envs/mdx-net:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 1.4.0 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2023.5.7 hbcca054_0 conda-forge
certifi 2023.5.7 pypi_0 pypi
cffi 1.15.1 pypi_0 pypi
charset-normalizer 3.2.0 pypi_0 pypi
click 8.1.5 pypi_0 pypi
cloudpickle 2.2.1 pypi_0 pypi
cmake 3.26.4 pypi_0 pypi
contextlib2 21.6.0 pypi_0 pypi
cython 0.29.36 pypi_0 pypi
demucs 4.0.0 pypi_0 pypi
diffq 0.2.4 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
dora-search 0.1.12 pypi_0 pypi
einops 0.6.1 pypi_0 pypi
ffmpeg-python 0.2.0 pypi_0 pypi
filelock 3.12.2 pypi_0 pypi
fsspec 2023.4.0 pypi_0 pypi
future 0.18.3 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.32 pypi_0 pypi
idna 3.4 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
jsonschema 4.18.3 pypi_0 pypi
jsonschema-specifications 2023.6.1 pypi_0 pypi
julius 0.2.7 pypi_0 pypi
lameenc 1.5.1 pypi_0 pypi
ld_impl_linux-64 2.40 h41732ed_0 conda-forge
libexpat 2.5.0 hcb278e6_1 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 13.1.0 he5830b7_0 conda-forge
libgomp 13.1.0 he5830b7_0 conda-forge
libnsl 2.0.0 h7f98852_0 conda-forge
libsqlite 3.42.0 h2797004_0 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libzlib 1.2.13 hd590300_5 conda-forge
lit 16.0.6 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
mir-eval 0.7 pypi_0 pypi
ml-collections 0.1.1 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
musdb 0.4.0 pypi_0 pypi
museval 0.4.1 pypi_0 pypi
ncurses 6.4 hcb278e6_0 conda-forge
networkx 3.1 pypi_0 pypi
numpy 1.25.1 pypi_0 pypi
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.2.10.91 pypi_0 pypi
nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi
nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi
nvidia-nccl-cu11 2.14.3 pypi_0 pypi
nvidia-nvtx-cu11 11.7.91 pypi_0 pypi
omegaconf 2.3.0 pypi_0 pypi
openssl 3.1.1 hd590300_1 conda-forge
openunmix 1.2.1 pypi_0 pypi
pandas 2.0.3 pypi_0 pypi
pathtools 0.1.2 pypi_0 pypi
pip 23.2 pyhd8ed1ab_0 conda-forge
promise 2.3 pypi_0 pypi
protobuf 3.20.3 pypi_0 pypi
psutil 5.9.5 pypi_0 pypi
pyaml 23.7.0 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
python 3.11.4 hab00c5b_0_cpython conda-forge
python-dateutil 2.8.2 pypi_0 pypi
pytorch-triton 2.1.0+3c400e7818 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
readline 8.2 h8228510_1 conda-forge
referencing 0.29.1 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
retrying 1.3.4 pypi_0 pypi
rpds-py 0.8.10 pypi_0 pypi
scipy 1.11.1 pypi_0 pypi
sentry-sdk 1.28.1 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 68.0.0 pyhd8ed1ab_0 conda-forge
shortuuid 1.0.11 pypi_0 pypi
simplejson 3.19.1 pypi_0 pypi
six 1.16.0 pypi_0 pypi
smmap 5.0.0 pypi_0 pypi
soundfile 0.12.1 pypi_0 pypi
stempeg 0.2.3 pypi_0 pypi
submitit 1.4.5 pypi_0 pypi
sympy 1.12 pypi_0 pypi
tk 8.6.12 h27826a3_0 conda-forge
torch 2.1.0.dev20230716+cu118 pypi_0 pypi
torchaudio 2.0.2 pypi_0 pypi
tqdm 4.65.0 pypi_0 pypi
treetable 0.2.5 pypi_0 pypi
triton 2.0.0 pypi_0 pypi
typing-extensions 4.7.1 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
urllib3 2.0.3 pypi_0 pypi
wandb 0.13.2 pypi_0 pypi
wheel 0.40.0 pyhd8ed1ab_1 conda-forge
xz 5.2.6 h166bdaf_0 conda-forge
```
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
1,986 | 105,281 |
Optimize PyTorch C++ part with Profile-Guided Optimization (PGO)
|
module: performance, module: internals, triaged, oncall: pt2
|
### 🚀 The feature, motivation and pitch
Profile-Guided Optimization (PGO) helps a lot with optimizing different kinds of software based on runtime execution profiles.
PyTorch has a large C++ [part](https://github.com/pytorch/pytorch/tree/main/torch/csrc) that could be optimized with PGO, e.g. compiler load time and the JIT. Similar projects such as Clang, GCC, Rustc and CPython are already built with PGO, and their results show that PGO (and more advanced techniques like LLVM BOLT) can help here.
So it would be great to see PGO results on the PyTorch codebase. I am not familiar with the codebase yet, but optimizing the compiler seems like a good place to start; if you know a better place, please correct me. If anyone has already tried to optimize these parts with PGO and has results, it would be great if you could share them here.
### Alternatives
Leave things as is.
### Additional context
You can find more real-life results of applying PGO to different kinds of software (and more material about PGO and other optimization techniques like LLVM BOLT) [here](https://github.com/zamazan4ik/awesome-pgo/).
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @msaroufim @wconstab @anijain2305
| 0 |
1,987 | 105,279 |
[Dynamo][Compile]Torch compile with dynamic shapes not working
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
My networks rely on varying input shapes during both training and inference, so I tried `torch.compile(..., dynamic=True)` as well as `torch._dynamo.optimize(..., dynamic=True)`. Unfortunately I can't get it to work properly: the function is recompiled every time the input shape changes. I tried the latest nightly build version `2.1.0.dev20230712` as well as the current main branch from `Friday Jul 14th 2023`. I printed the guards that trigger the recompilation, and the culprit guard is precisely what the dynamic feature is supposed to handle:
`GuardFail(reason="tensor 'L['input']' size mismatch at index 0. expected 146, actual 149", orig_code=<code object RNNScript at 0x2758840 ...)`
I also tried adding a `warm-up phase` in which the input length is varied, but the function is still recompiled whenever the input shape changes.
@ezyang could you please advise here?
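For reference, this is roughly the workaround I expected to work, assuming `torch._dynamo.mark_dynamic` is the intended way to pin the time dimension as dynamic (the tensors `w_i_comb`/`w_h_comb` are the ones from the repro below):
```python
import torch
import torch._dynamo

compiled_rnn = torch.compile(RNNScript, dynamic=True, fullgraph=True)

inp = torch.rand((111, 32, 140), device="cuda")
# Hint that dim 0 (the number of timesteps) varies between calls, so no guard
# should be installed on its concrete value.
torch._dynamo.mark_dynamic(inp, 0)
out, state = compiled_rnn(inp, w_i_comb, w_h_comb)
```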
### Error logs
GuardFail(reason="tensor 'L['input']' size mismatch at index 0. expected 146, actual 149", orig_code=<code object RNNScript at 0x2758840 ...)
### Minified repro
```python
from typing import List, Tuple, Optional, overload, Union, cast
import torch
import numpy as np
import time
import torch.optim as optim
from torch.nn.parameter import Parameter


def RNNScript(
    input,
    param1,
    param2,
):
    state1 = torch.zeros(32, 340, dtype=input.dtype, device=input.device)
    outs = []
    Wx = input @ param1
    Wx_inp, Wx_rec = torch.tensor_split(Wx, 2, 2)
    for wt_inp, wt_rec in zip(Wx_inp, Wx_rec):
        rec_mul_inp, rec_mul_rec = torch.tensor_split(state1 @ param2, 2, 1)
        input_prev = (wt_inp + rec_mul_inp)
        output_gate = (wt_rec + rec_mul_rec)
        state1 = input_prev * torch.sigmoid(output_gate)
        outs.append(state1)
    outs = torch.stack(outs)
    return outs, (outs)


if __name__ == "__main__":
    input_size = 140
    hidden_size = 340
    num_layers = 1
    num_timesteps = 111
    batch_size = 32
    bi_dir = True
    rnnt_input = False
    num_threads = -1
    use_gpu = True
    load_weights = False
    forward_times = []
    backward_times = []

    if use_gpu:
        device = torch.device('cuda:0')
    else:
        device = None

    parameters = []
    w_ih = torch.empty((input_size, hidden_size), device=device)
    w_io = torch.empty((input_size, hidden_size), device=device)
    w_i_comb = Parameter(torch.cat([w_ih,w_io],1))
    parameters.append(w_i_comb)
    w_hh = torch.empty((hidden_size, hidden_size), device=device)
    w_ho = torch.empty((hidden_size, hidden_size), device=device)
    w_h_comb = Parameter(torch.cat([w_hh,w_ho],1))
    parameters.append(w_h_comb)

    def count_kernels(guard):
        print("[pt2_compile] guard failed: ", guard)

    rnnscript = torch.compile(RNNScript, mode='reduce-overhead', dynamic=True, fullgraph=True)
    #backend = torch._TorchCompileInductorWrapper('reduce-overhead', None, True)
    #rnnscript = torch._dynamo.optimize(backend=backend, nopython=True, dynamic=True, guard_fail_fn=count_kernels)(RNNScript)
    #rnnscript = RNNScript
    snu = lambda x: rnnscript(x, w_i_comb, w_h_comb)
    optimizer = optim.SGD(parameters, 0.1)

    inp = torch.rand((num_timesteps, batch_size, input_size))
    if use_gpu:
        inp = inp.cuda()
    optimizer.zero_grad()

    for execution in range(5):
        start_forward = time.time_ns()
        t_rnd = np.random.randint(0, 200)
        inp = torch.rand((t_rnd, batch_size, input_size))
        if use_gpu:
            inp = inp.cuda()
        out, state = snu(inp)
        if use_gpu:
            torch.cuda.synchronize()
        stop_forward = time.time_ns()
        forward_times.append((stop_forward - start_forward) / (10 ** 9))

        loss = 1. - torch.sum(out)
        start_time_backward = time.time_ns()
        #loss.backward()
        if use_gpu:
            torch.cuda.synchronize()
        stop_time_backward = time.time_ns()
        backward_times.append((stop_time_backward - start_time_backward) / (10 ** 9))

    print('================================================================')
    print('Model with sSNU-os:')
    print('# Layers: ' + str(num_layers))
    print('# Units per layer: ' + str(hidden_size))
    print('Bidirectional: ' + str(bi_dir))
    print('Load weights: ' + str(load_weights))
    print('RNN-T input: ' + str(rnnt_input))
    print('# CPU threads: ' + str(num_threads))
    print('GPU support: ' + str(use_gpu))
    print('----------------------------------------------------------------')
    print('Timing summary')
    print('Time of forward computation: {:.4f} +- {:.4f} s'.format(np.mean(np.array(forward_times)), np.std(np.array(forward_times))))
    print('Time of backward computation: {:.4f} +- {:.4f} s'.format(np.mean(np.array(backward_times)), np.std(np.array(backward_times))))
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gitfb376f8
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.7 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-16)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.28
Python version: 3.11.3 (main, May 15 2023, 15:45:52) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.60.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 3361.974
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4499.91
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+3c400e7818
[pip3] torch==2.1.0a0+gitfb376f8
[pip3] torchaudio==2.1.0a0+cf53a48
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] magma-cuda118 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.24.3 py311h08b1b3b_1
[conda] numpy-base 1.24.3 py311hf175353_1
[conda] pytorch-triton 2.1.0+3c400e7818 pypi_0 pypi
[conda] torch 2.1.0a0+gitfb376f8 pypi_0 pypi
[conda] torchaudio 2.1.0a0+cf53a48 dev_0 <develop>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 25 |
1,988 | 105,274 |
[DO NOT MERGE] Testing to see if CUDA API call is allowed in watchdog thread
|
open source, ciflow/trunk, topic: not user facing, ciflow/periodic
|
Do not merge!
#105182 seems to hang on M60 runners in a way that I cannot reproduce locally; opening this to test the theory that the hang might be caused by a runtime API call in the watchdog thread.
| 20 |
1,989 | 105,271 |
added some more codegen files from inductor module
|
triaged, open source, Stale, topic: not user facing
|
Fixes some of #105230
| 4 |
1,990 | 105,264 |
Inductor generates incorrect CPU code for `uint8` operations
|
triaged, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
Consider the following snippet, which adds a `torch.uint8` tensor to itself, _then_ casts the result to `torch.int16`:
```python
import torch
def f(x):
    return (x + x).to(torch.int16)
x = torch.tensor(128, dtype=torch.uint8)
print(f(x))
print(torch.compile(f)(x))
```
This produces the following output (`0` is correct):
```python
tensor(0, dtype=torch.int16)
tensor(256, dtype=torch.int16)
```
The relevant generated CPU kernel:
```cpp
extern "C" void kernel(const unsigned char* in_ptr0,
                       short* out_ptr0)
{
    {
        auto tmp0 = in_ptr0[static_cast<long>(0L)];
        auto tmp1 = tmp0 + tmp0;
        auto tmp2 = static_cast<short>(tmp1);
        out_ptr0[static_cast<long>(0L)] = tmp2;
    }
}
```
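A possible eager-side workaround until this is fixed (just a sketch, only verified on this example) is to make the uint8 wrap-around explicit before widening:
```python
import torch

def f_workaround(x):
    # Cast the sum back to uint8 first so the 128 + 128 -> 0 wrap-around is
    # explicit, then widen to int16.
    return (x + x).to(torch.uint8).to(torch.int16)

x = torch.tensor(128, dtype=torch.uint8)
print(f_workaround(x))                 # tensor(0, dtype=torch.int16)
print(torch.compile(f_workaround)(x))  # expected to match eager
```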
### Error logs
_No response_
### Minified repro
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230714+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 3136.478
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 768 KiB
L1i cache: 512 KiB
L2 cache: 20 MiB
L3 cache: 24 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.1.0.dev20230714+cpu
[conda] No relevant packages
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 19 |
1,991 | 105,257 |
recompile fx.GraphModule lazily
|
release notes: quantization, release notes: fx, module: dynamo, ciflow/inductor, module: export, suppress-bc-linter
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105257
Context: @eellison 's review comment [here](https://github.com/pytorch/pytorch/pull/103642#discussion_r1244418586) complains about my code calling `torch.fx.GraphModule.recompile` after I changed the graph. We didn't simply remove the call to `recompile` at that time since that increases the risk that user see or run stale python code. In this PR, I recompile GraphModule lazily without increasing the risk that user see/run stale python code.
When training BertForMaskedLM, the `GraphModule.recompile` is called 707 times and takes 1.8s in total. The whole compilation takes around 60 seconds.
By spot checking, I found the main reason we call recompile so frequently is the inductor pattern matcher. E.g., if we want to replace src_fn with dst_fn, we need to trace both src_fn and dst_fn. After tracing is done, we create a GraphModule, whose init method calls recompile.
By doing recompile lazily, we reduce the number of calls to `GraphModule._real_recompile` (in this PR, `recompile` just marks the class as needing recompilation and is very lightweight; `_real_recompile` does the real recompilation) to 37 and reduce its total execution time to 0.045s.
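For intuition, the scheme looks roughly like this (a simplified sketch, not the actual `GraphModule` code):
```python
class LazyRecompileSketch:
    """Simplified sketch: recompile() only marks the module dirty; the
    expensive codegen runs at most once, right before the code is used."""

    def __init__(self, graph, codegen):
        self.graph = graph        # whatever representation codegen consumes
        self._codegen = codegen   # the expensive "real" recompilation step
        self._needs_recompile = True
        self._code = None

    def recompile(self):
        # Cheap: called after every graph mutation (e.g. by pattern matchers).
        self._needs_recompile = True

    def _real_recompile(self):
        self._code = self._codegen(self.graph)
        self._needs_recompile = False

    @property
    def code(self):
        # Users never observe stale code: reading it forces the recompilation.
        if self._needs_recompile:
            self._real_recompile()
        return self._code
```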
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 9 |
1,992 | 105,255 |
[discussion] Integrate widely used utilities from fvcore into the core repo
|
oncall: distributed, feature, module: nn, triaged, needs research, module: LrScheduler
|
### 🚀 The feature, motivation and pitch
Some things, like the FLOP counter, have already been reimplemented in core. I propose to systematically study the widely used utilities in fvcore (primarily their uses in detectron2: https://github.com/search?q=repo%3Afacebookresearch%2Fdetectron2+fvcore+language%3APython&type=code&l=Python) and maybe move some of them into core. That would allow people to take fewer dependencies, and since the repo has existed for a long time, the popularity and success of its utilities can now be estimated fairly well:
- For instance, in vision/detectron2-related code, the weight inits from https://github.com/facebookresearch/fvcore/blob/main/fvcore/nn/weight_init.py are often used
- Another useful one is https://github.com/facebookresearch/fvcore/blob/main/fvcore/nn/print_model_statistics.py
- There are also some distributed utils
- It also has stateless schedulers used by detectron2: "A stateless, scale-invariant hyperparameter scheduler: see its [API doc](https://detectron2.readthedocs.io/en/latest/modules/fvcore.html#fvcore.common.param_scheduler.ParamScheduler)". This is also related to https://github.com/pytorch/pytorch/issues/68332, so maybe this design can be taken as a base for a scheduler enhancement / redesign in core (it has float iterations and uses `__call__` syntax; it might be more generic to allow int64 iterations and derive from nn.Module, or, even better, fully stateless free functions); a sketch of this style follows the list
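As an illustration of the stateless style mentioned above (the interface is modeled loosely on fvcore's `ParamScheduler`, not copied from it):
```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class CosineSchedule:
    """The multiplier is a pure function of training progress `where` in
    [0, 1), so there is no internal step counter to save or restore."""
    start_value: float
    end_value: float

    def __call__(self, where: float) -> float:
        return self.end_value + 0.5 * (self.start_value - self.end_value) * (
            1 + math.cos(math.pi * where)
        )

# Usage: the schedule only produces a multiplier; the base lr stays with the optimizer.
base_lr = 0.1
schedule = CosineSchedule(start_value=1.0, end_value=0.01)
lr_at_30_percent = base_lr * schedule(where=0.3)
```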
There are also a bunch of losses there that seem to have been already implemented in core. It would be useful then to deprecate / add warnings to the impls in fvcore or at least add a word in README there.
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
1,993 | 105,254 |
DISABLED test_fused_optimizers_with_large_tensors (optim.test_optim.TestOptim)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_optim.py%3A%3ATestOptim%3A%3Atest_fused_optimizers_with_large_tensors)).
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 11 |
1,994 | 105,253 |
DISABLED test_cross_entropy_large_tensor_reduction_mean_cuda (__main__.TestNNDeviceTypeCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_nn.py%3A%3ATestNNDeviceTypeCUDA%3A%3Atest_cross_entropy_large_tensor_reduction_mean_cuda)).
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
1,995 | 105,248 |
Multiple linux jobs are failing with version `GLIBCXX_3.4.30' not found
|
module: ci, triaged
|
## Current Status
mitigated by reverting
## Error looks like
Linux jobs are failing with the following error:
```
+ python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/__init__.py", line 236, in <module>
from torch._C import * # noqa: F403
ImportError: /opt/conda/envs/py_3.10/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
```
## Incident timeline (all times pacific)
5:00PM EST Issue started happening
5:45PM EST Issue was noticed and the OSS CI channel was advised
## User impact
multiple inductor, periodic, pull and trunk jobs are failing
## Root cause
tbd
## Mitigation
tbd
## Prevention/followups
tbd
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 6 |
1,996 | 105,230 |
Enable Mypy Checking in torch/_inductor
|
good first issue, module: typing, triaged, actionable
|
### 🚀 The feature, motivation and pitch
Type checking can find bugs, but more importantly serves as documentation. We should enable mypy type checking for inductor.
To enable a file, remove it from [the `MYPYNOFOLLOW` exclude list](https://github.com/pytorch/pytorch/blob/main/.lintrunner.toml#L199). Tag yourself next to a file if you are working on it, then run mypy on the file locally to see what needs to be fixed.
Especially if you are new to pytorch or inductor, I would recommend using https://github.com/Instagram/MonkeyType as an aide and running part of the inductor test suite: `test/inductor/test_torchinductor.py`
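For example, one way to invoke mypy on a single file from Python (just an illustration; the repo's lintrunner configuration is the source of truth for the exact flags):
```python
from mypy import api

# --follow-imports=skip keeps the run focused on the one file being enabled.
stdout, stderr, exit_status = api.run(
    ["--follow-imports=skip", "torch/_inductor/metrics.py"]
)
print(stdout)
```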
- [x] torch/_inductor/autotune_process.py
- [x] torch/_inductor/codegen/__init__.py
- [x] torch/_inductor/codegen/triton_foreach.py (@mlazos)
- [x] torch/_inductor/codegen/triton_utils.py (@masnesral)
- [x] torch/_inductor/codegen/common.py
- [x] torch/_inductor/codegen/cpp.py (@masnesral)
- [x] torch/_inductor/codegen/triton.py (@masnesral)
- [x] torch/_inductor/codegen/wrapper.py
- [x] torch/_inductor/coordinate_descent_tuner.py
- [x] torch/_inductor/cuda_properties.py https://github.com/pytorch/pytorch/pull/105620
- [x] torch/_inductor/exc.py https://github.com/pytorch/pytorch/pull/109176
- [x] torch/_inductor/freezing.py (@int3)
- [x] torch/_inductor/fx_passes/__init__.py
- [x] torch/_inductor/fx_passes/freezing_patterns.py
- [x] torch/_inductor/fx_passes/mkldnn_fusion.py https://github.com/pytorch/pytorch/pull/108131
- [x] torch/_inductor/fx_passes/pre_grad.py https://github.com/pytorch/pytorch/pull/109952
- [x] torch/_inductor/fx_passes/quantization.py https://github.com/pytorch/pytorch/pull/108131
- [x] torch/_inductor/fx_passes/replace_random.py
- [x] torch/_inductor/fx_passes/split_cat.py https://github.com/pytorch/pytorch/pull/109951
- [x] torch/_inductor/fx_passes/fuse_attention.py
- [x] torch/_inductor/fx_passes/group_batch_fusion.py (@int3)
- [x] torch/_inductor/fx_passes/joint_graph.py https://github.com/pytorch/pytorch/pull/109955
- [x] torch/_inductor/fx_passes/pad_mm.py https://github.com/pytorch/pytorch/pull/109954
- [x] torch/_inductor/fx_passes/post_grad.py
- [x] torch/_inductor/fx_utils.py (@int3)
- [x] torch/_inductor/hooks.py
- [x] torch/_inductor/kernel/__init__.py (@masnesral)
- [x] torch/_inductor/kernel/bmm.py (@masnesral)
- [x] torch/_inductor/kernel/conv.py
- [x] torch/_inductor/kernel/mm.py (@masnesral)
- [x] torch/_inductor/kernel/mm_common.py
- [x] torch/_inductor/kernel/mm_plus_mm.py (@masnesral)
- [x] torch/_inductor/metrics.py
- [ ] torch/_inductor/scheduler.py (@ipiszy)
- [x] torch/_inductor/select_algorithm.py
- [x] torch/_inductor/test_operators.py
- [x] torch/_inductor/triton_helpers.py
- [x] torch/_inductor/virtualized.py https://github.com/pytorch/pytorch/pull/108916
- [x] torch/_inductor/config.py
- [x] torch/_inductor/__init__.py
- [x] torch/_inductor/bounds.py (@masnesral)
- [x] torch/_inductor/codecache.py (@masnesral)
- [x] torch/_inductor/compile_fx.py https://github.com/pytorch/pytorch/pull/105830
- [x] torch/_inductor/cudagraph_trees.py
- [x] torch/_inductor/debug.py https://github.com/pytorch/pytorch/pull/109335
- [x] torch/_inductor/decomposition.py (@masnesral)
- [x] torch/_inductor/dependencies.py
- [x] torch/_inductor/graph.py
- [x] torch/_inductor/index_propagation.py https://github.com/pytorch/pytorch/pull/105622
- [x] torch/_inductor/inductor_prims.py https://github.com/pytorch/pytorch/pull/109173
- [x] torch/_inductor/ir.py (@int3)
- [x] torch/_inductor/lowering.py (@aakhundov)
- [x] torch/_inductor/optimize_indexing.py https://github.com/pytorch/pytorch/pull/105621
- [x] torch/_inductor/pattern_matcher.py (@int3)
- [x] torch/_inductor/sizevars.py
- [x] torch/_inductor/triton_heuristics.py
- [x] torch/_inductor/utils.py https://github.com/pytorch/pytorch/pull/106252
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @malfet @rgommers @xuzhao9 @gramster
| 6 |
1,997 | 105,220 |
Significant time difference of calculating Jacobian matrix using jacrev and oracle functions
|
module: autograd, triaged, module: functorch
|
### 🚀 The feature, motivation and pitch
Sorry, I'm not sure whether this counts as a new feature, but I'd be very grateful if you would consider improving this part.
Specifically, I found that using `jacrev` is much slower than calling an oracle Jacobian function directly:
```python
from torch.func import vmap, jacrev
import torch
import time
a = torch.rand(10000, 10000)
def f(x):
    return (x ** 2).sum(-1)

def df(x):
    return 2 * x

t0 = time.time()
b = df(a)
t1 = time.time()
c = vmap(jacrev(f))(a)
t2 = time.time()
assert torch.allclose(b, c)
print(t1 - t0, t2 - t1)
```
result: 0.10568618774414062 0.9206998348236084
Given that an oracle Jacobian is readily available for the layers of a neural network, I wonder why `jacrev` is so much slower. Am I doing something wrong?
Of course, I could rewrite each layer of the network to return the value and the Jacobian at the same time, but doing that for the Hessian matrix is too troublesome. It would be great if `jacrev` could be faster!
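As a side note, since `f` maps each row to a scalar, the per-row gradient can also be taken directly with `grad`; a small sketch using the same `f`, `a` and `b` as above (I have not benchmarked this carefully, so it may or may not close the gap):
```python
from torch.func import grad, vmap

# grad(f) gives the full Jacobian row because f returns a scalar per row.
d = vmap(grad(f))(a)
assert torch.allclose(b, d)
```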
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @Chillee @samdow @kshitij12345 @janeyx99
| 4 |
1,998 | 105,217 |
Export+AOTInductor issue tracker
|
triaged, tracker
|
Updated (October 20)
- 14K github models (72%) (https://github.com/jansel/pytorch-jit-paritybench)
- [ ] #111691
- [ ] #111693
- [ ] 159 errors like: RuntimeError: expected inputs vector size to be 2, but got 1 (example ./generated/test_lwpyr_CSP_pedestrian_detection_in_pytorch.py:L2Norm # pytest ./generated/test_lwpyr_CSP_pedestrian_detection_in_pytorch.py -k test_003)
- [ ] 153 errors like: AssertionError: Failed to produce a graph during tracing. Tracing through 'f' must produce a single graph. (example ./generated/test_lucidrains_imagen_pytorch.py:UpsampleCombiner # pytest ./generated/test_lucidrains_imagen_pytorch.py -k test_007) (https://github.com/pytorch/pytorch/issues/111254)
- [ ] 130 errors like: AssertionError: Mutating module attribute min_val during export. (example ./generated/test_NVlabs_GroupViT.py:Transformer # pytest ./generated/test_NVlabs_GroupViT.py -k test_004) (https://github.com/pytorch/pytorch/issues/105530)
- [ ] 84 errors like: RuntimeError: AOTInductorModelContainerRun( model_container.get(), input_handles.data(), input_tensors.size(), output_handles.data(), output_handles.size(), stream_handle, proxy_executor_handle) API call failed at /home/binbao/.cache/torch_extensions/py310_cu121/aot_inductor/main.cpp, line 66 (example ./generated/test_sithu31296_semantic_segmentation.py:Downsample # pytest ./generated/test_sithu31296_semantic_segmentation.py -k test_015)
- [ ] 81 errors like: AssertionError: original output # 2 is None, but only the following types are supported: (<class 'torch.Tensor'>, <class 'torch.SymInt'>, <class 'torch.SymFloat'>, <class 'torch.SymBool'>) (example ./generated/test_elbayadm_attn2d.py:GridGatedMAX # pytest ./generated/test_elbayadm_attn2d.py -k test_011) (https://github.com/pytorch/pytorch/issues/111250)
- [ ] 75 errors like: Unsupported: setattr(UserDefinedObjectVariable) <function Module.__setattr__ at 0x7f61e53808b0> (example ./generated/test_XPixelGroup_BasicSR.py:SFTUpBlock # pytest ./generated/test_XPixelGroup_BasicSR.py -k test_029)
- [ ] 68 errors like: Unsupported: call_function BuiltinVariable(setattr) [TensorVariable(), ConstantVariable(str), ConstantVariable(bool)] {} (example ./generated/test_cedias_Hierarchical_Sentiment.py:EmbedAttention # pytest ./generated/test_cedias_Hierarchical_Sentiment.py -k test_000)
- [ ] 53 errors like: AssertionError: Dynamo attempts to add additional input during export: value=0.1767766952966369, source=NNModuleSource(base=AttrSource(base=NNModuleSource(base=AttrSource(base=LocalSource(local_name='self', cell_or_freevar=False), member='wscale')), member='scale')) (example ./generated/test_open_mmlab_mmgeneration.py:FullyConnectedLayer # pytest ./generated/test_open_mmlab_mmgeneration.py -k test_015) (https://github.com/pytorch/pytorch/issues/111255)
- [ ] 52 errors like: RuntimeError: Found following user inputs located at [0] are mutated. This is currently banned in the aot_export workflow. (example ./generated/test_kuprel_min_dalle.py:AttentionBase # pytest ./generated/test_kuprel_min_dalle.py -k test_000)
- [ ] 51 errors like: Unsupported: 'call_function Mish in skip_files Builtin Mish, filename is None' (example ./generated/test_ikostrikov_pytorch_a2c_ppo_acktr_gail.py:Categorical # pytest ./generated/test_ikostrikov_pytorch_a2c_ppo_acktr_gail.py -k test_001)
- [ ] 46 errors like: UserError: Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#cond-operands (example ./generated/test_maudzung_SFA3D.py:FocalLoss # pytest ./generated/test_maudzung_SFA3D.py -k test_001)
- [ ] 43 errors like: AttributeError: 'int' object has no attribute 'device' (example ./generated/test_chengchunhsu_EveryPixelMatters.py:FCOSDiscriminator # pytest ./generated/test_chengchunhsu_EveryPixelMatters.py -k test_008)
- [ ] 41 errors like: UserError: Tried to use data-dependent value in the subsequent computation. This can happen when we encounter unbounded dynamic value that is unknown during tracing time.You will need to explicitly give hint to the compiler. Please take a look at constrain_as_value OR constrain_as_size APIs (example ./generated/test_mdv3101_CDeCNet.py:GHMC # pytest ./generated/test_mdv3101_CDeCNet.py -k test_008) (https://github.com/pytorch/pytorch/issues/111252)
- [ ] 34 errors like: ValueError: tree_unflatten(values, spec): `values` has length 2 but the spec refers to a pytree that holds 1 items (*). (example ./generated/test_LoSealL_VideoSuperResolution.py:Srcnn # pytest ./generated/test_LoSealL_VideoSuperResolution.py -k test_035)
- [ ] 34 errors like: AssertionError: (example ./generated/test_ruotianluo_self_critical_pytorch.py:SublayerConnection # pytest ./generated/test_ruotianluo_self_critical_pytorch.py -k test_010)
- [ ] 29 errors like: RuntimeError: Error building extension 'aot_inductor_v2': [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=aot_inductor_v2 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1013\" -isystem /data/users/binbao/pytorch/torch/include -isystem /data/users/binbao/pytorch/torch/include/torch/csrc/api/include -isystem /data/users/binbao/pytorch/torch/include/TH -isystem /data/users/binbao/pytorch/torch/include/THC -isystem /usr/local/cuda-12.1/include -isystem /home/binbao/.conda/envs/pytorch-3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -c /home/binbao/.cache/torch_extensions/py310_cu121/aot_inductor/main.cpp -o main.o (example ./generated/test_Vious_LBAM_Pytorch.py:GaussActivation # pytest ./generated/test_Vious_LBAM_Pytorch.py -k test_001)
- [ ] #111711
- [ ] 23 errors like: Unsupported: call_method NNModuleVariable() _get_backward_hooks [] {} (example ./generated/test_xheon_panoptic_reconstruction.py:InstanceNorm3d # pytest ./generated/test_xheon_panoptic_reconstruction.py -k test_011)
- Huggingface (passrate 91%)
- [x] ~~#105242~~
- [x] ~~“torch._dynamo.exc.InternalTorchDynamoError: 'str' object has no attribute 'size’”~~
- [x] ~~XLNetLMHeadModel: #109655~~
- [ ] #110096
- TIMM (passrate 98%)
- [x] ~~crossvit_9_240: fail_accuracy~~
- [x] ~~deit_base_distilled_patch16_224: fail_accuracy~~
- [x] ~~dla102: fail_accuracy~~
- [x] ~~dpn107: fail_accuracy~~
- [x] ~~levit_128: fail_accuracy~~
- [x] ~~volo_d1_224: fail_accuracy~~
- [ ] #105530
- [x] #108173
- Torchbench (passrate 78%) (repro cmd: `python benchmarks/dynamo/torchbench.py --bfloat16 --accuracy --inference --device cuda --export-aot-inductor --only `)
- [x] ~~Background_Matting: fail_accuracy~~
- [x] ~~LearningToPaint: fail_accuracy~~
- [x] ~~Super_SloMo: fail_accuracy~~
- [x] ~~dcgan: fail_accuracy~~
- [x] ~~nvidia_deeprecommender: fail_accuracy~~
- [x] ~~pytorch_CycleGAN_and_pix2pix: fail_accuracy~~
- [x] ~~pytorch_unet: fail_accuracy~~
- [x] ~~#108697~~
- [x] ~~#108699~~
- **Export/Dynamo problems:**
- [ ] BERT_pytorch, llama, yolov3: Mutating module attribute during export, the same as https://github.com/pytorch/pytorch/issues/105530
- [x] ~~#109894~~
- [ ] #109884
- [ ] #109885
- [ ] doctr_det_predictor: ERROR:common:call_method UserDefinedObjectVariable(morphologyEx) __call__ [TensorVariable(), ConstantVariable(int), NumpyNdarrayVariable()] {}
- [ ] #108698
- [ ] drq: AssertionError: traced result #0 (<class 'torchbenchmark.models.drq.drqutils.SquashedNormal'>) is not among graph-captured outputs (<class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>, <class 'torch._subclasses.fake_tensor.FakeTensor'>) or original args (<class 'torch.Tensor'>)
- [ ] hf_T5_generate: torch._dynamo.exc.Unsupported: call_function deepcopy in skip_files /opt/conda/envs/py_3.10/lib/python3.10/copy.py
- [ ] #109895
- [ ] pyhpc_isoneutral_mixing: Found following user inputs located at [16, 17, 18, 19, 20, 21, 22] are mutated. This is currently banned in the aot_export workflow.
- [ ] soft_actor_critic: ERROR: mismatch between eager output (<class 'torchbenchmark.models.soft_actor_critic.nets.SquashedNormal'>) and traced output (list of tensors)
- [ ] timm_efficientdet: torch._subclasses.fake_tensor.UnsupportedOperatorException: torchvision.nms.default
- [ ] #105532
- [ ] #105531
- **Inductor problems:**
- [x] ~~clip, hf_BigBird: #109655~~
- [ ] #111227
- [ ] #110304
- [ ] #110089
 - **DATA DEPENDENT (will be skipped; see the `cond` rewrite sketch after this list):**
- [ ] (HF) AllenaiLongformerBase: Control flow on data-dependent [here](https://github.com/huggingface/transformers/blob/0a55d9f7376f72ad3ff296d4249840021b03bcc4/src/transformers/models/longformer/modeling_longformer.py#L601)
- [ ] cm3leon_generate: ERROR:common:Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow
- [ ] detectron2_fcos_r_50_fpn: data-dependent error occurring on https://fburl.com/code/7x2ztpni (probably needs a constraint)
- [ ] fastNLP_Bert: ERROR:common:Consider annotating your code using constrain_as_*(). It appears that you're trying to get a value out of symbolic int/float whose value is data-dependent (and thus we do not know the true value.) The expression we were trying to evaluate is i0 + 2 > 512 (unhinted: i0 + 2 > 512). Scroll up to see where each of these data-dependent accesses originally occurred.
- [ ] hf_Longformer: torch._dynamo.exc.UserError: Consider annotating your code using constrain_as_*(). It appears that you're trying to get a value out of symbolic int/float whose value is data-dependent (and thus we do not know the true value.) The expression we were trying to evaluate is Eq(i0, 1) (unhinted: Eq(i0, 1)). Scroll up to see where each of these data-dependent accesses originally occurred.
- [ ] hf_Reformer: torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow
- [ ] opacus_cifar10: ERROR:common:Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow
   - [ ] ERROR:common:Tried to use data-dependent value in the subsequent computation. This can happen when we encounter unbounded dynamic value that is unknown during tracing time.You will need to explicitly give hint to the compiler. Please take a look at constrain_as_value OR constrain_as_size APIs
- [ ] speech_transformer: ERROR:common:Dynamic slicing on data-dependent value is not supported
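Several of the data-dependent items above (and the 46 `Dynamic control flow is not supported` errors earlier in the tally) point at `functorch.experimental.control_flow.cond` as the suggested rewrite. As a reference, here is a minimal, hypothetical sketch of that rewrite; the `Gate` module and its branch functions are made up for illustration and are not taken from any of the failing models:

```python
import torch
from functorch.experimental.control_flow import cond

class Gate(torch.nn.Module):
    def forward(self, x):
        # Instead of `if x.sum() > 0: ...`, which is data-dependent Python
        # control flow that export cannot trace, both branches are captured
        # explicitly with cond(). Both branches must return the same structure.
        def true_fn(x):
            return x * 2

        def false_fn(x):
            return x - 1

        return cond(x.sum() > 0, true_fn, false_fn, [x])

# Sanity check in eager mode; exporting the module (e.g. via
# torch.export.export(Gate(), (x,))) is then expected to capture both branches.
x = torch.randn(4)
print(Gate()(x))
```

The `constrain_as_value` / `constrain_as_size` hints mentioned in the other data-dependent errors are a separate API and are not shown in this sketch.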
| 18 |
1,999 | 105,214 |
[DTensor] Dtensor API should report the correct device when GPU is used
|
triaged, module: dtensor
|
### 🚀 The feature, motivation and pitch
The DTensor API is not reporting the correct device ID of the GPU in use.
```python
from torch.testing._internal.common_distributed import spawn_threads_and_init_comms
import torch
import torch.distributed as dist
from torch.distributed._tensor import DTensor, DeviceMesh, Shard, Replicate, distribute_tensor
from torch.distributed.tensor.parallel import (
    PairwiseParallel,
    RowwiseParallel,
    ColwiseParallel,
    parallelize_module,
    make_input_replicate_1d,
    make_output_replicate_1d,
    make_output_shard_1d,
)

WORLD_SIZE = 2

@spawn_threads_and_init_comms(world_size=WORLD_SIZE)
def shard_big_tensor(world_size):
    mesh = DeviceMesh('cuda', list(range(WORLD_SIZE)))
    big_tensor = torch.randn((7, 3, 1024, 2048))
    spec = [Shard(3)]
    dtensor = distribute_tensor(big_tensor, mesh, spec)
    print(f'Rank: {dist.get_rank()}, dtensor global shape: {dtensor.shape}, local shape: {dtensor.to_local().shape}, dtensor.device: {dtensor.device}, dtensor.to_local().device: {dtensor.to_local().device}')

if __name__ == "__main__":
    shard_big_tensor(WORLD_SIZE)
```
LOG
```bash
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 1
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
Rank: 1, dtensor global shape: torch.Size([7, 3, 1024, 2048]), local shape: torch.Size([7, 3, 1024, 1024]), dtensor.device: cuda:0, dtensor.to_local().device: cuda:0
Rank: 0, dtensor global shape: torch.Size([7, 3, 1024, 2048]), local shape: torch.Size([7, 3, 1024, 1024]), dtensor.device: cuda:0, dtensor.to_local().device: cuda:0
```
Why does `dtensor.to_local().device` report `cuda:0` on both ranks? Two GPUs are actually in use, so the devices should be `cuda:0` and `cuda:1` respectively.
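For comparison, here is a minimal sketch of the per-process setup that the report implicitly expects, assuming a `torchrun --nproc_per_node=2` launch instead of the thread-based `spawn_threads_and_init_comms` helper; the script name and launch command are placeholders. With one process per GPU and the device pinned per rank, `to_local().device` should come out as `cuda:0` and `cuda:1` respectively:

```python
# per_process_dtensor.py -- hypothetical repro under torchrun, not part of the original report
import torch
import torch.distributed as dist
from torch.distributed._tensor import DeviceMesh, Shard, distribute_tensor

def main():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)  # pin this process to its own GPU before any DTensor work

    mesh = DeviceMesh("cuda", list(range(dist.get_world_size())))
    big_tensor = torch.randn((7, 3, 1024, 2048))
    dtensor = distribute_tensor(big_tensor, mesh, [Shard(3)])
    print(f"Rank {rank}: to_local().device = {dtensor.to_local().device}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch: torchrun --nproc_per_node=2 per_process_dtensor.py
```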
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
2,000 | 105,213 |
[DTensor] Module parallelized with ColwiseParallel should return a sharded tensor
|
triaged, module: dtensor
|
### 🚀 The feature, motivation and pitch
I do not know why a module parallelized with `ColwiseParallel` returns a replicated tensor. According to the documentation, it should return a sharded tensor.
```python
@spawn_threads_and_init_comms(world_size=WORLD_SIZE)
def demo_colwise_parallel(world_size):
    rank = dist.get_rank()
    mesh = DeviceMesh('cuda', list(range(WORLD_SIZE)))
    spec = [Replicate()]
    model = torch.nn.Linear(10, 32)
    LR = 0.25
    optimizer = torch.optim.SGD(model.parameters(), lr = LR)
    model = parallelize_module(model, mesh, ColwiseParallel())
    for i in range(ITER_TIME):
        inp = torch.randn(20, 10)
        dtensor = distribute_tensor(inp, mesh, spec)
        output = model(dtensor)
        output.sum().backward()
        print(f'Iter {i}, rank: {dist.get_rank()}, ', dtensor.to_local().shape, output.to_local().shape)
        optimizer.step()
```
log
```bash
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 1
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
Iter 0, rank: 1, torch.Size([20, 10]) torch.Size([20, 32])
Iter 0, rank: 0, torch.Size([20, 10]) torch.Size([20, 32])
```
I expect `output.to_local().shape` to be (20, 16) instead of the current (20, 32).
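For reference, a minimal sketch (reusing only the DTensor primitives from the snippet above, under the same hypothetical 2-rank setup with the process group already initialized) of the sharding the report expects: splitting the 32-wide output dimension across 2 ranks leaves a (20, 16) local shard on each rank.

```python
import torch
from torch.distributed._tensor import DeviceMesh, Shard, distribute_tensor

# Hypothetical illustration of the expected column sharding, not of ColwiseParallel internals.
mesh = DeviceMesh("cuda", [0, 1])                            # assumes WORLD_SIZE == 2
full_output = torch.randn(20, 32)
sharded = distribute_tensor(full_output, mesh, [Shard(1)])   # shard the feature dim
print(sharded.to_local().shape)                              # torch.Size([20, 16]) on each rank
```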
### Alternatives
_No response_
### Additional context
_No response_
| 11 |