Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---
1,601 | 106,851 |
RPC all_gather doesn't work with dynamic world size (world_size=None)
|
oncall: distributed, module: rpc
|
### 🐛 Describe the bug
The `rpc.api._all_gather(obj)` function doesn't work when the rpc cluster is initialized with `world_size=None`.
Here is a test code that can reproduce the problem:
```python
import torch.distributed.rpc as rpc
import time
import multiprocessing as mp
import torch
import os
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '11111'
def worker(rank):
    rpc.init_rpc(
        name=f'worker_{rank}',
        rank=rank,
    )
    print(f'rank {rank} initialized')
    time.sleep(1)
    print(rpc.api._all_gather(torch.cuda.device_count()))
    if rank == 0:
        time.sleep(1)
    rpc.shutdown()
    print(f'rank {rank} exited')

if __name__ == '__main__':
    ranks = [0, 1, 2, 3]
    for rank in ranks:
        process = mp.Process(target=worker, args=(rank, ))
        process.start()
```
Output:
```
rank 0 initialized
rank 1 initialized
rank 2 initialized
rank 3 initialized
{'worker_0': 4}
[W tensorpipe_agent.cpp:726] RPC agent for worker_1 encountered error when reading incoming request from worker_0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:726] RPC agent for worker_2 encountered error when reading incoming request from worker_0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:726] RPC agent for worker_3 encountered error when reading incoming request from worker_0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
rank 0 exited
```
`Rank 0` only collects its own device count, while the other ranks throw the `eof` error and don't exit. Specifying the `world_size` as `4` makes the code behave as expected.
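For reference, a minimal sketch of that workaround (illustrative only, assuming a fixed cluster of four workers):
```python
# Workaround sketch: pass an explicit world_size instead of leaving it as None,
# so _all_gather knows how many peers to wait for.
rpc.init_rpc(
    name=f'worker_{rank}',
    rank=rank,
    world_size=4,
)
```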
### Versions
PyTorch version: 1.13.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10
GPU 1: NVIDIA A10
GPU 2: NVIDIA A10
GPU 3: NVIDIA A10
Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 2899.998
BogoMIPS: 5799.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid arch_capabilities
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @pietern @jjlilley @mrzzd
| 0 |
1,602 | 106,846 |
RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)
|
needs reproduction, oncall: jit
|
### 🐛 Describe the bug
pos[:2] += torch_rand_float(-1., 1., (2, 1), device=self.device).squeeze(1)
RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)
### Versions
ubuntu:20.04:
NVIDIA GeForce RTX 4060 Ti
cuda :11.3
- torch==1.10.0+cu113
- torchvision==0.11.1+cu113
- torchaudio==0.10.0+cu113
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,603 | 106,845 |
`1/torch.inf` produces inconsistent results
|
module: numerical-stability, triaged, module: complex, module: type promotion, module: edge cases
|
### 🐛 Describe the bug
```Python
import torch
def hf0(m, n):
    x0 = torch.ones(m,n,dtype=torch.complex128)
    x1 = torch.ones(m,n,dtype=torch.float64)
    x1[0,0] = torch.inf
    print(x0/x1)
hf0(1, 2)
# tensor([[0.+0.j, 1.+0.j]], dtype=torch.complex128)
hf0(2, 2)
# tensor([[nan+nanj, 1.+0.j],
# [1.+0.j, 1.+0.j]], dtype=torch.complex128)
```
The following environments were tested:
| environment | results |
| :-: | :-: |
| ubuntu 22.04 | `[0,1], [[nan,1],[1,1]]` |
| mac m1 silicon | `[0,1], [[0,1],[1,1]]` |
| google colab [link](https://colab.research.google.com/drive/1K30lY7zLvbrQ5Jl5lvNvWpF6cpQPNgcy?usp=sharing) | `[0,1], [[nan,1],[1,1]]` |
### Versions
Collecting environment information...
PyTorch version: 2.0.0.post200
Is debug build: False
CUDA used to build PyTorch: 11.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800H with Radeon Graphics
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4462.5000
CPU min MHz: 1200.0000
BogoMIPS: 6387.93
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.0.0.post200
[pip3] torchvision==0.15.2a0+072ec57
[conda] cudatoolkit 11.8.0 h4ba93d1_12 conda-forge
[conda] libmagma 2.7.1 hc72dce7_3 conda-forge
[conda] libmagma_sparse 2.7.1 hc72dce7_4 conda-forge
[conda] magma 2.7.1 ha770c72_4 conda-forge
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.25.1 py310ha4c1d20_0 conda-forge
[conda] pytorch 2.0.0 cuda112py310he33e0d6_200 conda-forge
[conda] torchvision 0.15.2 cuda112py310h0801bf5_1 conda-forge
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @nairbv
| 4 |
1,604 | 106,837 |
Multiprocess DataLoader doesn't work with sparse tensor as it'll try to access the underlying storage
|
module: sparse, module: dataloader, triaged
|
### 🐛 Describe the bug
# Summary
When using `DataLoader` with multiprocess loading to load a dataset with sparse tensor elements, it tries to access the underlying storage of the tensors, but sparse tensors (COO, CSR, etc.) don't support accessing storage.
I've put the **minimal reproduction sample** in this colab notebook: https://colab.research.google.com/drive/16q_tzyUz5ylZSCcpzhJ52pxVSUpMZX-M#scrollTo=o0KeaWnVz9Hm&uniqifier=1
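Since the notebook is external, here is a minimal sketch of the kind of setup that triggers case 1 (an illustrative reconstruction; `SparseDataset` is a made-up name, not the notebook's code):
```python
import torch
from torch.utils.data import DataLoader, Dataset

class SparseDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # Each element is a small sparse COO tensor.
        return torch.sparse_coo_tensor([[0], [1]], [1.0], (2, 2))

# num_workers > 0 routes batches through shared memory / pickling,
# which needs storage access and fails for sparse tensors.
loader = DataLoader(SparseDataset(), batch_size=4, num_workers=2)
next(iter(loader))  # NotImplementedError: Cannot access storage of SparseTensorImpl
```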
## Case 1: default collation
When using default collate (auto_collate=True [here](https://github.com/pytorch/pytorch/blob/v2.0.1/torch/utils/data/dataloader.py#L366)), collating a sparse tensor attempts to access `elem._typed_storage` [here](https://github.com/pytorch/pytorch/blob/v2.0.1/torch/utils/data/_utils/collate.py#L160), thus hitting this error:
```
NotImplementedError: Caught NotImplementedError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py", line 265, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py", line 119, in collate
return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py", line 160, in collate_tensor_fn
storage = elem._typed_storage()._new_shared(numel, device=elem.device)
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 238, in _typed_storage
untyped_storage = self.untyped_storage()
NotImplementedError: Cannot access storage of SparseTensorImpl
```
## Case 2: Manual collation
Without auto collation (set `batch_size=None` so that the `default_convert` method is used, or provide a `collate_fn`), we get around the `default_collate` issue from case 1. But later, when the worker process feeds the loaded data into the `worker_result_queue`, it again attempts to access the underlying storage, hitting:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/reductions.py", line 152, in reduce_tensor
storage = tensor._typed_storage()
File "/usr/lib/python3.10/multiprocessing/queues.py", line 244, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 238, in _typed_storage
untyped_storage = self.untyped_storage()
NotImplementedError: Cannot access storage of SparseTensorImpl
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/queues.py", line 244, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/reductions.py", line 152, in reduce_tensor
storage = tensor._typed_storage()
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 238, in _typed_storage
untyped_storage = self.untyped_storage()
NotImplementedError: Cannot access storage of SparseTensorImpl
```
So either way, multiprocess loading does not work with sparse tensors.
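One possible workaround (a sketch, not something this report proposes) is to ship only dense components through the worker queue and rebuild the sparse tensors in the main process, reusing the hypothetical `SparseDataset` sketch above:
```python
def sparse_safe_collate(batch):
    # Send indices/values/shape (all dense) across the worker queue instead of
    # the sparse tensors themselves.
    parts = []
    for t in batch:
        t = t.coalesce()
        parts.append((t.indices(), t.values(), t.shape))
    return parts

loader = DataLoader(SparseDataset(), batch_size=4, num_workers=2,
                    collate_fn=sparse_safe_collate)
for parts in loader:
    batch = [torch.sparse_coo_tensor(i, v, s) for i, v, s in parts]
```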
### Versions
```
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.25.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.109+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 3 |
1,605 | 106,833 |
Fix warning in CUDAJitLoops.cuh
|
fb-exported, Stale, release notes: cuda
|
Differential Revision: D48173306
| 3 |
1,606 | 106,828 |
Use expect tests for error inputs
|
triaged, better-engineering, module: testing
|
### 🐛 Describe the bug
If I change an error message, it shouldn't be necessary to manually update every single error input to match the new format. This is exactly what expect tests are designed to handle.
Example of this in practice: https://github.com/pytorch/pytorch/pull/106788
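For illustration, a rough sketch of what an expect-test-style error check could look like (this assumes the `assertExpectedInline` helper from `expecttest` as exposed by PyTorch's `TestCase`; the test name and error string are illustrative):
```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class TestErrorInputs(TestCase):
    def test_mismatched_add(self):
        with self.assertRaises(RuntimeError) as cm:
            torch.add(torch.ones(2), torch.ones(3))
        # Running with EXPECTTEST_ACCEPT=1 rewrites the string literal below when
        # the message changes, instead of hand-editing every error input.
        self.assertExpectedInline(
            str(cm.exception),
            """The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0""",
        )

if __name__ == "__main__":
    run_tests()
```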
### Versions
master
| 1 |
1,607 | 106,826 |
Ablate TORCH_CUDA_ARCH_LIST from torchaudio install
|
Stale, ciflow/trunk, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106826
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| 6 |
1,608 | 106,820 |
Simplify TypeIndex.h
|
fb-exported, Stale
|
Differential Revision: D48164249
| 4 |
1,609 | 106,816 |
Broadcasting semantics notes
|
Stale
|
When a tensor has a dimension of size 0, the corresponding dimension of the broadcast result also has size 0.
Add an example for that.
This is consistent with NumPy:
```
>>> np.add(np.zeros([1, 0]), torch.ones([1, 1, 1]))
tensor([], size=(1, 1, 0), dtype=torch.float64)
```
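A quick sketch of the same check on the PyTorch side (illustrative, assuming the semantics described above):
```python
import torch

# A size-0 dimension broadcasts to size 0 in the result.
print(torch.add(torch.zeros(1, 0), torch.ones(1, 1, 1)).shape)
# torch.Size([1, 1, 0])
```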
Fixes #ISSUE_NUMBER
| 3 |
1,610 | 106,815 |
Please verify 1.14.1 ONNX release candidate on TestPyPI
|
module: onnx, triaged, module: infra
|
### 🚀 The feature, motivation and pitch
Hi ONNX partner,
We have released TestPyPI packages of ONNX 1.14.1: https://test.pypi.org/project/onnx/1.14.1rc1/ (ONNX 1.14.1rc1 is the latest version number for testing now).
Please verify it and let us know about any problems. Thank you for your help!
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
1,611 | 106,813 |
[MPS] Make several ops unranked to avoid graph recompilation
|
triaged, open source, module: mps, release notes: mps, ciflow/mps
|
- Made `bmm`, `cat`, `softmax`, `copy_cast` and `index_select` unranked. This improves the performance of LLMs and other networks, because these graphs would be invariant to shape changes, and therefore, they won't recompile.
- Binary ops: Flatten tensors bigger than 4D to 1D, and reuse existing buffer for in-place operations (for offset 0).
- Cache the `MPSGraphExecutable` in `MPSCachedGraph`. These executables have the typeInference disabled which should help with recompilation issues on MPSGraph.
- Handle transposes in second batch of matrices in `bmm`. This helps with performance in case the second batch (i.e., `other` argument) was transposed (like the case with `QxK.t()` in Transformers).
cc @kulinseth @albanD @malfet @DenisVieriu97 @abhudev
| 3 |
1,612 | 106,802 |
Optimizers should use learning rates passed as tensors directly
|
module: optimizer, triaged, actionable, module: dynamic shapes
|
### 🚀 The feature, motivation and pitch
If you pass a tensor as the learning rate when constructing an optimizer, it works, but internally it does an item() on every step, which breaks graph capture and triggers synchronization warnings if you have them enabled.
It would be nice if the optimizers (maybe just the fused versions?) directly used the tensor, because then you could adjust learning rate every step while using CUDA graphs. I wind up forcing graph rebuilds every 1000 steps or so when I want to dynamically change learning rates.
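A minimal sketch of the intended usage (illustrative; per the description above, the tensor is currently accepted but `.item()`-ed on every step):
```python
import torch

model = torch.nn.Linear(8, 8).cuda()
lr = torch.tensor(1e-3, device="cuda")  # learning rate lives on the GPU
opt = torch.optim.Adam(model.parameters(), lr=lr)

# Desired: mutate the tensor in place between steps, with no .item()/sync and
# no need to re-capture a CUDA graph that contains opt.step().
lr.mul_(0.5)
```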
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @ezyang
| 9 |
1,613 | 106,801 |
Timer benchmark stores only one time value, and therefore has broken mean/median/etc metrics
|
triaged, module: benchmark
|
### 🐛 Describe the bug
Here is a simple script that reproduces the error:
```
import torch
import torchvision
import torch.utils.benchmark as benchmark
if __name__ == '__main__':
    net = torchvision.models.resnet18().cuda()
    data = torch.rand((1, 3, 224, 224), device='cuda')
    net.eval()
    with torch.inference_mode():
        timer = benchmark.Timer(stmt='net(data)', num_threads=1, globals={'net': net, 'data': data})
        measurement = timer.timeit(number=10)
        print(measurement.raw_times, measurement.median, measurement.mean, measurement.iqr)
```
This should display a list of 10 values, with distinct median and mean and a non-zero IQR.
However, I get this:
```
[0.05537769995862618] 0.005537769995862618 0.005537769995862618 0.0
```
This means that only one value is stored, which not only prevents me from doing what I wanted (i.e. getting all the raw values and plotting them on a graph), but also explains the broken behavior of the metrics (identical mean and median, zero IQR, ...).
I looked at the source code in torch/utils/benchmark/utils/timer.py and found:
```
def _timeit(self, number: int) -> float:
    # Even calling a timer in C++ takes ~50 ns, so no real operation should
    # take less than 1 ns. (And this prevents divide by zero errors.)
    return max(self._timer.timeit(number), 1e-9)

def timeit(self, number: int = 1000000) -> common.Measurement:
    """Mirrors the semantics of timeit.Timer.timeit().

    Execute the main statement (`stmt`) `number` times.
    https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit
    """
    with common.set_torch_threads(self._task_spec.num_threads):
        # Warmup
        self._timeit(number=max(int(number // 100), 2))

        return common.Measurement(
            number_per_run=number,
            raw_times=[self._timeit(number=number)],
            task_spec=self._task_spec
        )
```
If I understand correctly, this means that, by definition, only one raw time value will ever be stored.
I don't think this is the intended behavior, so I suggest either fixing the source code or updating the documentation to make the actual behavior clearer.
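As a possible workaround (not part of the original report), `Timer.blocked_autorange` does collect multiple raw measurements, so the statistics come out meaningful:
```python
# Runs the statement in blocks until min_run_time is reached and keeps one raw
# time per block, so median/mean/IQR are computed over many values.
measurement = timer.blocked_autorange(min_run_time=1.0)
print(len(measurement.raw_times), measurement.median, measurement.iqr)
```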
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230726+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Famille
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050
Nvidia driver version: 536.25
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Revision=
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.1.0.dev20230726+cu121
[pip3] torchaudio==2.1.0.dev20230727+cu121
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.16.0.dev20230727+cu121
[conda] Could not collect
| 4 |
1,614 | 106,796 |
DISABLED test_conversions_all_patterns_backend_cutlass_cuda_float16 (__main__.TestSparseSemiStructuredCUDA)
|
module: sparse, triaged, module: flaky-tests, skipped
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_conversions_all_patterns_backend_cutlass_cuda_float16&suite=TestSparseSemiStructuredCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15706109496).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_conversions_all_patterns_backend_cutlass_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_sparse_semi_structured.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 33 |
1,615 | 106,794 |
DISABLED test_linear_inference_mode_False_backend_cutlass_cuda (__main__.TestSparseSemiStructuredCUDA)
|
module: sparse, triaged, module: flaky-tests, skipped
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_linear_inference_mode_False_backend_cutlass_cuda&suite=TestSparseSemiStructuredCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15706109496).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_linear_inference_mode_False_backend_cutlass_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_sparse_semi_structured.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 9 |
1,616 | 106,793 |
DISABLED test_conversions_all_patterns_backend_cutlass_cuda_bfloat16 (__main__.TestSparseSemiStructuredCUDA)
|
module: sparse, triaged, module: flaky-tests, skipped
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_conversions_all_patterns_backend_cutlass_cuda_bfloat16&suite=TestSparseSemiStructuredCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15706109496).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_conversions_all_patterns_backend_cutlass_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_sparse_semi_structured.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 33 |
1,617 | 106,789 |
Fix prod double backward when there are 2+ zeros
|
triaged
|
### 🐛 Describe the bug
When the input contains two or more zeros, prod's gradient does not require grad. This is equivalent to saying that the double backward is 0, i.e. there are no gradients to be computed. However, this is wrong; the double backward should not be zero.
```
import torch
x = torch.tensor([2., 3, 0, 0], requires_grad=True)
y = torch.prod(x)
gx, = torch.autograd.grad(y.sum(), x, create_graph=True)
```
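A short illustrative continuation of the snippet (the comments reflect the reported behavior); mathematically the second derivative is non-zero here, e.g. d gx[2] / d x[3] = 2 * 3 = 6:
```python
print(gx)                # [0., 0., 0., 0.] -- every cofactor contains a zero
print(gx.requires_grad)  # reportedly False, so the double backward is (wrongly) treated as zero
```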
### Versions
main
| 0 |
1,618 | 106,785 |
Binary op support for (B, *, D) NT with (B, 1, 1) dense
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106785
* #106786
Provides a CUDA kernel based on the existing ESUHM kernel for binary op computation between `(B, *, D)` NT and `(B, 1, 1)` dense, broadcasting over both the * and D dims.
This is needed for [image preprocessing in SAM](https://github.com/facebookresearch/segment-anything/blob/6fdee8f2727f4506cfbbe553e23b895e27956588/segment_anything/modeling/sam.py#L167).
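A usage sketch of what this is meant to enable (illustrative, not taken from the PR):
```python
import torch

# (B, *, D) nested tensor with a ragged middle dimension, multiplied by a
# (B, 1, 1) dense tensor, broadcasting over both the ragged and feature dims.
nt = torch.nested.nested_tensor(
    [torch.randn(3, 8), torch.randn(5, 8)], device="cuda")
scale = torch.rand(2, 1, 1, device="cuda")
out = nt * scale
```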
| 1 |
1,619 | 106,784 |
[ux] Support torch.tensor(set([1,2,3]))
|
triaged, needs research, topic: new features, module: python frontend
|
### 🚀 The feature, motivation and pitch
If it's easy to do, it would save a few keystrokes (and maybe the same for `dict_keys` and `dict_values`), and possibly some copies and Python objects on the heap for large inputs:
```python
import torch
torch.tensor(list(set([1,2,3])))
# tensor([1, 2, 3])
torch.tensor(set([1,2,3]))
# RuntimeError: Could not infer dtype of set
# torch.tensor(set([1,2,3]), dtype = torch.int64)
# TypeError: an integer is required (got type set)
# torch.tensor({1 : 'a', 2 : 'b', 3 : 'c'}.keys())
# RuntimeError: Could not infer dtype of dict_keys
# torch.tensor({1 : 'a', 2 : 'b', 3 : 'c'}.keys(), dtype = torch.int64)
# TypeError: an integer is required (got type dict_keys)
```
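For now the list conversion above is the workaround; a sorted copy (a side note, not part of the proposal) additionally gives a deterministic element order, since set iteration order is arbitrary:
```python
d = {1: 'a', 2: 'b', 3: 'c'}
torch.tensor(sorted({1, 2, 3}))  # tensor([1, 2, 3]), deterministic order
torch.tensor(list(d.keys()))     # tensor([1, 2, 3])
```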
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| 4 |
1,620 | 106,780 |
inf and nan are mapped to quant_min in torch.fake_quantize_per_tensor_affine
|
oncall: quantization, triaged
|
### 🐛 Describe the bug
`torch.fake_quantize_per_tensor_affine` quantizes inf, -inf, and nan to quant_min (after dequantization).
This is expected for -inf, but not for inf.
```
import torch
x = torch.tensor([1.0, torch.inf, torch.nan, -torch.inf, 26])
torch.fake_quantize_per_tensor_affine(x, 0.1, 0, -128, 127)
>> tensor([ 1.0000, -12.8000, -12.8000, -12.8000, 12.7000])
```
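For comparison, a hand-rolled reference sketch continuing the snippet above, using the same scale/zero_point/qmin/qmax (an assumption about the expected behavior, where +inf saturates to quant_max):
```python
scale, zero_point, qmin, qmax = 0.1, 0, -128, 127
q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
print((q - zero_point) * scale)
# tensor([ 1.0000, 12.7000,     nan, -12.8000, 12.7000])  -- nan propagates through clamp
```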
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.25.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.109+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 45 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 0 |
1,621 | 106,778 |
`torch::nn::MultiheadAttention`, `F::multi_head_attention_forward`: check embed_dim and num_heads
|
module: cpp, module: error checking, triaged, open source, ciflow/trunk, release notes: cpp, module: multi-headed-attention
|
Fixes #106700
Refers to the [check in the Python API](https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/activation.py#L967-L971).
cc @jbschlosser @malfet
| 9 |
1,622 | 106,772 |
Enable Mypy Checking in torch/_inductor/codegen/triton.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/codegen/triton.py
After Fix:
mypy --follow-imports=skip torch/_inductor/codegen/triton.py
Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,623 | 106,770 |
[Inductor][cpu] torchbench model doctr_det_predictor perf regression
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
**doctr_det_predictor** Perf regression tracked on https://github.com/pytorch/pytorch/issues/93531#issuecomment-1668825530
| | 2023-08-06 nightly | | | | 2023-08-02 nightly | | | | Result Comp | | |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| model | batch_size | speedup | inductor | eager | batch_size | speedup | inductor | eager | speedup ratio | eager ratio | inductor ratio |
|doctr_det_predictor |1 |0.649268 |3.384136689 |2.19721166 |1 |1.182882 |1.861407945 |2.201825953 |0.55 |1 |0.55
2023-08-06 nightly
SW | Nightly commit | Main commit
-- | -- | --
Pytorch|[a8638d6](https://github.com/pytorch/pytorch/commit/a8638d6)|[](https://github.com/pytorch/pytorch/commit/)
Torchbench|/|[770d5cf7](https://github.com/pytorch/benchmark/commit/770d5cf7)
torchaudio|[dc83b38](https://github.com/pytorch/audio/commit/dc83b38)|[](https://github.com/pytorch/audio/commit/)
torchtext|[c11d758](https://github.com/pytorch/text/commit/c11d758)| [](https://github.com/pytorch/text/commit/)
torchvision|[58366ab](https://github.com/pytorch/vision/commit/58366ab)|[](https://github.com/pytorch/vision/commit/)
torchdata|[1d231d1](https://github.com/pytorch/data/commit/1d231d1)|[](https://github.com/pytorch/data/commit/)
dynamo_benchmarks|[f228c8b](https://github.com/pytorch/pytorch/commit/f228c8b)|/
2023-08-02 nightly
SW | Nightly commit | Main commit
-- | -- | --
Pytorch|[c89b169](https://github.com/pytorch/pytorch/commit/c89b169)|[92cac6b](https://github.com/pytorch/pytorch/commit/92cac6b)
Torchbench|/|[770d5cf7](https://github.com/pytorch/benchmark/commit/770d5cf7)
torchaudio|[dc83b38](https://github.com/pytorch/audio/commit/dc83b38)|[66f661d](https://github.com/pytorch/audio/commit/66f661d)
torchtext|[c11d758](https://github.com/pytorch/text/commit/c11d758)| [60bea66](https://github.com/pytorch/text/commit/60bea66)
torchvision|[58366ab](https://github.com/pytorch/vision/commit/58366ab)|[a6dea86](https://github.com/pytorch/vision/commit/a6dea86)
torchdata|[1d231d1](https://github.com/pytorch/data/commit/1d231d1)|[757c032](https://github.com/pytorch/data/commit/757c032)
dynamo_benchmarks|[f228c8b](https://github.com/pytorch/pytorch/commit/f228c8b)|/
### Versions
bash [inductor_single_test.sh](https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh) multiple inference performance torchbench doctr_det_predictor float32 first static default 0
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,624 | 106,764 |
[Feature request] Add new API Tensor.device_as
|
triage review, oncall: jit, feature, module: python frontend
|
### 🚀 The feature, motivation and pitch
This new API could behave like Tensor.type_as:
```python
a1 = torch.zeros(5, device='cuda')
a2 = torch.zeros(5, device='cpu')
a3 = a1.device_as(a2)
assert a3.device == a1.device
```
This is mainly intended for jit.trace'd models, because Tensor.to only records a constant device parameter.
This can cause exceptions when moving the model to other devices.
I am currently using this method to transfer tensors from device A to device B in the traced model:
```python
@torch.jit.script
def device_as(self, other):
    return self.to(device=other.device)
```
### Alternatives
_No response_
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @albanD
| 0 |
1,625 | 106,751 |
[Inductor][amp] clip: expected scalar type BFloat16 but found Float
|
triaged
|
### 🐛 Describe the bug
As in the title. To reproduce:
bash [inductor_single_test.sh](https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh) multiple inference performance torchbench clip amp
Error message:
```
loading model: 0it [00:00, ?it/s]
loading model: 0it [00:01, ?it/s]cpu eval clip
ERROR:common:expected scalar type BFloat16 but found Float
Traceback (most recent call last):
File "/workspace/pytorch/benchmarks/dynamo/common.py", line 1997, in check_accuracy
correct_result = self.run_n_iterations(
File "/workspace/pytorch/benchmarks/dynamo/common.py", line 1882, in run_n_iterations
self.model_iter_fn(mod, inputs, collect_outputs=False)
File "benchmarks/dynamo/torchbench.py", line 437, in forward_pass
return mod(*inputs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torchmultimodal/models/clip/model.py", line 72, in forward
embeddings_a = self.encoder_a(features_a)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torchmultimodal/models/clip/image_encoder.py", line 109, in forward
x = self.encoder(x)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/transformer.py", line 361, in forward
output = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/transformer.py", line 678, in forward
x = x + self._sa_block(self.norm1(x), src_mask, src_key_padding_mask, is_causal=is_causal)
File "/workspace/pytorch/torch/nn/modules/transformer.py", line 689, in _sa_block
x = self.self_attn(x, x, x,
File "/workspace/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/activation.py", line 1182, in forward
return torch._native_multi_head_attention(
RuntimeError: expected scalar type BFloat16 but found Float
eager_1st_run_fail
```
### Versions
torchbench : 770d5cf7
torch : 2.1.0a0+git4734e4d
torchvision : 0.16.0a0+58366ab
torchtext : 0.16.0a0+c11d758
torchaudio : 2.1.0a0+dc83b38
torchdata : 0.7.0a0+1d231d1
dynamo_benchmarks : f228c8b
| 3 |
1,626 | 106,748 |
[FX][ONNX][exporter] Failed to export traced fx graph to onnx model
|
module: onnx, triaged, module: export
|
### 🐛 Describe the bug
I traced and quantized a ResNet50 model (backbone only, with some skip-adds fused) and tried to export it to the ONNX format using `torch.onnx.dynamo_export`, but the export failed. Details below.
Exporting code:
```python
qmodel = xxxx # trace and quantize
tensor_x = torch.randn(1, 3, 224, 224)
onnx_model = dynamo_export(qmodel, tensor_x) # failed!
```
Model graph:
```txt
GraphModule(
(conv1): QuantizedConvReLU2d(3, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.013380219228565693, zero_point=128, padding=(1, 1))
(layer1): Module(
(0): Module(
(conv1): QuantizedConvReLU2d(64, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.005257370416074991, zero_point=128)
(conv2): QuantizedConvReLU2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.008329497650265694, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.010099029168486595, zero_point=128)
(downsample): Module(
(0): QuantizedConv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.005250006914138794, zero_point=128)
)
)
(1): Module(
(conv1): QuantizedConvReLU2d(256, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.005096939858049154, zero_point=128)
(conv2): QuantizedConvReLU2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.004832049366086721, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.00993015430867672, zero_point=128)
)
(2): Module(
(conv1): QuantizedConvReLU2d(256, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.0038794134743511677, zero_point=128)
(conv2): QuantizedConvReLU2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.005039558280259371, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.010828116908669472, zero_point=128)
)
)
(layer2): Module(
(0): Module(
(conv1): QuantizedConvReLU2d(256, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.004017043858766556, zero_point=128)
(conv2): QuantizedConvReLU2d(128, 128, kernel_size=(3, 3), stride=(2, 2), scale=0.004348098766058683, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.005679312162101269, zero_point=128)
(downsample): Module(
(0): QuantizedConv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), scale=0.004687752574682236, zero_point=128)
)
)
(1): Module(
(conv1): QuantizedConvReLU2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.0029946595896035433, zero_point=128)
(conv2): QuantizedConvReLU2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.002591489814221859, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.0066427248530089855, zero_point=128)
)
(2): Module(
(conv1): QuantizedConvReLU2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.003675809595733881, zero_point=128)
(conv2): QuantizedConvReLU2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.004037801176309586, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.0076212105341255665, zero_point=128)
)
(3): Module(
(conv1): QuantizedConvReLU2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.0036621815524995327, zero_point=128)
(conv2): QuantizedConvReLU2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.003685015020892024, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.006834834348410368, zero_point=128)
)
)
(layer3): Module(
(0): Module(
(conv1): QuantizedConvReLU2d(512, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.00595456175506115, zero_point=128)
(conv2): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(2, 2), scale=0.006026321556419134, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=0.007073363289237022, zero_point=128)
(downsample): Module(
(0): QuantizedConv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), scale=0.004362074658274651, zero_point=128)
)
)
(1): Module(
(conv1): QuantizedConvReLU2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.003083687275648117, zero_point=128)
(conv2): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.0031905779615044594, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=0.007137070409953594, zero_point=128)
)
(2): Module(
(conv1): QuantizedConvReLU2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.004007844254374504, zero_point=128)
(conv2): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.0028449269011616707, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=0.00658777728676796, zero_point=128)
)
(3): Module(
(conv1): QuantizedConvReLU2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.00378667120821774, zero_point=128)
(conv2): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.00343075068667531, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=0.006993501912802458, zero_point=128)
)
(4): Module(
(conv1): QuantizedConvReLU2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.0034908351954072714, zero_point=128)
(conv2): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.004998135380446911, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=0.008047958835959435, zero_point=128)
)
(5): Module(
(conv1): QuantizedConvReLU2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.00446851784363389, zero_point=128)
(conv2): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.004318995401263237, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=0.008282049559056759, zero_point=128)
)
)
(layer4): Module(
(0): Module(
(conv1): QuantizedConvReLU2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.006188110448420048, zero_point=128)
(conv2): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(2, 2), scale=0.006452975329011679, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), scale=0.014389239251613617, zero_point=128)
(downsample): Module(
(0): QuantizedConv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), scale=0.005995641928166151, zero_point=128)
)
)
(1): Module(
(conv1): QuantizedConvReLU2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.004341470077633858, zero_point=128)
(conv2): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.0054984972812235355, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), scale=0.02057485468685627, zero_point=128)
)
(2): Module(
(conv1): QuantizedConvReLU2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.003666398348286748, zero_point=128)
(conv2): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.006707869004458189, zero_point=128, padding=(1, 1))
(conv3): QuantizedConvAddReLU2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), scale=0.0357082337141037, zero_point=128)
)
)
)
```
Error message:
```
Traceback (most recent call last):
File "/workspace/torchint/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 595, in dynamo_export
).export()
File "/workspace/torchint/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 446, in export
graph_module = self.options.fx_tracer.generate_fx(
File "/workspace/torchint/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 196, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 948, in export
result_traced = opt_f(*args, **kwargs)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 295, in _fn
return fn(*args, **kwargs)
File "/workspace/torchint/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 153, in wrapped
return output_adapter.apply(model_func(*args, **kwargs))
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 448, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 127, in _fn
return fn(*args, **kwargs)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert
return _compile(
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 430, in _compile
out_code = transform_code_object(code, transform)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
transformations(instructions, code_options)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 415, in transform
tracer.run()
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2026, in run
super().run()
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 708, in run
and self.step()
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 668, in step
getattr(self, inst.opname)(inst)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 390, in wrapper
return inner_fn(self, inst)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1100, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 559, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 610, in call_function
tensor_variable = wrap_fx_proxy(
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1115, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1151, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1313, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1281, in get_fake_value
return wrap_fake_exception(
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 871, in wrap_fake_exception
return fn()
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1282, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1347, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/workspace/torchint/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1334, in run_node
return node.target(*args, **kwargs)
File "/workspace/torchint/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1161, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1410, in dispatch
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1657, in run_fallback_kernel
return tree_map(map_out, r)
File "/workspace/torchint/lib/python3.10/site-packages/torch/utils/_pytree.py", line 323, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/workspace/torchint/lib/python3.10/site-packages/torch/utils/_pytree.py", line 323, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1653, in map_out
return fake_mode.fake_tensor_converter(fake_mode, e)
File "/workspace/torchint/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 341, in __call__
return self.from_real_tensor(
File "/workspace/torchint/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 274, in from_real_tensor
raise UnsupportedFakeTensorException("quantized nyi in meta tensors")
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method quantize_per_tensor of type object at 0x7f3ab3e18160>(*(FakeTensor(..., size=(24, 3, 32, 32)), FakeTensor(..., size=()), FakeTensor(..., size=(), dtype=torch.int64), torch.quint8), **{}):
quantized nyi in meta tensors
```
### Versions
```txt
PyTorch version: 2.1.0.dev20230621+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3600.0000
CPU min MHz: 800.0000
BogoMIPS: 6200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 72 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0.dev20230621+cu117
[pip3] torch-int==0.0.0
[pip3] torchvision==0.15.2
[conda] Could not collect
```
| 6 |
1,627 | 106,746 |
pytorch with ROCM on Windows
|
module: windows, module: rocm, triaged
|
### 🚀 The feature, motivation and pitch
pytorch with ROCM on Windows
Since ROCm now supports Windows, when will PyTorch be available on Windows?
### Alternatives
_No response_
### Additional context
_No response_
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 5 |
1,628 | 106,738 |
[FSDP][WIP] [Do not review] Trace FSDP
|
Stale, release notes: distributed (fsdp), module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106738
* #106906
* #106890
* #106888
* #106886
* #106884
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 3 |
1,629 | 106,735 |
[vision hash update] update the pinned vision hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 5 |
1,630 | 106,733 |
Expose dcp utils
|
open source, Stale
|
Expose DCP's _dedup_tensors utils
| 3 |
1,631 | 106,732 |
Hugging Face safetensor does not work with FakeTensorMode
|
triaged, oncall: pt2, module: fakeTensor
|
### 🐛 Describe the bug
When certain HF models are loaded within Fake mode, the `load_state_dict` API tries to load data using `safetensors`, which fails to determine the tensor shape and crashes
```python
from torch._subclasses.fake_tensor import FakeTensorMode
from transformers import AutoModel
model_name = "bigscience/bloom-560m"
fake_mode = FakeTensorMode()
with fake_mode:
model = AutoModel.from_pretrained(model_name)
```
The error is:
```bash
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /opt/pytorch/repro_bloom.py:12 in <module> โ
โ โ
โ 9 # with torch.onnx.enable_fake_mode() as fake_context: โ
โ 10 fake_mode = FakeTensorMode() โ
โ 11 with fake_mode: โ
โ โฑ 12 โ model = AutoModel.from_pretrained(model_name) โ
โ 13 # outputs = model(**inputs) โ
โ 14 # ( โ
โ 15 # onnx_model, โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:484 in โ
โ from_pretrained โ
โ โ
โ 481 โ โ โ ) โ
โ 482 โ โ elif type(config) in cls._model_mapping.keys(): โ
โ 483 โ โ โ model_class = _get_model_class(config, cls._model_mapping) โ
โ โฑ 484 โ โ โ return model_class.from_pretrained( โ
โ 485 โ โ โ โ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, โ
โ 486 โ โ โ ) โ
โ 487 โ โ raise ValueError( โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py:2604 in โ
โ from_pretrained โ
โ โ
โ 2601 โ โ if from_pt: โ
โ 2602 โ โ โ if not is_sharded and state_dict is None: โ
โ 2603 โ โ โ โ # Time to load the checkpoint โ
โ โฑ 2604 โ โ โ โ state_dict = load_state_dict(resolved_archive_file) โ
โ 2605 โ โ โ โ
โ 2606 โ โ โ # set dtype to instantiate the model under: โ
โ 2607 โ โ โ # 1. If torch_dtype is not None, we use that dtype โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py:461 in โ
โ load_state_dict โ
โ โ
โ 458 โ โ โ raise NotImplementedError( โ
โ 459 โ โ โ โ f"Conversion from a {metadata['format']} safetensors archive to PyTorch โ
โ 460 โ โ โ ) โ
โ โฑ 461 โ โ return safe_load_file(checkpoint_file) โ
โ 462 โ try: โ
โ 463 โ โ return torch.load(checkpoint_file, map_location="cpu") โ
โ 464 โ except Exception as e: โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/safetensors/torch.py:261 in load_file โ
โ โ
โ 258 โ result = {} โ
โ 259 โ with safe_open(filename, framework="pt", device=device) as f: โ
โ 260 โ โ for k in f.keys(): โ
โ โฑ 261 โ โ โ result[k] = f.get_tensor(k) โ
โ 262 โ return result โ
โ 263 โ
โ 264 โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
ValueError: could not determine the shape of object type 'torch.storage.UntypedStorage'
```
Looking at the [torch/csrc/utils/tensor_new.cpp code](https://github.com/pytorch/pytorch/blob/646fa36875821f2bcf4fbfbf669c1f4f9f69700d/torch/csrc/utils/tensor_new.cpp), I found this
```cpp
std::vector<int64_t> compute_sizes(PyObject* seq, ScalarType scalar_type) {
bool is_storage = isStorage(seq);
std::vector<int64_t> sizes;
THPObjectPtr handle;
while (PySequence_Check(seq)) {
auto length = PySequence_Length(seq);
if (length < 0)
throw python_error();
if (is_storage) {
length /= static_cast<int64_t>(elementSize(scalar_type));
}
sizes.push_back(length);
if (sizes.size() > MAX_DIMS) {
throw ValueError("too many dimensions '%s'", Py_TYPE(seq)->tp_name);
}
if (length == 0)
break;
handle = THPObjectPtr(PySequence_GetItem(seq, 0));
if (!handle) {
throw ValueError(
"could not determine the shape of object type '%s'",
Py_TYPE(seq)->tp_name);
}
seq = handle.get();
}
return sizes;
}
```
### Versions
pytorch main branch ()
safetensors 0.3.1
transformers 4.30.0
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 6 |
1,632 | 106,718 |
Add AMD image to the .devcontainer spec
|
triaged, better-engineering
|
# Summary
Add an AMD/ROCm image to the .devcontainer spec. This will aid users seeking to contribute to AMD-based GPU environments. This can likely be adapted from the CUDA container. Research into the required tooling and packages still needs to be done.
| 0 |
1,633 | 106,717 |
Provide .devcontainer PyTorch - MPS environment
|
triaged, better-engineering
|
# Summary
Add an MPS image to the .devcontainer spec. This will aid users seeking to contribute to the MPS backend.
### Blockers
This is likely blocked by the lack of MPS support in Linux-based containers. See: https://github.com/pytorch/pytorch/issues/81224#issuecomment-1499741152.
| 0 |
1,634 | 106,715 |
Add reset_parameters to nn.Module base class
|
Stale, release notes: nn
|
**We need a good solution to the subclass issue https://github.com/pytorch/pytorch/issues/71404#issuecomment-1026011644 before landing probably, I'm working on it!**
[Some wip thoughts on the above](https://colab.research.google.com/drive/1RFZjWpGfBVZka-NU6jmCZrftvq5HdT5u)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106508
* #106506
* __->__ #106715
| 2 |
1,635 | 106,713 |
Dev Container Support for PyTorch
|
triaged, better-engineering
|
# Summary
This serves as the top-level project tracker for making contributions to the PyTorch project easier and more device-agnostic.
The primary tool for doing this is the devcontainer spec, which offers two nice entry points. The first is [codespaces](https://github.com/features/codespaces), GitHub's compute service for running devcontainers; GitHub users get access to a free tier.
The second is comprehensive VS Code support for devcontainers: [VSCode integration](https://code.visualstudio.com/docs/devcontainers/containers)
### Sub Tasks
- [x] Initial devcontainer support for local CPU development: https://github.com/pytorch/pytorch/issues/106714
- [x] Initial devcontainer support for remote codespaces development. See the attached screenshot below for an example.
- [x] Initial devcontainer support for local GPU development: https://github.com/pytorch/pytorch/issues/106714
- [ ] Document process for building locally + codespaces
- [ ] Add to contributing guide
- [ ] Add support for MPS based dev: #106717
- [ ] Add support for AMD based dev: #106718
<img width="1064" alt="Screenshot 2023-08-07 at 11 42 24 AM" src="https://github.com/pytorch/pytorch/assets/32754868/371270c9-2b88-4879-889f-cb9ed5689ec4">
| 0 |
1,636 | 106,711 |
'CUDA out of memory' when using a GPU services for reinforcement learning in Torch rpc tutorial
|
oncall: distributed, module: rpc
|
### 🐛 Describe the bug
I followed this [tutorial](https://pytorch.org/tutorials/intermediate/rpc_tutorial.html) to implement reinforcement learning with RPC on Torch. And I can run the original tutorial well.
After adding the specified **GPU** device for the model as shown in the original tutorial, I encountered a "cuda out of memory" issue.
Currently, I use one trainer process and one observer process.
The trainer process creates the model, and the observer process calls the model's forward using **RPC**.
To simplify reproduction, I removed some of the original code from the tutorial.
https://gist.github.com/lpf6/9cd5bf55489758defc6b9fdbc2dcee62
Compared to the tutorial, the modifications are that the `select_action` and `Agent.__init__` functions add `.to(self.device_id)`. For example, the `select_action` function:
``` python
def select_action(self, ob_id, state):
state: torch.Tensor = torch.from_numpy(state).float().unsqueeze(0).to(self.device_id)
probs = self.policy(state)
m = Categorical(probs)
action = m.sample()
self.saved_log_probs[ob_id].append(m.log_prob(action))
result = action.item()
del action, m, state, probs
return result
```
And I got result
```
/home/lu/PycharmProjects/tetris/venv/bin/python /home/lu/PycharmProjects/tetris/test_error.py
obs_0_for_0 started
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20230807 23:22:56.714407 262881 ProcessGroupNCCL.cpp:665] [Rank 0] ProcessGroupNCCL initialized with following options:
NCCL_ASYNC_ERROR_HANDLING: 0
NCCL_DESYNC_DEBUG: 0
NCCL_BLOCKING_WAIT: 0
TIMEOUT(ms): 1800000
USE_HIGH_PRIORITY_STREAM: 0
I20230807 23:22:56.714471 262980 ProcessGroupNCCL.cpp:842] [Rank 0] NCCL watchdog thread started!
agent_0 started
/home/lu/PycharmProjects/tetris/venv/lib/python3.11/site-packages/gym/utils/passive_env_checker.py:233: DeprecationWarning: `np.bool8` is a deprecated alias for `np.bool_`. (Deprecated NumPy 1.24)
if not isinstance(terminated, (bool, np.bool8)):
episode : 1, mem_used: 146.25Mb
episode : 2, mem_used: 251.88Mb
episode : 3, mem_used: 438.75Mb
episode : 4, mem_used: 682.50Mb
.....
episode : 44, mem_used: 4834.38Mb
At:
/usr/lib/python3.11/site-packages/torch/distributed/rpc/internal.py(234): _handle_exception
')
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/torch/distributed/rpc/internal.py", line 207, in _run_function
result = python_udf.func(*python_udf.args, **python_udf.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/distributed/rpc/rref_proxy.py", line 42, in _invoke_rpc
return _rref_type_cont(rref_fut)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/distributed/rpc/rref_proxy.py", line 31, in _rref_type_cont
return rpc_api(
^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/distributed/rpc/api.py", line 82, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/distributed/rpc/api.py", line 809, in rpc_sync
return fut.wait()
^^^^^^^^^^
RuntimeError: RuntimeError: On WorkerInfo(id=1, name=obs_0_for_0):
RuntimeError('OutOfMemoryError: On WorkerInfo(id=0, name=agent_0):
OutOfMemoryError('CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.75 GiB total capacity; 4.73 GiB already allocated; 10.88 MiB free; 5.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF')
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/torch/distributed/rpc/internal.py", line 207, in _run_function
result = python_udf.func(*python_udf.args, **python_udf.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/distributed/rpc/rref_proxy.py", line 11, in _local_invoke
return getattr(rref.local_value(), func_name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lu/PycharmProjects/tetris/test_error.py", line 71, in select_action
probs = self.policy(state)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lu/PycharmProjects/tetris/test_error.py", line 25, in forward
x = self.affine1(x)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.75 GiB total capacity; 4.73 GiB already allocated; 10.88 MiB free; 5.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Process finished with exit code 1
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 13.1.1 20230714
Clang version: 15.0.7
CMake version: Could not collect
Libc version: glibc-2.37
Python version: 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (64-bit runtime)
Python platform: Linux-6.4.6-1-MANJARO-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 535.86.05
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.2
/usr/lib/libcudnn_adv_infer.so.8.9.2
/usr/lib/libcudnn_adv_train.so.8.9.2
/usr/lib/libcudnn_cnn_infer.so.8.9.2
/usr/lib/libcudnn_cnn_train.so.8.9.2
/usr/lib/libcudnn_ops_infer.so.8.9.2
/usr/lib/libcudnn_ops_train.so.8.9.2
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 47%
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 5001.30
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perf pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts vnmi md_clear flush_l1d
Virtualization: VT-x
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 6 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.0.1
[pip3] torchsummary==1.5.1
[conda] blas 1.0 mkl defaults
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py310h7f8727e_0 defaults
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0 defaults
[conda] mkl_random 1.2.2 py310h00e6091_0 defaults
[conda] numpy 1.23.5 py310hd5efca6_0 defaults
[conda] numpy-base 1.23.5 py310h8e6c178_0 defaults
[conda] numpydoc 1.5.0 py310h06a4308_0 defaults
[conda] pytorch 1.12.1 cpu_py310hb1f1ab4_1 defaults
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @pietern @jjlilley @mrzzd
| 4 |
1,637 | 106,705 |
[WIP] FSDP rate limiter
|
Stale, release notes: distributed (fsdp)
|
Use stream waits to hold all-gather execution instead of CPU synchronization.
The basic implementation relies on ping-pong buffers (for at most two outstanding all-gathers). Each buffer would have the size of the largest FSDP instance.
| 2 |
1,638 | 106,704 |
Dataloader extremely slow on in-memory datasets
|
module: dataloader, triaged, module: data
|
### 🐛 Describe the bug
I am not sure this is a bug but it is a major performance issue and there seems to be no suitable issue category for that.
Torch dataloaders are the natural entry point for many applications, but they apparently are extremely slow when working on in-memory data. Here is a full example.
```python
from torch.utils.data import DataLoader, TensorDataset
import torch
import time
X = torch.ones(20000, 20)
Y = torch.ones(20000, 5)
ds = TensorDataset(X, Y)
dl_torch = DataLoader(ds, batch_size=256, shuffle=False, drop_last=True)
def get_custom_dl(batch_size=256):
i_low = 0
for i_high in range(batch_size, len(ds), batch_size):
yield ds[i_low:i_high]
i_low = i_high
# Evaluate speed of dl_torch
now = time.time()
for _ in range(100):
list(dl_torch)
print(time.time() - now)
# >>> gives 4.5 secs on my system
# Now custom_dl
now = time.time()
for _ in range(100):
list(get_custom_dl())
print(time.time() - now)
# >>> 0.014 secs
```
After some profiling, I found one of the problems to be in `_MapDatasetFetcher`, which calls the slow `data = [self.dataset[idx] for idx in possibly_batched_index]` and then collates it. This can be bypassed by using the reserved `__getitems__` method
which is used in `_MapDatasetFetcher`:
```python
from torch.utils.data import DataLoader, TensorDataset
import torch
import time
X = torch.ones(20000, 20)
Y = torch.ones(20000, 5)
class ExtendedTensorDataset(TensorDataset):
def __getitems__(self, item):
return self.__getitem__(item)
ds_ext = ExtendedTensorDataset(X, Y)
dl_ext = DataLoader(ds_ext, batch_size=256, shuffle=False, collate_fn=lambda x: x)
now = time.time()
for _ in range(100):
list(dl_ext)
print(time.time() - now)
# >>> 0.58 secs
```
This feels very hacky and the performance is still off by a factor of 40 from pure python (much better than the original factor 400 though).
I feel like something here is missing. I searched the torch package for the usage of `__getitems__` but found nothing, is this mechanism being deprecated?
I would be happy to work on a fix in a PR, as I feel that the default performance of `DataLoader` should not be so bad. But maybe I am just missing some way of improving it?
I understand that I can pass a custom sampler to the DataLoader, which would alleviate these issues, but I feel like the default `SequentialSampler` shouldn't be so slow.
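For reference, here is a sketch of that sampler-based workaround (not benchmarked here): hand a `BatchSampler` to the `DataLoader` so that the whole list of batch indices reaches `TensorDataset.__getitem__` in a single call:
```python
from torch.utils.data import BatchSampler, DataLoader, SequentialSampler, TensorDataset
import torch

X, Y = torch.ones(20000, 20), torch.ones(20000, 5)
ds = TensorDataset(X, Y)

# batch_size=None disables auto-batching, so each item yielded by the
# BatchSampler (a list of 256 indices) is passed to ds[...] in one call.
batch_sampler = BatchSampler(SequentialSampler(ds), batch_size=256, drop_last=True)
dl = DataLoader(ds, sampler=batch_sampler, batch_size=None)

for xb, yb in dl:
    pass  # xb has shape (256, 20), produced by a single fancy-indexing op
```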
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-10018-tuxedo-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-1260P
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 3
CPU max MHz: 4700,0000
CPU min MHz: 400,0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 448 KiB (12 instances)
L1i cache: 640 KiB (12 instances)
L2 cache: 9 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==2.0.6
[pip3] torch==2.0.1
[pip3] torchdata==0.6.1
[pip3] torchmetrics==1.0.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 1 |
1,639 | 106,700 |
C++ API `torch::nn::MultiheadAttention` Crashes by division by zero
|
module: crash, module: cpp, triaged, oncall: transformer/mha
|
### 🐛 Describe the bug
`num_heads = 0` leads to crash by division by zero.
Test code:
```c++
#include <stdint.h>
#include <stddef.h>
#include <c10/util/irange.h>
#include <cassert>
#include <torch/torch.h>
namespace F = torch::nn::functional;
using namespace torch::nn;
int main() {
try {
torch::TensorOptions toptions = torch::TensorOptions();
auto moptions = torch::nn::MultiheadAttentionOptions(0,0);
auto m = torch::nn::MultiheadAttention(moptions);
auto result = m->forward(
torch::randn({}, toptions),
torch::randn({}, toptions),
torch::randn({}, toptions));
} catch (std::exception& e) {
return -2;
}
return 0;
}
```
Error Log:
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==700223==ERROR: AddressSanitizer: FPE on unknown address 0x7f0cd209f172 (pc 0x7f0cd209f172 bp 0x7ffdf04ab770 sp 0x7ffdf04aa360 T0)
#0 0x7f0cd209f172 in torch::nn::MultiheadAttentionImpl::reset() /home/sehoon/pytorch/torch/csrc/api/src/nn/modules/activation.cpp:500:34
#1 0x7f0cd209d092 in torch::nn::MultiheadAttentionImpl::MultiheadAttentionImpl(torch::nn::MultiheadAttentionOptions const&) /home/sehoon/pytorch/torch/csrc/api/src/nn/modules/activation.cpp:437:3
#2 0x407115 in torch::nn::ModuleHolder<torch::nn::MultiheadAttentionImpl>::ModuleHolder<torch::nn::MultiheadAttentionOptions&, void>(torch::nn::MultiheadAttentionOptions&) /home/sehoon/pytorch/torch/csrc/api/include/torch/nn/pimpl.h:65:19
#3 0x4057ac in torch::nn::MultiheadAttention::MultiheadAttention<torch::nn::MultiheadAttentionOptions&, void>(torch::nn::MultiheadAttentionOptions&) /home/sehoon/pytorch/torch/csrc/api/include/torch/nn/modules/activation.h:872:1
#4 0x4047ff in main /home/sehoon/pytorch/test/cpp/reproduce/MultiheadAttention.cpp:15:14
#5 0x7f0cb084dd8f in __libc_start_call_main csu/../sysdeps/nptl/libc_start_call_main.h:58:16
#6 0x7f0cb084de3f in __libc_start_main csu/../csu/libc-start.c:392:3
#7 0x404484 in _start (/home/sehoon/pytorch/build/bin/reproduce_MultiheadAttention+0x404484)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /home/sehoon/pytorch/torch/csrc/api/src/nn/modules/activation.cpp:500:34 in torch::nn::MultiheadAttentionImpl::reset()
==700223==ABORTING
```
[Error location](https://github.com/pytorch/pytorch/blob/1cc002621d23749c6742193ef3c1eda1015480c2/torch/csrc/api/src/nn/modules/activation.cpp#L500):
```c++
head_dim = options.embed_dim() / options.num_heads();
```
Because there is no guard that bans `num_heads=0`, it ends up crashing.
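For comparison, the Python frontend with the same invalid options appears to fail with an ordinary, catchable Python exception rather than a fatal FPE (a quick sketch; the exact exception type may differ across versions):
```python
import torch

try:
    # Same invalid options as the C++ repro: embed_dim=0, num_heads=0
    torch.nn.MultiheadAttention(embed_dim=0, num_heads=0)
except (ZeroDivisionError, ValueError) as e:
    print("caught:", type(e).__name__, e)
```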
### Versions
PyTorch version: 2.1.0a0+git416bf4e
Is debug build: True
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 12.0.1 (git@github.com:starlab-unist/llvm-project.git 82ccd79bfce79353c2bb4c1ab258ebfeb536a67d)
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.1.0a0+git416bf4e
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.1.0a0+git416bf4e dev_0 <develop>
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 1 |
1,640 | 106,699 |
torch.jit.script: scripting doesn't work with wraps
|
oncall: jit
|
### 🐛 Describe the bug
Say we have a decorator that does some kind of preprocessing (maybe some checks on the arguments, maybe some kind of normalization etc) and we decide to decorate and then script a function as shown below:
```python
import functools
import torch
def decorator(obj):
@functools.wraps(obj)
def wrapper(inpt):
return obj((inpt).sigmoid())
return wrapper
@decorator
def func(inpt):
return torch.sqrt(inpt)
script_func = torch.jit.script(func)
inpt = torch.ones(1)
print(func(inpt))
print(script_func(inpt))
print(script_func.code)
```
Running this gives:
```
tensor([0.8550])
tensor([1.])
def func(inpt: Tensor) -> Tensor:
return torch.sqrt(inpt)
```
So it seems that scripting jumps straight to `__wrapped__` and ignores the decorator.
If we separate the decorator and the function:
```python
# decorator_file.py
import functools
def decorator(obj):
@functools.wraps(obj)
def wrapper(inpt):
return obj((inpt).sigmoid())
return wrapper
```
```python
# script_file.py
import torch
from decorator_file import decorator
@decorator
def func(inpt):
return torch.sqrt(inpt)
script_func = torch.jit.script(func)
```
We get the following:
```
Traceback (most recent call last):
File "C:\Users\user\Desktop\test-env\script_file.py", line 10, in <module>
script_func = torch.jit.script(func)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Desktop\test-env\env\Lib\site-packages\torch\jit\_script.py", line 1341, in script
fn = torch._C._jit_script_compile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError:
undefined value torch:
File "C:\Users\user\Desktop\test-env\decorator_file.py", line 7
@decorator
def func(inpt):
return torch.sqrt(inpt)
~~~~~ <--- HERE
```
So the decorator and the imports from `script_file.py` are ignored (need to `import torch` in `decorator_file.py`).
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: (x86_64-posix-seh-rev0, Built by MinGW-W64 project) 8.1.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 471.11
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2592
DeviceID=CPU0
Family=198
L2CacheSize=1536
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2592
Name=Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] torch==2.0.1
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,641 | 106,697 |
Enable Mypy Checking in torch/_inductor/fx_passes/pad_mm.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/fx_passes/pad_mm.py
After Fix:
mypy --follow-imports=skip torch/_inductor/fx_passes/pad_mm.py Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
1,642 | 106,692 |
torch.polygamma inconsistent with scipy.special.polygamma for n >= 1
|
triaged, module: numpy, module: special
|
kshitij12345 raised an issue two years ago concerning the fact that PyTorch's implementation of the 1st-order polygamma function incorrectly produces finite values for negative integers. The issue was closed, but the problem still seems to be present in the latest nightlies and the latest release of PyTorch.
> n = 1
>
> ```python
> >>> t = torch.tensor([-1.])
> >>> torch.polygamma(1, t)
> tensor([1.2914e+15])
> >>> scipy.special.polygamma(1, t.numpy())
> array([inf], dtype=float32)
>
> >>> t = torch.tensor([-501.])
> >>> torch.polygamma(1, t)
> tensor([2.0831e+09])
> >>> scipy.special.polygamma(1, t.numpy())
> array([inf], dtype=float32)
>
> >>> t = torch.tensor([-float('inf')])
> >>> torch.polygamma(1, t)
> tensor([nan])
> >>> scipy.special.polygamma(1, t.numpy())
> array([inf], dtype=float32)
> >>>
> ```
>
> n > 1
>
> ```python
> >>> t = torch.tensor([float('inf')])
> >>> torch.polygamma(2, t)
> tensor([nan])
> >>> scipy.special.polygamma(2, t.numpy())
> array([-0.], dtype=float32)
> >>> torch.polygamma(3, t)
> tensor([nan])
> >>> scipy.special.polygamma(3, t.numpy())
> array([0.], dtype=float32)
> >>> torch.polygamma(4, t)
> tensor([nan])
> >>> scipy.special.polygamma(4, t.numpy())
> array([-0.], dtype=float32)
> ```
>
> cc @mruberry @rgommers @kshitij12345
_Originally posted in https://github.com/pytorch/pytorch/issues/55357_
| 0 |
1,643 | 106,691 |
Enable Mypy Checking in torch/_inductor/fx_passes/joint_graph.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/fx_passes/joint_graph.py
After Fix:
mypy --follow-imports=skip torch/_inductor/fx_passes/joint_graph.py Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,644 | 106,690 |
DDP grads not synced when static_graph=True and module output is a dict subclass?
|
oncall: distributed, module: pytree
|
### ๐ Describe the bug
I am using something very similar to [`transformers.utils.ModelOutput`](https://github.com/huggingface/transformers/blob/d533465150532b0c5de167b574e59f64c68b1154/src/transformers/utils/generic.py#L237) for my model outputs, and I noticed that when I use DDP with `static_graph=True`, the grads do not appear to be synchronized after `backward()`, but they are if I use a plain `dict` instead. Is there some intricacy of supported DDP outputs that I am missing?
command:
```
CUDA_VISIBLE_DEVICES=0,1 torchrun \
--nproc_per_node=2 \
--nnodes=1 \
--node_rank=0 \
--rdzv_id=462 \
--rdzv_backend=c10d \
a.py
```
**a.py**:
```python
from typing import Dict
import torch
import torch.distributed as dist
from torch import hub, nn
class ModelOutput(dict):
"""Like `transformers.utils.ModelOutput`.
See: https://github.com/huggingface/transformers/blob/d533465150532b0c5de167b574e59f64c68b1154/src/transformers/utils/generic.py#L237.
"""
class MobileNetBinaryClassifier(nn.Module):
def __init__(self, num_classes=2):
super().__init__()
self.model = hub.load("pytorch/vision:v0.6.0", "mobilenet_v2", pretrained=True, verbose=False)
self.model.classifier[1] = nn.Linear(1280, num_classes)
for p in self.model.parameters():
p.requires_grad = True
def forward(self, x: torch.Tensor) -> Dict[str, torch.Tensor]:
output = self.model(x)
# return dict(output=output) # grads are synced
return ModelOutput(output=output) # grads are not synced?
def setup():
dist.init_process_group(backend="nccl")
def cleanup():
dist.destroy_process_group()
def demo_basic():
setup()
rank = dist.get_rank() if dist.is_initialized() else 0
model = MobileNetBinaryClassifier().to(rank)
ddp_model = nn.parallel.DistributedDataParallel(model, device_ids=[rank], static_graph=True)
optimizer = torch.optim.Adam(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
data = torch.randn((16, 3, 224, 224), device=torch.device(rank))
outputs = ddp_model(data)
outputs["output"].sum().backward()
print(f"rank{rank}: {ddp_model.module.model.classifier[1].weight.grad[0,:5]}")
cleanup()
if __name__ == "__main__":
demo_basic()
```
`return dict(output=output)` (expected):
```
rank0: tensor([ 8.5775, 9.8441, 10.3479, 8.0700, 7.9398], device='cuda:0')
rank1: tensor([ 8.5775, 9.8441, 10.3479, 8.0700, 7.9398], device='cuda:1')
```
`return ModelOutput(output=output)`
```
rank1: tensor([9.3800, 8.2608, 9.2448, 9.0277, 9.6185], device='cuda:1')
rank0: tensor([8.1090, 9.8833, 9.2154, 6.4282, 8.2903], device='cuda:0')
```
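A possible workaround (just a sketch, not verified here, and it relies on the private `torch.utils._pytree` API) would be to register the dict subclass with pytree so that DDP's output traversal treats it like a regular dict instead of a leaf:
```python
import torch.utils._pytree as pytree

# Run once after defining ModelOutput (assumes the class from a.py above).
pytree._register_pytree_node(
    ModelOutput,
    lambda d: (list(d.values()), list(d.keys())),         # flatten: children, context
    lambda values, keys: ModelOutput(zip(keys, values)),  # unflatten
)
```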
### Versions
```console
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.31
Python version: 3.10.6 (main, Oct 24 2022, 16:07:47) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper 3970X 32-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2746.204
CPU max MHz: 3700.0000
CPU min MHz: 2200.0000
BogoMIPS: 7400.62
Virtualization: AMD-V
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 16 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] facenet-pytorch==2.5.2
[pip3] flytekitplugins-kfpytorch==1.6.2
[pip3] mypy==1.4.1
[pip3] mypy-boto3-s3==1.26.0.post1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] onnx2pytorch==0.4.1
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchjpeg==0.9.29
[pip3] torchmetrics==1.0.2
[pip3] torchsnapshot==0.1.0
[pip3] torchvision==0.15.2+cu117
[pip3] triton==2.0.0
[conda] facenet-pytorch 2.5.2 pypi_0 pypi
[conda] flytekitplugins-kfpytorch 1.6.2 pypi_0 pypi
[conda] numpy 1.25.2 pypi_0 pypi
[conda] onnx2pytorch 0.4.1 pypi_0 pypi
[conda] torch 2.0.1+cu117 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchjpeg 0.9.29 pypi_0 pypi
[conda] torchmetrics 1.0.2 pypi_0 pypi
[conda] torchsnapshot 0.1.0 pypi_0 pypi
[conda] torchvision 0.15.2+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @zou3519
| 6 |
1,645 | 106,687 |
add register backend for custom device
|
triaged, open source, Stale, topic: not user facing
|
Fixes #ISSUE_NUMBER
As the title says: add a backend registration function for custom devices, so that we could call functions via torch.backends.foo for a custom device `foo`.
| 3 |
1,646 | 106,668 |
Enabling Transformer fast path for not batch_first (MHA, TE, TEL)
|
enhancement, fb-exported, release notes: nn
|
Summary: The fast path for the `forward()` method in `MultiheadAttention`, `TE`, `TEL` only accepted `batch_first = True`. This diff enables fast path for `batch_first=False` as well.
Differential Revision: D48095703
| 12 |
1,647 | 106,667 |
[docs] Idea collection of examples of custom ops / inline torch extensions
|
module: docs, triaged, module: custom-operators
|
### 🚀 The feature, motivation and pitch
Following up on https://github.com/pytorch/pytorch/issues/68066#issuecomment-1611836317:
<hr>
- a complete example of a custom op (maybe with forward/backward and registration) implemented via load_inline with only CUDA op or both CPU/SSE (how to properly do SSE impls?) and CUDA code - maybe for bilateral filter or batched base64 encoding/decoding or some bit magics like popcnt (https://github.com/pytorch/pytorch/issues/32867 or groupbitsum from https://github.com/Felix-Petersen/difflogic) (see the minimal sketch after this list)
- maybe general discussion of whether `cupy` api/features for this are worth being compatible with? pure nvrtc compilation? pure C compilation / marshalling (of pure pointers + numbers + C strings + ctype structs?)?
- support for launcher-less kernels via `cupy`-like api? (also a good fit for nvrtc) for instance, here https://github.com/pytorch/pytorch/blob/main/test/test_cpp_extensions_jit.py#L338 there's even an unintuitive need for a c++ source for function declaration. it would be nicer to do without and be able to directly call the kernel
- example of calling a C function consuming raw data pointers or DLPack structures. It often is much simpler to have these kinds of interfaces for thin C wrappers (e.g. audio decoding). It would be nice to have this example as inline, as often it's just dozen lines of C++ code in extern C function to call some underlying library like BLAS / cuBLAS / some media encoding-decoding.
- example of using extra (currently unbound) functions cuBLAS / cudnn / blas. Maybe an example here could be some code using `ctypes`? it should be explained how to correctly marshal CUDA pointers.
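To make the first bullet concrete, here is a minimal sketch of the kind of example being asked for (CPU-only, no backward or registration, so it only covers a small slice of that item):
```python
import torch
from torch.utils.cpp_extension import load_inline

cpp_source = """
torch::Tensor add_one(torch::Tensor x) {
    return x + 1;
}
"""

# load_inline prepends <torch/extension.h>, compiles the source, and
# exposes `add_one` as a Python-callable function.
ext = load_inline(name="add_one_ext", cpp_sources=cpp_source, functions=["add_one"])
print(ext.add_one(torch.zeros(3)))  # tensor([1., 1., 1.])
```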
<hr>
More ideas from https://github.com/pytorch/pytorch/pull/105947#issuecomment-1655693163:
- also interesting would be examples on how to register ops currently missing in pytorch, e.g. adding a quantized impl or some missing sparse op, so that dispatcher would use it and let go the exception that some op impl can't be found
- also interesting an example of adding a custom backward impl via a decorator
- also interesting an example of cuda op, but without having to manually add a C++ launcher stub (e.g. so that pytorch would generate the kernel launcher and maybe that the kernel can be compiled by nvrtc - like in CuPy)
- and also the CuPy-like API could be worth a discussion
- also interesting would be an example of adding a pure C-op passing/consuming a dlpack tensor or even pure pointers - useful for interaction with C libraries of image/audio encoding/decoding
<hr>
Trying to distill concrete ideas from the above for improving PyTorch's FFI story:
- Fixing interop issues: https://github.com/pytorch/pytorch/issues/34651 https://github.com/pytorch/pytorch/issues/69491
- CuPy-like features:
- API like [RawKernel](https://docs.cupy.dev/en/stable/reference/generated/cupy.RawKernel.html) [RawModule](https://docs.cupy.dev/en/stable/reference/generated/cupy.RawModule.html)
- `nvrtc`-compilation (supposed to be faster than dealing with full nvcc / working with disk)
- kernel launch functionality from Python directly (like CuPy)
- Examples of interop with pure C functions:
- consuming pure pointers
- consuming DLPack tensors
- Examples of using libopus loop-less API: https://github.com/jlaine/opuslib/blob/master/opuslib/api/decoder.py https://github.com/orion-labs/opuslib
- Examples of interop PyTorch<> C code using ffmpeg libraries
- Examples of interaction of interop of PyTorch tensors with ctypes-functions (e.g. how to properly deal with tensor memory ownership)
- Examples of C++ extensions:
- Examples of interop of PyTorch with opencv via C++ code snippets interacting with libopencv-dev
- Examples of using directly CUDA kernel files from [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/src/fastertransformer/kernels/layernorm_kernels.cu ) or [OneFlow](https://github.com/Oneflow-Inc/oneflow/blob/2d24fe08be1b1bedcc22fb409c5d688924ce89fc/oneflow/user/kernels/layer_norm_gpu_kernel.cu). Somehow getting a compiled kernel without fully installing a python Package gives a nice feeling of experimentation and flexibility :)
cc @svekars @carljparker
| 6 |
1,648 | 106,665 |
Inconsistency between CPU and GPU for `Linear()` layer with input size 0
|
module: nn, triaged, actionable, module: edge cases
|
### 🐛 Describe the bug
On a CPU, the following code:
```python
import torch
layer = torch.nn.Linear(0, 2)
x = torch.randn(3, 0)
print(x.shape)
y = layer(x)
print(y.shape)
print(y)
```
gives the following output:
```
/home/tgebhard/.virtualenvs/venv/lib/python3.10/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
torch.Size([3, 0])
torch.Size([3, 2])
tensor([[0., 0.],
[0., 0.],
[0., 0.]], grad_fn=<AddmmBackward0>)
```
Meanwhile, on a GPU, the equivalent code:
```python
import torch
layer = torch.nn.Linear(0, 2).to("cuda")
x = torch.randn(2, 0, device="cuda")
print(x.shape)
y = layer(x)
print(y.shape)
print(y)
```
produces:
```
torch.Size([3, 0])
/home/tgebhard/.virtualenvs/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py:114: UserWarning: An output with one or more elements was resized since it had shape [3, 2], which does not match the required output shape [2]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:26.)
return F.linear(input, self.weight, self.bias)
torch.Size([2])
tensor([0., 0.], device='cuda:0', grad_fn=<AddmmBackward0>)
```
I will admit that a linear layer with zero input features is a somewhat questionable use case, but as long as PyTorch supports such layers, I feel they should behave consistently?
### Versions
```
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 4-7
Off-line CPU(s) list: 0-3,8-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7662 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2154.2959
CPU min MHz: 1500.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.0.1
[pip3] torchdiffeq==0.2.3
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] numpydoc 1.5.0 py310h06a4308_0
[conda] pytorch 1.12.1 cpu_py310hb1f1ab4_1
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 5 |
1,649 | 106,664 |
[docs] URL and link format proposal to make function page URLs more concise
|
module: docs, triaged, enhancement
|
### 🐛 Describe the bug
These links end up copy-pasted in messengers for chat with colleagues, so making them have less bloat would be nice.
If the links are concise, one can also do quick lookups without going to relatively slower loading https://pytorch.org/docs - e.g. by typing https://pytorch.org/docs/torch.div or https://pytorch.org/docs/div https://docs.pytorch.org/torch.div or https://docs.pytorch.org/div. These pages can also be auto-generated or symlinked for all function names at least for the torch/tensor functions (and such pages can list all functions with the same name e.g. torch.sigmoid, F.sigmoid which may be good by itself https://github.com/pytorch/pytorch/issues/105318 or https://discuss.pytorch.org/t/various-quantized-quantizable-intrinsic-modules-purpose/183562 for LinearReLU)
<hr>
If we go to https://pytorch.org/docs/stable/torch.html, we have links like https://pytorch.org/docs/stable/generated/torch.div.html#torch.div
I'm proposing that instead these function links should be more concise and not include the hash. Going further, there should be symlinks https://pytorch.org/docs/torch.div.html or https://pytorch.org/docs/torch.div or even https://docs.pytorch.org/torch.div (and https://docs.pytorch.org/div)
When doing simple search like `torch.div` we get:
<img width="517" alt="image" src="https://github.com/pytorch/pytorch/assets/1041752/1612511a-0941-43c7-8514-18d763b2946b">
Clicking the first hit will get us to https://pytorch.org/docs/stable/generated/torch.div.html?highlight=torch+div#torch.div
Also, the highlight seems to have no effect now (as opposed to Google's https://pytorch.org/docs/stable/generated/torch.div.html#:~:text=torch.div https://stackoverflow.com/questions/62161819/what-exactly-is-the-text-location-hash-in-an-url). So maybe it is worth dropping the highlight part from the Sphinx search hit URLs too?
### Versions
N/A
cc @svekars @carljparker
| 1 |
1,650 | 106,662 |
Will torch.sparse.mm support multiplying two boolean matrices?
|
module: sparse, triaged, module: boolean tensor
|
### ๐ The feature, motivation and pitch
I have two huge sparse Boolean matrices. I call torch.sparse.mm on them and found that it does not succeed. Will multiplication of bool sparse matrices be supported in the future?
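For reference, a minimal sketch of the failing pattern with small stand-in matrices (the exact error message depends on the build):
```python
import torch

# Small boolean COO matrices standing in for the huge ones.
a = torch.eye(3, dtype=torch.bool).to_sparse()
b = torch.eye(3, dtype=torch.bool).to_sparse()

# Currently this raises an error for boolean inputs instead of returning
# a sparse boolean (or integer) result.
c = torch.sparse.mm(a, b)
```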
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 0 |
1,651 | 106,660 |
Question about garbage collection without GPU sync
|
triaged, module: CUDACachingAllocator
|
### ๐ Describe the bug
I found that in `void garbage_collect_cached_blocks()`
```
size_t gc_threshold = static_cast<size_t>(
CachingAllocatorConfig::garbage_collection_threshold() *
allowed_memory_maximum);
```
If the `torch._C._cuda_setMemoryFraction` function is not called, the `allowed_memory_maximum` value is 0, so the value of `gc_threshold` is also 0. The garbage-collection-without-GPU-sync mechanism therefore does not take effect, and there is no warning about it.
So I think it would be more reasonable to fall back to `device_total` (obtained using `cudaMemGetInfo`) when `allowed_memory_maximum` is 0.
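For illustration, a minimal sketch of the behavior described above (assuming the allocator config is passed via the usual environment variable): the garbage-collection threshold only becomes active once a memory fraction has been set.
```python
# Run with: PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.8 python script.py
import torch

# Without this call, allowed_memory_maximum stays 0, so gc_threshold is
# effectively 0 and the no-sync garbage collection never triggers.
torch.cuda.set_per_process_memory_fraction(0.9)
```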
### Versions
https://github.com/pytorch/pytorch/blob/dc22b4fdb1b4af1f0b7b77a78b30237d337d7109/c10/cuda/CUDACachingAllocator.cpp#L2577C47-L2577C47
| 1 |
1,652 | 106,659 |
Enable transformer.py fastpath for not batch_first for TE & TEL
|
fb-exported, Stale
|
Summary:
TE & TEL support for not batch_first
Transpose the (variable) seqlen and batch dimensions prior to converting the inputs to a nested tensor, and undo the transformation afterwards; the same is done in the TE layer.
Test Plan: sandcastle
Differential Revision: D47989561
| 5 |
1,653 | 106,655 |
Add functional collective all_to_all_single and support it in Inductor
|
fb-exported, module: inductor, module: dynamo, ciflow/inductor, release notes: AO frontend
|
Summary: `all_to_all_single` is being used by all the important All2All workloads (e.g. alltoall_pooled / alltoall_sequence / alltoallv). This PR adds the functional collective `all_to_all_single` and supports it in Inductor, to unlock our exploration of internal workloads.
Differential Revision: D48013416
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 24 |
1,654 | 106,649 |
Dynamo graph break on triplet_margin_with_distance_loss
|
triaged, module: dynamo
|
### ๐ Describe the bug
python test/inductor/test_torchinductor_opinfo.py -k test_comprehensive_nn_functional_triplet_margin_with_distance_loss_cuda_float32
> File "/scratch/eellison/work/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/scratch/eellison/work/pytorch/torch/_dynamo/symbolic_convert.py", line 1156, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/scratch/eellison/work/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/scratch/eellison/work/pytorch/torch/_dynamo/variables/torch.py", line 723, in call_function
*proxy_args_kwargs(args, kwargs),
File "/scratch/eellison/work/pytorch/torch/_dynamo/utils.py", line 504, in proxy_args_kwargs
f"call_function args: {typestr(*args)} {typestr(*list(kwargs.values()))}"
File "/scratch/eellison/work/pytorch/torch/_dynamo/exc.py", line 143, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_function args: TensorVariable() TensorVariable() TensorVariable() ConstantVariable(float) NNModuleVariable()
related: https://github.com/pytorch/pytorch/issues/105534
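A hedged guess at a standalone repro, based on the `NNModuleVariable()` in the traceback above (the actual OpInfo sample may pass the distance function differently):
```python
import torch
import torch.nn.functional as F

@torch.compile(backend="eager", fullgraph=True)
def fn(anchor, positive, negative):
    # Passing an nn.Module as distance_function is the suspected trigger.
    return F.triplet_margin_with_distance_loss(
        anchor, positive, negative, distance_function=torch.nn.PairwiseDistance()
    )

fn(torch.randn(4, 8), torch.randn(4, 8), torch.randn(4, 8))
```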
### Versions
master
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov @peterbell10 @ngimel @yf225 @kadeng @muchulee8
| 1 |
1,655 | 106,638 |
add is_complex op for ShardedTensor #93886
|
Stale, release notes: distributed (sharded)
|
Summary: ATT
Test Plan: unit tests
| 4 |
1,656 | 106,637 |
Using retain_graph in backward() with FSDP
|
oncall: distributed, triaged, module: fsdp
|
### ๐ Describe the bug
We need to support a use case where for the same model output, we need to compute loss (and corresponding gradients) twice, each time with a different set of target labels. To do the above in one forward pass, we do not want the `backward()` to clean up the computation graph and hence we use `backward(retain_graph=True)`.
**Without FSDP**
When the model is not wrapped in FSDP, we can make successive calls `loss1.backward(retain_graph=True), loss2.backward()` and they are executed successfully.
```
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.Linear(2, 2, bias=False)
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
return self.softmax(self.layer(x))
loss_fn = nn.CrossEntropyLoss()
model = Model()
optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
dummy_data = torch.ones((2, 2))
out = model(dummy_data)
target1 = torch.zeros(2, dtype=torch.long)
optimizer.zero_grad()
loss1 = loss_fn(out, target1)
loss1.backward(retain_graph=True)
optimizer.zero_grad()
target2 = torch.ones(2, dtype=torch.long)
loss2 = loss_fn(out, target2)
loss2.backward()
```
**With FSDP (Script to reproduce error)**
However, when the model is wrapped using FSDP, multiple calls to `backward()` with the retain_graph option lead to an error. The code to reproduce the error is given below:
```
import functools
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy, StateDictType
from torch.distributed.fsdp.wrap import enable_wrap, transformer_auto_wrap_policy, wrap
local_rank = int(os.getenv("LOCAL_RANK"))
torch.manual_seed(42 + local_rank)
torch.cuda.manual_seed_all(42 + local_rank)
mp_policy = MixedPrecision(
param_dtype=torch.bfloat16,
reduce_dtype=torch.bfloat16,
buffer_dtype=torch.bfloat16,
)
wrapping_policy = functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={})
model_sharding_strategy = ShardingStrategy.FULL_SHARD
torch.cuda.set_device(int(os.getenv("RANK")))
dist.init_process_group("nccl")
v = 2048
d = 1024
model = nn.Sequential(nn.Embedding(v, d), nn.Linear(d, v))
model = FSDP(
model,
auto_wrap_policy=wrapping_policy,
mixed_precision=mp_policy,
sharding_strategy=model_sharding_strategy,
device_id=local_rank,
limit_all_gathers=True,
use_orig_params=True,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0001)
criterion = nn.CrossEntropyLoss()
# TRAINING LOOP
for i in range(10):
inp = torch.randint(v, (8, 256), device=local_rank)
label = inp.add(1) % v
print("Input and labels created")
optimizer.zero_grad()
pred = model(inp)
loss = criterion(pred.view(-1, pred.size(-1)), label.view(-1))
print("Loss computed. Calling backward() with retain_graph..")
loss.backward(retain_graph=True)
print("Gradient computed once. Called backward() again without retain_graph..")
loss.backward()
print("Gradient computed twice")
optimizer.step()
print("Update step complete")
if local_rank == 0 and (i + 1) % 10 == 0:
print(f"Step {i+1}, loss {loss.item()}")
```
The relevant error log is given below:
```
Input and labels created
NCCL version 2.17.1+cuda11.8
Loss computed. Calling backward() with retain_graph..
Gradient computed once. Called backward() again without retain_graph..
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/foundation-model-stack/nlp/scripts/pretraining/repro.py", line 59, in <module>
loss.backward()
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 491, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 690, in _pre_backward_hook
if state._needs_pre_backward_unshard[_handles_key]:
KeyError: (<torch.distributed.fsdp.flat_param.FlatParamHandle object at 0x7f8889313910>,)
[2023-08-04 20:41:22,975] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 135) of binary: /opt/conda/bin/python
Traceback (most recent call last):
File "/opt/conda/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in main
run(args)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 788, in run
elastic_launch(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
```
### Versions
Pytorch nightly image from 07-17-2023.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 0 |
1,657 | 106,635 |
Hackable distributed filesystem reader and writer
|
triaged, open source
|
I propose some changes so that the `FileSystemReader` and `FileSystemWriter` can be used on other file systems. The user only needs to provide `path` as a subclass of `Path` that overrides the necessary interfaces.
For example, one can utilize `tf.io.gfile` to implement an interface to save to or load from HDFS. The following code snippet shows a working implementation.
```python
from pathlib import Path
import tensorflow as tf
class GFileWrapper(tf.io.gfile.GFile):
def __init__(self, path, mode="r") -> None:
super().__init__(path, mode)
def write(self, data):
return super().write(bytes(data))
# a not quite efficient readinto, but it works
def readinto(self, buffer):
# read up to buffer's length
data = self.read(len(buffer))
length = len(data)
buffer[:length] = data
return length
class HdfsPath(type(Path())):
def __new__(cls, *pathsegments):
return super().__new__(cls, *pathsegments)
@staticmethod
def _fix_path(path):
path = str(path)
if path.startswith("hdfs:/") and not path.startswith("hdfs://"):
path = path.replace("hdfs:/", "hdfs://")
return path
def open(self, mode="r", *args, **kwargs):
return GFileWrapper(HdfsPath._fix_path(self), mode=mode)
def mkdir(self, **kwargs) -> None:
return tf.io.gfile.makedirs(HdfsPath._fix_path(self))
def rename(self, target):
return tf.io.gfile.rename(HdfsPath._fix_path(self), HdfsPath._fix_path(target))
```
```python
writer = FileSystemWriter(HdfsPath("hdfs://..."), sync_files=False)
reader = FileSystemReader(HdfsPath("hdfs://..."))
```
| 4 |
1,658 | 106,634 |
Confusing error message for DataLoader with num_workers=0 and non-zero timeout
|
module: dataloader, triaged
|
### ๐ Describe the bug
It looks like setting a timeout on the DataLoader is illegal with num_workers=0, but the documentation doesn't mention it and the error message is very confusing. I would expect the timeout to be ignored under those circumstances, but at the very least this deserves a clear error message, since the current one reads as if a timeout occurred rather than as if an illegal value for it was passed.
Minimal reproducing example
```python
from torch.utils.data import DataLoader
train_dataloader_full_tracks = DataLoader(
[], num_workers=0, timeout=1
)
iter(train_dataloader_full_tracks)
```
Output
```
Traceback (most recent call last):
File "dataloader_problem.py", line 6, in <module>
iter(train_dataloader_full_tracks)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 442, in __iter__
return self._get_iterator()
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 385, in _get_iterator
return _SingleProcessDataLoaderIter(self)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 664, in __init__
assert self._timeout == 0
AssertionError
```
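For illustration, a sketch of the kind of explicit check that would make the failure clearer (hypothetical helper, not the actual DataLoader code):
```python
def check_dataloader_args(num_workers: int, timeout: float) -> None:
    # Sketch of a clearer validation than the bare `assert self._timeout == 0`.
    if num_workers == 0 and timeout != 0:
        raise ValueError(
            f"timeout={timeout} requires num_workers > 0; "
            "with num_workers=0 there are no worker processes to time out"
        )

check_dataloader_args(num_workers=0, timeout=1)  # raises ValueError with a clear message
```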
### Versions
I use 2.0.0, but I see that the main branch has the same issue as of today.
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 0 |
1,659 | 106,633 |
Refcount problem for torch.distributed.Store objects defined in Python
|
oncall: distributed, module: c10d
|
### ๐ Describe the bug
Consider the following Python code, call it `store_bug.py` (for a fully working version, see [here](https://gist.github.com/heiner/7ed5802eb9b3218a418262d542ecb827#file-store_bug-py)):
```python
import datetime
import sys
import torch
class MyStore(torch.distributed.Store):
# Some store implementation.
def init(rank, world_size, use_workaround=False, *, _cache={}):
store = MyStore()
torch.distributed.init_process_group(
backend="gloo",
world_size=world_size,
rank=rank,
timeout=datetime.timedelta(seconds=3.0),
store=store,
)
if use_workaround:
_cache["store"] = store
def main():
rank = int(sys.argv[1])
world_size = int(sys.argv[2])
use_workaround = sys.argv[3] if len(sys.argv) > 3 else False
print(f"starting rank {rank}/{world_size} with use_workaround={use_workaround}")
init(rank, world_size, use_workaround)
g = torch.distributed.new_group(timeout=datetime.timedelta(seconds=2.0))
payload = torch.tensor([3 * rank + 1], dtype=torch.int64)
torch.distributed.all_reduce(payload, group=g)
print(f"Rank {rank}, sum is {payload}")
if __name__ == "__main__":
main()
```
Now, run the above code like this:
```sh
rm -rf /tmp/store_bug; for i in {0..1}; do python -u store_bug.py $i 2 & done && wait
```
This will result in
```sh
$ rm -rf /tmp/store_bug; for i in {0..1}; do python -u store_bug.py $i 2 & done && wait
[2] 69038
[3] 69039
starting rank 1/2 with use_workaround=False
starting rank 0/2 with use_workaround=False
Traceback (most recent call last):
File "/Users/heiner/src/projects/pytorch_store_bug/store_bug.py", line 147, in <module>
main()
File "/Users/heiner/src/projects/pytorch_store_bug/store_bug.py", line 139, in main
g = torch.distributed.new_group(timeout=datetime.timedelta(seconds=2.0))
File "/Users/heiner/miniconda3/envs/dev/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 3520, in new_group
pg = _new_process_group_helper(
File "/Users/heiner/miniconda3/envs/dev/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1009, in _new_process_group_helper
backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout)
RuntimeError: fn INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/distributed/c10d/init.cpp":137, please report a bug to PyTorch.
Traceback (most recent call last):
File "/Users/heiner/src/projects/pytorch_store_bug/store_bug.py", line 147, in <module>
main()
File "/Users/heiner/src/projects/pytorch_store_bug/store_bug.py", line 139, in main
g = torch.distributed.new_group(timeout=datetime.timedelta(seconds=2.0))
File "/Users/heiner/miniconda3/envs/dev/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 3520, in new_group
pg = _new_process_group_helper(
File "/Users/heiner/miniconda3/envs/dev/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1009, in _new_process_group_helper
backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout)
RuntimeError: fn INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/distributed/c10d/init.cpp":137, please report a bug to PyTorch.
[3] + exit 1 python -u store_bug.py $i 2
[2] + exit 1 python -u store_bug.py $i 2
```
In contrast, running the code such that `use_workaround` becomes true works:
```sh
$ rm -rf /tmp/store_bug; for i in {0..1}; do python -u store_bug.py $i 2 1 & done && wait
[2] 69067
[3] 69068
starting rank 1/2 with use_workaround=1
starting rank 0/2 with use_workaround=1
Rank 1, sum is tensor([5])
Rank 0, sum is tensor([5])
[3] + done python -u store_bug.py $i 2 1
[2] + done python -u store_bug.py $i 2 1
```
The difference, of course, is that the working version keeps an additional reference to the `store` object, making sure it stays alive.
This is a regression introduced in or around PyTorch 2.0, possibly in https://github.com/pytorch/pytorch/pull/91178. The [existing test](https://github.com/pytorch/pytorch/blob/a01e795a6d03dff853534c4c4a9f3522f3fd717d/test/distributed/test_store.py#L461) for this via `torch.distributed._test_python_store` doesn't catch this as it does not create a new group via the store.
To recreate at home:
```sh
git clone https://gist.github.com/7ed5802eb9b3218a418262d542ecb827.git storebug
cd storebug/
rm -rf /tmp/store_bug; for i in {0..1}; do python -u store_bug.py $i 2 1 & done && wait # Works.
rm -rf /tmp/store_bug; for i in {0..1}; do python -u store_bug.py $i 2 & done && wait # Breaks.
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.24.2
Libc version: N/A
Python version: 3.9.11 (main, Mar 29 2022, 14:04:34) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] flake8-bugbear==22.3.23
[pip3] flake8-unused-arguments==0.0.10
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.9.1
[pip3] pytorch-lightning==1.4.2
[pip3] torch==2.0.1
[pip3] torchaudio==0.13.1
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.6.0
[pip3] torchvision==0.14.1
[conda] numpy 1.23.5 pypi_0 pypi
[conda] open-clip-torch 2.9.1 pypi_0 pypi
[conda] pytorch-lightning 1.4.2 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchaudio 0.13.1 py39_cpu pytorch
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchmetrics 0.6.0 pypi_0 pypi
[conda] torchvision 0.14.1 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 5 |
1,660 | 106,630 |
no_grad() changes output of TransformerDecoder module during evaluation
|
oncall: transformer/mha
|
### ๐ Describe the bug
The TransformerDecoder module gives different outputs at validation time depending on whether gradient calculations are being performed.
Minimal example:
```
import torch
torch.manual_seed(5)
class TestModule(torch.nn.Module):
def __init__(self):
super().__init__()
layer = torch.nn.TransformerDecoderLayer(
d_model=8,
nhead=2,
batch_first=True,
dropout=0
)
self.transformer_decoder = torch.nn.TransformerDecoder(layer, num_layers=1)
self.final = torch.nn.Linear(8, 1)
def forward(self, batch):
tgt, memory = batch
mask = (~torch.triu(torch.ones(tgt.shape[1], tgt.shape[1], dtype=torch.bool)).transpose(0, 1)).type_as(tgt)
preds = self.transformer_decoder(
tgt=tgt,
memory=memory,
tgt_mask=mask
)
return torch.sum(self.final(preds))
model = TestModule()
tgt = torch.normal(0, 1, size=(1, 5, 8))
mem = torch.zeros(1, 5, 8)
model.train()
print(model((tgt, mem)))
model.train()
with torch.no_grad():
print(model((tgt, mem)))
model.eval()
print(model((tgt, mem)))
model.eval()
with torch.no_grad():
print(model((tgt, mem)))
```
**Output:**
> tensor(1.8914, grad_fn=<SumBackward0>)
> tensor(1.8914)
> tensor(1.8914, grad_fn=<SumBackward0>)
> tensor(2.9639)
**Expected output:**
All four outputs should be the same - whether or not the computation graph is stored shouldn't affect the output of the model.
The difference isn't caused by unexpected train vs. test logic somewhere in the TransformerDecoderLayer (e.g. a hidden dropout or batch norm), since the model behaves as expected when run in eval mode but with gradients.
### Versions
**Output of collect_env.py:**
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.25.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.109+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 2 |
1,661 | 106,625 |
Dynamo debug decorator
|
Stale, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106625
Dynamo debug decorator
```
@torch._dynamo.debug
def inner_foo(x):
return x * x
def fn(x, y):
x2 = inner_foo(x)
return x2 * y
x = torch.rand([4, 10])
y = torch.rand([4, 10])
torch._dynamo.optimize("eager")(fn)(x, y)
```
Output:
```
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] INLINING <code object inner_foo at 0x7fd962616e40, file "/scratch/voz/work/pytorch/x.py", line 7>
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] 9 0 LOAD_FAST 0 (x)
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] 2 LOAD_FAST 0 (x)
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] 4 BINARY_MULTIPLY
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] 6 RETURN_VALUE
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG]
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line inner_foo /scratch/voz/work/pytorch/x.py:7 (inline depth: 1)
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] @torch._dynamo.debug
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line inner_foo /scratch/voz/work/pytorch/x.py:9 (inline depth: 1)
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] return x * x
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x []
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x [TensorVariable()]
[2023-08-04 17:48:38,550] torch._dynamo.symbolic_convert: [DEBUG] TRACE BINARY_MULTIPLY None [TensorVariable(), TensorVariable()]
[2023-08-04 17:48:38,555] torch._dynamo.symbolic_convert: [DEBUG] TRACE RETURN_VALUE None [TensorVariable()]
[2023-08-04 17:48:38,555] torch._dynamo.symbolic_convert: [DEBUG] DONE INLINING <code object inner_foo at 0x7fd962616e40, file "/scratch/voz/work/pytorch/x.py", line 7>
```
And drops a breakpoint in.
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 2 |
1,662 | 106,624 |
[feature request] [ux] Frontend methods for fused elementwise affine transform: mul+add+dtype convert + support `integer_tensor.mul_(float_constant)` and `float_tensor.mul(some_constant, out = integer_tensor)` maybe via new args `rounding_mode=...` and `dtype=...` + maybe support OpenCV-style saturated dtype conversions (e.g. `clamp_` before conversion)
|
triaged, module: type promotion, module: python frontend
|
### ๐ The feature, motivation and pitch
This should be well-compiled as it's elementwise, but would be nice to have as a frontend method for colloquial use.
A prototypical example is conversion of float32 image to uint8 image:
```python
import torch
a = torch.rand(10)
b = torch.empty_like(a, dtype = torch.uint8)
f32_to_u8 = torch.mul(a, 255, out = b)
# RuntimeError: result type Float can't be cast to the desired output type Byte
```
Might be that these methods already exist in the form of helpers for quantization, but would be nice to expose them for wider usage.
OpenCV https://docs.opencv.org/4.x/d3/d63/classcv_1_1Mat.html#adf88c60c5b4980e05bb556080916978b has a similar method `convertTo`, which multiplies by a scalar, then adds another scalar and then casts the dtype. OpenCV also provides saturated-cast functionality (to make sure that uint8 doesn't overflow too badly) - this might be useful for PyTorch too, but saturation could be done prior to the actual conversion by `clamp_`, or by also providing a fused `mul_add_clamp(..., dtype=...)` (`torch.add(a, b, alpha = ...)` exists but also doesn't support the needed casting).
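For illustration, roughly what such a fused call would replace when composed from today's ops (each step below materializes a separate temporary):
```python
import torch

a = torch.rand(10)
# OpenCV-style convertTo with saturation, spelled out with current ops:
b = a.mul(255).add(0.5).clamp_(0, 255).to(torch.uint8)
```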
Related: `addcmul` generalization discussion: https://github.com/pytorch/pytorch/issues/104849
(although I realize, this might be somwhat niche - also sth similar exists in torchvision: https://pytorch.org/vision/stable/generated/torchvision.transforms.Normalize.html)
Being able to do `int32_tensor.mul_(float_constant)` or `uint8_tensor.mul_(float_constant)` is also useful and related to https://github.com/pytorch/pytorch/issues/54389. It could certainly support rounding, similar to `torch.div(..., rounding_mode = 'trunc')` e.g. by doing int32_tensor.mul_(numerator).div_(denominator, rounding_mode = 'floor')
Currently:
```python
x = torch.tensor([2, 4, 8], dtype = torch.int32)
# although docs say: "None - default behavior. Performs no rounding and, if both input and other are integer types, promotes the inputs to the default scalar type."
x.div_(2) # RuntimeError: result type Float can't be cast to the desired output type Long
x.div_(2, rounding_mode = 'trunc') # works
x //= 2 # works
```
cc @nairbv @mruberry @albanD
| 12 |
1,663 | 106,623 |
Meta implementations of FFT operators often have incorrect strides
|
triaged, module: fft, oncall: pt2
|
## Issue description
`aten._fft_c2r` and `aten._fft_r2c` meta implementations currently return contiguous strides in all cases, which is not consistent with eager.
`aten._fft_c2c` makes a much better effort, but is still incorrect on CUDA when there are more than 3 transform dimensions or on CPU when using the pocketfft implementation instead of mkl-fft.
Ref:
- [c2r](https://github.com/pytorch/pytorch/blob/df8abaaf5f7e19d897aa30896bdbbe416ca2f825/torch/_meta_registrations.py#L279)
- [r2c](https://github.com/pytorch/pytorch/blob/df8abaaf5f7e19d897aa30896bdbbe416ca2f825/torch/_meta_registrations.py#L221)
- [c2c](https://github.com/pytorch/pytorch/blob/df8abaaf5f7e19d897aa30896bdbbe416ca2f825/torch/_meta_registrations.py#L205)
## Code example
This real-to-complex example falls back to eager with `torch.compile` due to `UnsupportedOperatorException`:
```python
import torch
def fn(x):
return torch.fft.rfft(x)
torch.compile(fn, dynamic=True)
```
This complex-to-complex example compiles, but fails at runtime with stride mismatch:
```python
import torch
@torch.compile
def fn(x):
return torch.fft.fftn(x, dim=(1, 2, 3, 4))
fn(torch.randn((5, 5, 5, 5, 5), dtype=torch.complex64, device="cuda"))
# AssertionError: expected size 5==5, stride 1==125 at dim=1
```
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @nkaretnikov
| 1 |
1,664 | 106,622 |
FFT Samples Inputs with More than Three Dimensions
|
module: tests, triaged, module: fft
|
### ๐ The feature, motivation and pitch
This is worth capturing because cuFFT can only transform three dimensions at once, so transforms over four or more dimensions have divergent behavior. See https://github.com/pytorch/pytorch/pull/103616#discussion_r1237295970.
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @peterbell10
| 2 |
1,665 | 106,614 |
Case study of torch.compile / cpp inductor on CPU: min_sum / mul_sum with 1d / matmul-like with static / dynamic shapes
|
triaged, module: custom-operators, oncall: pt2, module: dynamic shapes
|
### ๐ Describe the bug
(I'll add actual benchmarking details and logs and output_code.py in a bit)
I'm doing min_sum and mul_sum in two setups:
1. (D, ) x (D, ) -> scalar
2. (B, N, 1, D) x (B, 1, N, D) -> (B, N, N)
Case (1) is similar to computing some sort of norm of a given input or a distance between two inputs.
Case (2) is matmul-like and is similar to computing distances between all pairs of the batch (mm / pdist / cdist).
The benchmark below is run as `python3 bench_minmul_sum.py --enable-dynamic --avx512 --verbose1`.
Findings and questions regarding dynamic shapes:
1. Dynamic / static shape options are not confined to the torch.compile call (which is super-unintuitive and brittle). When using `--enable-dynamic`, all produced `output_code.py` contain dynamic shapes (meaning dynamic shapes ended up being used not only for `min_sum_compiled_dynamic` but also for `min_sum_compiled`)
2. When not using `--enable-dynamic`, all produced `output_code.py` contain static shapes - ideally this could lead to full loop unrolling for small dims, but currently loop unrolling may only be done by the underlying C++ compiler; loop-unrolling pragmas are currently missing from the generated C++
3. Dynamic shapes `output_code.py` do not record divisibility and always contain an extra tail loop.
4. Ideally, I would like to have static shapes for 1d ops (inner dim is static or there are only a few possible values in my usecase); static shapes for the inner dim of the 2d ops and dynamic shapes for the batch dims (like can be done for torch.onnx.export). Currently parametrizing static or dynamic shapes is unpredictable :( This is not very good :(
5. Why is `#pragma omp simd simdlen(8)` used? Apparently it's correct, but if it's the register size in float32 elements, I would expect it to be `simdlen(16)`, no? Also, I had a hard time finding the resulting assembly and checking whether the compiler did loop unrolling (there was no `#pragma omp unroll`). How can one do that? Could the debug option print the produced assembly too (obtained from `objdump`)? Currently there are also too many logs, and it's hard to get through all of them.
6. Better explanation of the tiling story to users is important - especially for matmul-like ops. These custom matmul-like ops are quite common in kernel methods and similar settings (see pykeops): https://github.com/pytorch/pytorch/issues/59545 https://github.com/pytorch/pytorch/issues/97006 https://github.com/pytorch/pytorch/issues/71386
Findings and questions regarding NaNs:
1. min_sum produced codes have NaN handling
2. It could be good to give an assertion or a hint to the compiler that the inputs will not contain NaNs, so NaN-compliant handling is not needed. This NaN handling might be a perf hit for small 1d vectors.
Findings regarding mul_sum (which is equivalent to matrix product or dot product):
1. The mul_sum pattern is not recognized, and no `gemm` or `dot` call is produced. It seems that no tiling is done despite the fact that a sum-reduction is used.
Findings regarding the benchmarking:
1. On some Linux platforms (e.g. Windows WSLv1) it can be hard to account for CPU throttling. It would be good if PyTorch had some recommendations on how to account for it properly. And on laptops on battery there is often quite severe CPU throttling.
2. config.cpp.simdlen is 512, even though without `ATEN_CPU_CAPABILITY=avx512`, config.show() reports `CPU capability usage: AVX2` (which means 256-bit registers only). It also seems that this 256-bit ATEN_CPU_CAPABILITY is discovered automatically, even though the laptop supports avx512
Misc:
1. NVidia Triton will soon merge more direct and discoverable PyTorch + compile support: https://github.com/triton-inference-server/python_backend/pull/282, they are even considering of making torch.compile enabled as default option: https://github.com/triton-inference-server/python_backend/pull/282#discussion_r1275234208. This means that better telemetry and predictability about torch.compile would soon be more important. E.g. being able to completely save the best benchmarked kernels / selected cudnn algos to some file/database and then providing them as is when deploying to new servers. This is important to force the same algos/impls at every startup (I assume currently the selected cudnn algos might change if at algo benchmarking time, someone at a shared server e.g. occupies the needed benchmarking memory) and giving some indication / hooks when it was not possible.
2. A better compiler debug output visualization report is needed, similar to https://godbolt.org for C++ -> assembly. Maybe some HTML report containing the Python source code so that one can click on a compiled function and have the C++ / CUDA / Triton / assembly shown for inspection?
```python
# bench_minmul_sum.py
import os
import sys
import time
enable_dynamic = '--enable-dynamic' in sys.argv
if '--avx512' in sys.argv:
os.environ['ATEN_CPU_CAPABILITY'] = 'avx512'
if '--verbose1' in sys.argv:
os.environ['TORCH_COMPILE_DEBUG'] = '1'
if '--verbose2' in sys.argv:
os.environ['TORCH_LOGS'] = 'output_code'
if '--simd512' in sys.argv:
    from torch._inductor import config
    config.cpp.simdlen = 512

import torch
from torch._inductor import config
print('config.cpp.simdlen', config.cpp.simdlen)
print([line for line in torch.__config__.show().splitlines() if 'CPU capability' in line])
min_sum = lambda a, b: torch.min(a, b).sum(-1)
mul_sum = lambda a, b: torch.mul(a, b).sum(-1)
min_sum_compiled_dynamic = torch.compile(min_sum, dynamic = enable_dynamic)
mul_sum_compiled_dynamic = torch.compile(mul_sum, dynamic = enable_dynamic)
min_sum_compiled = torch.compile(min_sum)
mul_sum_compiled = torch.compile(mul_sum)
# 192 is 12 float32x16 (512-bit registers) or 24 float32x8 (256-bit registers)
static_shape = (192,)
dynamic_shape = (6, 183, 192)
def benchmark(name, f, a, b, K = 100):
tic = time.time()
for k in range(K):
f(a, b)
print(name, a.shape, b.shape, (time.time() - tic) * 1000, 'ms')
# warmup
for k in range(5):
A = torch.rand(*dynamic_shape, dtype = torch.float32).unsqueeze(2)
B = torch.rand(*dynamic_shape, dtype = torch.float32).unsqueeze(1)
a = torch.rand(*static_shape, dtype = torch.float32)
b = torch.rand(*static_shape, dtype = torch.float32)
if enable_dynamic:
min_sum_compiled_dynamic(A, B)
mul_sum_compiled_dynamic(A, B)
mul_sum_compiled_dynamic(A, B)
min_sum_compiled_dynamic(A, A)
min_sum_compiled(A, B)
mul_sum_compiled(A, B)
min_sum_compiled(A, A)
mul_sum_compiled(A, A)
min_sum(A, B)
mul_sum(A, B)
min_sum(A, A)
mul_sum(A, A)
if enable_dynamic:
min_sum_compiled_dynamic(a, b)
mul_sum_compiled_dynamic(a, b)
min_sum_compiled_dynamic(a, a)
mul_sum_compiled_dynamic(a, a)
min_sum_compiled(a, b)
mul_sum_compiled(a, b)
min_sum_compiled(a, a)
mul_sum_compiled(a, a)
min_sum(a, b)
mul_sum(a, b)
min_sum(a, a)
mul_sum(a, a)
# benchmark
A = torch.rand(*dynamic_shape, dtype = torch.float32).unsqueeze(2)
B = torch.rand(*dynamic_shape, dtype = torch.float32).unsqueeze(1)
a = torch.rand(*static_shape, dtype = torch.float32)
b = torch.rand(*static_shape, dtype = torch.float32)
if enable_dynamic:
benchmark('min_sum_compiled_dynamic_AB', min_sum_compiled_dynamic, A, B)
benchmark('mul_sum_compiled_dynamic_ab', mul_sum_compiled_dynamic, a, b)
benchmark('min_sum_compiled_dynamic_AA', min_sum_compiled_dynamic, A, A)
benchmark('mul_sum_compiled_dynamic_aa', mul_sum_compiled_dynamic, a, a)
benchmark('min_sum_compiled_AB', min_sum_compiled, A, B)
benchmark('mul_sum_compiled_ab', mul_sum_compiled, a, b)
benchmark('min_sum_compiled_AA', min_sum_compiled, A, A)
benchmark('mul_sum_compiled_aa', mul_sum_compiled, a, a)
benchmark('min_sum_AB', min_sum, A, B)
benchmark('mul_sum_ab', mul_sum, a, b)
benchmark('min_sum_AA', min_sum, A, A)
benchmark('mul_sum_aa', mul_sum, a, a)
```
### Versions
3.1.0.dev20230802+cpu
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 17 |
1,666 | 106,613 |
Stopgap patch for avoiding polluting cache
|
Stale, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106613
This is a complete and utter hack.
See https://github.com/pytorch/pytorch/issues/106547 for details.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 2 |
1,667 | 106,608 |
ROCm & Windows Support
|
module: rocm, triaged
|
### ๐ The feature, motivation and pitch
AMD has released ROCm Windows support, as [docs.amd.com](https://docs.amd.com) shows:


Please add PyTorch support for Windows on AMD GPUs!
### Alternatives
_No response_
### Additional context
_No response_
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 5 |
1,668 | 106,606 |
More Performant CachingHostAllocator for Pinned Memory Allocation
|
module: cuda, triaged
|
### ๐ The feature, motivation and pitch
I intend to employ a memory allocator for pinned memory allocation and have come across the `CachingHostAllocator` in PyTorch. Regrettably, the practical memory consumption surpasses what is expected. The allocator follows a power-of-two allocation strategy without memory coalescing, resulting in substantial memory wastage. This inefficiency can lead to increased memory consumption and suboptimal utilization of resources.
The current implementation of the `CachingHostAllocator` in PyTorch for pinned memory allocation seems to exhibit suboptimal performance under certain conditions. As a crucial component in deep learning workloads, efficient memory allocation and management are essential to ensure optimal training and inference performance.
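For context, a minimal sketch of where the caching host allocator is exercised (any request for pinned host memory goes through it; requires a CUDA-enabled build):
```python
import torch

x = torch.empty(1 << 20, pin_memory=True)   # pinned allocation via the caching host allocator
y = torch.randn(1 << 20).pin_memory()       # likewise
# DataLoader(..., pin_memory=True) exercises the same allocator for every batch.
```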
### Alternatives
I suggest the development of a more performant alternative to the existing `CachingHostAllocator` that addresses the performance concerns. This new allocator should focus on improving memory allocation speed, reducing memory fragmentation, and better leveraging modern hardware characteristics.
### Additional context
_No response_
cc @ptrblck
| 0 |
1,669 | 106,604 |
Relu6 not able to process nan values
|
triaged, module: NaNs and Infs
|
### ๐ Describe the bug
```python
import numpy as np
import torch
torch_tensor = torch.tensor(np.full([1], np.nan)).cuda()
print(np.isnan(torch_tensor.detach().cpu().numpy()))
torch_relu = torch.nn.ReLU6().cuda()
print(np.isnan(torch_relu(torch_tensor).detach().cpu().numpy()))
```
Other frameworks such as MindSpore are able to process NaN values here and output zero instead of NaN.
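A possible workaround today is to clear the NaNs before the activation, e.g.:
```python
import torch

x = torch.tensor([float("nan"), -1.0, 7.0])
y = torch.nn.functional.relu6(torch.nan_to_num(x, nan=0.0))  # tensor([0., 0., 6.])
```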
### Versions
PyTorch version: 1.10.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-211-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
Nvidia driver version: 470.182.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 10
On-line CPU(s) list: 0-9
Thread(s) per core: 1
Core(s) per socket: 10
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-9
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==1.10.1+cu111
[pip3] torchaudio==0.10.1+cu111
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.11.2+cu111
[conda] cudatoolkit 11.1.1 h6406543_8 conda-forge
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 1.10.1+cu111 pypi_0 pypi
[conda] torchaudio 0.10.1+cu111 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.11.2+cu111 pypi_0 pypi
| 3 |
1,670 | 106,602 |
onednn ops supported in pytorch
|
triaged, module: intel
|
How can I find out which oneDNN ops are supported in PyTorch?
Which aten functions go to ideep and further to oneDNN?
Is there any documentation I can refer to in order to understand this?
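For reference, one way to see which ops actually reach oneDNN is to enable oneDNN's verbose logging while running a workload (sketch below; setting the `DNNL_VERBOSE=1` environment variable achieves the same):
```python
import torch

with torch.backends.mkldnn.verbose(torch.backends.mkldnn.VERBOSE_ON):
    conv = torch.nn.Conv2d(3, 8, kernel_size=3)
    conv(torch.randn(1, 3, 32, 32))
# Each oneDNN primitive that runs is printed to stdout, which shows which
# aten ops were dispatched to ideep/oneDNN for this workload.
```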
cc @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 4 |
1,671 | 106,601 |
[ONNX] Keep functional ops as functions in dynamo exported onnx
|
module: onnx, triaged
|
### ๐ The feature, motivation and pitch
An `nn.Module` may be represented as a function in ONNX.
I was expecting functional ops to be expressed the same way.
Most functional ops correspond to aten ops one-to-one, with only a few parameter differences.
But there are still some common functional ops that will be expressed as a combination of multiple aten ops, such as
* F.normalize
* F.interpolate
* etc.
Representing them as functions can express the structure of the model more meaningfully and is friendlier to backend optimization.
Otherwise downstream tools need to write rules to combine them back together, like https://github.com/Tencent/ncnn/blob/master/tools/pnnx/src/pass_level2/F_interpolate.cpp
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
1,672 | 106,596 |
[discussion] move-semantics for tensors
|
triaged, module: python frontend
|
### ๐ The feature, motivation and pitch
Sometimes we want to hand a tensor off to a function entirely (so that it can safely do in-place modifications without worrying about disturbing other consumers).
One way it could work is that after `new_tensor_ref = tensor.move()` is called, the previous tensor reference becomes invalid and would throw an exception on any usage.
There could also be some `assert tensor.is_single_owner()` which could ensure that there is only one reference to the tensor.
Of course, there would be ways to circumvent this (e.g. by manually recording data_ptr), but it could be a way to increase expressiveness, give the compiler more hints for compilation, and let the user express clearer ownership semantics and have some guards.
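For illustration, the situation the proposal targets, written with today's semantics (no ownership transfer is expressed anywhere):
```python
import torch

def consume(t: torch.Tensor) -> torch.Tensor:
    # Today the callee cannot know whether it is the sole owner of t,
    # so mutating it in place risks disturbing other consumers.
    return t.add_(1)

x = torch.randn(4)
y = consume(x)
print(x)  # x was silently modified too -- exactly what move semantics would make explicit
```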
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| 10 |
1,673 | 106,585 |
[ROCm] Add summary warning about no write permissions for hipify
|
module: rocm, triaged, open source, ciflow/trunk, rocm
|
Fixes #66573
CC @hongxiayang
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 2 |
1,674 | 106,584 |
Lacking commutativity of `tensor.expand` and `tensor.flatten`
|
triaged, module: viewing and reshaping
|
### ๐ Describe the bug
I guess this is just another facet of https://github.com/pytorch/pytorch/issues/28090
`flatten` could realloc only the amount needed to deal with the affected dimensions (and keep the unaffected dimensions expanded).
In this example, `x1` consumes less memory than `x2`. If `flatten` were smarter, `x2` would consume as little memory as `x1`.
```python
import torch
# output from meshgrid
x0 = torch.arange(100).unsqueeze(-1).expand(-1, 200)
print(x0.numel(), x0.storage().size())
#bar.py:4: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage #will be the only storage class. This should only matter to you if you are using storages directly. To #access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
# print(x0.numel(), x0.storage().size())
# 20000 100
x1 = x0.flatten(start_dim = -2).unsqueeze(0).expand(30, -1)
print(x1.numel(), x1.storage().size())
# 600000 20000
x2 = x0.unsqueeze(0).expand(30, -1, -1).flatten(start_dim = -2)
print(x2.numel(), x2.storage().size())
# 600000 600000
```
### Versions
2.1.0 nightly
| 5 |
1,675 | 106,581 |
[inductor] Add ir.Scan and lower aten.cumsum on CUDA
|
open source, Merged, Reverted, ciflow/trunk, module: inductor, ciflow/inductor, ciflow/unstable, release notes: inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110911
* #109132
* __->__ #106581
* #109601
This adds the `ir.Scan` node (currently only supported on CUDA) which re-uses the existing reduction kernel machinery to support different kinds of non-pointwise ops. Just like reductions it supports prologue and epilogue fusions and has both persistent and non-persistent kernel generation.
Currently this doesn't support the equivalent of `Reduction.create_multilayer` and will instead fall back to eager in those cases. This is because splitting into multiple kernel invocations ends up being far slower than cub's single kernel strategy which matches the performance of a copy kernel.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 36 |
1,676 | 106,580 |
[dynamo] Unsupported to trace through Boolean Tensor indexing
|
triaged, oncall: pt2, module: dynamic shapes
|
### ๐ Describe the bug
Even though indexing by a boolean tensor is supported in eager PyTorch, it is not supported by Dynamo.
### Reprod Script
```python
import torch
def reprod_function(t: torch.Tensor, a: torch.Tensor):
t[a] *= 0.2
return t
reprod_function_compile = torch.compile(reprod_function, backend="eager", fullgraph=True)
x = torch.rand(1,2,3)
y = torch.rand(1,2,3).to(torch.bool)
z = reprod_function(x,y)
print(z.dtype)
z = reprod_function_compile(x,y)
print(z.dtype)
```
### Traceback
```python
File "PATH_TO_PYTHON_LIB/site-packages/torch/_dynamo/variables/builtin.py", line 450, in call_function
unimplemented("dynamic Tensor.__getitem__(bool[])")
File "PATH_TO_PYTHON_LIB/site-packages/torch/_dynamo/exc.py", line 71, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: dynamic Tensor.__getitem__(bool[])
from user code:
File "./reprod.py", line 4, in reprod_function
t[a] *= 0.2
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
cc: @cchan @ezhang887 @ezyang
### Versions
torch==2.0.1
python==3.8
### Workaround
```python
t = torch.where(a, t * 0.2, t)
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
1,677 | 106,579 |
Boolean-valued images loaded from disk: when converted to a torch int/float tensor, the True-valued pixels get converted to 255 instead of 1
|
triaged, module: boolean tensor
|
### ๐ Describe the bug
When boolean-valued images loaded from disk are converted to a torch int/float tensor, the True-valued pixels get converted to 255 instead of 1.
**Step-by-step procedure:**
- Load a boolean-valued image from disk using PIL.Image.open.
- Convert image to boolean numpy array
- Convert to torch.tensor
- Convert to torch int / float tensor
- The `True` valued pixels in the original image now have a value of `255`, instead of `1`
**Minimal reproducible code**
```python
import numpy as np
import torch
from PIL import Image

print_img_info = lambda desc, inp_img: print(
f"{desc} | type: {type(inp_img)} dtype: {inp_img.dtype}, max: {inp_img.max()}, min: {inp_img.min()}"
)
img = np.array(Image.open(img_path))
print_img_info("orignal image", img)
# orignal image | type: <class 'numpy.ndarray'> dtype: bool, max: True, min: False
imgT = torch.tensor(img)
print_img_info("torch image", imgT)
# torch image | type: <class 'torch.Tensor'> dtype: torch.bool, max: True, min: False
imgTInt = imgT.to(torch.int)
print_img_info("torch image int", imgTInt)
# torch image int | type: <class 'torch.Tensor'> dtype: torch.int32, max: 255, min: 0
imgTFloat = imgT.to(torch.float)
print_img_info("torch image float", imgTFloat)
# torch image float | type: <class 'torch.Tensor'> dtype: torch.float32, max: 255.0, min: 0.0
```
Note that the output of every print statement is commented in the next line. Notice that the int/float tensor's max pixel value is 255 instead of 1.
PFA the code and sample images: [torch_boolean_conversion_issue_minimal_reproducible_code.zip](https://github.com/pytorch/pytorch/files/12255240/torch_boolean_conversion_issue_minimal_reproducible_code.zip)
### Versions
I tested the attached zip on many of my colleagues' machines. I was able to reproduce the issue on all of them except for one.
Attaching some of those environments here. I have added the PIL library versions for these environments as well.
<details>
<summary>Torch environment 1 (The issue is observed)</summary>
PIL library version: 10.0.0
```sh
PyTorch version: 1.9.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.17 (default, Jul 5 2023, 21:04:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080 Ti
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce GTX 1080 Ti
GPU 3: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 510.108.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-7920X CPU @ 2.90GHz
Stepping: 4
CPU MHz: 1200.191
CPU max MHz: 4400.0000
CPU min MHz: 1200.0000
BogoMIPS: 5799.77
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 16.5 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.19.0
[pip3] torch==1.9.0+cu111
[pip3] torchvision==0.10.0+cu111
[conda] numpy 1.19.0 pypi_0 pypi
[conda] torch 1.9.0+cu111 pypi_0 pypi
[conda] torchvision 0.10.0+cu111 pypi_0 pypi
```
</details>
<details>
<summary> Torch environment 2 (The issue is observed) </summary>
PIL library version: 9.2.0
```sh
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080 Ti
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce GTX 1080 Ti
GPU 3: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 525.125.06
cuDNN version: /usr/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-7920X CPU @ 2.90GHz
Stepping: 4
CPU MHz: 2900.000
CPU max MHz: 4400.0000
CPU min MHz: 1200.0000
BogoMIPS: 5799.77
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 16.5 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.2
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] clip-anytorch 2.5.0 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] dalle2-pytorch 1.11.4 pypi_0 pypi
[conda] ema-pytorch 0.1.2 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] open-clip-torch 2.8.2 pypi_0 pypi
[conda] pytorch 1.12.1 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-scatter 2.1.0 py39_torch_1.12.0_cu113 pyg
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.1.5 pypi_0 pypi
[conda] torch-ema 0.3 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchaudio 0.12.1 py39_cu113 pytorch
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchvision 0.13.1 py39_cu113 pytorch
[conda] vector-quantize-pytorch 0.10.14 pypi_0 pypi
```
</details>
<details>
<summary>Torch environment 3 (<strong>The issue is not observed</strong>)</summary>
PIL library version: 8.2.0
```sh
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.9.16 (main, May 15 2023, 23:46:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.3.58
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080 Ti
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce GTX 1080 Ti
GPU 3: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 470.199.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-7920X CPU @ 2.90GHz
Stepping: 4
CPU MHz: 1200.167
CPU max MHz: 4400.0000
CPU min MHz: 1200.0000
BogoMIPS: 5799.77
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 16.5 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.20.2
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.20.2 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
</details>
| 2 |
1,678 | 106,571 |
Enable more flake8-bugbear lints
|
good first issue, module: lint, triaged, enhancement, better-engineering
|
### ๐ The feature, motivation and pitch
* There are a lot of flake8-bugbear violations reported by flake8 and ruff that need to be fixed. These would be a good starting issue for a new contributor.
These can be found by removing the codes below from the ignore list in the `.flake8` file and the `pyproject.toml` file for ruff (a small illustrative example of the kind of fix involved follows the checklist below).
- [ ] "B007",
- [ ] "B008"
- [ ] "B017",
- [ ] "B018", # Useless expression
- [ ] "B019",
- [ ] "B020",
- [ ] "B023",
- [ ] "B024",
- [ ] "B026",
- [ ] "B028", # No explicit `stacklevel` keyword argument found
- [ ] "B904",
### Alternatives
_No response_
### Additional context
_No response_
| 10 |
1,679 | 106,566 |
DTensor Sharding prop cache stats
|
oncall: distributed, triaged, module: dtensor
|
### ๐ The feature, motivation and pitch
We have built `_CachingPropagator` and we need some stats to monitor cache miss rate so that we can understand potential perf regression in DTensor CPU overhead.
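A minimal sketch of what such stats could look like, assuming a thin wrapper around the existing propagator (all names below are hypothetical, not the actual DTensor API):
```python
class CachingPropagatorWithStats:
    """Hypothetical wrapper that counts cache hits/misses around sharding propagation."""

    def __init__(self, propagator):
        self.propagator = propagator
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def propagate(self, op_key):
        # return the cached sharding decision when available, otherwise compute and store it
        if op_key in self.cache:
            self.hits += 1
            return self.cache[op_key]
        self.misses += 1
        result = self.propagator.propagate(op_key)
        self.cache[op_key] = result
        return result

    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0
```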
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
1,680 | 106,565 |
install cuda version always get cpuonly
|
oncall: releng, triaged
|
### ๐ Describe the bug
I found that when installing PyTorch 1.9.1 or below with CUDA support, the cpuonly package is also installed, which prevents training from using the GPU. Even after uninstalling cpuonly, it still doesn't work.
I have experimented with versions 1.10.1, 1.10.0, 1.9.1, 1.9.0, 1.8.1 and 1.8.0.
### Versions
Collecting environment information...
PyTorch version: 1.8.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.8.16 (default, Jun 12 2023, 18:09:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 11.4.152
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.182.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 9 5950X 16-Core Processor
Stepping: 0
CPU MHz: 2197.976
CPU max MHz: 3400.0000
CPU min MHz: 2200.0000
BogoMIPS: 6799.97
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==1.8.1
[pip3] torchaudio==0.8.0a0+e4e171a
[pip3] torchvision==0.9.1
[conda] blas 1.0 mkl
[conda] cpuonly 1.0 0 pytorch
[conda] cudatoolkit 11.3.1 h9edb442_10 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.24.3 py38h14f4228_0
[conda] numpy-base 1.24.3 py38h31eccc5_0
[conda] pytorch 1.8.1 py3.8_cpu_0 [cpuonly] pytorch
[conda] torchaudio 0.8.1 py38 pytorch
[conda] torchvision 0.9.1 py38_cpu [cpuonly] pytorch
| 1 |
1,681 | 106,563 |
NotImplementedError: Could not run 'aten::multinomial' with arguments from the 'Meta' backend.
|
triaged, module: random
|
### ๐ Describe the bug
The error says:
NotImplementedError: Could not run 'aten::multinomial' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::multinomial' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
Code:
```python
def sample_top_p(probs, p):
    probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)
    probs_sum = torch.cumsum(probs_sort, dim=-1)
    mask = probs_sum - probs_sort > p
    probs_sort[mask] = 0.0
    probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True))
    next_token = torch.multinomial(probs_sort, num_samples=1)
    next_token = torch.gather(probs_idx, -1, next_token)
    return next_token
```
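The failure should be reproducible without the sampling helper whenever the probabilities live on the `meta` device; a minimal sketch:
```python
import torch

probs = torch.rand(4, 8, device="meta")
# Expected to raise NotImplementedError: aten::multinomial has no Meta-backend kernel
next_token = torch.multinomial(probs, num_samples=1)
```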
### Versions
StatusCode : 200
StatusDescription : OK
Content :
# Unlike the rest of the PyTorch this file must be python2 compliant.
# This script outputs relevant system environment info
# Run it with `python collect_env.py`.
import datetime
import locale
impor...
RawContent : HTTP/1.1 200 OK
Connection: keep-alive
Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; sandbox
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
...
Forms : {}
Headers : {[Connection, keep-alive], [Content-Security-Policy, default-src 'none'; style-src 'unsafe-inline'; sandbox],
[Strict-Transport-Security, max-age=31536000], [X-Content-Type-Options, nosniff]...}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 21653
cc @pbelevich
| 1 |
1,682 | 106,560 |
Serialize
|
Stale, release notes: vulkan, module: export
|
Fixes #ISSUE_NUMBER
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 3 |
1,683 | 106,557 |
DISABLED test_cpp_wrapper_cpu (__main__.FreezingCpuTests)
|
triaged, skipped, oncall: pt2
|
Platforms: macos
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_inductor_freezing.py%3A%3AFreezingCpuTests%3A%3Atest_cpp_wrapper_cpu)).
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @albanD
| 1 |
1,684 | 106,556 |
Pytorch: torch.autograd.grad returns NoneType
|
module: autograd, triaged
|
### ๐ Describe the bug
```
import torch
#Captum Attribution
from captum.attr import Saliency
model = torch.hub.load('pytorch/vision:v0.10.0', 'squeezenet1_1', pretrained=True)
model.eval()
sal = Saliency(model)
#X, y is an image and label
original_label = y
test_image = X.reshape([1,3,227,227]).float()
#I need gradient w.r.t. this test_image
test_image.requires_grad = True
test_image.retain_grad()
#Calculate saliency
attribution = sal.attribute(test_image, target=original_label)
attribution = torch.sum(torch.abs(attribution[0]), dim=0)
attribution = 227 * 227 * attribution / torch.sum(attribution)
attribution = attribution.view(-1)
elem1 = torch.argsort(attribution)[-1000:]
elements1 = torch.zeros(227 * 227)
elements1[elem1] = 1
#I need gradient of topK_loss w.r.t. test_image
topK_loss = torch.sum(attribution * elements1)
topK_loss.requires_grad = True
topK_loss.retain_grad()
gradients = -torch.autograd.grad(outputs=topK_loss, inputs=test_image, allow_unused=True)[0]
```
I get this error: `bad operand type for unary -: 'NoneType'`
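The `None` gradient usually means the autograd graph from `topK_loss` back to `test_image` is broken (Captum's `Saliency.attribute` appears to return attributions detached from the input), and setting `requires_grad` on `topK_loss` afterwards does not reconnect it. A minimal sketch of the same failure, independent of Captum:
```python
import torch

x = torch.ones(3, requires_grad=True)
y = (x * 2).detach()                  # detach cuts the graph back to x
z = torch.ones(3, requires_grad=True)
loss = (y * z).sum()                  # loss only depends on z

g = torch.autograd.grad(loss, x, allow_unused=True)[0]
print(g)  # None, so -g fails with "bad operand type for unary -: 'NoneType'"
```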
### Versions
PyTorch version: 2.0.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.120+-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2200.186
BogoMIPS: 4400.37
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 512 KiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 3 |
1,685 | 106,551 |
UFMT utils tensorboard, data, benchmark.
|
Stale, release notes: dataloader
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106551
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| 2 |
1,686 | 106,550 |
[dynamo] teach dynamo about `pytree._broadcast_to_and_flatten`
|
triaged, oncall: pt2, module: dynamo
|
### ๐ Describe the bug
Given that dynamo knows about a few pytree functions, it would be great to support `_broadcast_to_and_flatten` as it will allow us to improve `vmap` support for dynamo
Ref: https://github.com/pytorch/pytorch/pull/101707#discussion_r1280562461
Repro:
```python
import torch
torch._dynamo.config.suppress_errors=False
@torch.compile(backend="eager")
def test():
args = (0, 1, 1, 0)
_, arg_spec = torch.utils._pytree.tree_flatten(args)
# Dynamo doesn't know how to handle this.
torch.utils._pytree._broadcast_to_and_flatten(0, arg_spec)
test()
```
Output
```
File "/home/kshiteej/Pytorch/pytorch_functorch/torch/fx/node.py", line 644, in <genexpr>
t = tuple(map_aggregate(elem, fn) for elem in a)
File "/home/kshiteej/Pytorch/pytorch_functorch/torch/fx/node.py", line 654, in map_aggregate
return fn(a)
File "/home/kshiteej/Pytorch/pytorch_functorch/torch/fx/node.py", line 636, in <lambda>
return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)
File "/home/kshiteej/Pytorch/pytorch_functorch/torch/_dynamo/utils.py", line 1302, in visit
return n.meta["example_value"]
torch._dynamo.exc.InternalTorchDynamoError: example_value
```
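For reference, a sketch of what the call does in eager mode (and therefore what dynamo would have to model): it broadcasts a pytree prefix against a spec and returns the flat list. Output shown as expected, not verified here:
```python
import torch.utils._pytree as pytree

args = (0, 1, 1, 0)
_, spec = pytree.tree_flatten(args)

# A single leaf is broadcast to every leaf position described by the spec.
print(pytree._broadcast_to_and_flatten(0, spec))  # expected: [0, 0, 0, 0]
```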
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 2 |
1,687 | 106,549 |
Can't build PyTorch 1.13.1 with Vulkan support
|
triaged, module: vulkan, ciflow/periodic
|
### ๐ Describe the bug
Attempting to build PyTorch with Vulkan support fails with:
```
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/vulkan/ATen/native/vulkan/glsl.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/libtorch-factory/pytorch/build/aten/src -I/libtorch-factory/pytorch/aten/src -I/libtorch-factory/pytorch/build -I/libtorch-factory/pytorch -I/libtorch-factory/pytorch/cmake/../third_party/benchmark/include -I/libtorch-factory/pytorch/third_party/onnx -I/libtorch-factory/pytorch/build/third_party/onnx -I/libtorch-factory/pytorch/third_party/foxi -I/libtorch-factory/pytorch/build/third_party/foxi -I/libtorch-factory/pytorch/torch/csrc/api -I/libtorch-factory/pytorch/torch/csrc/api/include -I/libtorch-factory/pytorch/caffe2/aten/src/TH -I/libtorch-factory/pytorch/build/caffe2/aten/src/TH -I/libtorch-factory/pytorch/build/caffe2/aten/src -I/libtorch-factory/pytorch/build/caffe2/../aten/src -I/libtorch-factory/pytorch/torch/csrc -I/libtorch-factory/pytorch/third_party/miniz-2.1.0 -I/libtorch-factory/pytorch/third_party/kineto/libkineto/include -I/libtorch-factory/pytorch/third_party/kineto/libkineto/src -I/libtorch-factory/pytorch/build/vulkan -I/libtorch-factory/pytorch/aten/../third_party/VulkanMemoryAllocator -I/libtorch-factory/pytorch/aten/../third_party/catch/single_include -I/libtorch-factory/pytorch/aten/src/ATen/.. -I/libtorch-factory/pytorch/third_party/FXdiv/include -I/libtorch-factory/pytorch/c10/.. -I/libtorch-factory/pytorch/third_party/pthreadpool/include -I/libtorch-factory/pytorch/third_party/cpuinfo/include -I/libtorch-factory/pytorch/third_party/QNNPACK/include -I/libtorch-factory/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/libtorch-factory/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/libtorch-factory/pytorch/third_party/cpuinfo/deps/clog/include -I/libtorch-factory/pytorch/third_party/NNPACK/include -I/libtorch-factory/pytorch/third_party/fbgemm/include -I/libtorch-factory/pytorch/third_party/fbgemm -I/libtorch-factory/pytorch/third_party/fbgemm/third_party/asmjit/src -I/libtorch-factory/pytorch/third_party/ittapi/src/ittnotify -I/libtorch-factory/pytorch/third_party/FP16/include -I/libtorch-factory/pytorch/third_party/tensorpipe -I/libtorch-factory/pytorch/build/third_party/tensorpipe -I/libtorch-factory/pytorch/third_party/tensorpipe/third_party/libnop/include -I/libtorch-factory/pytorch/third_party/fmt/include -I/libtorch-factory/pytorch/build/third_party/ideep/mkl-dnn/third_party/oneDNN/include -I/libtorch-factory/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/src/../include -I/libtorch-factory/pytorch/third_party/flatbuffers/include -isystem /libtorch-factory/pytorch/build/third_party/gloo -isystem /libtorch-factory/pytorch/cmake/../third_party/gloo -isystem /libtorch-factory/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /libtorch-factory/pytorch/cmake/../third_party/googletest/googletest/include -isystem /libtorch-factory/pytorch/third_party/protobuf/src -isystem /libtorch-factory/pytorch/third_party/gemmlowp -isystem /libtorch-factory/pytorch/third_party/neon2sse -isystem /libtorch-factory/pytorch/third_party/XNNPACK/include -isystem /libtorch-factory/pytorch/third_party/ittapi/include -isystem 
/libtorch-factory/pytorch/cmake/../third_party/eigen -isystem /libtorch-factory/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/include -isystem /libtorch-factory/pytorch/third_party/ideep/include -isystem /libtorch-factory/pytorch/third_party/ideep/mkl-dnn/include -isystem /libtorch-factory/pytorch/build/include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN -DUSE_VULKAN_API -DUSE_VULKAN_SHADERC_RUNTIME -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/vulkan/ATen/native/vulkan/glsl.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/vulkan/ATen/native/vulkan/glsl.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/vulkan/ATen/native/vulkan/glsl.cpp.o -c /libtorch-factory/pytorch/build/vulkan/ATen/native/vulkan/glsl.cpp
/libtorch-factory/pytorch/build/vulkan/ATen/native/vulkan/glsl.cpp:1166:1: error: unable to find string literal operator โoperator""fullyโ with โconst char [1326]โ, โlong unsigned intโ arguments
1166 | " // and compute partial sums of texels that are "fully filled"\n"
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
### Versions
PyTorch: 1.13.1
Vulkan: 1.3.231.1
Workflow: https://pytorch.org/tutorials/prototype/vulkan_workflow.html
| 2 |
1,688 | 106,546 |
Potential Issue with Pandas Dataframe
|
needs reproduction, triaged, oncall: pt2
|
### ๐ Describe the bug
When using torch.compile, I am hitting an error in my dataset class, specifically from a `len()` call on a pandas DataFrame. My guess is that the call is being compiled into a boolean test on the DataFrame (checking whether it is empty or not), which pandas does not allow.
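That ValueError is pandas' standard response to a DataFrame being used in a boolean context, which fits the guess above; a minimal sketch of the difference, independent of `torch.compile`:
```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

print(len(df))  # fine: 3
if df:          # raises ValueError: The truth value of a DataFrame is ambiguous. ...
    pass
```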
### Error logs
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A2000 8GB Laptop GPU
Nvidia driver version: 528.79
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12800H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 3
BogoMIPS: 5606.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mc
a cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_t
sc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 p
cid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowpre
fetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erm
s invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip gfn
i vaes vpclmulqdq rdpid fsrm flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer
sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB fillin
g, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchmetrics==1.0.1
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,689 | 106,545 |
`softmax` to handle dimensions comprised of `-inf`
|
module: nn, triaged
|
### ๐ The feature, motivation and pitch
Currently `softmax` handles `-inf` in most of the cases, for example:
```python
>>> x = torch.tensor([-float('inf'), 1])
>>> x.softmax(-1)
tensor([0., 1.])
>>> x.cuda().softmax(-1)
tensor([0., 1.], device='cuda:0')
```
However, once the normalized dimension consists solely of `-inf`, there is an issue:
```python
>>> x = torch.tensor([-float('inf')] * 2)
>>> x.softmax(-1)
tensor([nan, nan])
>>> x.cuda().softmax(-1)
tensor([nan, nan], device='cuda:0')
```
I understand we get it because `inf - inf = nan`, but why not assume that `softmax` is computed in the limit?
In such a case `softmax` should return a uniform distribution. This case is quite easy to detect since `max == -inf` will imply a uniform distribution.
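Until something like this lands in core, a user-side sketch of the proposed behaviour (rows whose max is `-inf` fall back to a uniform distribution) could look like:
```python
import torch

def softmax_uniform_on_all_inf(x, dim=-1):
    # Rows that are entirely -inf would produce NaNs; replace them with a uniform distribution.
    out = x.softmax(dim)
    all_masked = x.amax(dim, keepdim=True) == -float("inf")
    uniform = torch.full_like(out, 1.0 / x.size(dim))
    return torch.where(all_masked, uniform, out)

x = torch.tensor([[-float("inf"), 1.0],
                  [-float("inf"), -float("inf")]])
print(softmax_uniform_on_all_inf(x))
# tensor([[0.0000, 1.0000],
#         [0.5000, 0.5000]])
```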
Issues for more context:
https://github.com/pytorch/pytorch/issues/25110
https://github.com/pytorch/pytorch/issues/103749
https://github.com/pytorch/pytorch/issues/103963
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
1,690 | 106,544 |
Branch name in double quotes ""
|
module: ci, triaged
|
### ๐ Describe the bug
TBD
### Versions
It is not a bug per se, but an issue we are facing. We see that a new branch has been added with double quotes in its name: `"serialize"`.
We have forked the pytorch repository, and our internal automation that fetches the latest code is broken because of the quoted branch name.
Just curious: are there any guidelines about branch names?
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 5 |
1,691 | 106,540 |
Dataset with Queue issue
|
module: dataloader, triaged, module: data
|
### ๐ Describe the bug
In a classification task, some categories have very few samples, so I added a queue to the dataset. The queue is used to cache images from these rare categories, but I found that every epoch empties my queue. Why?
Here is a demo:
```
import random
import cv2
import numpy as np
import torch
import os
from torch.utils.data import Dataset, DataLoader
from queue import Queue

class DataSet_my(Dataset):
    def __init__(self, path_dir, is_train=True):
        self.path_dir = path_dir
        self.jpg_path_List = []
        self.label_2 = 2
        self.label_1 = 1
        self.out_h = 64
        self.out_w = 64
        self.T_max_obj = 10
        self.T_maxqueue_size = 1000
        self.queue_obj = Queue(maxsize=self.T_maxqueue_size)  # create a queue object to cache rare-class crops

    def __getitem__(self, idx):
        path_img = self.jpg_path_List[idx]
        path_obscure_txt = path_img.replace(".jpg", ".txt")
        img_src = cv2.imread(path_img)
        with open(path_obscure_txt, "r") as fr:
            lines = fr.readlines()
        roi_img = cv2.imread(lines[0].strip())
        label = lines[1].strip()
        m_out = []
        # cache the crop while the queue is well below its capacity
        if self.queue_obj.qsize() < self.T_maxqueue_size * 0.7:
            self.queue_obj.put(roi_img.copy())
        # drain the cached crops once enough have accumulated
        # (m_roi_obscure / m_roi_normal are lists maintained in the original, fuller code)
        if self.queue_obj.qsize() > self.T_max_obj * 5 and (len(m_roi_obscure) < len(m_roi_normal)):
            while not self.queue_obj.empty():
                m_out.append(self.queue_obj.get())
        out_label = np.array(label)
        out_img = np.array(m_out)
        return out_img, out_label.astype(np.float32)
```
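As for the "why": assuming the DataLoader runs with `num_workers > 0`, each worker process gets its own copy of the dataset object, and by default the worker processes are torn down at the end of every epoch and re-created for the next one, so in-memory state such as `self.queue_obj` starts out empty again. A sketch of the usual mitigation (keep the workers alive across epochs; note the queue is still per worker process, not shared between workers or with the main process):
```python
from torch.utils.data import DataLoader

dataset = DataSet_my("/path/to/data")  # hypothetical path
loader = DataLoader(
    dataset,
    batch_size=4,
    num_workers=2,
    persistent_workers=True,  # keep worker processes (and their per-worker queues) alive across epochs
)
```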
### Versions
```
Package Version
-------------------- -----------
anyio 3.7.1
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
attrs 23.1.0
backcall 0.2.0
beautifulsoup4 4.12.2
bleach 6.0.0
certifi 2022.12.7
cffi 1.15.1
charset-normalizer 3.1.0
cPython 0.0.6
cycler 0.11.0
debugpy 1.6.7
decorator 5.1.1
defusedxml 0.7.1
dnspython 2.3.0
easydict 1.10
entrypoints 0.4
exceptiongroup 1.1.2
fastjsonschema 2.17.1
fire 0.5.0
fonttools 4.38.0
glog 0.3.1
idna 3.4
imageio 2.28.1
importlib-metadata 6.7.0
importlib-resources 5.12.0
ipykernel 6.16.2
ipython 7.34.0
ipython-genutils 0.2.0
ipywidgets 8.0.7
jedi 0.18.2
Jinja2 3.1.2
joblib 1.1.0
jsonschema 4.17.3
jupyter_client 7.4.9
jupyter_core 4.12.0
jupyter-server 1.24.0
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.8
kiwisolver 1.4.4
lmdb 1.4.0
MarkupSafe 2.1.3
matplotlib 3.5.3
matplotlib-inline 0.1.6
mistune 3.0.1
mkl-fft 1.3.1
mkl-random 1.2.2
mkl-service 2.4.0
nbclassic 1.0.0
nbclient 0.7.4
nbconvert 7.6.0
nbformat 5.8.0
nest-asyncio 1.5.6
networkx 2.6.3
notebook 6.5.4
notebook_shim 0.2.3
numpy 1.21.6
open3d-python 0.7.0.0
opencv-python 4.6.0.66
packaging 23.1
pandocfilters 1.5.0
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 22.3.1
pkgutil_resolve_name 1.3.10
prefetch-generator 1.0.1
prettytable 3.7.0
prometheus-client 0.17.1
prompt-toolkit 3.0.39
protobuf 3.19.1
psutil 5.9.5
ptyprocess 0.7.0
pycparser 2.21
Pygments 2.15.1
pymongo 4.3.3
pyparsing 3.1.0
pyrsistent 0.19.3
python-dateutil 2.8.2
python-gflags 3.1.2
PyWavelets 1.4.0
PyYAML 3.13
pyzmq 25.1.0
requests 2.31.0
scikit-image 0.19.3
scikit-learn 1.0.2
scipy 1.7.3
Send2Trash 1.8.2
setuptools 65.6.3
six 1.16.0
sklearn 0.0
sniffio 1.3.0
soupsieve 2.4.1
tensorboardX 2.1
termcolor 2.3.0
terminado 0.17.1
threadpoolctl 3.1.0
tifffile 2021.11.2
tinycss2 1.2.1
torch 1.7.1+cu110
torchvision 0.8.2+cu110
tornado 6.2
tqdm 4.64.1
traitlets 5.9.0
typing_extensions 4.5.0
urllib3 2.0.2
wcwidth 0.2.6
webencodings 0.5.1
websocket-client 1.6.1
wheel 0.38.4
widgetsnbextension 4.0.8
zipp 3.15.0
```
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 1 |
1,692 | 106,538 |
Enable Mypy Checking in torch/_inductor/fx_passes/split_cat.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230 Enable Mypy Checking in torch/_inductor
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 4 |
1,693 | 106,533 |
CUDA device support does not register allocator to c10::GetAllocator(...)
|
module: cpp, module: cuda, triaged
|
### ๐ Describe the bug
The CUDA device support does not register its device allocator to `c10::GetAllocator(c10::DeviceType)` but explicitly requires to use `c10::cuda::CUDACachingAllocator::get()`.
Here a small demonstrator, that prints the pointer to the CUDACachingAllocator (to proof that CUDA support is loaded), and then all known allocators that are registered to `c10::GetAllocator(...)`.
```c++
#include <c10/cuda/CUDACachingAllocator.h>
#include <iostream>
int main(int argc, char** argv) {
std::cout << "CUDACachingAllocator: " << c10::cuda::CUDACachingAllocator::get() << std::endl;
for(int D = 0; D < (int)c10::DeviceType::COMPILE_TIME_MAX_DEVICE_TYPES; D++) {
auto d = (c10::DeviceType)D;
std::cout << c10::DeviceTypeName(d) << ": " << c10::GetAllocator(d) << std::endl;
}
return 0;
}
```
And then run it using:
```bash
python3 -m venv error
. error/bin/activate
python3 -m pip install torch
LIB=$(python3 -c "import site;print(site.getsitepackages()[0])")
g++ -D_GLIBCXX_USE_CXX11_ABI=0 -L $LIB/torch/lib -I $LIB/torch/include/ -lc10 -lc10_cuda error.cpp
LD_LIBRARY_PATH=$LIB/torch/lib ./a.out
```
Output:
```
CUDACachingAllocator: 0x7f9ab62efda0
CPU: 0x7f9ab639ed68
CUDA: 0
MKLDNN: 0
OPENGL: 0
OPENCL: 0
IDEEP: 0
HIP: 0
FPGA: 0
ORT: 0
XLA: 0
VULKAN: 0
METAL: 0
XPU: 0
MPS: 0
META: 0
HPU: 0
VE: 0
LAZY: 0
IPU: 0
MTIA: 0
privateuseone: 0
```
To my understanding in `c10/cuda/CUDACachingAllocator.cpp` in `BackendStaticInitializer::BackendStaticInitializer()` it should call `c10::SetAllocator(c10::DeviceType::CUDA, r);`
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.5.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.17
Python version: 3.8.17 (default, Jun 26 2023, 05:54:38) [GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
GPU models and configuration: GPU 0: Quadro P4000
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] torch==2.0.1
[conda] Could not collect
cc @jbschlosser @ptrblck
| 0 |
1,694 | 106,529 |
Pytorch + ROCm+ Windows
|
module: rocm, triaged
|
### ๐ The feature, motivation and pitch
A week ago, AMD published ROCm for Windows and cards like the 6600 XT. Yet you can't install PyTorch under this configuration:

I asked in the unofficial pytorch discord server and somebody told me to try:
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6
but that simply resulted in the error:
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
Is there any way to get a working version yet? If not, will support come in the future, and when can it be expected?
Thanks for any help and I hope I posted this under the correct category.
### Alternatives
_No response_
### Additional context
_No response_
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 2 |
1,695 | 106,525 |
Add ModuleInfo for torch.nn.ChannelShuffle
|
triaged, open source, fb-exported, topic: not user facing
|
Summary:
Add ModuleInfo for torch.nn.ChannelShuffle:
https://github.com/pytorch/pytorch/pull/105351#pullrequestreview-1533879586
Test Plan: Please see GitHub Actions.
Differential Revision: D48021100
| 5 |
1,696 | 106,520 |
Distributed torch.linalg.eigh (and other functions) on cuda using cuSOLVERMG
|
module: cuda, triaged, module: linear algebra
|
### ๐ The feature, motivation and pitch
The current torch.linalg stack seems to primarily call cuSOLVER for many of the operations. I was wondering if it would be possible to "upgrade" this to use cuSOLVERMG, the multi-GPU (distributed) version of cuSOLVER. Doing this would allow us to, for example, take an eigenvalue decomposition of larger matrices without OOMing.
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 0 |
1,697 | 106,518 |
Use FastCat in PT Concat implementation
|
Stale
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106518
Differential Revision: [D48081898](https://our.internmc.facebook.com/intern/diff/D48081898)
| 6 |
1,698 | 106,508 |
Fix reset_parameters for nn.MHA
|
Stale
|
**This is not ready for review but hoping for general feedback on how we should resolve this:**
`_reset_parameters()` here does not follow the contract for `reset_parameters()` for two reasons
1) The order in which we initialize things in MHA makes `torch.get_rng_state()` inconsistent when initializing the same param/buffer in `nn.MultiheadAttention.__init__()` and when resetting that param/buffer in `nn.MultiheadAttention()._reset_parameters`.
In `__init__` we initialize `out_proj` (a submodule) first before using `self._reset_parameters()` to initialize the reset of the parameters owned by the root. So if we consider the `rng_states` as some sequence $r_0, r_1, r_2, ...$ where $r_0$ is the first `rng_state` after `torch.manual_seed(0)`
```
torch.manual_seed(seed)
# torch.get_rng_state() when initializing (e.g. `self.in_proj_weight`) is r_2
# 2 because NonDynamicallyQuantizableLinear has 2 parameters
m = nn.MultiheadAttention()
torch.manual_seed(seed)
# torch.get_rng_state() when initializing (e.g. `self.in_proj_weight`) is r_0
m.reset_parameters()
```
2) It resets `out_proj.bias` which is a parameter owned by a submodule.
This PR tries to fix it by moving `out_proj` to be initialized after the call to `_reset_parameter()` in `__init__()` and also not initializing `out_proj.bias` in `_reset_parameters()` **but this would be BC-breaking**.
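A self-contained illustration of point 1 above (why initialization order alone changes what each parameter sees from the RNG; the functions here are made up, not `nn.MultiheadAttention` itself):
```python
import torch

def init_a_then_b():
    torch.manual_seed(0)
    a = torch.empty(3).normal_()   # consumes r_0
    b = torch.empty(3).normal_()   # consumes r_1
    return a, b

def init_b_then_a():
    torch.manual_seed(0)
    b = torch.empty(3).normal_()   # now b consumes r_0
    a = torch.empty(3).normal_()   # and a consumes r_1
    return a, b

a1, _ = init_a_then_b()
a2, _ = init_b_then_a()
print(torch.equal(a1, a2))  # False: same seed, different draw order
```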
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106508
* #106506
* #106715
| 2 |
1,699 | 106,506 |
Add ModuleInfo testing for reset_parameters
|
Stale, ciflow/trunk, topic: not user facing
|
The contract on `nn.Module.reset_parameters()` should be as follows
1) `reset_parameters` should re-initialize parameters and buffers to how they were initialized in `nn.Module.__init__()`
2) `reset_parameters` should **not** recurse, (i.e. it should only reset parameters and buffers owned by the root, and not those of submodules)
3) modules that do not have parameters do not need to have a `reset_parameters` (*perhaps we could add one that is a no-op to the `nn.Module` base class*)
4) The following should hold
```
# set some seed
torch.manual_seed(some_seed)
m = SomeModule()
# modify params/buffers in some way
...
torch.manual_seed(some_seed)
m.apply(lambda mod: mod.reset_parameters())
# params/bufs given by m.{parameters/buffers}(recurse=False) should match those of the initial m
```
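A sketch of how a check of this contract could look (the helper name and structure are hypothetical, not the final ModuleInfo test; modules whose `__init__` draws from the RNG in a different order than `reset_parameters`, like `nn.MultiheadAttention` today, would fail it):
```python
import torch

def check_reset_parameters_contract(module_factory, seed=0):
    torch.manual_seed(seed)
    m = module_factory()
    reference = [p.clone() for p in m.parameters(recurse=False)]

    # perturb the root's parameters
    with torch.no_grad():
        for p in m.parameters(recurse=False):
            p.add_(1.0)

    torch.manual_seed(seed)
    m.reset_parameters()
    for p, ref in zip(m.parameters(recurse=False), reference):
        assert torch.equal(p, ref), "reset_parameters did not restore the original init"

check_reset_parameters_contract(lambda: torch.nn.Linear(2, 3))
```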
Going over the ModuleInfos that we have not migrated, only `nn.EmbeddingBag` has parameters/buffers. So otherwise the testing here should be quite complete.
Some modules have an underscore `_reset_parameters()`. However the behavior of these does not necessarily match the contract (in particular for `nn.MultiheadAttention`, see further discussion in the description of #106508)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106508
* __->__ #106506
* #106715
| 2 |
1,700 | 106,485 |
Increasing batch size makes network forward 1000 times slower
|
module: cudnn, module: nn, module: cuda, triaged
|
### ๐ Describe the bug
I have a two-layer network. The input is a 2D array of token ids; the first layer is an embedding layer that replaces each pixel with its embedding, and the second layer does a convolution. Find the network definition below.
```python
class Model(nn.Module):
def __init__(self, voc_dim, emb_dim, out_channels):
super().__init__()
self.embedding = nn.Embedding(
num_embeddings=voc_dim,
embedding_dim=emb_dim,
padding_idx=0,
sparse=True,
)
self.conv = nn.Conv2d(
in_channels=emb_dim,
out_channels=out_channels,
kernel_size=3,
stride=1,
dilation=1,
padding=1,
bias=False,
)
def forward(self, x):
# x is (batch_size, 672, 512), embeddings (batch_size, 768, 672, 512)
embeddings = self.embedding(x).permute(0, 3, 1, 2)
out = self.conv(embeddings)
return out
network = Model(voc_dim=100000, emb_dim=768, out_channels=64)
network.cuda()
network.train()
```
For batch sizes of 4 and 8, running the forward pass with mixed precision works smoothly. But for batch size 16, the forward pass becomes ~1000 times slower. Batch size 16 does not fit in memory without mixed precision. The forward pass is timed with the following code.
```python
def forward_speed(batch_size, n=5):
forward_pass_timed = []
for i in range(n):
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
inp = torch.rand(batch_size, 672, 512).long().cuda()
start.record()
with torch.autocast(device_type="cuda", dtype=torch.float16):
x = network(inp)
end.record()
torch.cuda.synchronize()
forward_pass_timed.append(start.elapsed_time(end))
print(f'Batch size - {batch_size}, Mean of {n} runs in ms {round(np.mean(forward_pass_timed), 2)}, STD {round(np.std(forward_pass_timed), 2)}')
```
The output of the function for batch sizes 4, 8, and 16:
<img width="604" alt="Screenshot 2023-08-02 at 23 42 12" src="https://github.com/pytorch/pytorch/assets/127740186/c894d48b-1cee-408a-b85b-40878e71fd57">
Interestingly, when I measure just one layer (i.e., only the embedding or only the convolution), it is not slow; the slowdown appears only when the two are called together in a forward pass. The GPU is an A100 40GB.
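One thing worth ruling out first (a guess, not a confirmed diagnosis): the first forward pass for a new shape/dtype combination pays one-off costs such as cuDNN algorithm selection and allocator growth, and with `n=5` that first call can dominate the mean. As an aside, `torch.rand(...).long()` yields all zeros (the padding index), so `randint` may be more representative. A sketch of the same timing loop with warm-up iterations, reusing the `network` defined above:
```python
import numpy as np
import torch

def forward_speed_warmed(batch_size, n=5, warmup=3):
    inp = torch.randint(0, 100000, (batch_size, 672, 512), device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        for _ in range(warmup):   # let cuDNN pick algorithms and the allocator grow
            network(inp)
    torch.cuda.synchronize()

    times = []
    for _ in range(n):
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            network(inp)
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))
    print(f"Batch size {batch_size}: mean {np.mean(times):.2f} ms, std {np.std(times):.2f} ms")
```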
### Versions
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 8.4 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.28
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.0.221
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 495.29.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
CPU MHz: 1638.617
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5589.65
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall sev_es fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @csarofeen @ptrblck @xwang233 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 6 |