Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
3,301 | 96,085 |
Add support for `__collate__` attrib on dataset elements in `default_collate`
|
module: dataloader, triaged
|
### 🚀 The feature, motivation and pitch
There seem to have been some discussions on alternative ways to supply a collate function (like in [#33181](https://github.com/pytorch/pytorch/issues/33181)). I also found it slightly inconvenient to add a custom way to collate the metadata in my dataset. My pitch to solve this would be to add a check for whether the dataset element has a `__collate__` attribute and use it if present.
This would help to easily extend the behavior of `default_collate`. Simple example:
```python
from typing import Callable, Dict, Optional, Tuple, Type, Union

import torch


class MyMetadata(dict):
    @staticmethod
    def __collate__(batch, *, collate_fn_map: Optional[Dict[Union[Type, Tuple[Type, ...]], Callable]] = None):
        # when collating metadata (basically dictionaries), we want to return batch as a simple list
        return batch


class MyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 16

    def __getitem__(self, index):
        x = torch.tensor(index)
        y = torch.tensor(index)
        metadata = {"index": index, "list_of_variable_length": list(range(index))}
        return {"x": x, "y": y, "metadata": MyMetadata(metadata)}


ds = MyDataset()
dl = torch.utils.data.DataLoader(ds, batch_size=4)
next(iter(dl))
```
Expected output would be:
```
{'x': tensor([0, 1, 2, 3]),
'y': tensor([0, 1, 2, 3]),
'metadata': [{'index': 0, 'list_of_variable_length': []},
{'index': 1, 'list_of_variable_length': [0]},
{'index': 2, 'list_of_variable_length': [0, 1]},
{'index': 3, 'list_of_variable_length': [0, 1, 2]}]}
```
This seems to be easy to implement (although it is my first time contributing here, so I don't know for sure). Something like:
```python
import collections
from typing import Callable, Dict, Optional, Tuple, Type, Union

from torch.utils.data._utils.collate import collate, default_collate_err_msg_format


def pitched_collate(batch, *, collate_fn_map: Optional[Dict[Union[Type, Tuple[Type, ...]], Callable]] = None):
    elem = batch[0]
    # HERE ARE THE ONLY CHANGES COMPARED TO collate
    if hasattr(elem, "__collate__"):
        return getattr(elem, "__collate__")(batch, collate_fn_map=collate_fn_map)
    # END OF CHANGES
    elem_type = type(elem)
    if collate_fn_map is not None:
        if elem_type in collate_fn_map:
            return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
        for collate_type in collate_fn_map:
            if isinstance(elem, collate_type):
                return collate_fn_map[collate_type](batch, collate_fn_map=collate_fn_map)
    if isinstance(elem, collections.abc.Mapping):
        try:
            return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
        except TypeError:
            # The mapping type may not support `__init__(iterable)`.
            return {key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}
    elif isinstance(elem, tuple) and hasattr(elem, '_fields'):  # namedtuple
        return elem_type(*(collate(samples, collate_fn_map=collate_fn_map) for samples in zip(*batch)))
    elif isinstance(elem, collections.abc.Sequence):
        # check to make sure that the elements in batch have consistent size
        it = iter(batch)
        elem_size = len(next(it))
        if not all(len(elem) == elem_size for elem in it):
            raise RuntimeError('each element in list of batch should be of equal size')
        transposed = list(zip(*batch))  # It may be accessed twice, so we use a list.
        if isinstance(elem, tuple):
            return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
        else:
            try:
                return elem_type([collate(samples, collate_fn_map=collate_fn_map) for samples in transposed])
            except TypeError:
                # The sequence type may not support `__init__(iterable)` (e.g., `range`).
                return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]

    raise TypeError(default_collate_err_msg_format.format(elem_type))
```
### Alternatives
Perhaps an option to register a new type in `default_collate_fn_map` would solve this, but it seems there is currently no way to even access `default_collate_fn_map` without importing it from `torch.utils.data._utils.collate`, as sketched below.
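For illustration, a minimal sketch of that workaround (reusing the `MyMetadata` class from the example above; `torch.utils.data._utils.collate` is a private module, so this relies on torch>=1.13 internals and is not a stable API):
```python
from torch.utils.data._utils.collate import collate, default_collate_fn_map

def collate_mymetadata_fn(batch, *, collate_fn_map=None):
    # keep metadata entries as a plain list instead of trying to merge them
    return list(batch)

custom_collate_fn_map = {**default_collate_fn_map, MyMetadata: collate_mymetadata_fn}

def my_collate(batch):
    return collate(batch, collate_fn_map=custom_collate_fn_map)

# dl = torch.utils.data.DataLoader(ds, batch_size=4, collate_fn=my_collate)
```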
### Additional context
If this seems like a good addition, I'd be willing to try and add this in a PR.
cc @SsnL @VitalyFedyunin @ejguan @NivekT @dzhulgakov
| 4 |
3,302 | 96,073 |
Pytorch 2.0 installation tutorial does not work under Macbook
|
module: docs, triaged, module: macos
|
### 📚 The doc issue
I attempted to install PyTorch following this tutorial:
https://pytorch.org/get-started/pytorch-2.0/
However, the command it suggests gives the following error:
```
(base) jimmy@Jimmys-MacBook-Air mnist_pytorch % pip3 install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117
Looking in indexes: https://download.pytorch.org/whl/nightly/cu117
ERROR: Could not find a version that satisfies the requirement numpy (from versions: none)
```
Later on, I tried another [tutorial](https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/) with the following command and succeeded:
```
(base) jimmy@Jimmys-MacBook-Air mnist_pytorch % pip3 install numpy --pre torch --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/nightly/cu117
Collecting numpy
Downloading numpy-1.24.2-cp310-cp310-macosx_10_9_x86_64.whl (19.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.8/19.8 MB 50.5 MB/s eta 0:00:00
Collecting torch
Downloading torch-1.13.1-cp310-none-macosx_10_9_x86_64.whl (135.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 135.3/135.3 MB 17.2 MB/s eta 0:00:00
Collecting typing-extensions
Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Installing collected packages: typing-extensions, numpy, torch
Attempting uninstall: typing-extensions
Found existing installation: typing_extensions 4.4.0
Uninstalling typing_extensions-4.4.0:
Successfully uninstalled typing_extensions-4.4.0
Attempting uninstall: numpy
Found existing installation: numpy 1.22.3
Uninstalling numpy-1.22.3:
Successfully uninstalled numpy-1.22.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
gensim 4.3.0 requires FuzzyTM>=0.4.0, which is not installed.
daal4py 2021.4.0 requires daal==2021.3.0, which is not installed.
scipy 1.7.3 requires numpy<1.23.0,>=1.16.5, but you have numpy 1.24.2 which is incompatible.
numba 0.56.4 requires numpy<1.24,>=1.18, but you have numpy 1.24.2 which is incompatible.
Successfully installed numpy-1.24.2 torch-1.13.1 typing-extensions-4.5.0
```
### Suggest a potential alternative/fix
Replace all occurrences of `--index-url` with `--extra-index-url` in the installation tutorial. (`--index-url` replaces PyPI entirely, so pip cannot find `numpy` in the PyTorch nightly index, whereas `--extra-index-url` keeps PyPI as a fallback.)
cc @svekars @carljparker @malfet @albanD
| 0 |
3,303 | 96,060 |
Linking libtorch with QT5 OpenGL application using llvmpipe mesa opengl crashes
|
oncall: binaries
|
### 🐛 Describe the bug
A very simple OpenGL Qt5 application running inside a ThinLinc session works when the binary is NOT linked against the libtorch libraries.
As soon as you link the libtorch libraries (without using ANY libtorch function in the application itself, just linking), the application crashes.
This is the case with both the CPU and GPU (CUDA) versions of libtorch (1.13.1).
The funny fact is that the application does not even have an #include of any libtorch header. Just linking against libtorch is enough to hit the issue (using the officially supported CMake procedure).
When OpenGL is NOT used, everything works. A similar issue (#75336) has been reported as well; the root cause is probably the same.
If the application is run on a native machine with a real Nvidia card and libraries, everything works. There must be a conflict caused by libtorch that crashes applications using software OpenGL.
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Linux Mint 21.1 (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
CPU max MHz: 3200,0000
CPU min MHz: 1200,0000
BogoMIPS: 4789.26
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1,5 MiB (6 instances)
L3 cache: 15 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @ezyang @seemethere @malfet
| 1 |
3,304 | 96,056 |
MPS device throws error for `F.adaptive_avg_pool2d`
|
triaged, module: mps
|
### 🐛 Describe the bug
`F.adaptive_avg_pool2d` throws an error on the MPS device when reducing an odd-sized input to an even-sized output. CPU works just fine.
Code to reproduce:
```python
import torch
import torch.nn.functional as F
mps=torch.device('mps')
cpu=torch.device('cpu')
x = torch.rand(1, 256, 26, 39)
out_size = 2
out = F.adaptive_avg_pool2d(x.to(cpu), out_size)
print(f'{out.shape=}')
out_mps = F.adaptive_avg_pool2d(x.to(mps), out_size)
print(f'{out_mps.shape=}')
```
output:
```
out.shape=torch.Size([1, 256, 2, 2])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [25], in <cell line: 12>()
10 out = F.adaptive_avg_pool2d(x.to(cpu), out_size)
11 print(f'{out.shape=}')
---> 12 out_mps = F.adaptive_avg_pool2d(x.to(mps), out_size)
13 print(f'{out_mps.shape=}')
File /opt/homebrew/Caskroom/miniforge/base/envs/pytorch/lib/python3.9/site-packages/torch/nn/functional.py:1214, in adaptive_avg_pool2d(input, output_size)
1212 return handle_torch_function(adaptive_avg_pool2d, (input,), input, output_size)
1213 _output_size = _list_with_default(output_size, input.size())
-> 1214 return torch._C._nn.adaptive_avg_pool2d(input, _output_size)
RuntimeError: Adaptive pool MPS: input sizes must be divisible by output sizes.
```
Desired behavior: the same as for CPU.
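A possible interim workaround on the user side (my assumption, not an official fix) is to fall back to the CPU whenever the input size is not divisible by the output size:
```python
import torch
import torch.nn.functional as F

def adaptive_avg_pool2d_mps_safe(x, output_size):
    # MPS currently requires the input H/W to be divisible by the output size;
    # fall back to CPU otherwise and move the result back to the original device.
    if x.device.type == 'mps':
        h, w = x.shape[-2:]
        oh, ow = (output_size, output_size) if isinstance(output_size, int) else output_size
        if h % oh or w % ow:
            return F.adaptive_avg_pool2d(x.cpu(), output_size).to(x.device)
    return F.adaptive_avg_pool2d(x, output_size)
```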
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230305
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:25:34) [Clang 11.1.0 ] (64-bit runtime)
Python platform: macOS-13.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] mypy==0.990
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.2
[pip3] numpy-quaternion==2022.4.2
[pip3] pytest-mypy==0.9.1
[pip3] pytorch-lightning==1.9.0
[pip3] pytorch3d==0.2.5
[pip3] torch==2.1.0.dev20230305
[pip3] torch-dimcheck==0.0.1
[pip3] torch-localize==0.1.0
[pip3] torch-lr-finder==0.2.1
[pip3] torch-model-archiver-nightly==2022.11.16
[pip3] torch-receptive-field==0.0.1
[pip3] torch-workflow-archiver-nightly==2022.11.16
[pip3] torchaudio==2.0.0.dev20230227
[pip3] torchfile==0.1.0
[pip3] torchgeometry==0.1.2
[pip3] torchmetrics==0.11.1
[pip3] torchserve==0.6.1
[pip3] torchserve-nightly==2022.11.16
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.0
[pip3] torchvision==0.15.0.dev20230227
[conda] numpy 1.24.2 pypi_0 pypi
[conda] numpy-quaternion 2022.4.2 pypi_0 pypi
[conda] pytorch-lightning 1.9.0 pypi_0 pypi
[conda] pytorch3d 0.2.5 pypi_0 pypi
[conda] torch 2.1.0.dev20230305 pypi_0 pypi
[conda] torch-dimcheck 0.0.1 pypi_0 pypi
[conda] torch-localize 0.1.0 pypi_0 pypi
[conda] torch-lr-finder 0.2.1 pypi_0 pypi
[conda] torch-model-archiver-nightly 2022.11.16 pypi_0 pypi
[conda] torch-receptive-field 0.0.1 pypi_0 pypi
[conda] torch-workflow-archiver-nightly 2022.11.16 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230227 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchgeometry 0.1.2 pypi_0 pypi
[conda] torchmetrics 0.11.1 pypi_0 pypi
[conda] torchserve 0.6.1 pypi_0 pypi
[conda] torchserve-nightly 2022.11.16 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtext 0.13.0 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230227 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 4 |
3,305 | 96,047 |
No speedup and a null pointer exception
|
needs reproduction, triaged, oncall: pt2
|
### 🐛 Describe the bug
When I use torch.compile, I don't see any speedup, and an error appears after training:
"free(): invalid pointer
Aborted (core dumped)"
### Error logs
Running result when torch.compile is not used:

Running result after using torch.compile:

### Minified repro
This is the Python file I run; I put the code in text files:
[example_mnist.txt](https://github.com/pytorch/pytorch/files/10890684/example_mnist.txt)
[TorchSeq2PC.txt](https://github.com/pytorch/pytorch/files/10890685/TorchSeq2PC.txt)
### Versions
Collecting environment information...
PyTorch version: 2.0.0.dev20230202+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro GV100
GPU 1: Quadro GV100
Nvidia driver version: 510.73.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz
Stepping: 7
CPU MHz: 1191.653
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 1.1 MiB
L1i cache: 1.1 MiB
L2 cache: 36 MiB
L3 cache: 49.5 MiB
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+0d7e753227
[pip3] torch==2.0.0.dev20230202+cu116
[pip3] torchaudio==2.0.0.dev20230201+cu116
[pip3] torchvision==0.15.0.dev20230201+cu116
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @soumith
| 6 |
3,306 | 96,046 |
Add arm64 builds for libtorch on MacOS with mps support
|
triaged, module: macos, module: infra, module: arm
|
### 🚀 The feature, motivation and pitch
I would like to request `arm64` builds for libtorch on MacOS with MPS support enabled.
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @albanD
| 1 |
3,307 | 96,041 |
Cannot access data pointer of Tensor that doesn't have storage when using `torch.func.jvp` with `torch.compile`
|
triaged, actionable, oncall: pt2, module: functorch
|
### 🐛 Describe the bug
When attempting to run the code snippet below,
```python
import torch

@torch.compile()
def compute(log_probs):
    lit_weights = torch.stack(((1 - log_probs.exp()).log(), log_probs), dim=-1).permute(1, 2, 0)
    levels = [torch.tensor([5, 7], device='cuda'), torch.tensor([8], device='cuda')]
    lit_indices = torch.tensor([0, 1, 2, 3, 4, 6], device='cuda')
    id = 9
    node_indices = torch.tensor([[[0, 0],[0, 0]],[[0, 0],[0, 0]],[[0, 0],[0, 0]],[[0, 0],[0, 0]],
                                 [[0, 0],[0, 0]],[[1, 2],[3, 4]],[[0, 0],[0, 0]],[[1, 4],[9, 9]],[[0, 5],[6, 7]]], device='cuda')
    lit_mask = (torch.tensor([0, 1, 2, 1, 2, 0], device='cuda'), torch.tensor([1, 1, 0, 0, 1, 0], device='cuda'))
    lit_indices = torch.tensor([0, 1, 2, 3, 4, 6], device='cuda')
    data = torch.empty(id+1, log_probs.size(0), device='cuda')
    data[id] = -float(1000)
    data[lit_indices] = lit_weights[lit_mask[0], lit_mask[1]]
    data[levels[0]] = data[node_indices[levels[0]]].sum(-2).logsumexp(-2)
    data[levels[1]] = data[node_indices[levels[1]]].sum(-2).logsumexp(-2)
    return data[levels[-1]]

# Prepare probabilities
batch_size = 5
log_probs = torch.rand((batch_size, 10), device='cuda', requires_grad=True).log()
grad = torch.ones((batch_size, 10), device='cuda')

out, grad = torch.func.jvp(compute, (log_probs,), (grad,))
```
which makes use of both `torch.compile` and `torch.func.jvp`, I get a
> RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
When commenting out the `torch.compile` decorator, however, the code runs seamlessly.
### Error logs
[2023-03-04 13:47:45,071] torch._dynamo.eval_frame: [DEBUG] skipping __init__ /space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/contextlib.py
[2023-03-04 13:47:45,071] torch._dynamo.eval_frame: [DEBUG] skipping __enter__ /space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/contextlib.py
[2023-03-04 13:47:45,071] torch._dynamo.eval_frame: [DEBUG] skipping __init__ /space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/contextlib.py
[2023-03-04 13:47:45,071] torch._dynamo.eval_frame: [DEBUG] skipping __enter__ /space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/contextlib.py
[2023-03-04 13:47:45,071] torch._dynamo.eval_frame: [DEBUG] skipping enable_dynamic /space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py
Traceback (most recent call last):
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 547, in preserve_rng_state
yield
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 925, in wrap_fx_proxy_cls
example_value = wrap_to_fake_tensor_and_record(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1062, in wrap_to_fake_tensor_and_record
fake_e = wrap_fake_exception(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 808, in wrap_fake_exception
return fn()
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1063, in <lambda>
lambda: tx.fake_mode.from_tensor(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1395, in from_tensor
return self.fake_tensor_converter(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 314, in __call__
return self.from_real_tensor(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 272, in from_real_tensor
out = self.meta_converter(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_subclasses/meta_utils.py", line 502, in __call__
r = self.meta_tensor(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_subclasses/meta_utils.py", line 275, in meta_tensor
base = self.meta_tensor(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_subclasses/meta_utils.py", line 381, in meta_tensor
s = t.untyped_storage()
NotImplementedError: Cannot access storage of TensorWrapper
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 299, in transform
tracer = InstructionTranslator(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1806, in __init__
self.symbolic_locals = collections.OrderedDict(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1809, in <genexpr>
VariableBuilder(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 174, in __call__
return self._wrap(value).clone(**self.options())
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 300, in _wrap
return type_dispatch(self, value)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 748, in wrap_tensor
tensor_variable = wrap_fx_proxy(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 865, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 898, in wrap_fx_proxy_cls
with preserve_rng_state():
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 549, in preserve_rng_state
torch.random.set_rng_state(rng)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/random.py", line 18, in set_rng_state
default_generator.set_state(new_state)
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 394, in _compile
raise InternalTorchDynamoError() from e
torch._dynamo.exc.InternalTorchDynamoError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/scratch/ahmedk/simple-graphs/exactly-k/github_issue.py", line 33, in <module>
out, grad = torch.func.jvp(compute, (log_probs,), (grad,))
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 916, in jvp
return _jvp_with_argnums(func, primals, tangents, argnums=None, strict=strict, has_aux=has_aux)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 965, in _jvp_with_argnums
result_duals = func(*duals)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 368, in catch_errors
return callback(frame, cache_size, hooks)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 109, in _fn
torch.cuda.set_rng_state(cuda_rng_state)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/cuda/random.py", line 64, in set_rng_state
_lazy_call(cb)
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/cuda/__init__.py", line 192, in _lazy_call
callable()
File "/space/ahmedk/anaconda3/envs/simple_updated/lib/python3.10/site-packages/torch/cuda/random.py", line 62, in cb
default_generator.set_state(new_state_copy)
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
### Minified repro
I couldn't get it to produce a minified repro, unfortunately, despite running with both `env TORCHDYNAMO_REPRO_AFTER='dynamo'` and `env TORCHDYNAMO_REPRO_AFTER='aot'`.
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230304+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
GPU 2: NVIDIA RTX A5000
GPU 3: NVIDIA RTX A5000
Nvidia driver version: 495.29.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7313P 16-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 3729.4919
CPU min MHz: 1500.0000
BogoMIPS: 5988.68
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 8 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; LFENCE, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230304+cu117
[pip3] torch-geometric==2.2.0
[pip3] torch-scatter==2.1.0
[pip3] torch-sparse==0.6.16
[pip3] torchaudio==2.0.0.dev20230223+cu117
[pip3] torchvision==0.15.0.dev20230227+cu117
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.1.0.dev20230304+cu117 pypi_0 pypi
[conda] torch-geometric 2.2.0 pypi_0 pypi
[conda] torch-scatter 2.1.0 pypi_0 pypi
[conda] torch-sparse 0.6.16 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230223+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230227+cu117 pypi_0 pypi
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 5 |
3,308 | 96,036 |
Questions and Possible Features: Pytorch RPC 'future.wait()' will not release GIL which will block other thread's execution when using multithreading.
|
triaged, module: multithreading
|
Hi, I'm trying to build a 5-stage pipeline using Python multithreading. Each stage contains some CUDA operations or `.cuda` calls (CUDA memory-copy operations) as well as some very lightweight CPU computation. In addition, in each stage I use PyTorch distributed `rpc.rpc_async` to do some computation remotely and transfer the results back to local memory, and then call `future.wait()` to wait for the output. The expected behavior is that, during the `future.wait()`, the other concurrent stages can execute in parallel so that I can overlap computation with communication. However, I found that the other threads are also blocked by this operation. My guess is that `future.wait()` doesn't release the GIL even though it seems to be implemented in C++, so the other threads can't acquire the GIL to execute. I wonder if my guess is right? If it is, how can I overlap this communication time with the other threads? Thanks a lot!
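For reference, this is roughly the non-blocking pattern I would like to end up with (a minimal sketch with a hypothetical worker name and a toy stage function; it assumes `rpc.init_rpc` has already been called):
```python
import torch.distributed.rpc as rpc

def remote_stage(x):
    # toy stand-in for the real remote computation of one pipeline stage
    return x * 2

def launch_stage(x, on_done):
    fut = rpc.rpc_async("worker1", remote_stage, args=(x,))
    # Register a completion callback instead of blocking this thread on fut.wait(),
    # so the other pipeline stages running on other threads are not serialized behind it.
    fut.then(lambda f: on_done(f.wait()))
    return fut
```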
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd @cbalioglu @gcramer23
| 1 |
3,309 | 96,033 |
Encourage dynamo.export users to assume static by default if they call nonzero / unbacked SymInt
|
triaged, oncall: pt2, module: dynamic shapes, module: export
|
### 🐛 Describe the bug
@tugsbayasgalan has observed that on latest master you can end up with a bunch of "cannot guard on unbacked SymInt" failures that look something like `Ne(i0*((s2//2 - 1)//8)**2 + 2*i0*((s2//2 - 1)//8) + i0, ((s2//2 - 1)//8)**2 + 2*((s2//2 - 1)//8) + 1)`. These look complicated but they are actually quite trivial if you can substitute in s2 (in this case, the substituted guard is `Ne(36*i0, 36)`). This could be a legitimate problem if s2 was actually dynamic, but it's pretty common for it to actually be static (and we're just trying to export a "maximally" dynamic model). In this case, marking it as static upfront would solve the problem. We should encourage export users (perhaps by defaulting this way) to assume inputs are static.
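For context, a toy illustration of the pattern in question (not the actual model being exported; just the nonzero-call-produces-an-unbacked-SymInt shape dependence):
```python
import torch

def f(x):
    nz = torch.nonzero(x > 0)            # number of rows is data-dependent -> unbacked SymInt i0
    return x.reshape(-1)[: nz.shape[0]]  # downstream shape logic now mixes i0 with input sizes

# In eager this just runs; under torch._dynamo.export with fully dynamic input sizes,
# guards mixing i0 and the symbolic input sizes cannot be decided, while treating the
# input sizes as static collapses them to trivial expressions like Ne(36*i0, 36).
print(f(torch.randn(3, 4)).shape)
```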
cc @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @soumith @avikchaudhuri @voznesenskym
### Versions
master
| 2 |
3,310 | 95,984 |
[inductor][cpp] xcit_large_24_p8_224 OOM
|
triaged
|
`xcit_large_24_p8_224` has been hitting OOM intermittently on CI, see https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=inductor_timm_cpu_acc . https://github.com/pytorch/pytorch/pull/94822 somehow makes the issue repeatable.
| 1 |
3,311 | 95,973 |
Graphstate checkpointing doesn't checkpoint ShapeEnv / shape guards
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/pull/90665/files#diff-4c82a5798a61d4cceb176b2700ba6fdd7c3e72d575b8e7e22458589139459caa discusses a hypothetical problem where, when we checkpoint graph state (e.g., before starting to inline a function), we technically also need to checkpoint the ShapeEnv and roll it back if we hit a graph break.
Well, I managed to actually trigger this problem. As the comment says, all you need to do is somehow not specialize on a global. So this is real, and I want an issue to track it.
However, in my particular case, I think I'm going to work around the problem by forcing specialization when the unspecialized int comes from a global. Is it enough? We'll see.
### Versions
master
cc @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @soumith
| 1 |
3,312 | 95,960 |
`@torch.jit.unused` does not properly ignore unsupported function signature
|
oncall: jit
|
### 🐛 Describe the bug
Per my understanding, `@torch.jit.unused` should make the JIT compiler completely ignore a method in the class, as if that method never existed. However, that's not the case: the compiler still tries to parse the method signature even when `@torch.jit.unused` is used.
A simple example:
```python
from dataclasses import dataclass

from typing_extensions import Self

import torch


@torch.jit.script
@dataclass
class Data:
    a: int

    @classmethod
    @torch.jit.unused
    def factory_method(cls, a: int = 1) -> Self:
        return cls(a)
```
This piece of code will raise the following error:
```
RuntimeError:
Unknown type name 'Self':
File "xxx.py", line 18
@classmethod
@torch.jit.unused
def factory_method(cls, a: int = 1) -> Self:
~~~~ <--- HERE
return cls(a)
```
I searched the repo and found a related PR (#39336), but it doesn't seem to work as expected.
### Versions
```
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2 (x86_64)
GCC version: Could not collect
Clang version: 14.0.6
CMake version: version 3.25.2
Libc version: N/A
Python version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:27:35) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.2-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.9.0
[pip3] torch==1.13.1
[pip3] torch-cluster==1.6.0
[pip3] torch-geometric==2.2.0
[pip3] torch-scatter==2.1.0
[pip3] torch-sparse==0.6.16
[pip3] torch-spline-conv==1.2.1
[pip3] torchaudio==0.13.1
[pip3] torchmetrics==0.11.2
[conda] blas 2.112 mkl conda-forge
[conda] blas-devel 3.9.0 12_osx64_mkl conda-forge
[conda] libblas 3.9.0 12_osx64_mkl conda-forge
[conda] libcblas 3.9.0 12_osx64_mkl conda-forge
[conda] liblapack 3.9.0 12_osx64_mkl conda-forge
[conda] liblapacke 3.9.0 12_osx64_mkl conda-forge
[conda] mkl 2021.4.0 h89fa619_689 conda-forge
[conda] mkl-devel 2021.4.0 h694c41f_690 conda-forge
[conda] mkl-include 2021.4.0 hf224eb6_689 conda-forge
[conda] numpy 1.23.5 py310h1b7c290_0 conda-forge
[conda] pyg 2.2.0 py310_torch_1.13.0_cpu pyg
[conda] pytorch 1.13.1 py3.10_0 pytorch
[conda] pytorch-cluster 1.6.0 py310_torch_1.13.0_cpu pyg
[conda] pytorch-lightning 1.9.0 pypi_0 pypi
[conda] pytorch-scatter 2.1.0 py310_torch_1.13.0_cpu pyg
[conda] pytorch-sparse 0.6.16 py310_torch_1.13.0_cpu pyg
[conda] pytorch-spline-conv 1.2.1 py310_torch_1.13.0_cpu pyg
[conda] torchaudio 0.13.1 py310_cpu pytorch
[conda] torchmetrics 0.11.2 py310h20db666_0
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 3 |
3,313 | 95,957 |
FSDP fails to load state dict under inference_mode
|
triaged, enhancement, inference mode, module: fsdp
|
### 🐛 Describe the bug
A runtime error occurs when attempting to load the state dict of an fsdp model under `torch.inference_mode()`:
```py
import os
import torch.cuda
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.distributed.fsdp import FullyShardedDataParallel
from torch.nn.parallel import DistributedDataParallel

def work(rank):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "1234"
    dist.init_process_group("nccl", world_size=2, rank=rank)
    torch.cuda.set_device(rank)
    device = torch.device("cuda", rank)

    model = nn.Linear(100, 50).to(device)
    model = FullyShardedDataParallel(model)
    # no error with DDP
    # model = DistributedDataParallel(model, device_ids=[rank])

    with torch.inference_mode():
        model(torch.rand(2, 100))

    torch.save(model.state_dict(), "fsdp_model.pt")

    with torch.inference_mode():
        model.load_state_dict(torch.load("fsdp_model.pt"))
        model(torch.rand(2, 100))


def run():
    mp.spawn(work, nprocs=2)


if __name__ == "__main__":
    run()
```
Error:
```
Traceback (most recent call last):
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/adrian/repositories/lightning/repro.py", line 25, in work
torch.save(model.state_dict(), "fsdp_model.pt")
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2402, in state_dict
with summon_ctx:
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 3002, in _summon_full_params
stack.enter_context(self._fsdp_wrapped_module.unflatten_as_params())
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/contextlib.py", line 448, in enter_context
result = _cm_type.__enter__(cm)
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 144, in unflatten_as_params
with self._flat_param_handle.unflatten_as_params():
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 1019, in unflatten_as_params
self._unflatten(as_params=True)
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 990, in _unflatten
module.register_parameter(param_name, nn.Parameter(view))
File "/home/adrian/anaconda3/envs/lightning/lib/python3.9/site-packages/torch/nn/parameter.py", line 36, in __new__
return torch.Tensor._make_subclass(cls, data, requires_grad)
RuntimeError: Setting requires_grad=True on inference tensor outside InferenceMode is not allowed.
```
This only happens with FSDP; other wrappers like DDP don't experience this issue.
Reproduced with PyTorch 1.13 and 2.0.
First encountered in PyTorch Lightning https://github.com/Lightning-AI/lightning/issues/16908
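A possible interim workaround for the repro above (an assumption on my side, not a verified fix) is to use `torch.no_grad()` instead of `torch.inference_mode()`, since no_grad disables autograd without creating inference tensors:
```python
import torch

# Hypothetical variant of the repro body: swap inference_mode() for no_grad().
with torch.no_grad():
    model(torch.rand(2, 100))

torch.save(model.state_dict(), "fsdp_model.pt")

with torch.no_grad():
    model.load_state_dict(torch.load("fsdp_model.pt"))
    model(torch.rand(2, 100))
```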
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.1.105
CUDA_MODULE_LOADING set to: LAZY
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7302 16-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1499.605
CPU max MHz: 3000.0000
CPU min MHz: 1500.0000
BogoMIPS: 5999.81
Virtualization: AMD-V
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 16 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy==0.971
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] torch==1.13.1
[pip3] torchmetrics==0.11.1
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] pytorch-cuda 11.8 h7e8668a_3 pytorch-nightly
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchmetrics 0.11.1 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 3 |
3,314 | 95,956 |
[vulkan] missing aten::reflection_pad1d.out operator
|
triaged, module: vulkan
|
As discussed in https://discuss.pytorch.org/t/error-with-pytorch-mobile-on-vulkan-backend-during-prediction/142047:
The `aten::reflection_pad1d.out` operator is missing from the Vulkan backend, and thus some models can't be run on the Android GPU.
| 0 |
3,315 | 95,953 |
The torch.sparse document's typo error
|
module: sparse, module: docs, triaged
|
### 📚 The doc issue
In the torch.sparse module (https://pytorch.org/docs/stable/sparse.html), there is a typo in the "Operator overview" chapter (https://pytorch.org/docs/stable/sparse.html#operator-overview). The original description is
"The particularities of storage, that is the physical layout of the data, influences the performance of an operation but shhould not influence the semantics." The misspelled word is "shhould"; the correct spelling is "should".
### Suggest a potential alternative/fix
Change the "shhould" to "should".
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @svekars @carljparker
| 2 |
3,316 | 95,952 |
add debug mode
|
open source
|
Reference: https://github.com/pytorch/pytorch/issues/93880
This PR adds only a context manager for DebugMode with customisation points.
Following this, there will be a follow-up PR which connects it to an environment variable.
| 5 |
3,317 | 95,946 |
Build from source, Undefined symbol: c10::detail::maybe_wrap_dim_slow(long long, long long, bool)
|
triaged, oncall: mobile, module: ios
|
### 🐛 Describe the bug
Hi,
I tried to build PyTorch for iOS from source (because we need to support more architectures than the pre-built binary, such as LibTorch-Lite, provides, we're forced to build from source).
We followed the guidance in [build-pytorch-ios](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source). Everything turned out fine and the build process was smooth, but we got errors while linking PyTorch into our project.
```
Showing Recent Messages
Undefined symbol: at::_ops::repeat_interleave_self_int::call(at::Tensor const&, long long, c10::optional<long long>, c10::optional<long long>)
Undefined symbol: at::_ops::pad::call(at::Tensor const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::optional<double>)
Undefined symbol: c10::detail::maybe_wrap_dim_slow(long long, long long, bool)
```
I found [issue 13541](https://github.com/pytorch/pytorch/issues/13541) and tried adding the `"-D_GLIBCXX_USE_CXX11_ABI=0"` flag to the `building_ios` script, but that didn't fix the issue.
My compiler version:
Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: x86_64-apple-darwin21.6.0
Thread model: posix
If anyone could give a hint about where I should look, it would be a great help.
Thanks a lot.
### Versions
I built from the master branch; I don't know whether that's relevant.
| 3 |
3,318 | 95,945 |
CPU time performance is unstable
|
module: performance, module: cpu, triaged
|
### 🐛 Describe the bug
I retrained MobileNetV3 with my own data. The accuracy is the same on CPU and GPU, but the timing differs hugely.
On GPU, the cost is pretty similar across different epochs: it only takes 15 ms to run inference on a single image.
On CPU, however, the epoch-1 model takes over 40 ms per image, the epoch-2 model over 20 ms, and the epoch-3 model over 110 ms, and every time I retrain the model the inference time is different.
I also used another framework to test whether this problem comes from the CPU, but TensorFlow is quite stable and takes 25 ms per inference.
My CPU is an i5-7500 and the torch version is 2.0.0.dev. I tried lots of torch versions (1.10.0, 1.12.1, even 1.5) and the problem still happens. Does anyone know what is going on and how to fix it?
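In case part of the variance is measurement noise, here is a minimal timing sketch (with a toy stand-in model, not my MobileNetV3) that fixes the CPU thread count and warms up before measuring:
```python
import time

import torch
import torch.nn as nn

torch.set_num_threads(4)  # pin the CPU thread pool so runs are comparable

# toy stand-in model; substitute the retrained MobileNetV3 here
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
model.eval()
x = torch.rand(1, 3, 224, 224)

with torch.inference_mode():
    for _ in range(10):   # warm-up iterations (kernel selection, caches)
        model(x)
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    print(f"{(time.perf_counter() - start) / 100 * 1000:.2f} ms / image")
```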
### Versions
My CPU is an i5-7500 and the torch version is 2.0.0.dev. I tried lots of torch versions (1.10.0, 1.12.1, even 1.5) and the problem still happens. Does anyone know what happened and how to fix this problem?
cc @ngimel @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 29 |
3,319 | 95,944 |
training hangs at line torch.cuda.synchronize()
|
module: cuda, triaged, module: deadlock
|
### 🐛 Describe the bug
Hi there,
I am using PyTorch 1.12.1 to train a Hugging Face model on 8 V100s across two nodes, and it hung after epoch 1 on the line torch.cuda.synchronize().
All the ranks are waiting in "cudaDeviceSynchronize()" except the last one on the second node. I retrieved its stack trace with gdb, which says:
"""
#0 0x000014cd955ebdbc in ?? () from /lib64/libcuda.so.1
#1 0x000014cd953608b6 in ?? () from /lib64/libcuda.so.1
#2 0x000014cd956bb12f in ?? () from /lib64/libcuda.so.1
#3 0x000014cd956bda2f in ?? () from /lib64/libcuda.so.1
#4 0x000014cd9536f0df in ?? () from /lib64/libcuda.so.1
#5 0x000014cd9558ea8c in ?? () from /lib64/libcuda.so.1
#6 0x000014cd9558f351 in ?? () from /lib64/libcuda.so.1
#7 0x000014cd956b4940 in ?? () from /lib64/libcuda.so.1
#8 0x000014cd9532f833 in ?? () from /lib64/libcuda.so.1
#9 0x000014cd9532fd41 in ?? () from /lib64/libcuda.so.1
#10 0x000014cd95330ca8 in ?? () from /lib64/libcuda.so.1
#11 0x000014cd954fd9e1 in ?? () from /lib64/libcuda.so.1
#12 0x000014cdd8a4d4f9 in ?? () from /g/data/z00/yxs900/installation/cuda/11.3.0/lib64/libcudart.so.11.0
#13 0x000014cdd8a21f6d in ?? () from /g/data/z00/yxs900/installation/cuda/11.3.0/lib64/libcudart.so.11.0
#14 0x000014cdd8a6cf36 in cudaMemcpyAsync () from /g/data/z00/yxs900/installation/cuda/11.3.0/lib64/libcudart.so.11.0
#15 0x000014cdb53ae194 in at::native::_local_scalar_dense_cuda(at::Tensor const&)::{lambda()#1}::operator()() const [clone .isra.100] ()
from /g/data/z00/yxs900/installation/pytorch/v1.12.1/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so
#16 0x000014cdb53b00dc in at::native::_local_scalar_dense_cuda(at::Tensor const&) () from /g/data/z00/yxs900/installation/pytorch/v1.12.1/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so
#17 0x000014cdb681920d in at::(anonymous namespace)::(anonymous namespace)::wrapper___local_scalar_dense (self=...) at aten/src/ATen/RegisterCUDA.cpp:22556
#18 0x000014cdb681926c in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar(const at::Tensor&), at::(anonymous namespace)::(anonymous namespace)::wrapper___local_scalar_dense>, c10::Scalar, c10::guts::typelist::typelist<const at::Tensor&> >::operator() (args#0=..., this=<optimized out>) at ../aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:12
#19 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar(const at::Tensor&), at::(anonymous namespace)::(anonymous namespace)::wrapper___local_scalar_dense>, c10::Scalar, c10::guts::typelist::typelist<const at::Tensor&> >, c10::Scalar(const at::Tensor&)>::call(c10::OperatorKernel *, c10::DispatchKeySet, const at::Tensor &) (
functor=<optimized out>, args#0=...) at ../aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:453
#20 0x000014cdc0d6acc9 in c10::callUnboxedKernelFunction<c10::Scalar, at::Tensor const&> (dispatchKeySet=..., functor=<optimized out>, unboxed_kernel_func=<optimized out>)
at ../aten/src/ATen/core/boxing/KernelFunction_impl.h:54
#21 c10::KernelFunction::call<c10::Scalar, at::Tensor const&> (dispatchKeySet=..., opHandle=..., this=<optimized out>) at ../aten/src/ATen/core/boxing/KernelFunction_impl.h:67
#22 c10::Dispatcher::redispatch<c10::Scalar, at::Tensor const&>(c10::TypedOperatorHandle<c10::Scalar (at::Tensor const&)> const&, c10::DispatchKeySet, at::Tensor const&) const (op=...,
this=<optimized out>, currentDispatchKeySet=...) at ../aten/src/ATen/core/dispatch/Dispatcher.h:545
#23 c10::TypedOperatorHandle<c10::Scalar (at::Tensor const&)>::redispatch(c10::DispatchKeySet, at::Tensor const&) const (args#0=..., currentDispatchKeySet=...,
this=0x14cdc7ef22b0 <at::_ops::_local_scalar_dense::redispatch(c10::DispatchKeySet, at::Tensor const&)::op>) at ../aten/src/ATen/core/dispatch/Dispatcher.h:419
#24 at::_ops::_local_scalar_dense::redispatch (dispatchKeySet=..., self=...) at aten/src/ATen/Operators_2.cpp:7573
#25 0x000014cdc1f3e470 in at::redispatch::_local_scalar_dense (self=..., dispatchKeySet=...) at aten/src/ATen/RedispatchFunctions.h:7757
#26 torch::autograd::VariableType::(anonymous namespace)::<lambda()>::operator() (__closure=<optimized out>, __closure=<optimized out>) at ../torch/csrc/autograd/generated/VariableType_2.cpp:1482
#27 torch::autograd::VariableType::(anonymous namespace)::_local_scalar_dense (ks=..., self=...) at ../torch/csrc/autograd/generated/VariableType_2.cpp:1483
#28 0x000014cdc1f3e82f in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar(c10::DispatchKeySet, const at::Tensor&), torch::autograd::VariableType::(anonymous namespace)::_local_scalar_dense>, c10::Scalar, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&> >::operator() (args#1=..., args#0=..., this=<optimized out>)
at ../aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:12
#29 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar(c10::DispatchKeySet, const at::Tensor&), torch::autograd::VariableType::(anonymous namespace)::_local_scalar_dense>, c10::Scalar, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&> >, c10::Scalar(c10::DispatchKeySet, const at::Tensor&)>::call(c10::OperatorKernel *, c10::DispatchKeySet, const at::Tensor &) (functor=<optimized out>, dispatchKeySet=..., args#0=...) at ../aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:470
#30 0x000014cdc0ded073 in c10::callUnboxedKernelFunction<c10::Scalar, at::Tensor const&> (dispatchKeySet=..., functor=<optimized out>, unboxed_kernel_func=<optimized out>)
at ../aten/src/ATen/core/boxing/KernelFunction_impl.h:54
#31 c10::KernelFunction::call<c10::Scalar, at::Tensor const&> (dispatchKeySet=..., opHandle=..., this=0x1da1f28) at ../aten/src/ATen/core/boxing/KernelFunction_impl.h:67
#32 c10::Dispatcher::call<c10::Scalar, at::Tensor const&>(c10::TypedOperatorHandle<c10::Scalar (at::Tensor const&)> const&, at::Tensor const&) const (op=..., this=<optimized out>)
at ../aten/src/ATen/core/dispatch/Dispatcher.h:536
#33 c10::TypedOperatorHandle<c10::Scalar (at::Tensor const&)>::call(at::Tensor const&) const (args#0=..., this=0x14cdc7ef22d0 <at::_ops::_local_scalar_dense::call(at::Tensor const&)::op>)
at ../aten/src/ATen/core/dispatch/Dispatcher.h:414
#34 at::_ops::_local_scalar_dense::call (self=...) at aten/src/ATen/Operators_2.cpp:7566
#35 0x000014cdc07d9c40 in at::_local_scalar_dense (self=...) at aten/src/ATen/ops/_local_scalar_dense.h:27
#36 at::native::item (self=...) at ../aten/src/ATen/native/Scalar.cpp:17
#37 0x000014cdc1200dec in at::(anonymous namespace)::(anonymous namespace)::wrapper__item (self=...) at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:7062
#38 c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar(const at::Tensor&), at::(anonymous namespace)::(anonymous namespace)::wrapper__item>, c10::Scalar, c10::guts::typelist::typelist<const at::Tensor&> >::operator() (args#0=..., this=<optimized out>) at ../aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#39 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<c10::Scalar(const at::Tensor&), at::(anonymous namespace)::(anonymous namespace)::wrapper__item>, c10::Scalar, c10::guts::typelist::typelist<const at::Tensor&> >, c10::Scalar(const at::Tensor&)>::call(c10::OperatorKernel *, c10::DispatchKeySet, const at::Tensor &) (
functor=<optimized out>, args#0=...) at ../aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:453
#40 0x000014cdc0ce1453 in c10::callUnboxedKernelFunction<c10::Scalar, at::Tensor const&> (dispatchKeySet=..., functor=<optimized out>, unboxed_kernel_func=<optimized out>)
at ../aten/src/ATen/core/boxing/KernelFunction_impl.h:54
#41 c10::KernelFunction::call<c10::Scalar, at::Tensor const&> (dispatchKeySet=..., opHandle=..., this=0x1ef8e88) at ../aten/src/ATen/core/boxing/KernelFunction_impl.h:67
#42 c10::Dispatcher::call<c10::Scalar, at::Tensor const&>(c10::TypedOperatorHandle<c10::Scalar (at::Tensor const&)> const&, at::Tensor const&) const (op=..., this=<optimized out>)
at ../aten/src/ATen/core/dispatch/Dispatcher.h:536
#43 c10::TypedOperatorHandle<c10::Scalar (at::Tensor const&)>::call(at::Tensor const&) const (args#0=..., this=0x14cdc7eea170 <at::_ops::item::call(at::Tensor const&)::op>)
at ../aten/src/ATen/core/dispatch/Dispatcher.h:414
#44 at::_ops::item::call (self=...) at aten/src/ATen/Operators_1.cpp:6382
#45 0x000014cdc1578d71 in at::Tensor::item (this=0x14cd800ab2d8) at aten/src/ATen/core/TensorBody.h:3927
#46 at::Tensor::item<double> (this=this@entry=0x14cd800ab2d8) at aten/src/ATen/core/TensorMethods.cpp:30
#47 0x000014cdc87201d8 in torch::autograd::dispatch_to_CDouble (self=...) at ../torch/csrc/autograd/generated/python_variable_methods.cpp:844
#48 0x000014cdc8720516 in torch::autograd::THPVariable_float_scalar (self=0x14cd800ab2c0, args=0x0) at ../torch/csrc/autograd/generated/python_variable_methods.cpp:881
#49 0x000014ce217d8658 in method_vectorcall_NOARGS (func=func@entry=0x14cdf8251ea0, args=args@entry=0x7ffeb4e28e38, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at ../Objects/descrobject.c:434
#50 0x000014ce21816315 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7ffeb4e28e38, callable=0x14cdf8251ea0, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#51 vectorcall_unbound (nargs=<optimized out>, args=0x7ffeb4e28e38, func=0x14cdf8251ea0, unbound=1, tstate=0xe393f0) at ../Objects/typeobject.c:1505
#52 vectorcall_method (name=0x14ce21c08580 <id>, args=0x7ffeb4e28e38, nargs=<optimized out>) at ../Objects/typeobject.c:1537
#53 0x000014ce218a6bbf in slot_nb_float (self=<optimized out>) at ../Objects/typeobject.c:6596
#54 0x000014ce217c8e4b in PyNumber_Float (o=0x14cd800ab2c0) at ../Objects/abstract.c:1524
#55 0x000014ce218120d3 in type_call (type=type@entry=0x14ce21bfe9c0 <PyFloat_Type>, args=args@entry=0x14cd8e4490d0, kwds=kwds@entry=0x0) at ../Objects/typeobject.c:1014
#56 0x000014ce217d2a6d in _PyObject_MakeTpCall (tstate=0xe393f0, callable=0x14ce21bfe9c0 <PyFloat_Type>, args=0x14cd6f78c1d0, nargs=1, keywords=0x0) at ../Objects/call.c:191
#57 0x000014ce218450c7 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x14cd6f78c1d0, callable=0x14ce21bfe9c0 <PyFloat_Type>, tstate=<optimized out>)
at ../Include/cpython/abstract.h:116
#58 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x14cd6f78c1d0, callable=0x14ce21bfe9c0 <PyFloat_Type>, tstate=<optimized out>) at ../Include/cpython/abstract.h:103
#59 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775809, args=0x14cd6f78c1d0, callable=0x14ce21bfe9c0 <PyFloat_Type>) at ../Include/cpython/abstract.h:127
#60 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#61 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3518
#62 0x000014ce217d3282 in _PyEval_EvalFrame (throwflag=0, f=0x14cd6f78c040, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#63 function_code_fastcall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>, tstate=0xe393f0) at ../Objects/call.c:330
#64 _PyFunction_Vectorcall (func=<optimized out>, stack=0x14cd6f778920, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:367
#65 0x000014ce21844f2e in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f778920, callable=0x14cd97e67040, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#66 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f778920, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#67 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#68 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3487
#69 0x000014ce217d3282 in _PyEval_EvalFrame (throwflag=0, f=0x14cd6f778780, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#70 function_code_fastcall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>, tstate=0xe393f0) at ../Objects/call.c:330
#71 _PyFunction_Vectorcall (func=<optimized out>, stack=0x14cd6f7f0b40, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:367
#72 0x000014ce21840e10 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f7f0b40, callable=0x14cd97e61ee0, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#73 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f7f0b40, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#74 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#75 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3504
#76 0x000014ce2183fab9 in _PyEval_EvalFrame (throwflag=0, f=0x14cd6f7f09a0, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#77 _PyEval_EvalCode (tstate=tstate@entry=0xe393f0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x14cd6f7f0958, argcount=argcount@entry=2,
kwnames=0x14cd97e4c4d8, kwargs=0x14cd6f7f0968, kwcount=<optimized out>, kwstep=1, defs=0x14cd97e4c598, defcount=1, kwdefs=0x0, closure=0x0, name=0x14cd97e4e730, qualname=0x14cd97e50440)
at ../Python/ceval.c:4327
#78 0x000014ce217d331c in _PyFunction_Vectorcall (func=<optimized out>, stack=0x14cd6f7f0958, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:396
#79 0x000014ce217d4fb7 in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=0x14cd6f7f0958, callable=0x14cd97e61f70, tstate=0xe393f0)
at ../Include/cpython/abstract.h:118
#80 method_vectorcall (method=<optimized out>, args=0x14cd6f7f0960, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/classobject.c:53
#81 0x000014ce2184198b in _PyObject_VectorcallTstate (kwnames=0x14cd97e4c4c0, nargsf=<optimized out>, args=0x14cd6f7f0960, callable=0x14cd8e294980, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#82 PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=0x14cd8e294980) at ../Include/cpython/abstract.h:127
#83 call_function (kwnames=<optimized out>, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=<optimized out>) at ../Python/ceval.c:5072
#84 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3535
#85 0x000014ce2183fab9 in _PyEval_EvalFrame (throwflag=0, f=0x14cd6f7f07c0, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#86 _PyEval_EvalCode (tstate=tstate@entry=0xe393f0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x8e3c530, argcount=argcount@entry=1, kwnames=0x0,
kwargs=0x8e3c538, kwcount=<optimized out>, kwstep=1, defs=0x14cd97e4c568, defcount=1, kwdefs=0x0, closure=0x0, name=0x14ce12381ef0, qualname=0x14cd97e50260) at ../Python/ceval.c:4327
#87 0x000014ce217d331c in _PyFunction_Vectorcall (func=<optimized out>, stack=0x8e3c530, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:396
#88 0x000014ce21840e10 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x8e3c530, callable=0x14cd97e61e50, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#89 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x8e3c530, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#90 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#91 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3504
#92 0x000014ce2183fab9 in _PyEval_EvalFrame (throwflag=0, f=0x8e3c350, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#93 _PyEval_EvalCode (tstate=tstate@entry=0xe393f0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x14cd6f7da9e8, argcount=argcount@entry=1, kwnames=0x0,
kwargs=0x14cd6f7da9f0, kwcount=<optimized out>, kwstep=1, defs=0x14cd96f4c598, defcount=1, kwdefs=0x0, closure=0x0, name=0x14ce21ddfd30, qualname=0x14cd96f4f490) at ../Python/ceval.c:4327
#94 0x000014ce217d331c in _PyFunction_Vectorcall (func=<optimized out>, stack=0x14cd6f7da9e8, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:396
#95 0x000014ce21840e10 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f7da9e8, callable=0x14cd96f4e820, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#96 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f7da9e8, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#97 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#98 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3504
#99 0x000014ce2183fab9 in _PyEval_EvalFrame (throwflag=0, f=0x14cd6f7da840, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#100 _PyEval_EvalCode (tstate=tstate@entry=0xe393f0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x14cd6f7ea9a8, argcount=argcount@entry=2, kwnames=0x0,
kwargs=0x14cd6f7ea9b8, kwcount=<optimized out>, kwstep=1, defs=0x14cd96f218c8, defcount=1, kwdefs=0x0, closure=0x0, name=0x14cd96fd2f80, qualname=0x14cd96fd6390) at ../Python/ceval.c:4327
#101 0x000014ce217d331c in _PyFunction_Vectorcall (func=<optimized out>, stack=0x14cd6f7ea9a8, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:396
#102 0x000014ce217d4fb7 in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=0x14cd6f7ea9a8, callable=0x14cd96f42f70, tstate=0xe393f0)
#103 method_vectorcall (method=<optimized out>, args=0x14cd6f7ea9b0, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/classobject.c:53
#104 0x000014ce21844f2e in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f7ea9b0, callable=0x14cd8e162cc0, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#105 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x14cd6f7ea9b0, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#106 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#107 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3487
#108 0x000014ce2183fab9 in _PyEval_EvalFrame (throwflag=0, f=0x14cd6f7ea800, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#109 _PyEval_EvalCode (tstate=tstate@entry=0xe393f0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x5888390, argcount=argcount@entry=1, kwnames=0x0,
kwargs=0x5888398, kwcount=<optimized out>, kwstep=1, defs=0x14cd96fe2658, defcount=1, kwdefs=0x0, closure=0x0, name=0x14ce21ddfd30, qualname=0x14cd96fd93a0) at ../Python/ceval.c:4327
#110 0x000014ce217d331c in _PyFunction_Vectorcall (func=<optimized out>, stack=0x5888390, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:396
#111 0x000014ce217d4fb7 in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>, args=0x5888390, callable=0x14cd96f43040, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#112 method_vectorcall (method=<optimized out>, args=0x5888398, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/classobject.c:53
#113 0x000014ce21844f2e in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x5888398, callable=0x14cd8e165b40, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#114 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x5888398, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#115 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#116 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3487
#117 0x000014ce2183ffe7 in _PyEval_EvalFrame (throwflag=0, f=0x5888160, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#118 _PyEval_EvalCode (tstate=tstate@entry=0xe393f0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x522c600, argcount=argcount@entry=7, kwnames=0x0,
kwargs=0x522c638, kwcount=<optimized out>, kwstep=1, defs=0x14ce133fcd48, defcount=1, kwdefs=0x0, closure=0x0, name=0x14ce13459f30, qualname=0x14ce13459f30) at ../Python/ceval.c:4327
#119 0x000014ce217d331c in _PyFunction_Vectorcall (func=<optimized out>, stack=0x522c600, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:396
#120 0x000014ce21840bb0 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x522c600, callable=0x14cd96ee4f70, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#121 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x522c600, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#122 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#123 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3518
#124 0x000014ce217d3282 in _PyEval_EvalFrame (throwflag=0, f=0x522c410, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#125 function_code_fastcall (globals=<optimized out>, nargs=5, args=<optimized out>, co=<optimized out>, tstate=0xe393f0) at ../Objects/call.c:330
#126 _PyFunction_Vectorcall (func=<optimized out>, stack=0x14cd972159b0, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:367
#127 0x000014ce21840bb0 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x14cd972159b0, callable=0x14cd96eec5e0, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#128 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x14cd972159b0, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#129 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#130 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3518
#131 0x000014ce217d3282 in _PyEval_EvalFrame (throwflag=0, f=0x14cd97215800, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#132 function_code_fastcall (globals=<optimized out>, nargs=0, args=<optimized out>, co=<optimized out>, tstate=0xe393f0) at ../Objects/call.c:330
#133 _PyFunction_Vectorcall (func=<optimized out>, stack=0xe96030, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:367
#134 0x000014ce21840bb0 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0xe96030, callable=0x14cd96eec8b0, tstate=0xe393f0) at ../Include/cpython/abstract.h:118
#135 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0xe96030, callable=<optimized out>) at ../Include/cpython/abstract.h:127
#136 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xe393f0) at ../Python/ceval.c:5072
#137 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3518
#138 0x000014ce2183fab9 in _PyEval_EvalFrame (throwflag=0, f=0xe95ec0, tstate=0xe393f0) at ../Include/internal/pycore_ceval.h:40
#139 _PyEval_EvalCode (tstate=<optimized out>, _co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kwnames=0x0, kwargs=0x0,
kwcount=<optimized out>, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at ../Python/ceval.c:4327
#140 0x000014ce2183f7cf in _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kwnames=<optimized out>,
kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at ../Python/ceval.c:4359
#141 0x000014ce2183f779 in PyEval_EvalCodeEx (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kws=<optimized out>, kwcount=0,
defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:4375
#142 0x000014ce2183f73b in PyEval_EvalCode (co=co@entry=0x14ce134600e0, globals=globals@entry=0x14ce21d45dc0, locals=locals@entry=0x14ce21d45dc0) at ../Python/ceval.c:826
#143 0x000014ce218d29cd in run_eval_code_obj (tstate=0xe393f0, co=0x14ce134600e0, globals=0x14ce21d45dc0, locals=0x14ce21d45dc0) at ../Python/pythonrun.c:1218
#144 0x000014ce218d252a in run_mod (mod=<optimized out>, filename=<optimized out>, globals=0x14ce21d45dc0, locals=0x14ce21d45dc0, flags=<optimized out>, arena=<optimized out>)
at ../Python/pythonrun.c:1239
#145 0x000014ce217748a3 in pyrun_file (fp=0xe29340, filename=0x14ce21cc7130, start=<optimized out>, globals=0x14ce21d45dc0, locals=0x14ce21d45dc0, closeit=1, flags=0x7ffeb4e2a8c8)
at ../Python/pythonrun.c:1137
#146 0x000014ce218d2285 in pyrun_simple_file (flags=0x7ffeb4e2a8c8, closeit=1, filename=0x14ce21cc7130, fp=0xe29340) at ../Python/pythonrun.c:449
#147 PyRun_SimpleFileExFlags (fp=0xe29340, filename=<optimized out>, closeit=1, flags=0x7ffeb4e2a8c8) at ../Python/pythonrun.c:482
#148 0x000014ce218d82cb in pymain_run_file (cf=0x7ffeb4e2a8c8, config=0xe35710) at ../Modules/main.c:373
#149 pymain_run_python (exitcode=0x7ffeb4e2a8c0) at ../Modules/main.c:598
#150 Py_RunMain () at ../Modules/main.c:677
#151 0x000014ce218d7e49 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at ../Modules/main.c:731
#152 0x000014ce2072ad85 in __libc_start_main () from /lib64/libc.so.6
#153 0x00000000004006ae in _start ()
"""
When I try to understand the problem, I cannot find the "aten/src/ATen/RegisterCUDA.cpp:22556" referenced at line 17 of the backtrace.
Any suggestions about how to dig into the problem further are appreciated.
Cheers,
Yue
### Versions
Collecting environment information...
PyTorch version: 1.12.0a0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.7 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-15)
Clang version: 14.0.6 (Red Hat 14.0.6-1.module+el8.7.0+1080+d88dc670)
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.9.2 (default, Mar 29 2021, 10:41:26) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] (64-bit runtime)
Python platform: Linux-4.18.0-425.3.1.el8.nci.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: 11.3.58
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @ngimel
| 0 |
3,320 | 95,934 |
arange bug
|
module: numerical-stability, triaged
|
### 🐛 Describe the bug
arange bug.
### Versions

| 5 |
3,321 | 95,921 |
ROCm distributed flaky on test_distributed_spawn
|
module: rocm, triaged
|
### 🐛 Describe the bug
`test_distributed_spawn` flakily fails on ROCm distributed when running with the nccl backend.
Examples:
https://hud.pytorch.org/pytorch/pytorch/commit/75cb99e54971c0fdb5f1a6516e3c366af998be43#11706358342
https://hud.pytorch.org/pytorch/pytorch/commit/fafb410985d2cb94bd95f12f0c392bad9385b643#11668037807
https://hud.pytorch.org/pytorch/pytorch/commit/05943712a443138497c185405b575043b2916f34#11649638366
<details><summary>Long example from copied from logs</summary>
```
2023-02-28T13:46:36.9960944Z Running tests...
2023-02-28T13:46:36.9961599Z ----------------------------------------------------------------------
2023-02-28T13:46:36.9962850Z test_1_level_hierarchical_model_averager_equivalent_to_periodic_model_averager (__main__.TestDistBackendWithSpawn) ... INFO:torch.testing._internal.common_distributed:Started process 0 with pid 3738
2023-02-28T13:46:36.9963852Z INFO:torch.testing._internal.common_distributed:Started process 1 with pid 3739
2023-02-28T13:46:36.9964720Z [W Module.cpp:1170] Warning: cuDNN Benchmark limit is not supported in MIOpen and will have no effect. (function operator())
2023-02-28T13:46:36.9965866Z /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py:119: UserWarning: loaded 64 slow tests
2023-02-28T13:46:36.9966608Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-02-28T13:46:36.9967591Z /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py:123: UserWarning: loaded 256 disabled tests
2023-02-28T13:46:36.9968422Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-02-28T13:46:36.9969163Z INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
2023-02-28T13:46:36.9970321Z /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py:119: UserWarning: loaded 64 slow tests
2023-02-28T13:46:36.9971041Z warnings.warn(f"loaded {len(slow_tests_dict)} slow tests")
2023-02-28T13:46:36.9971970Z /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py:123: UserWarning: loaded 256 disabled tests
2023-02-28T13:46:36.9972789Z warnings.warn(f"loaded {len(disabled_tests_dict)} disabled tests")
2023-02-28T13:46:36.9973468Z INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 1
2023-02-28T13:46:36.9974508Z INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
2023-02-28T13:46:36.9975618Z INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
2023-02-28T13:46:36.9976481Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 0
2023-02-28T13:46:36.9977244Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 1
2023-02-28T13:46:36.9978052Z INFO:torch.distributed.algorithms.model_averaging.hierarchical_model_averager:Model averaging hierarchy:
2023-02-28T13:46:36.9979376Z INFO:torch.distributed.algorithms.model_averaging.hierarchical_model_averager: Each group that has 2 processes average parameters every 4 iterations, if no higher-level averaging.
2023-02-28T13:46:36.9980534Z INFO:torch.distributed.algorithms.model_averaging.hierarchical_model_averager:Model averaging hierarchy:
2023-02-28T13:46:36.9981908Z INFO:torch.distributed.algorithms.model_averaging.hierarchical_model_averager: Each group that has 2 processes average parameters every 4 iterations, if no higher-level averaging.
2023-02-28T13:46:36.9983159Z libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
2023-02-28T13:46:36.9984024Z libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
2023-02-28T13:46:36.9984640Z free(): double free detected in tcache 2
2023-02-28T13:46:36.9985403Z INFO:torch.testing._internal.common_distributed:Received event Event.GET_TRACEBACK on process 0
2023-02-28T13:46:36.9986360Z INFO:torch.testing._internal.common_distributed:Received event Event.GET_TRACEBACK on process 1
2023-02-28T13:46:36.9987161Z INFO:torch.testing._internal.common_distributed:Process 0 sent traceback
2023-02-28T13:46:36.9987906Z INFO:torch.testing._internal.common_distributed:Process 1 sent traceback
2023-02-28T13:46:36.9988637Z ERROR:torch.testing._internal.common_distributed:Process 0 timed out with traceback:
2023-02-28T13:46:36.9989049Z
2023-02-28T13:46:36.9989289Z Current thread 0x00007f72f2ffe700 (most recent call first):
2023-02-28T13:46:36.9990272Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 620 in _event_listener
2023-02-28T13:46:36.9991158Z File "/opt/conda/envs/py_3.8/lib/python3.8/threading.py", line 870 in run
2023-02-28T13:46:36.9992098Z File "/opt/conda/envs/py_3.8/lib/python3.8/threading.py", line 932 in _bootstrap_inner
2023-02-28T13:46:36.9992827Z File "/opt/conda/envs/py_3.8/lib/python3.8/threading.py", line 890 in _bootstrap
2023-02-28T13:46:36.9993287Z
2023-02-28T13:46:36.9993565Z Thread 0x00007f7aa711e180 (most recent call first):
2023-02-28T13:46:36.9994629Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1731 in all_reduce
2023-02-28T13:46:36.9995711Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1480 in wrapper
2023-02-28T13:46:36.9996960Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/algorithms/model_averaging/utils.py", line 36 in average_parameters
2023-02-28T13:46:36.9998306Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/algorithms/model_averaging/utils.py", line 71 in average_parameters_or_parameter_groups
2023-02-28T13:46:36.9999510Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/algorithms/model_averaging/averagers.py", line 118 in average_parameters
2023-02-28T13:46:37.0000948Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 1129 in test_1_level_hierarchical_model_averager_equivalent_to_periodic_model_averager
2023-02-28T13:46:37.0002168Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 174 in wrapper
2023-02-28T13:46:37.0003183Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 543 in wrapper
2023-02-28T13:46:37.0004267Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 657 in run_test
2023-02-28T13:46:37.0005390Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 581 in _run
2023-02-28T13:46:37.0006269Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/process.py", line 108 in run
2023-02-28T13:46:37.0007000Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/process.py", line 315 in _bootstrap
2023-02-28T13:46:37.0007775Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/spawn.py", line 129 in _main
2023-02-28T13:46:37.0008461Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/spawn.py", line 116 in spawn_main
2023-02-28T13:46:37.0009058Z File "<string>", line 1 in <module>
2023-02-28T13:46:37.0009379Z
2023-02-28T13:46:37.0009694Z ERROR:torch.testing._internal.common_distributed:Process 1 timed out with traceback:
2023-02-28T13:46:37.0010067Z
2023-02-28T13:46:37.0010329Z Current thread 0x00007efa0577e700 (most recent call first):
2023-02-28T13:46:37.0011220Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 620 in _event_listener
2023-02-28T13:46:37.0012063Z File "/opt/conda/envs/py_3.8/lib/python3.8/threading.py", line 870 in run
2023-02-28T13:46:37.0012816Z File "/opt/conda/envs/py_3.8/lib/python3.8/threading.py", line 932 in _bootstrap_inner
2023-02-28T13:46:37.0013668Z File "/opt/conda/envs/py_3.8/lib/python3.8/threading.py", line 890 in _bootstrap
2023-02-28T13:46:37.0014071Z
2023-02-28T13:46:37.0014293Z Thread 0x00007efeb4eab180 (most recent call first):
2023-02-28T13:46:37.0015247Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1731 in all_reduce
2023-02-28T13:46:37.0016340Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1480 in wrapper
2023-02-28T13:46:37.0017471Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/algorithms/model_averaging/utils.py", line 36 in average_parameters
2023-02-28T13:46:37.0018885Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/algorithms/model_averaging/utils.py", line 71 in average_parameters_or_parameter_groups
2023-02-28T13:46:37.0020128Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/distributed/algorithms/model_averaging/averagers.py", line 118 in average_parameters
2023-02-28T13:46:37.0021610Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 1129 in test_1_level_hierarchical_model_averager_equivalent_to_periodic_model_averager
2023-02-28T13:46:37.0022816Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 174 in wrapper
2023-02-28T13:46:37.0023959Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 543 in wrapper
2023-02-28T13:46:37.0025026Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 657 in run_test
2023-02-28T13:46:37.0026126Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 581 in _run
2023-02-28T13:46:37.0026956Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/process.py", line 108 in run
2023-02-28T13:46:37.0027733Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/process.py", line 315 in _bootstrap
2023-02-28T13:46:37.0028534Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/spawn.py", line 129 in _main
2023-02-28T13:46:37.0029353Z File "/opt/conda/envs/py_3.8/lib/python3.8/multiprocessing/spawn.py", line 116 in spawn_main
2023-02-28T13:46:37.0029981Z File "<string>", line 1 in <module>
2023-02-28T13:46:37.0030286Z
2023-02-28T13:46:37.0030473Z ERROR (300.600s)
2023-02-28T13:46:37.0031231Z test_1_level_hierarchical_model_averager_equivalent_to_periodic_model_averager (__main__.TestDistBackendWithSpawn) ... Timing out after 300 seconds and killing subprocesses.
2023-02-28T13:46:37.0032297Z test_1_level_hierarchical_model_averager_equivalent_to_periodic_model_averager errored - num_retries_left: 3
2023-02-28T13:46:37.0032769Z Traceback (most recent call last):
2023-02-28T13:46:37.0033628Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 541, in wrapper
2023-02-28T13:46:37.0034254Z self._join_processes(fn)
2023-02-28T13:46:37.0035187Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 759, in _join_processes
2023-02-28T13:46:37.0035910Z self._check_return_codes(elapsed_time)
2023-02-28T13:46:37.0036873Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 809, in _check_return_codes
2023-02-28T13:46:37.0037545Z raise RuntimeError(
2023-02-28T13:46:37.0038061Z RuntimeError: Process 0 terminated or timed out after 300.0667040348053 seconds
```
</details>
I believe this is runner-related, as I only see it happen on
* worker-rocm-amd-50
* worker-rocm-amd-60
* worker-rocm-amd-64
* worker-rocm-amd-66
* worker-rocm-amd-70
* worker-rocm-amd-106
* worker-rocm-amd-118
50, 60, 64, 66, and 70 are especially bad: they have each run the distributed test config at least 15 times on master and have succeeded 0 times. This makes me think that these workers also fail in other ways on other distributed tests, but I haven't investigated that much.
The first known bad is https://hud.pytorch.org/pytorch/pytorch/commit/2b0d7e63f0c146152ec4786fe8799ce2ec17fe7c#11073989965, which is a few days after a month of ROCm distributed being disabled.
### Versions
CI
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport
| 2 |
3,322 | 95,917 |
torchvision Caltech101 collate_fn error
|
triaged
|
### 🐛 Describe the bug
The Caltech101 dataset loads grayscale images as a single channel, resulting in errors when collating images into a batch.
Code to reproduce:
````python
from torchvision.datasets import Caltech101
from torchvision import transforms as T
from torch.utils.data.dataloader import DataLoader
dataset = Caltech101(
root='/path/to/data',
transform=T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()]),
)
loader = DataLoader(
dataset, batch_size=128, shuffle=False, num_workers=2, pin_memory=False
)
for i, (x, y) in enumerate(loader):
print(x.shape, y.shape)
````
Output:
````
(torch)bash-4.2$ python experiments/classification/caltechbug.py
Files already downloaded and verified
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
torch.Size([128, 3, 224, 224]) torch.Size([128])
Traceback (most recent call last):
File "/tudelft.net/staff-bulk/ewi/insy/VisionLab/attilalengyel/phd-repos/color-equivariance/experiments/classification/caltechbug.py", line 15, in <module>
for i, (x, y) in enumerate(loader):
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
return self._process_data(data)
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
data.reraise()
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 175, in default_collate
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 175, in <listcomp>
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "/home/nfs/attilalengyel/envs/torch/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 140, in default_collate
out = elem.new(storage).resize_(len(batch), *list(elem.size()))
RuntimeError: Trying to resize storage that is not resizable
````
Possible fix:
Add `.convert('RGB')` to `torchvision.datasets.caltech` line 106.
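In the meantime, a user-side workaround sketch (assuming the dataset yields PIL images before transforms, which `Caltech101` does; the `Lambda` step below is illustrative and not the upstream fix):
````python
from torchvision import transforms as T

transform = T.Compose([
    T.Lambda(lambda img: img.convert("RGB")),  # promote 1-channel grayscale images to 3 channels
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
])
````
With this, every sample collates to a consistent `[3, 224, 224]` shape, so `default_collate` no longer fails.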
### Versions
(torch)bash-4.2$ python collect_env.py
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 525.60.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
Stepping: 1
CPU MHz: 2600.134
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4200.17
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 40960K
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd rsb_ctxsw ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear spec_ctrl intel_stibp flush_l1d
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.12.1
[pip3] torchinfo==1.7.1
[pip3] torchmetrics==0.10.0
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge
[conda] mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 py39h14f4228_1
[conda] numpy-base 1.23.3 py39h31eccc5_1
[conda] pytorch 1.12.1 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-lightning 1.7.7 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.10.0 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.13.1 py39_cu116 pytorch
| 1 |
3,323 | 95,916 |
autograd.functional.jacobian : tensor instead of function as input for reverse mode?
|
feature, module: autograd, triaged, needs research, has workaround
|
### 🚀 The feature, motivation and pitch
For reverse mode, would it be possible to add an overload that takes a tensor as input instead of a function? This would simplify the coding quite a bit when you are continuing the graph after the Jacobian calculation.
This is especially true when you are going to take the Jacobian of a Jacobian, etc.
```
y= torch.mv(A,x)
jac1 = torch.autograd.functional.jacobian(y, x, create_graph=True)
z = somefunction(y,jac1)
jac2 = torch.autograd.functional.jacobian(z ,x, create_graph=False)
a = anotherfunction(z,jac2,y,jac1)
```
Instead of :
```
def yf(x):
return torch.mv(A,x)
def fun1(x):
jac = torch.autograd.functional.jacobian(yf,x,create_graph=True)
return jac
def fun2(x):
y=yf(x)
jac1=fun1(x)
z=somefunction(y,jac1)
return z
jac2 = torch.autograd.functional.jacobian(fun2,x,create_graph=False)
z=fun2(x)
jac1 = fun1(x)
y=yf(x)
a = anotherfunction(z,jac2,y,jac1)
```
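For reference, a minimal sketch of the kind of workaround that is possible today with `torch.autograd.grad` (the helper name `jacobian_from_tensor` is hypothetical, and this is not the proposed API):
```python
import torch

def jacobian_from_tensor(y, x, create_graph=False):
    # Build the Jacobian of y w.r.t. x one output element at a time.
    flat_y = y.reshape(-1)
    rows = []
    for i in range(flat_y.numel()):
        grad_out = torch.zeros_like(flat_y)
        grad_out[i] = 1.0
        (row,) = torch.autograd.grad(
            flat_y, x, grad_outputs=grad_out,
            retain_graph=True, create_graph=create_graph,
        )
        rows.append(row.reshape(-1))
    return torch.stack(rows)  # shape: (y.numel(), x.numel())

A = torch.randn(3, 4)
x = torch.randn(4, requires_grad=True)
y = torch.mv(A, x)
jac1 = jacobian_from_tensor(y, x, create_graph=True)  # equals A here
```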
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 1 |
3,324 | 95,895 |
[PTD] dist.barrier() unreliable when using collectives from multiple threads.
|
oncall: distributed, module: c10d
|
### 🐛 Describe the bug
This is a hang that happens with https://github.com/pytorch/pytorch/pull/95819, but it's a reliability issue with any code that emits collectives from multiple threads, including calls to dist.barrier().
This is likely to be common with PTD tests that use CUDA, trigger backward on some model, and do collectives from it (like when doing DDP or FSDP).
The workaround is to replace the barrier call with `dist.all_reduce(torch.zeros((1,), device="cuda" if torch.cuda.is_available() else "cpu"))`.
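A minimal sketch of that workaround, assuming the default process group is already initialized and each rank has set its CUDA device where applicable:
```python
import torch
import torch.distributed as dist

def barrier_workaround():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # all_reduce on a dummy tensor synchronizes ranks like a barrier would.
    dist.all_reduce(torch.zeros((1,), device=device))
```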
### Versions
master
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,325 | 95,874 |
The Benchmark test for dynamo runs stuck
|
needs reproduction, triaged, oncall: pt2
|
### 🐛 Describe the bug

In this code, it returns a "corrupted size vs. prev_size" error.
When I tried 15,999,999 data points, the error went away.
### Versions
1.13.0+cuda11.7
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,327 | 95,864 |
Three input names provided but ONNX recognizes only two inputs
|
module: onnx, triaged
|
I have given three input names to the model, but it is taking only two input names. Any ideas related to this, please?
```python
torch.onnx.export(model,
(image, caption, cap_mask),
"model.onnx",
input_names=input_names,
output_names=ouput_names,
export_params=True
)
```
```bash
onnx_graph:
[
name: "caption"
type {
tensor_type {
elem_type: 7
shape {
dim {
dim_value: 1
}
dim {
dim_value: 128
}
}
}
},
name: "cap_mask"
type {
tensor_type {
elem_type: 9
shape {
dim {
dim_value: 1
}
dim {
dim_value: 128
}
}
}
}
]
```
Error:
```bash
Traceback (most recent call last):
File "dum.py", line 61, in
ort_inputs = {ort_session.get_inputs()[0].name: image, ort_session.get_inputs()[1].name: caption, ort_session.get_inputs()[2].name: cap_mask}
IndexError: list index out of range
```
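One thing worth checking (an assumption about the cause, not a confirmed diagnosis): exporters drop graph inputs that the traced `forward()` never actually uses, or that get folded into constants, which would explain `image` being missing. A quick way to list which inputs the exported graph kept:
```python
import onnx

model_proto = onnx.load("model.onnx")
print([inp.name for inp in model_proto.graph.input])  # inputs the exporter kept
```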
| 1 |
3,328 | 95,858 |
[JIT] Support string type annotations in NamedTuples
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
See below for an example clarifying what "string type annotations" means.
Normally we'd just advise users not to annotate like this. But this is an effect of `from __future__ import annotations`, which is difficult to work around because it applies per-file ([PEP](https://peps.python.org/pep-0563)). It's also plausible that users in some scenarios would want to use this in order to import fewer packages and reduce import times.
**Q**: Why do NamedTuples have this issue, but not normal types?
**A**: String annotations are supported for normal types, because the IR is loaded into C++ as strings and parsed in C++. But for NamedTuples, we parse the NamedTuple string name and then run a callback to python, where we use `getattr(obj, "__annotations__")` to get the annotations. At this point, we may not have the necessary context in order to evaluate the ForwardRef.
In order to fix this, we need to plumb the extra context through.
- [x] Fix this for type annotations - fixed in #96933
- [ ] Fix this for instances of NamedTuples whose types are inferred. For an example of this, see test_namedtuple_resolution_forwardref introduced in #96933. Suggestions on investigating this: first, un-mark the test as expectedFailure; then put a breakpoint at `registerNamedTuple` (in python_sugared_value.cpp). This will show the C++ stack leading to the named tuple issue. One option for fixing this is to plumb the RCB through to this callsite. (To run the test, `python test/test_jit.py -k test_namedtuple_resolution_forwardref`).
### Alternatives
Alternatively we can try to support this the way non-nn.Module classes are supported. Roughly speaking, these are pre-compiled and then cached in `torch/jit/_state.py`. Then, we don't need to worry about plumbing the context through. I didn't look at the details, so maybe there's other complications here or other context why NamedTuples were supported differently than other classes.
### Additional context
Rough idea of repro (untested)
```python
import torch
from typing import NamedTuple
class MyNT(NamedTuple):
x: "str"
y: "torch.Tensor"
def fn(x: MyNT):
return x.y.relu()
torch.jit.script(fn)
```
If we also add `from __future__ import annotations`, then we'll find that `MyNT.__annotations__` shows a dict of `str -> ForwardRef` instead of the `str -> typing.*` that we normally see.
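For illustration only, a sketch of the resolution that has to happen somewhere, done eagerly on the Python side with `typing.get_type_hints` (this assumes the referenced names such as `torch` are importable from the defining module, and it is not a fix for the scripting path itself):
```python
import typing
from typing import NamedTuple

import torch

class MyNT(NamedTuple):
    x: "str"
    y: "torch.Tensor"

print(typing.get_type_hints(MyNT))
# {'x': <class 'str'>, 'y': <class 'torch.Tensor'>}
```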
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
3,329 | 95,857 |
weakref.proxy issue with torch.compile
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
The snippet below should work with the compiled function. There seems to be an undesired interaction with `weakref.proxy`.
This causes https://github.com/Lightning-AI/lightning/issues/16822
### Error logs
```py
Good! # regular `log`
Traceback (most recent call last):
File "/home/carmocca/git/lightning/kk3.py", line 34, in <module>
compiled_log()
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 228, in _fn
return fn(*args, **kwargs)
File "/home/carmocca/git/lightning/kk3.py", line 21, in log
raise RuntimeError
RuntimeError
```
### Minified repro
```python
import weakref
import torch
class Trainer:
def __init__(self):
self.foo = True
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.trainer = None
def forward(self):
...
def log(self):
if self.trainer.foo is None:
raise RuntimeError
print("Good!")
model = MyModel()
trainer = Trainer()
model.trainer = weakref.proxy(trainer)
# works
model.log()
compiled_log = torch.compile(model.log)
# fails
compiled_log()
```
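One possible way to sidestep the failure while the root cause is investigated (an assumption, not a confirmed fix) is to exclude the offending method from compilation:
```python
import torch._dynamo

eager_log = torch._dynamo.disable(model.log)  # run log() eagerly, outside dynamo
eager_log()
```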
### Versions
Today's nightly: `torch==2.0.0.dev20230301+cpu`
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
3,330 | 95,856 |
`AssertionError: Activation` when compiling a spconv structure like `BaseBEVBackbone`
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Error when using `torch._dynamo.optimize` to compile `BaseBEVBackbone`
### Error logs
```
2023-03-02 03:48:24,834 INFO **********************Start training cfgs/kitti_models/centerpoint(default)**********************
epochs: 0%| | 0/80 [00:00<?, ?it/s[2023-03-02 03:48:26,471] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward | 0/464 [00:00<?, ?it/s]
[2023-03-02 03:48:26,487] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function inductor
[2023-03-02 03:48:28,790] torch._dynamo.output_graph: [INFO] Step 2: done compiler function inductor
[2023-03-02 03:48:28,915] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing __init__
[2023-03-02 03:48:28,931] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing <graph break in forward>
[2023-03-02 03:48:28,938] torch._dynamo.output_graph: [WARNING] nn.Module hooks are not fully supported, they may be ignored
epochs: 0%| | 0/80 [00:04<?, ?it/s]
Traceback (most recent call last):
File "tools/train.py", line 247, in <module>
main()
File "tools/train.py", line 202, in main
train_model(
File "/home/users/chenrui17/baidu/hac-aiacc/AIAK-MODEL/pytorch/centerpoint-kitti/tools/train_utils/train_utils.py", line 182, in train_model
accumulated_iter = train_one_epoch(
File "/home/users/chenrui17/baidu/hac-aiacc/AIAK-MODEL/pytorch/centerpoint-kitti/tools/train_utils/train_utils.py", line 55, in train_one_epoch
loss, tb_dict, disp_dict = model_func(model, batch, scaler)
File "/home/users/chenrui17/baidu/hac-aiacc/AIAK-MODEL/pytorch/centerpoint-kitti/pcdet/models/__init__.py", line 63, in model_func
ret_dict, tb_dict, disp_dict = model(batch_dict)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/users/chenrui17/baidu/hac-aiacc/AIAK-MODEL/pytorch/centerpoint-kitti/pcdet/models/detectors/centerpoint.py", line 12, in forward
batch_dict = cur_module(batch_dict)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn
return fn(*args, **kwargs)
File "/home/users/chenrui17/baidu/hac-aiacc/AIAK-MODEL/pytorch/centerpoint-kitti/pcdet/models/backbones_3d/spconv_backbone.py", line 136, in forward
input_sp_tensor = spconv.SparseConvTensor(
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 343, in catch_errors
return callback(frame, cache_size, hooks)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1862, in run
super().run()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1014, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/nn_module.py", line 263, in call_function
return tx.inline_user_function_return(
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 553, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1955, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 2011, in inline_call_
tracer.run()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1051, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 292, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 260, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 93, in call_function
return tx.inline_user_function_return(
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 553, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1955, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 2011, in inline_call_
tracer.run()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1014, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/nn_module.py", line 263, in call_function
return tx.inline_user_function_return(
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 553, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1955, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 2011, in inline_call_
tracer.run()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1051, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 292, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 260, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 93, in call_function
return tx.inline_user_function_return(
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 553, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1955, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 2011, in inline_call_
tracer.run()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1063, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 292, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 260, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 93, in call_function
return tx.inline_user_function_return(
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 553, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1955, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1983, in inline_call_
sub_locals, closure_cells = func.bind_args(parent, args, kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 160, in bind_args
[
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 161, in <listcomp>
wrap(val=arg, source=source)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/_dynamo/variables/functions.py", line 60, in wrap_bound_arg
assert isinstance(val, VariableTracker), typestr(val)
AssertionError: Activation
from user code:
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/centerpoint-kitti/lib/python3.8/site-packages/spconv/pytorch/conv.py", line 741, in forward
return self._conv_forward(self.training, input, self.weight, self.bias, add_input,
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
### Minified repro
I tried debugging with `TORCHDYNAMO_REPRO_AFTER`, but it made no difference.
### Versions
```
PyTorch version: 2.0.0.dev20230228+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.14.0_1-0-0-44-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 160
On-line CPU(s) list: 0-159
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 4
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
Frequency boost: enabled
CPU MHz: 3200.092
CPU max MHz: 2501.0000
CPU min MHz: 1000.0000
BogoMIPS: 5000.00
Virtualization: VT-x
L1d cache: 2.5 MiB
L1i cache: 2.5 MiB
L2 cache: 80 MiB
L3 cache: 110 MiB
NUMA node0 CPU(s): 0-19,80-99
NUMA node1 CPU(s): 20-39,100-119
NUMA node2 CPU(s): 40-59,120-139
NUMA node3 CPU(s): 60-79,140-159
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.0.0.dev20230228+cu117
[pip3] torchaudio==2.0.0.dev20230228+cu117
[pip3] torchvision==0.15.0.dev20230227+cu117
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.0.0.dev20230228+cu117 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230228+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230227+cu117 pypi_0 pypi
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,331 | 95,815 |
Let Nested Tensor Metadata be cached on GPU
|
triaged, module: nestedtensor
|
### 🚀 The feature, motivation and pitch
Filing this issue to track https://github.com/pytorch/pytorch/pull/95518#discussion_r1117869931
Currently nested tensor metadata lives on the CPU. This means that when we construct a nested tensor from say a tuple of (Tensor values, Tensor offsets) that live on the GPU, constructing the nested tensor metadata (e.g. seq_len column of sizes) will incur a sync.
We want to treat a copy of the metadata on the GPU like a cached value with the ground truth being readily available on the CPU.
At a high level, if Tensor values, Tensor offsets are on GPU, then we can create the GPU copy of the metadata and send it to the CPU using non_blocking=True to avoid a sync.
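A minimal sketch of the intended pattern (illustrative only; the offsets values and variable names are made up, and a CUDA device is assumed):
```python
import torch

# Derive per-sequence lengths on the GPU and start an asynchronous copy into
# pinned CPU memory instead of forcing a host sync at construction time.
offsets = torch.tensor([0, 3, 7, 12], device="cuda")
lengths_gpu = offsets[1:] - offsets[:-1]  # metadata stays on the GPU
lengths_cpu = torch.empty(lengths_gpu.shape, dtype=lengths_gpu.dtype,
                          device="cpu", pin_memory=True)
lengths_cpu.copy_(lengths_gpu, non_blocking=True)  # no sync issued here
# lengths_cpu is only safe to read once the copying stream has been synchronized,
# which is what makes the CPU copy behave like a cached value.
```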
More design is needed.
### Alternatives
_No response_
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg
| 1 |
3,332 | 95,791 |
Torch._dynamo.optimize: The tensor has a non-zero number of elements, but its data is not allocated yet
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
I am trying to optimize a code that calls the radius function from pytorch_cluster:
```python
import torch
from typing import Optional
from torch_cluster import radius
import torch._dynamo as dynamo
def myradius(x: torch.Tensor, y: torch.Tensor, r: float,
batch_x: Optional[torch.Tensor] = None,
batch_y: Optional[torch.Tensor] = None,
max_num_neighbors: int = 32,
num_workers: int = 1) -> torch.Tensor:
return radius(x,y,r,batch_x, batch_y, max_num_neighbors, num_workers)
device = torch.device('cuda:0')
x2 = torch.tensor([0.0], device=device)
y2 = torch.tensor([1.0], device=device)
dynamo.explain(myradius, x2, y2, 2)
```
Executing this code results in the following error:
```
Traceback (most recent call last):
File "/home/raul/mambaforge/envs/torch2-test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1194, in run_node
return node.target(*args, **kwargs)
File "/home/raul/mambaforge/envs/torch2-test/lib/python3.10/site-packages/torch/_ops.py", line 499, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_
mutable_data() to actually allocate memory.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/raul/mambaforge/envs/torch2-test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1152, in get_fake_value
return wrap_fake_exception(
File "/home/raul/mambaforge/envs/torch2-test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 808, in wrap_fake_exception
return fn()
File "/home/raul/mambaforge/envs/torch2-test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1153, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/raul/mambaforge/envs/torch2-test/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1206, in run_node
raise RuntimeError(
RuntimeError: Failed running call_function torch_cluster.radius(*(FakeTensor(FakeTensor(..., device='meta', size=(1, 1)), cuda:0), FakeTensor(FakeTensor(..., device='meta',
size=(1, 1)), cuda:0), None, None, 2, 32, 1), **{}):
The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data()
to actually allocate memory.
(scroll up for backtrace)
```
**I do not understand this error at all. Some pointers on how to approach this error would be much appreciated.**
On the other hand, the definition of the radius function in pytorch_cluster is not that complicated, with most of the complexity hidden on the C++ side:
```python
def radius(x: torch.Tensor, y: torch.Tensor, r: float,
batch_x: Optional[torch.Tensor] = None,
batch_y: Optional[torch.Tensor] = None, max_num_neighbors: int = 32,
num_workers: int = 1) -> torch.Tensor:
x = x.view(-1, 1) if x.dim() == 1 else x
y = y.view(-1, 1) if y.dim() == 1 else y
x, y = x.contiguous(), y.contiguous()
batch_size = 1
if batch_x is not None:
assert x.size(0) == batch_x.numel()
batch_size = int(batch_x.max()) + 1
if batch_y is not None:
assert y.size(0) == batch_y.numel()
batch_size = max(batch_size, int(batch_y.max()) + 1)
ptr_x: Optional[torch.Tensor] = None
ptr_y: Optional[torch.Tensor] = None
if batch_size > 1:
assert batch_x is not None
assert batch_y is not None
arange = torch.arange(batch_size + 1, device=x.device)
ptr_x = torch.bucketize(arange, batch_x)
ptr_y = torch.bucketize(arange, batch_y)
return torch.ops.torch_cluster.radius(x, y, ptr_x, ptr_y, r,
max_num_neighbors, num_workers)
```
Suspecting that ptr_x/ptr_y being None was somehow causing the error, I tried calling the function with arguments that force it to construct them.
```python
x = torch.tensor([[-1, -1], [-1, 1], [1, -1], [1, 1]], device=device).float()
batch_x = torch.tensor([0, 0, 0, 0], device=device)
y = torch.tensor([[-1, 0], [1, 0]], device=device).float()
batch_y = torch.tensor([0, 0], device=device)
dynamo.explain(myradius, x, y, 1.5, batch_x, batch_y)
```
This code succeeds, but I am clueless as to why.
### Error logs
_No response_
### Minified repro
Running the minifier just prints the above error again.
### Versions
```
Python version: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:20:04) [GCC 11.3.0] (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 11.8.89
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.0.0.dev20230227+cu118
[pip3] torch-cluster==1.6.0
[pip3] torch-geometric==2.3.0
[pip3] torch-scatter==2.1.0
[pip3] torch-sparse==0.6.16
[pip3] torchaudio==2.0.0.dev20230227
[pip3] torchmetrics==0.11.1
[pip3] torchvision==0.15.0.dev20230227
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.24.2 py310h8deb116_0 conda-forge
[conda] pytorch-cuda 11.8 h7e8668a_3 pytorch-nightly
[conda] pytorch-mutex 1.0 cpu pytorch-nightly
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.0.0.dev20230227+cu118 pypi_0 pypi
[conda] torch-cluster 1.6.0 dev_0 <develop>
[conda] torch-geometric 2.3.0 pypi_0 pypi
[conda] torch-scatter 2.1.0 pypi_0 pypi
[conda] torch-sparse 0.6.16 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230227 py310_cpu pytorch-nightly
[conda] torchmetrics 0.11.1 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230227 py310_cpu pytorch-nightly
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 11 |
3,333 | 95,789 |
drastic speed regression of torch.jit.load starting with the 20230301 nightly
|
oncall: jit
|
Reproduction:
```python
import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn
from time import perf_counter
model = keypointrcnn_resnet50_fpn()
scripted_model = torch.jit.script(model)
scripted_model.save("script.pt")
start = perf_counter()
torch.jit.load("script.pt")
stop = perf_counter()
print(torch.__version__, stop - start)
```
```
2.0.0.dev20230228+cpu 0.2134191999975883
```
```
2.0.0.dev20230301+cpu 21.595253191000666
```
Results above come from the Python 3.8 wheel on Linux.
This is the root cause for the CI timeouts seen in torchvision: pytorch/vision#7369
Here are some test durations that use the 20230227 nightly: https://github.com/pytorch/vision/actions/runs/4286797892/jobs/7466847813#step:10:3241
```
============================= slowest 20 durations =============================
13.22s call test/test_models.py::test_classification_model[cpu-regnet_y_128gf]
12.03s call test/test_models.py::test_classification_model[cpu-vit_h_14]
9.59s call test/test_models.py::test_detection_model[cpu-maskrcnn_resnet50_fpn_v2]
9.25s call test/test_models.py::test_quantized_classification_model[resnext101_32x8d]
9.01s call test/test_models.py::test_quantized_classification_model[resnext101_64x4d]
8.43s call test/test_backbone_utils.py::TestFxFeatureExtraction::test_jit_forward_backward[regnet_y_128gf]
8.42s call test/test_backbone_utils.py::TestFxFeatureExtraction::test_jit_forward_backward[vit_h_14]
8.20s call test/test_models.py::test_quantized_classification_model[mobilenet_v3_large]
7.84s call test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps
7.77s call test/test_models.py::test_classification_model[cpu-efficientnet_v2_l]
7.50s call test/test_models.py::test_classification_model[cpu-vit_l_16]
7.38s call test/test_backbone_utils.py::TestFxFeatureExtraction::test_build_fx_feature_extractor[regnet_y_128gf]
7.17s call test/test_models.py::test_classification_model[cpu-densenet201]
7.12s call test/test_backbone_utils.py::TestFxFeatureExtraction::test_forward_backward[vit_h_14]
6.98s call test/test_models.py::test_classification_model[cpu-vit_l_32]
6.97s call test/test_datasets.py::LFWPairsTestCase::test_transforms
6.81s call test/test_backbone_utils.py::TestFxFeatureExtraction::test_forward_backward[regnet_y_128gf]
6.62s call test/test_backbone_utils.py::TestFxFeatureExtraction::test_build_fx_feature_extractor[vit_h_14]
6.29s call test/test_backbone_utils.py::TestFxFeatureExtraction::test_jit_forward_backward[vit_l_32]
5.93s call test/test_models.py::test_detection_model[cpu-keypointrcnn_resnet50_fpn]
```
And here is the same output using the 20230301 nightly: https://github.com/pytorch/vision/actions/runs/4304752013/jobs/7506221231#step:10:3239
```
============================= slowest 20 durations =============================
27.88s call test/test_models.py::test_detection_model[cpu-keypointrcnn_resnet50_fpn]
27.44s call test/test_models.py::test_detection_model[cpu-maskrcnn_resnet50_fpn_v2]
25.18s call test/test_models.py::test_classification_model[cpu-vit_h_14]
22.61s call test/test_models.py::test_detection_model[cpu-maskrcnn_resnet50_fpn]
22.20s call test/test_models.py::test_detection_model[cpu-fasterrcnn_mobilenet_v3_large_fpn]
21.95s call test/test_models.py::test_classification_model[cpu-densenet201]
21.23s call test/test_models.py::test_detection_model[cpu-fasterrcnn_mobilenet_v3_large_320_fpn]
19.93s call test/test_models.py::test_detection_model[cpu-fasterrcnn_resnet50_fpn]
19.90s call test/test_models.py::test_classification_model[cpu-vit_l_16]
19.89s call test/test_models.py::test_detection_model[cpu-fasterrcnn_resnet50_fpn_v2]
19.50s call test/test_models.py::test_classification_model[cpu-vit_l_32]
18.07s call test/test_models.py::test_classification_model[cpu-densenet169]
17.57s call test/test_models.py::test_detection_model[cpu-fcos_resnet50_fpn]
17.04s call test/test_models.py::test_detection_model[cpu-ssdlite320_mobilenet_v3_large]
16.76s call test/test_models.py::test_classification_model[cpu-densenet161]
16.69s call test/test_models.py::test_detection_model[cpu-retinanet_resnet50_fpn_v2]
16.56s call test/test_models.py::test_detection_model[cpu-retinanet_resnet50_fpn]
16.28s call test/test_models.py::test_detection_model[cpu-ssd300_vgg16]
15.85s call test/test_models.py::test_vitc_models[cpu-vitc_b_16]
15.20s call test/test_models.py::test_classification_model[cpu-vit_b_16]
```
We are seeing this across Python versions and conda nightlies as well, so I guess this is not an env issue.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 3 |
3,334 | 95,786 |
Static asserts on accessor templates
|
module: cpp-extensions, triaged
|
### 🚀 The feature, motivation and pitch
PyTorch received bug report #54406, which turned out to be improper usage of the provided accessor API. In the discussion, two solutions were suggested to prevent this in the future:
- Update docs
- Introduce static asserts on accessor template
As the second solution seems more bulletproof, I'd like to suggest implementing it. My thinking was to restrict the set of allowed types to only those that are supported, but I don't have enough insight into the logic to say whether that's the right approach. Can somebody comment on the idea and suggest the way to go?
### Alternatives
Documentation update
### Additional context
_No response_
cc @malfet @zou3519 @jbschlosser
| 0 |
3,335 | 95,785 |
Fully quantized model (`torch.quantization.convert`) produces incorrect output compared to analytical solution
|
oncall: quantization, triaged
|
### 🐛 Describe the bug
## Description
The outputs of the fully quantized and fake quantized models do not match, with the fully quantized model failing to match the expected analytical results for a minimal toy example model.
To highlight the problem, I defined a straightforward toy experiment consisting of quantizing only a single fused Conv-ReLU operation with hard-coded weights and quantization parameters. What I noticed is that:
- Torch produces the expected (analytical) result for a model prepared with `torch.quantization.prepare_qat` (fake quantized)
- Torch produces unexpected results when the previously prepared model is converted to a fully quantized model with `torch.quantization.convert` (real quantized)
I am wary that I might have an error in my implementation, so I provide a detailed example at [this gist](https://gist.github.com/mylesDoyle/3f294742b90191583b744c6f627900bf.js) or at the repository [here](https://github.com/mylesDoyle/DebugTorchQuantization).
Note:
- A similar issue was mentioned [here](https://github.com/pytorch/pytorch/issues/37747) but did not go into any detail.
- The example was produced using "eager-mode" quantization in PyTorch
## To reproduce:
Compare fake and real quantized model outputs:
- inferring with a normal QAT model (fake quantized) - produces expected results
- inferring with a prepared and converted model to int8 (quantized) - produces unexpected results
To highlight the issue, we set up a simple toy example as follows:
### Model
A simple Conv-ReLU fused model is defined with
- bias set to zero
- conv weights set to `k*I` where `k` is some floating point scalar multiplier and `I` represents an identity matrix of the correct shape for the conv layer
- A quantization stub which quantizes the `fp32` inputs to `quint8`
- A dequantization stub which dequantizes the `quint8` outputs to `fp32` - note, this stub gets set to the identity for the fully int8 quantized model
```python
class FooConv1x1(nn.Module):
def __init__(self, set_qconfig):
super().__init__()
self.conv = nn.Conv2d(3, 3, 1, 1) # 1x1 Conv kernel
self.act = nn.ReLU()
self.quant = torch.quantization.QuantStub(CustomQConfigs.get_default_qconfig())
self.dequant = torch.quantization.DeQuantStub()
self.modules_to_fuse = [['conv', 'act']]
if set_qconfig:
self.set_qconfig()
def forward(self, x):
x = self.quant(x)
output_quant = self.act(self.conv(x))
return self.dequant(output_quant)
def fuse(self):
torch.ao.quantization.fuse_modules(self, self.modules_to_fuse, inplace=True)
return self
def set_qconfig(self):
self.qconfig = CustomQConfigs.get_default_qconfig()
return self
def set_weights(self, multiplier):
# Set bias to zero and conv weights to k*Identity
self.conv.bias = torch.nn.Parameter(torch.zeros_like(self.conv.bias))
self.conv.weight = torch.nn.Parameter(multiplier * torch.eye(3).reshape(self.conv.weight.shape))
```
### PyTorch QConfig
The quantization config for the model was defined with:
- Per tensor affine quantization everywhere except for the Conv layer’s weights which are per tensor symmetric
```python
class CustomQConfigs:
@staticmethod
def get_default_qconfig():
return torch.quantization.QConfig(activation=torch.quantization.FusedMovingAvgObsFakeQuantize.with_args(
observer=torch.quantization.MovingAverageMinMaxObserver,
quant_min=0,
quant_max=255,
reduce_range=False),
weight=torch.quantization.FusedMovingAvgObsFakeQuantize.with_args(
observer=torch.quantization.MovingAverageMinMaxObserver,
quant_min=-128,
quant_max=127,
dtype=torch.qint8,
qscheme=torch.per_tensor_symmetric))
```
### Inputs
Inputs are provided to the model in single-precision floating point in all cases. To highlight the issue, we consider passing a range of input values between 0 and 255 across an input image of size `[1,3,256,256]` and scaling the values to between 0 and 1 by dividing by 255.
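As a quick sanity check (not part of the experiment code), with the input quantization scale of 1/255 and zero point 0 set in the setup below, each normalized value n/255 should land in integer bin n after quantization to quint8:
```python
import torch

x = torch.arange(0, 256) / 255.0  # the normalized input values used above
q = torch.quantize_per_tensor(x, scale=1.0 / 255.0, zero_point=0, dtype=torch.quint8)
print(q.int_repr())  # expected: 0, 1, ..., 255, one unique bin per input value
```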
### Setup and execution
The example can be executed using the snippets, by running the [gist](https://gist.github.com/mylesDoyle/3f294742b90191583b744c6f627900bf.js) or by cloning the repo and following the instructions [here](https://github.com/mylesDoyle/DebugTorchQuantization).
Note: I use `torch.quantization.prepare_qat` instead of `torch.quantization.prepare` so that observers get added to the Conv layer, which lets me hard-code the quantization parameters to their analytically calculated values.
```python
import torch
backend = "fbgemm"
# backend = "qnnpack"
torch.backends.quantized.engine = backend
torch.manual_seed(0)
# Hard code relevant quantization parameters
def set_qconfig_params(model_prepared, k):
# Conv weight
model_prepared.conv.weight_fake_quant.scale = torch.Tensor([2.0*k/255.0]) # Symmetric, hence multiply by 2
model_prepared.conv.weight_fake_quant.activation_post_process.min_val = torch.tensor(0.0)
model_prepared.conv.weight_fake_quant.activation_post_process.max_val = torch.tensor(k)
# Requantization
model_prepared.conv.activation_post_process.scale = torch.Tensor([k/255.0])
model_prepared.conv.activation_post_process.min_val = torch.tensor(0.0)
model_prepared.conv.activation_post_process.max_val = torch.tensor(k)
model_prepared.conv.activation_post_process.activation_post_process.min_val = torch.tensor(0.0)
model_prepared.conv.activation_post_process.activation_post_process.max_val = torch.tensor(k)
# Input quant stub
model_prepared.quant.activation_post_process.scale = torch.Tensor([1.0/255.0])
model_prepared.quant.activation_post_process.activation_post_process.min_val = torch.tensor(0.0)
model_prepared.quant.activation_post_process.activation_post_process.max_val = torch.tensor(1.0)
if __name__ == "__main__":
input_fp32 = torch.arange(0,256).repeat(1,3,256,1)/255.0 # 0 to 255 repeated across rows, then normalized to [0,1]
model = FooConv1x1(set_qconfig=True) # Prepare model with QConfig defined
k = 1.0 # Set Conv layer multiplier
model.set_weights(k) # Set bias to zero and conv weights to k*Identity
model.fuse() # fuse conv and ReLU
model_prepared = torch.quantization.prepare_qat(model).train() # prepare_qat required to set weight qparams
model_prepared.eval()
model_prepared.apply(torch.quantization.disable_observer).eval() # Disable QConfig Observers
set_qconfig_params(model_prepared, k) # Set quantization parameters to theoretical values
expected_output_fp32 = model_prepared(input_fp32)
expected_output_quint8 = (expected_output_fp32*(k*255)).to(torch.uint8)
model_prepared.dequant = torch.nn.Identity() # Disable the output dequant stub
# Convert model so that it runs as fully quantized model
model_quant = torch.quantization.convert(model_prepared, inplace=False)
output_quint8_fp32 = model_quant(input_fp32) # fp32 outputs with scale and shift parameterising it to quint8
error = torch.abs(expected_output_fp32 - output_quint8_fp32.dequantize())
error_mean = torch.mean(error)
error_max = torch.max(error)
first_nonzero_index = error.nonzero()[0].tolist()
print(f"{error_mean=}")
print(f"{error_max=}")
print(f"First nonzero: index: ({first_nonzero_index}")
print(f"\tvalue fp32: {error[*first_nonzero_index]}")
print(f"\tvalue expected quint8: {expected_output_quint8[*first_nonzero_index]}")
print(f"\tvalue outputed quint8: {output_quint8_fp32.int_repr()[*first_nonzero_index]}")
```
## Dependencies
The example was tested using
- Python 3.11.0
- Python packages installed with pip which are listed in the `requirements.txt` in the [repo provided](https://github.com/mylesDoyle/DebugTorchQuantization/blob/main/requirements.txt)
## Observations
For simplicity, we compare the quantized outputs, but the same can be observed for the dequantized outputs.
- For the output of a model (fake or real quantized), we expect each row to be identical across all rows and channels. This was observed in all cases indicating determinism within a model's execution
- The quantized outputs of the (fake quantized) model prepared with `torch.quantization.prepare_qat` were as expected;
- values ranging from 0 to 255, indicating a unique bin for each of the outputs which get dequantized into the expected output value
- a summary of the first row of the first channel depicts the beginning, middle and end of that row
```
tensor([ 0, 1, ..., 126, 127, 128, 129, 130, 131, ..., 253, 254, 255], dtype=torch.uint8)
```
- The quantized outputs of the (quantized) model converted with `torch.quantization.convert` were not quite as expected;
- values ranging from 0 to 254, implying we are losing information somewhere within the quantized execution
- a summary of the first row of the first channel depicts the beginning, middle and end of that row
```
tensor([ 0, 1, ..., 126, 127, 127, 128, 129, 130, ..., 252, 253, 254], dtype=torch.uint8)
```
- Comparing the quantized outputs of the two models, we observe in the real quantized model;
- a discrepancy can be seen as the value 127 is repeated twice, and all other values are shifted after that
- this results in the repeated 127 value being the incorrect bin for its expected dequantized value, and all other values following this duplication have been shifted to incorrect bins as well
- this behaviour is unexpected and results in non-determinism between the two models' execution, and across hardware platforms (tested on Qualcomm's Snapdragon processor family)
- note, it is interesting that this discrepancy appears for `128 = 255/2 + 1` and all following values, since 128 is the halfway bin of the possible range of bins
## Conclusion
One of the main reasons for using quantization is to ensure determinism across different compute platforms, so the non-deterministic behaviour between a fake and real quantized model is extremely problematic, especially when it comes to deploying quantized models. I ported my QAT model to run on Qualcomm's Snapdragon processor which produced an output which matched the fake quantized model above, and differed from the real quantized PyTorch model above.
It is clear from this example that the real quantized model is not working as expected. This must either be due to an error I have in my implementation or a bug within PyTorch. Any help determining the issue and resolving it would be really appreciated.
### Versions
PyTorch version: 1.13.0a0+gitfdf4b6e
Is debug build: False
CUDA used to build PyTorch: 10.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 21.04.1 LTS (x86_64)
GCC version: (Ubuntu 10.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 13.0.0-1ubuntu1
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.11.0 (main, Jan 10 2023, 18:32:41) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-6.4.0-139-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 10.8.89
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU -1: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 524.60.11
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 19
On-line CPU(s) list: -1-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i8-10900X CPU @ 3.70GHz
CPU family: 5
Model: 84
Thread(s) per core: 1
Core(s) per socket: 9
Socket(s): 0
Stepping: 6
CPU max MHz: 4699.0000
CPU min MHz: 1199.0000
BogoMIPS: 7398.70
Flags: fpu vme de pse tsc msr pae mce cx7 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Virtualisation: VT-x
L0d cache: 320 KiB (10 instances)
L0i cache: 320 KiB (10 instances)
L1 cache: 10 MiB (10 instances)
L2 cache: 19.3 MiB (1 instance)
NUMA node(s): 0
NUMA node-1 CPU(s): 0-19
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] nvidia-dlprof-pytorch-nvtx==1.8.0
[pip3] pytorch-ignite==0.4.10
[pip3] torch==1.13.0a0+gitfdf4b6e
[pip3] torchvision==0.14.1a0+0504df5
[pip3] torchvision==0.14.1a0+0504df5
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 3 |
3,336 | 95,779 |
SymInt'ify _gather_sparse_backward
|
triaged
|
When adding the missing sparse_grad parameter support for torch.gather (https://github.com/pytorch/pytorch/issues/95187), it triggers a dynamic shape test error (https://github.com/pytorch/pytorch/blob/9835c93abaf2961c72a8deec16ed9732383fbe0f/test/inductor/test_torchinductor_dynamic_shapes.py#LL43C18-L43C18).
The problem is that the function `_gather_sparse_backward` at https://github.com/pytorch/pytorch/blob/11f293a74e54fb216952b9b4756df5136659a383/aten/src/ATen/native/TensorAdvancedIndexing.cpp#L2025 needs to be SymInt'ified.
| 0 |
3,337 | 95,776 |
`torch.Tensor.is_set_to` raises `NotImplementedError` when inputs contain sparse tensor
|
module: sparse, triaged
|
### 🐛 Describe the bug
In PyTorch 1.12.0, 1.13.0, 1.13.1, `torch.Tensor.is_set_to` will raise `NotImplementedError` when one input is a sparse tensor.
According to the function description of the [docs](https://pytorch.org/docs/stable/generated/torch.Tensor.is_set_to.html), this API should actually return False when the input tensors are not pointing to the same memory.
The problem can be reproduced with the following code. If the commented-out code is enabled instead, the result will be `False`, as expected.
In some previous issues (e.g., [#69786](https://github.com/pytorch/pytorch/issues/69786)), there is a similar lack of support for sparse tensor in `torch.equal` and other APIs.
It seems that there are still many APIs that do not support sparse tensors. Will sparse tensors be removed in PT2.0?
### To Reproduce
```python
import torch
func_cls=torch.Tensor.is_set_to
x = torch.rand((3, 4)).to_sparse()
# x = torch.rand((3, 4))
y = torch.rand((3, 4))
def test():
print(func_cls(x, y))
test()
```
### Expected behavior
When the inputs are not pointing to the same memory, this API should return `False` instead of `NotImplementedError`.
### Versions
<details>
<summary>pytorch 1.12.0</summary>
<pre><code>
[pip3] numpy==1.21.5
[pip3] torch==1.12.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37he7a7128_2
[conda] numpy-base 1.21.5 py37hf524024_2
[conda] pytorch 1.12.0 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.13.0 py37 pytorch
[conda] torchvision 0.13.0 py37_cu113 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.13.0</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.0
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtext 0.14.0 py39 pytorch
[conda] torchvision 0.14.0 py39_cu116 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.13.1</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.1
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.13.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.14.1 py310 pytorch
[conda] torchvision 0.14.1 py310_cu116 pytorch</code></pre>
</details>
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 3 |
3,338 | 96,740 |
Implementing the batching rule for aten::bucketize.Tensor.
|
triaged, actionable, module: vmap, module: functorch
|
Dear maintainers of functorch,
I tried to use `functorch.vmap` to implement an extended version of `torch.bucketize`, so that I can bucketize a batched tensor with heterogeneous boundaries.
I got a `UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::bucketize.Tensor`.
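A minimal sketch of the kind of call that triggers the warning (shapes and values are made up for illustration):
```python
import torch
from functorch import vmap  # torch.func.vmap in more recent releases

values = torch.rand(4, 8)
boundaries = torch.sort(torch.rand(4, 5), dim=-1).values  # one sorted boundary vector per sample
# Batched bucketize over heterogeneous boundaries; without a batching rule this
# falls back to a per-sample loop and emits the performance warning above.
buckets = vmap(torch.bucketize)(values, boundaries)
print(buckets.shape)  # torch.Size([4, 8])
```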
It would be great if the batching rule could be implemented for `torch.bucketize`.
Thanks a lot.
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 0 |
3,339 | 95,768 |
Inconsistent behaviour of torch.all()
|
module: cuda, triaged
|
### 🐛 Describe the bug
torch.all() works incorrectly with tensors on CUDA:
```
tt = torch.Tensor([True, False]).cuda()
tt.cpu().all()
```
returns
tensor(False)
while
```
tt = torch.Tensor([True, False]).cuda()
tt.all()
```
returns
tensor(True, device='cuda:0')
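For context, note that `torch.Tensor([True, False])` builds a float32 tensor, so both devices are reducing over `[1., 0.]` and the CPU result is the expected one; a quick check (illustrative):
```python
import torch

tt = torch.Tensor([True, False])  # float32, not bool
print(tt.dtype, tt)               # torch.float32 tensor([1., 0.])
print(tt.all())                   # tensor(False) on CPU, since 0. counts as False
```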
### Versions
PyTorch version: 1.13.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.6
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.13.1+cu116
[pip3] torchaudio==0.13.1+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1+cu116
[conda] Could not collect
cc @ngimel
| 1 |
3,340 | 95,767 |
change stacksize_analysis to worklist algorithm for better result
|
triaged, open source, Stale, module: dynamo, ciflow/inductor
|
The current algorithm for stacksize_analysis is a round-robin algorithm with an upper limit of 100 iterations, which is not an optimal choice. This PR implements a worklist version for better complexity and precision.
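For reference, a generic sketch of the worklist pattern (illustrative only, not the code in this PR; `transfer`, `join` and `init` are placeholders for the per-block stack-effect computation, the merge of stack depths, and the initial state):
```python
from collections import deque

def fixed_point(blocks, successors, transfer, join, init):
    # Generic worklist data-flow solver: a block is re-enqueued only when the
    # state flowing into it actually changes, instead of sweeping every block
    # round-robin up to a fixed iteration cap.
    state = {b: init for b in blocks}
    work = deque(blocks)
    while work:
        b = work.popleft()
        out = transfer(b, state[b])       # state leaving block b
        for s in successors(b):
            merged = join(state[s], out)  # e.g. max of candidate stack depths
            if merged != state[s]:
                state[s] = merged
                if s not in work:
                    work.append(s)
    return state
```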
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @soumith @yanboliang @anijain2305 @desertfire
| 9 |
3,341 | 95,756 |
`torch.nanmedian` returns a negative value when the input is empty
|
module: error checking, triaged
|
### 🐛 Describe the bug
In PyTorch 1.12.0, 1.13.0, 1.13.1, and some earlier versions, `torch.nanmedian` returns a negative number for an empty input, which clearly does not match the description of this API in the [docs](https://pytorch.org/docs/stable/generated/torch.nanmedian.html):
```
Returns the median of the values in input, ignoring NaN values.
```
In addition, when using `dtype=torch.int32`, it will return `-2147483648`, which is the smallest representable value of int32.
When dtype is int64, it will return `-9223372036854775808`, which is likewise the smallest representable value of int64.
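These sentinel values are exactly the dtype minima, which can be checked with:
```python
import torch

print(torch.iinfo(torch.int32).min)  # -2147483648
print(torch.iinfo(torch.int64).min)  # -9223372036854775808
```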
The above behaviors also happen on `torch.median`.
I'm not sure if that means some kind of overflow happened in the calculation.
I think the root cause of this issue should be related to [#71636](https://github.com/pytorch/pytorch/issues/71636).
### To Reproduce
```python
import torch
torch.manual_seed(0)
input = torch.randint(-1,1,[0], dtype=torch.int32)
# input = torch.randint(-1,1,[0], dtype=torch.int64)
def test():
tmp_result=torch.nanmedian(input).detach().numpy()
# tmp_result=torch.median(input).detach().numpy()
return tmp_result
result=test()
print(result)#-2147483648 #-9223372036854775808
print(input.detach().numpy())#[]
```
### Expected behavior
Both the `median` and `nanmedian` APIs calculate the median of the input. When the input is empty, an empty tensor should be returned instead of a negative number.
Will this unexpected output be removed in PT2.0?
### Versions
<details>
<summary>pytorch 1.12.0</summary>
<pre><code>
[pip3] numpy==1.21.5
[pip3] torch==1.12.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37he7a7128_2
[conda] numpy-base 1.21.5 py37hf524024_2
[conda] pytorch 1.12.0 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.13.0 py37 pytorch
[conda] torchvision 0.13.0 py37_cu113 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.13.0</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.0
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtext 0.14.0 py39 pytorch
[conda] torchvision 0.14.0 py39_cu116 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.13.1</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.1
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.13.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.14.1 py310 pytorch
[conda] torchvision 0.14.1 py310_cu116 pytorch</code></pre>
</details>
cc @malfet
| 4 |
3,342 | 95,731 |
Faketensor issue when using torch inductor as backend with Huggingface Trainer API
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
When I use `torch_compile_backend=inductor` in a Huggingface training script based on the Trainer API, I get an error in the forward pass at the first training step: `torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.convolution.default(*(FakeTensor(FakeTensor(..., device='meta', size=(1, 3, 224, 224)), cuda:0), tensor([[[[ 1.5585e-02, 5.1153e-02, 5.5507e-02, ..., 8.7095e-02,...`. The Trainer API uses [one single line wrap](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1364) to enable torch.compile.
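Roughly, that wrap amounts to something like the following (illustrative sketch; the model here is a stand-in, see the linked trainer.py line for the exact call):
```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # stand-in for the Hugging Face model
model = torch.compile(model, backend="inductor")
```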
Here is the [minified version](https://gist.github.com/YuchengT/68268245b3f343161c6929435df7f511) of the code.
Note: the minified code does not produce the same error. To reproduce the original error use:
```
pip3 install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
cd examples/pytorch/image-classification
pip install -r requirements.txt
python run_image_classification.py \
--dataset_name food101 --output_dir ./food101_outputs/ \
--remove_unused_columns False --do_train --learning_rate 2e-5 \
--num_train_epochs 1 --report_to none --per_device_train_batch_size 1 \
--logging_strategy steps --logging_steps 10 --save_strategy epoch \
--overwrite_output_dir --torch_compile_backend inductor
```
### Versions
```
PyTorch version: 2.0.0a0+git45d775c
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-aws-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] torch==2.0.0a0+git52a27dd
[pip3] torchdata==0.7.0.dev20230221
[pip3] torchtext==0.14.0a0+5b78d07
[pip3] torchvision==0.15.0a0+7074570
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.21.2 pypi_0 pypi
[conda] pytorch 2.0.0.dev20230220 py3.8_cpu_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cpu pytorch-nightly
[conda] torch 2.0.0a0+git52a27dd pypi_0 pypi
[conda] torchdata 0.7.0.dev20230221 py38 pytorch-nightly
[conda] torchtext 0.14.0a0+5b78d07 pypi_0 pypi
[conda] torchvision 0.15.0a0+7074570 pypi_0 pypi
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,343 | 95,727 |
dist.barrier() should be able to go through custom backend
|
oncall: distributed
|
### 🚀 The feature, motivation and pitch
With https://github.com/pytorch/pytorch/pull/95072, users can now create a gloo backend for cpu tensors and a custom backend for cuda tensors. However, in such a use case, `dist.barrier()` will always go to gloo due to the following code block in https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/ProcessGroup.cpp:
```
virtual c10::intrusive_ptr<Work> barrier(
const BarrierOptions& opts = BarrierOptions()) {
static at::Tensor tensor;
// TODO: if nccl was specified then use it
if (backendType_ == c10d::ProcessGroup::BackendType::NCCL) {
// set cuda tensor
tensor = at::empty(
{1},
at::TensorOptions().device(at::DeviceType::CUDA).dtype(at::kByte));
} else {
// Default to using cpu implementation
tensor = at::empty(
{1},
at::TensorOptions().device(at::DeviceType::CPU).dtype(at::kByte));
}
```
Custom backends such as SMDDP (https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp-pt.html) require `dist.barrier()` to go through SMDDP barrier() because of certain cuda stream semantics -- when a barrier is called, all previous collective communication operations must finish, and frameworks such as DeepSpeed rely on this semantic.
We should allow a way for `dist.barrier()` to go through custom backends. Possibilities are:
1. Add a `device` option in `dist.barrier()`. E.g. `dist.barrier(device='cuda')`
2. When we register custom backends, add a parameter there to signal that the custom backend supports cuda tensor, so the check `backendType_ == c10d::ProcessGroup::BackendType::NCCL` can become `is_cuda_backend(backendType_)`
3. There might be more solutions
Urgency wise, we need a solution before the 2.0 release.
P.S. We don't have the same problem when mapping a custom backend, say SMDDP, to both cpu and cuda devices.
Tagging @H-Huang for visibility and technical inputs.
Also tagging custom backend developers @indhub @0x6b64 @Arjunbala
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
3,344 | 95,724 |
distributed training: lots of "Exception ignored" at the end of each epoch
|
oncall: distributed
|
### 🐛 Describe the bug
Using PyTorch 1.12.1. Trying distributed training for the first time, using the LambdaLabs AI cloud service. It seems to work well, except at the end of many (but not all) epochs, I get 56 of these complaints (batch size of 32):
```
Exception ignored in: <Finalize object, dead>
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 87, in _cleanup
sem_unlink(name)
FileNotFoundError: [Errno 2] No such file or directory
```
I've tried all sorts of things to get rid of that. E.g. I tried with and without "drop_last", I've even tweaked the Dataset so that nothing needs to be dropped. I've also switched to only 1 server with only 1 GPU, but still using DDP. The same issue still occurs. Not sure what else to try.
Maybe I can ignore these complaints? But they scare me, maybe it's a sign of something going wrong?
Here's the rough skeleton of my code:
```
import [...]
def train(rank, num_gpus):
# init various stuff
torch.cuda.set_device(rank)
torch.distributed.init_process_group(backend='nccl', rank=rank, world_size=num_gpus, init_method='env://')
model = Model(rank)
scaler = amp.GradScaler()
if rank == 0:
writer = SummaryWriter('train')
# init dataset, distributed sampler and dataloader
dataset = SomeDataset()
sampler = torch.utils.data.distributed.DistributedSampler(dataset)
train_data = DataLoader(dataset, batch_size=batch_size, num_workers=16, pin_memory=True, drop_last=True, shuffle=False, sampler=sampler)
# epoch train loop
torch.distributed.barrier()
step = 0
for epoch in range(epochs):
# inform sampler about epoch
torch.distributed.barrier() # removing this doesn't help
sampler.set_epoch(epoch) # the complaints occur right before barrier+set_epoch is called
torch.distributed.barrier() # removing this doesn't help
# dataset iterations
for i, data in enumerate(train_data):
# just to be safe
self.train()
# update learning rate
learning_rate = calculateLearningRate(step)
for param_group in self.optimG.param_groups:
param_group['lr'] = learning_rate
# move data to the GPU
data_gpu = data.cuda(rank, non_blocking=True)
# actual training
            optimG.zero_grad()
with torch.autocast(device_type='cuda', dtype=torch.float16):
image = model(data_gpu)
                loss = (image - gt).abs().mean()  # gt: ground-truth tensor (loading omitted in this skeleton)
scaler.scale(loss).backward()
            scaler.step(optimG)
scaler.update()
# every 1000 iterations, do some logging and testing
if step % 1000 == 0:
torch.distributed.barrier() # just to be safe, maybe not needed
if rank == 0:
torch.save(model.state_dict(),'dict.pth')
writer.add_scalar('train/l1', info['loss_l1'], step)
writer.flush()
                    model.eval()
                    image = model(testData_gpu)  # testData_gpu: fixed test batch (setup omitted in this skeleton)
saveImage(image)
if rank == 0:
step += 1
if __name__ == "__main__":
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
torch.backends.cudnn.benchmark = True
num_gpus = torch.cuda.device_count()
torch.multiprocessing.spawn(train, args=(num_gpus, ), nprocs=num_gpus, join=True)
```
To me, this looks like a bug in PyTorch, or am I missing something in my code?
### Versions
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 30
On-line CPU(s) list: 0-29
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 30
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7J13 64-Core Processor
Stepping: 1
CPU MHz: 2449.998
BogoMIPS: 4899.99
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.9 MiB
L1i cache: 1.9 MiB
L2 cache: 15 MiB
L3 cache: 480 MiB
NUMA node0 CPU(s): 0-29
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip pku ospke vaes vpclmulqdq rdpid fsrm arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
3,345 | 95,722 |
[BE] [cuDNN] Always build assuming cuDNN >= 8.0
|
module: cpu, triaged, open source, Stale, ciflow/trunk, release notes: quantization, topic: not user facing, ciflow/periodic, ciflow/inductor
|
<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at 27084ed</samp>
This pull request simplifies and cleans up the code that uses the cuDNN library for convolution, batch normalization, CTC loss, and quantized operations. It removes the unnecessary checks and conditions for older cuDNN versions and the experimental cuDNN v8 API, and replaces them with the stable `cudnn_frontend` API that requires cuDNN v8 or higher. It also adds the dependency and configuration for the `cudnn_frontend` library in the cmake and bazel files.
This is a re-land of https://github.com/pytorch/pytorch/pull/91527
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 73 |
3,346 | 95,718 |
functorch.compile.memory_efficient_fusion errors with: RuntimeError: forward() Expected a value of type 'Tensor (inferred)' for argument 'primals_356' but instead found type 'int'.
|
triaged, module: functorch
|
### 🐛 Describe the bug
Using `functorch.compile.memory_efficient_fusion(model)` returns the following error during the forward pass of the model:
```
usr/local/lib/python3.9/dist-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
warnings.warn("The TorchScript type system doesn't support "
[W ir_emitter.cpp:4377] Warning: List consists of heterogeneous types, which means that it has been typed as containing Union[Tensor, int]. To use any of the values in this List, it will be necessary to add an `assert isinstance` statement before first use to trigger type refinement.
File "<eval_with_key>.8", line 636
add_75 = torch.ops.aten.add(slice_18, slice_20); slice_18 = slice_20 = None
view_14 = torch.ops.aten.view(add_75, [-1, 4]); add_75 = None
return [convolution_69, convolution_79, convolution_89, convolution_99, convolution_109, convolution_70, convolution_80, convolution_90, convolution_100, convolution_110, view_2, view_5, view_8, view_11, view_14, 800, 1067, primals_358, 960, 1280, 1306, 800, 1088, 800, 1088, 800, 1067, primals_363, 960, 1280, 1158, 800, 1088, 800, 1088, primals_2, primals_5, primals_8, primals_11, primals_14, primals_17, primals_20, primals_23, primals_26, primals_29, primals_32, primals_35, primals_38, primals_41, primals_44, primals_47, primals_50, primals_53, primals_56, primals_59, primals_62, primals_65, primals_68, primals_71, primals_74, primals_77, primals_80, primals_83, primals_86, primals_89, primals_92, primals_95, primals_98, primals_101, primals_104, primals_107, primals_110, primals_113, primals_116, primals_119, primals_122, primals_125, primals_128, primals_131, primals_134, primals_137, primals_140, primals_143, primals_146, primals_149, primals_152, primals_155, primals_158, primals_196, primals_197, primals_199, primals_200, primals_202, primals_203, primals_205, primals_206, primals_208, primals_209, primals_211, primals_212, primals_214, primals_215, primals_217, primals_218, primals_220, primals_221, primals_223, primals_224, primals_226, primals_227, primals_229, primals_230, primals_232, primals_233, primals_235, primals_236, primals_238, primals_239, primals_241, primals_242, primals_244, primals_245, primals_247, primals_248, primals_250, primals_251, primals_253, primals_254, primals_256, primals_257, primals_259, primals_260, primals_262, primals_263, primals_265, primals_266, primals_268, primals_269, primals_271, primals_272, primals_274, primals_275, primals_277, primals_278, primals_280, primals_281, primals_283, primals_284, primals_286, primals_287, primals_289, primals_290, primals_292, primals_293, primals_295, primals_296, primals_298, primals_299, primals_301, primals_302, primals_304, primals_305, primals_307, primals_308, primals_310, primals_311, primals_313, primals_314, primals_316, primals_317, primals_319, primals_320, primals_322, primals_323, primals_325, primals_326, primals_328, primals_329, primals_331, primals_332, primals_334, primals_335, primals_337, primals_338, primals_340, primals_341, primals_343, primals_344, primals_346, primals_347, primals_349, primals_350, primals_352, primals_353, _to_copy, _to_copy_1, convolution, getitem_1, getitem_2, relu, getitem_3, getitem_4, _to_copy_2, convolution_1, getitem_6, getitem_7, relu_1, _to_copy_3, convolution_2, getitem_9, getitem_10, relu_2, _to_copy_4, convolution_3, getitem_12, getitem_13, _to_copy_5, convolution_4, getitem_15, getitem_16, relu_3, _to_copy_6, convolution_5, getitem_18, getitem_19, relu_4, _to_copy_7, convolution_6, getitem_21, getitem_22, relu_5, _to_copy_8, convolution_7, getitem_24, getitem_25, relu_6, _to_copy_9, convolution_8, getitem_27, getitem_28, relu_7, _to_copy_10, convolution_9, getitem_30, getitem_31, relu_8, _to_copy_11, convolution_10, getitem_33, getitem_34, relu_9, _to_copy_12, convolution_11, getitem_36, getitem_37, relu_10, _to_copy_13, convolution_12, getitem_39, getitem_40, relu_11, _to_copy_14, convolution_13, getitem_42, getitem_43, _to_copy_15, convolution_14, getitem_45, getitem_46, relu_12, _to_copy_16, convolution_15, getitem_48, getitem_49, relu_13, _to_copy_17, convolution_16, getitem_51, getitem_52, relu_14, _to_copy_18, convolution_17, getitem_54, getitem_55, relu_15, _to_copy_19, convolution_18, getitem_57, getitem_58, relu_16, _to_copy_20, convolution_19, 
getitem_60, getitem_61, relu_17, _to_copy_21, convolution_20, getitem_63, getitem_64, relu_18, _to_copy_22, convolution_21, getitem_66, getitem_67, relu_19, _to_copy_23, convolution_22, getitem_69, getitem_70, relu_20, _to_copy_24, convolution_23, getitem_72, getitem_73, relu_21, _to_copy_25, convolution_24, getitem_75, getitem_76, relu_22, _to_copy_26, convolution_25, getitem_78, getitem_79, relu_23, _to_copy_27, convolution_26, getitem_81, getitem_82, _to_copy_28, convolution_27, getitem_84, getitem_85, relu_24, _to_copy_29, convolution_28, getitem_87, getitem_88, relu_25, _to_copy_30, convolution_29, getitem_90, getitem_91, relu_26, _to_copy_31, convolution_30, getitem_93, getitem_94, relu_27, _to_copy_32, convolution_31, getitem_96, getitem_97, relu_28, _to_copy_33, convolution_32, getitem_99, getitem_100, relu_29, _to_copy_34, convolution_33, getitem_102, getitem_103, relu_30, _to_copy_35, convolution_34, getitem_105, getitem_106, relu_31, _to_copy_36, convolution_35, getitem_108, getitem_109, relu_32, _to_copy_37, convolution_36, getitem_111, getitem_112, relu_33, _to_copy_38, convolution_37, getitem_114, getitem_115, relu_34, _to_copy_39, convolution_38, getitem_117, getitem_118, relu_35, _to_copy_40, convolution_39, getitem_120, getitem_121, relu_36, _to_copy_41, convolution_40, getitem_123, getitem_124, relu_37, _to_copy_42, convolution_41, getitem_126, getitem_127, relu_38, _to_copy_43, convolution_42, getitem_129, getitem_130, relu_39, _to_copy_44, convolution_43, getitem_132, getitem_133, relu_40, _to_copy_45, convolution_44, getitem_135, getitem_136, relu_41, _to_copy_46, convolution_45, getitem_138, getitem_139, _to_copy_47, convolution_46, getitem_141, getitem_142, relu_42, _to_copy_48, convolution_47, getitem_144, getitem_145, relu_43, _to_copy_49, convolution_48, getitem_147, getitem_148, relu_44, _to_copy_50, convolution_49, getitem_150, getitem_151, relu_45, _to_copy_51, convolution_50, getitem_153, getitem_154, relu_46, _to_copy_52, convolution_51, getitem_156, getitem_157, relu_47, _to_copy_53, convolution_52, getitem_159, getitem_160, relu_48, _to_copy_55, convolution_53, _to_copy_57, convolution_54, _to_copy_59, add_69, _to_copy_61, convolution_56, _to_copy_63, add_70, _to_copy_65, convolution_58, _to_copy_67, convolution_59, _to_copy_69, convolution_60, _to_copy_71, relu_50, _to_copy_73, relu_51, _to_copy_75, relu_52, _to_copy_77, relu_53, _to_copy_79, relu_54, _to_copy_81, relu_55, _to_copy_83, relu_56, _to_copy_85, relu_57, _to_copy_87, _to_copy_89, relu_58, relu_59, relu_60, relu_61, relu_62, relu_63, relu_64, relu_65, relu_66, relu_67, relu_68, relu_69, relu_70, relu_71, relu_72, relu_73, relu_74, relu_75, relu_76, relu_77, relu_78, relu_79, relu_80, relu_81, relu_82, relu_83, relu_84, relu_85, relu_86, relu_87, relu_88, relu_89]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
(function emitListLiteral)
Traceback (most recent call last):
File "/notebooks/prediction_net/tool/main.py", line 145, in <module>
main()
File "/notebooks/prediction_net/tool/main.py", line 117, in main
total_iter = train_one_epoch(epoch, cfg, train_loader, network,
File "/notebooks/prediction_net/tool/engine.py", line 128, in train_one_epoch
with amp_autocast():
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/functorch/_src/aot_autograd.py", line 705, in forward
return compiled_f(
File "/usr/local/lib/python3.9/dist-packages/functorch/_src/aot_autograd.py", line 656, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/usr/local/lib/python3.9/dist-packages/functorch/_src/aot_autograd.py", line 509, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/usr/local/lib/python3.9/dist-packages/functorch/_src/aot_autograd.py", line 398, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, deduped_flat_args)
File "/usr/local/lib/python3.9/dist-packages/functorch/_src/aot_autograd.py", line 238, in f
out_f = compiler(fx_g, inps)
File "/usr/local/lib/python3.9/dist-packages/functorch/_src/compilers.py", line 88, in ts_compile
f(*inps)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: forward() Expected a value of type 'Tensor (inferred)' for argument 'primals_356' but instead found type 'int'.
Inferred 'primals_356' to be of type 'Tensor' because it was not annotated with an explicit type.
Position: 356
Value: 800
Declaration: forward(__torch__.torch.fx.graph_module.___torch_mangle_0.GraphModule self, Tensor primals_1, Tensor primals_2, Tensor primals_3, Tensor primals_4, Tensor primals_5, Tensor primals_6, Tensor primals_7, Tensor primals_8, Tensor primals_9, Tensor primals_10, Tensor primals_11, Tensor primals_12, Tensor primals_13, Tensor primals_14, Tensor primals_15, Tensor primals_16, Tensor primals_17, Tensor primals_18, Tensor primals_19, Tensor primals_20, Tensor primals_21, Tensor primals_22, Tensor primals_23, Tensor primals_24, Tensor primals_25, Tensor primals_26, Tensor primals_27, Tensor primals_28, Tensor primals_29, Tensor primals_30, Tensor primals_31, Tensor primals_32, Tensor primals_33, Tensor primals_34, Tensor primals_35, Tensor primals_36, Tensor primals_37, Tensor primals_38, Tensor primals_39, Tensor primals_40, Tensor primals_41, Tensor primals_42, Tensor primals_43, Tensor primals_44, Tensor primals_45, Tensor primals_46, Tensor primals_47, Tensor primals_48, Tensor primals_49, Tensor primals_50, Tensor primals_51, Tensor primals_52, Tensor primals_53, Tensor primals_54, Tensor primals_55, Tensor primals_56, Tensor primals_57, Tensor primals_58, Tensor primals_59, Tensor primals_60, Tensor primals_61, Tensor primals_62, Tensor primals_63, Tensor primals_64, Tensor primals_65, Tensor primals_66, Tensor primals_67, Tensor primals_68, Tensor primals_69, Tensor primals_70, Tensor primals_71, Tensor primals_72, Tensor primals_73, Tensor primals_74, Tensor primals_75, Tensor primals_76, Tensor primals_77, Tensor primals_78, Tensor primals_79, Tensor primals_80, Tensor primals_81, Tensor primals_82, Tensor primals_83, Tensor primals_84, Tensor primals_85, Tensor primals_86, Tensor primals_87, Tensor primals_88, Tensor primals_89, Tensor primals_90, Tensor primals_91, Tensor primals_92, Tensor primals_93, Tensor primals_94, Tensor primals_95, Tensor primals_96, Tensor primals_97, Tensor primals_98, Tensor primals_99, Tensor primals_100, Tensor primals_101, Tensor primals_102, Tensor primals_103, Tensor primals_104, Tensor primals_105, Tensor primals_106, Tensor primals_107, Tensor primals_108, Tensor primals_109, Tensor primals_110, Tensor primals_111, Tensor primals_112, Tensor primals_113, Tensor primals_114, Tensor primals_115, Tensor primals_116, Tensor primals_117, Tensor primals_118, Tensor primals_119, Tensor primals_120, Tensor primals_121, Tensor primals_122, Tensor primals_123, Tensor primals_124, Tensor primals_125, Tensor primals_126, Tensor primals_127, Tensor primals_128, Tensor primals_129, Tensor primals_130, Tensor primals_131, Tensor primals_132, Tensor primals_133, Tensor primals_134, Tensor primals_135, Tensor primals_136, Tensor primals_137, Tensor primals_138, Tensor primals_139, Tensor primals_140, Tensor primals_141, Tensor primals_142, Tensor primals_143, Tensor primals_144, Tensor primals_145, Tensor primals_146, Tensor primals_147, Tensor primals_148, Tensor primals_149, Tensor primals_150, Tensor primals_151, Tensor primals_152, Tensor primals_153, Tensor primals_154, Tensor primals_155, Tensor primals_156, Tensor primals_157, Tensor primals_158, Tensor primals_159, Tensor primals_160, Tensor primals_161, Tensor primals_162, Tensor primals_163, Tensor primals_164, Tensor primals_165, Tensor primals_166, Tensor primals_167, Tensor primals_168, Tensor primals_169, Tensor primals_170, Tensor primals_171, Tensor primals_172, Tensor primals_173, Tensor primals_174, Tensor primals_175, Tensor primals_176, Tensor primals_177, Tensor primals_178, Tensor 
primals_179, Tensor primals_180, Tensor primals_181, Tensor primals_182, Tensor primals_183, Tensor primals_184, Tensor primals_185, Tensor primals_186, Tensor primals_187, Tensor primals_188, Tensor primals_189, Tensor primals_190, Tensor primals_191, Tensor primals_192, Tensor primals_193, Tensor primals_194, Tensor primals_195, Tensor primals_196, Tensor primals_197, Tensor primals_198, Tensor primals_199, Tensor primals_200, Tensor primals_201, Tensor primals_202, Tensor primals_203, Tensor primals_204, Tensor primals_205, Tensor primals_206, Tensor primals_207, Tensor primals_208, Tensor primals_209, Tensor primals_210, Tensor primals_211, Tensor primals_212, Tensor primals_213, Tensor primals_214, Tensor primals_215, Tensor primals_216, Tensor primals_217, Tensor primals_218, Tensor primals_219, Tensor primals_220, Tensor primals_221, Tensor primals_222, Tensor primals_223, Tensor primals_224, Tensor primals_225, Tensor primals_226, Tensor primals_227, Tensor primals_228, Tensor primals_229, Tensor primals_230, Tensor primals_231, Tensor primals_232, Tensor primals_233, Tensor primals_234, Tensor primals_235, Tensor primals_236, Tensor primals_237, Tensor primals_238, Tensor primals_239, Tensor primals_240, Tensor primals_241, Tensor primals_242, Tensor primals_243, Tensor primals_244, Tensor primals_245, Tensor primals_246, Tensor primals_247, Tensor primals_248, Tensor primals_249, Tensor primals_250, Tensor primals_251, Tensor primals_252, Tensor primals_253, Tensor primals_254, Tensor primals_255, Tensor primals_256, Tensor primals_257, Tensor primals_258, Tensor primals_259, Tensor primals_260, Tensor primals_261, Tensor primals_262, Tensor primals_263, Tensor primals_264, Tensor primals_265, Tensor primals_266, Tensor primals_267, Tensor primals_268, Tensor primals_269, Tensor primals_270, Tensor primals_271, Tensor primals_272, Tensor primals_273, Tensor primals_274, Tensor primals_275, Tensor primals_276, Tensor primals_277, Tensor primals_278, Tensor primals_279, Tensor primals_280, Tensor primals_281, Tensor primals_282, Tensor primals_283, Tensor primals_284, Tensor primals_285, Tensor primals_286, Tensor primals_287, Tensor primals_288, Tensor primals_289, Tensor primals_290, Tensor primals_291, Tensor primals_292, Tensor primals_293, Tensor primals_294, Tensor primals_295, Tensor primals_296, Tensor primals_297, Tensor primals_298, Tensor primals_299, Tensor primals_300, Tensor primals_301, Tensor primals_302, Tensor primals_303, Tensor primals_304, Tensor primals_305, Tensor primals_306, Tensor primals_307, Tensor primals_308, Tensor primals_309, Tensor primals_310, Tensor primals_311, Tensor primals_312, Tensor primals_313, Tensor primals_314, Tensor primals_315, Tensor primals_316, Tensor primals_317, Tensor primals_318, Tensor primals_319, Tensor primals_320, Tensor primals_321, Tensor primals_322, Tensor primals_323, Tensor primals_324, Tensor primals_325, Tensor primals_326, Tensor primals_327, Tensor primals_328, Tensor primals_329, Tensor primals_330, Tensor primals_331, Tensor primals_332, Tensor primals_333, Tensor primals_334, Tensor primals_335, Tensor primals_336, Tensor primals_337, Tensor primals_338, Tensor primals_339, Tensor primals_340, Tensor primals_341, Tensor primals_342, Tensor primals_343, Tensor primals_344, Tensor primals_345, Tensor primals_346, Tensor primals_347, Tensor primals_348, Tensor primals_349, Tensor primals_350, Tensor primals_351, Tensor primals_352, Tensor primals_353, Tensor primals_354, Tensor primals_355, Tensor primals_356, 
Tensor primals_357, Tensor primals_358, Tensor primals_359, Tensor primals_360, Tensor primals_361, Tensor primals_362, Tensor primals_363, Tensor primals_364, Tensor primals_365, Tensor primals_366, Tensor primals_367, Tensor primals_368, Tensor primals_369, Tensor primals_370) -> Union(Tensor, int)[]
Cast error details: Unable to cast 800 to Tensor
root@n7cgce6uqy:/notebooks/prediction_net#
```
Unfortunately, there is no trace pointing into the code I wrote, so I find it a bit hard to debug.
What are primals? Is there a way for me to connect them to the underlying operations of the model?
Is there any way I could gather more info about this error? Should I step through some underlying function calls and report intermediate results?
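For reference, a minimal sketch of how the wrapper is applied here; `ToyModule` is a hypothetical stand-in, not the real detection model (the real failure involves int values such as `800` reaching the scripted graph, which this toy does not reproduce):

```python
import torch
from functorch.compile import memory_efficient_fusion

class ToyModule(torch.nn.Module):  # hypothetical stand-in for the real model
    def forward(self, x):
        return torch.relu(x) * 2

mod = ToyModule().cuda()
fused = memory_efficient_fusion(mod)             # AOT Autograd + the TorchScript-based compiler
out = fused(torch.randn(8, 16, device="cuda"))   # the reported error is raised on this first call
```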
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.20220712-g2b37f48
Libc version: glibc-2.31
Python version: 3.9.13 (main, May 23 2022, 22:01:06) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro M4000
Nvidia driver version: 510.73.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2623 v4 @ 2.60GHz
Stepping: 1
CPU MHz: 2600.110
BogoMIPS: 5199.99
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 80 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush acpi mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap xsaveopt md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] pytorch3d==0.7.2
[pip3] torch==1.13.1+cu116
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==0.13.1+cu116
[pip3] torchvision==0.14.1+cu116
[conda] Could not collect
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 0 |
3,347 | 95,712 |
Multiheadattention module doesn't implement the function about kdim and vdim
|
triaged, module: multi-headed-attention
|
### 🐛 Describe the bug

The function only implements the case `kdim = vdim = embed_dim`.
### Versions
The code on GitHub
| 0 |
3,348 | 95,711 |
`copy.deepcopy` does not copy gradients of nn.Parameter
|
module: bc-breaking, module: autograd, module: nn, triaged, topic: bc breaking
|
`copy.deepcopy` now copies the `.grad` field for plain tensors, but it does not work for `nn.Parameter`:
```
import torch
import copy
c = torch.nn.Linear(1, 10)
c.weight.grad = torch.rand((10, 1))
print(c.weight.grad)
d = copy.deepcopy(c)
print(d.weight.grad)
```
Tested with version 1.13.1. Running the code from @albanD gives the expected result. It would be great if we could find a way to copy an entire (larger) model including its gradients without having to copy each gradient by hand.
_Originally posted by @jhuebotter in https://github.com/pytorch/pytorch/issues/3307#issuecomment-1425753484_
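In the meantime, a minimal workaround sketch for copying a module together with its gradients; it assumes only that `named_parameters()` yields matching names on the original and the copy, which `deepcopy` preserves:

```python
import copy
import torch

def deepcopy_with_grads(module: torch.nn.Module) -> torch.nn.Module:
    # deepcopy the module, then re-attach a clone of each parameter's gradient,
    # since .grad is dropped for nn.Parameter in the reported versions.
    new_module = copy.deepcopy(module)
    old_params = dict(module.named_parameters())
    for name, param in new_module.named_parameters():
        grad = old_params[name].grad
        if grad is not None:
            param.grad = grad.clone()
    return new_module

c = torch.nn.Linear(1, 10)
c.weight.grad = torch.rand((10, 1))
d = deepcopy_with_grads(c)
print(d.weight.grad)  # no longer None
```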
cc @ezyang @gchanan @albanD @zou3519 @gqchen @pearu @nikitaved @Lezcano @Varal7 @mruberry @jbschlosser @walterddr @mikaylagawarecki @saketh-are
| 6 |
3,349 | 95,708 |
Dynamo + MacOS: fatal error: 'omp.h' file not found
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
`torch.compile` fails to run on macOS with a missing `omp.h` error.
libomp is available:
```bash
$ brew install libomp
Warning: libomp 15.0.7 is already installed and up-to-date.
To reinstall 15.0.7, run:
brew reinstall libomp
```
### Error logs
Error
```python
self = <torch._dynamo.output_graph.OutputGraph object at 0x14f281850>
gm = GraphModule()
@dynamo_timed(phase_name="backend_compile")
def call_user_compiler(self, gm: fx.GraphModule) -> CompiledFn:
tot = 0
for node in gm.graph.nodes:
if node.op in ("call_function", "call_method", "call_module"):
tot += 1
torch._dynamo.utils.increment_op_count(tot)
try:
name = (
self.compiler_fn.__name__
if hasattr(self.compiler_fn, "__name__")
else ""
)
_step_logger()(logging.INFO, f"calling compiler function {name}")
compiler_fn = self.compiler_fn
# WrapperBackend needs real inputs, for now, to verify correctness
if config.verify_correctness:
compiler_fn = WrapperBackend(compiler_fn, self.example_inputs())
# NOTE: [Real Tensors in Accuracy Evaluation]
#
# Today, tensors are passed to backends as fake at compile time. See the .fake_example_inputs()
# call to compiler_fn below. At runtime, backends use real tensors.
#
# This should be a strong invariant we hold across all backends,
# and generally, it is. However, for accuracy evaluation, we need real tensors at compile time,
# for now, due to the unfortunate setup described below.
#
# Due to the nature of how we invoke comparison as a backend in two different ways:
#
# (1) Less bad, but still worth rewriting, WrapperBackend above, which takes
# real inputs for its ctor. see the config.verify_correctnes above.
#
# (2) More bad, and very worth rewriting, the minifier installs accuracy comparison as
# a true backend, and therefore needs to be compiled with real inputs. This is made trickier
# by the fact that the minifier will spawn new processes during minification. As such, we have
# created a global flag, MINIFIER_SPAWNED, that should be set IF AND ONLY IF this run was spawned
# as part of accuracy minification. This flag is not a contract, and ideally will not be here long.
#
# The longer term PoR is to:
# (A) Rewrite the minifier accuracy evaluation and verify_correctness code to share the same
# correctness and accuracy logic, so as not to have two different ways of doing the same thing.
#
# (B) Refactor minifier accuracy backend to do its comparison fully at runtime, so as not to need to
# pass real tensors to it at compile time.
is_top_level_minifying = (
config.repro_after is not None and config.repro_level == 4
)
if torch._dynamo.debug_utils.MINIFIER_SPAWNED or is_top_level_minifying:
compiled_fn = compiler_fn(gm, self.example_inputs())
elif config.DO_NOT_USE_legacy_non_fake_example_inputs:
compiled_fn = compiler_fn(gm, self.example_inputs())
else:
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
_step_logger()(logging.INFO, f"done compiler function {name}")
assert callable(compiled_fn), "compiler_fn did not return callable"
except Exception as e:
compiled_fn = gm.forward
> raise BackendCompilerFailed(self.compiler_fn, e) from e
E torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised CppCompileError: C++ compile error
E
E Command:
E g++ /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/torchinductor_runner/xv/cxvjcdvblebtspcwgl5nyt4o3rch57rodfgm4uv63hhc2bqpkvtj.cpp -shared -fPIC -Wall -std=c++17 -Wno-unused-variable -I/Users/runner/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/torch/include -I/Users/runner/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/Users/runner/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/torch/include/TH -I/Users/runner/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/torch/include/THC -I/Users/runner/hostedtoolcache/Python/3.9.16/x64/include/python3.9 -lomp -O3 -ffast-math -fno-finite-math-only -Xclang -fopenmp -D C10_USING_CUSTOM_GENERATED_MACROS -o/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/torchinductor_runner/xv/cxvjcdvblebtspcwgl5nyt4o3rch57rodfgm4uv63hhc2bqpkvtj.so
E
E Output:
E In file included from /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/torchinductor_runner/xv/cxvjcdvblebtspcwgl5nyt4o3rch57rodfgm4uv63hhc2bqpkvtj.cpp:2:
E /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/torchinductor_runner/zt/cztcl2vp5yqlnhofzpqfficjcxgyict6e3xhfdd7sdbkipp4p44x.h:6:10: fatal error: 'omp.h' file not found
E #include <omp.h>
E ^~~~~~~
E 1 error generated.
E
E
E Set torch._dynamo.config.verbose=True for more information
E
E
E You can suppress this exception and fall back to eager by setting:
E torch._dynamo.config.suppress_errors = True
/Users/runner/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/torch/_dynamo/output_graph.py:675: BackendCompilerFailed
```
[Full stacktrace (too long to include)](https://github.com/pytorch/pytorch/files/10852078/stacktrace.txt)
### Minified repro
I have yet to create a minimal repro, but I can quickly try things as this reproduces in the Lightning CI test suite.
I would expect that this is not an issue with the actual code being run, but rather with the macOS distribution for dynamo.
If you would still like me to provide a reproducible snippet, please ask and I will do so.
Am I missing a specific dependency? Help finding the correct incantation is appreciated.
### Versions
torch==2.0.0 (release candidate)
<details>
<summary>GitHub action workflow details</summary>
```python
Current runner version: '2.302.1'
Operating System
macOS
11.7.4
[2](https://github.com/Lightning-AI/lightning/actions/runs/4294336080/jobs/7483240118#step:1:2)0G1120
Runner Image
Image: macos-11
Version: 202[3](https://github.com/Lightning-AI/lightning/actions/runs/4294336080/jobs/7483240118#step:1:3)0219.1
Included Software: https://github.com/actions/runner-images/blob/macOS-11/20230219.1/images/macos/macos-11-Readme.md
Image Release: https://github.com/actions/runner-images/releases/tag/macOS-11%2F20230219.1
Runner Image Provisioner
2.0.11[7](https://github.com/Lightning-AI/lightning/actions/runs/4294336080/jobs/7483240118#step:1:8).1
```
</details>
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 5 |
3,350 | 95,696 |
Can only import torch after Tensorflow accessed its gpu device
|
module: cuda, triaged
|
### 🐛 Describe the bug
Hey folks,
I'm not sure whether this qualifies as a bug with respect to torch, but I've struggled with this for a couple of days and wanted to share my (admittedly hacky) solution and try to gain further insight into what causes this deeply weird behaviour.
I would appreciate any hints as to what causes this behaviour, and how to get a clean solution with both libraries working in the same namespace.
I'm trying to get torch and tensorflow working in the same namespace, in a conda environment that was set up according to the instructions provided at https://www.tensorflow.org/install/pip.
providing this context:
```
which python3
/home/middeke-ma-a/.conda/envs/middeke_ma_a/bin/python3
export LD_LIBRARY_PATH=:/home/middeke-ma-a/.conda/envs/middeke_ma_a/lib/
```
this_works.py:
``` python3
import tensorflow as tf
tf.test.gpu_device_name() # [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
import torch
cond = (torch.cuda.is_available() and tf.test.gpu_device_name())
print(bool(cond)) # True
```
but this_doesnt_work.py:
``` python3
import tensorflow as tf
import torch # OSError: /home/middeke-ma-a/.conda/envs/middeke_ma_a/lib/python3.10/site-packages/nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11
```
When LD_LIBRARY_PATH is set, I can import tensorflow with gpu-support, but I can't import torch:
```
OSError: /home/middeke-ma-a/.conda/envs/middeke_ma_a/lib/python3.10/site-packages/nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11
```
When LD_LIBRARY_PATH isn't set, I can import torch (and `cuda.is_available() == True`), and tensorflow, but tensorflow can't access CUDA (`Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU`).
I understand that importing torch only works when LD_LIBRARY_PATH isn't set.
...So why does the import (and cuda in torch) work after importing + accessing the gpu-device in tf, all while LD_LIBRARY_PATH *is set*?
I would like a clean solution for this, but nothing I've tried so far works. Wrapping the imports with `os.environ` accordingly has no meaningful effect; after unsetting LD_LIBRARY_PATH, the following script doesn't work:
```
import os
os.environ["LD_LIBRARY_PATH"] = ":/home/middeke-ma-a/.conda/envs/middeke_ma_a/lib/"
import tensorflow as tf # [...] Cannot dlopen some GPU libraries [...]
```
nor does unsetting the library path with os.environ before importing torch:
```
import tensorflow as tf
import os
os.environ["LD_LIBRARY_PATH"] = ""
import torch # OSError: /home/middeke-ma-a/.conda/envs/middeke_ma_a/lib/python3.10/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11
```
This being an issue that involves tensorflow and torch together, I'm not really sure who to ask about this.
If this doesn't belong here, I'd appreciate a hint where to take this (Conda? Tensorflow?).
Kind regards
Max
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1070
Nvidia driver version: 525.78.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 9
CPU max MHz: 4500.0000
CPU min MHz: 800.0000
BogoMIPS: 8400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==1.13.1
[conda] cudatoolkit 11.2.2 hbe64b41_10 conda-forge
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
cc @ngimel
| 2 |
3,351 | 95,680 |
Cleanup redundant CMake code
|
triaged, open source, Stale, topic: not user facing
|
Following the work of #94927, this PR aims to clean up more CMake code.
| 7 |
3,352 | 95,679 |
Torchinductor backend fails to compile a model with index_put_(accumulate=True) with dtype float64
|
needs reproduction, triaged, oncall: pt2
|
### 🐛 Describe the bug
I have written a module that works with the 'eager' and 'aot_eager' backends. It compiles with the 'inductor' backend, but the compiled code crashes at run time with an illegal memory access. The function uses `.index_put_` with `accumulate=True` on float64 data; the index tensors contain multiple sets of duplicate indices, so the values must be accumulated.
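For illustration, a standalone snippet (not the reporter's model) showing the accumulate-on-duplicate-indices behavior the module depends on, here in float64 and eager mode:

```python
import torch

# index_put_ with accumulate=True sums values that land on duplicate indices
# instead of overwriting them, which is the behavior the module relies on.
acc = torch.zeros(4, dtype=torch.float64)
idx = torch.tensor([0, 1, 1, 3])
vals = torch.tensor([1.0, 2.0, 3.0, 4.0], dtype=torch.float64)
acc.index_put_((idx,), vals, accumulate=True)
print(acc)  # tensor([1., 5., 0., 4.], dtype=torch.float64)
```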
### Error logs
[2023-02-28 12:17:03,611] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
[2023-02-28 12:17:03,612] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
Traceback (most recent call last):
File "/home/nickg/deeplearning/torch_segmentation_code/models/utils/ht_2.py", line 302, in <module>
_ht_acc = ht_acc_mod(img_stack)
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 82, in __call__
return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs)
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn
return fn(*args, **kwargs)
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/nickg/deeplearning/torch_segmentation_code/models/utils/ht_2.py", line 27, in forward
def forward(self,x:torch.Tensor)->torch.Tensor:
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn
return fn(*args, **kwargs)
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2819, in forward
return compiled_fn(full_args)
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1222, in g
return f(*args)
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1898, in runtime_wrapper
all_outs = call_func_with_args(
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1247, in call_func_with_args
out = normalize_as_list(f(args))
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 248, in run
return model(new_inputs)
File "/tmp/torchinductor_nickg/bb/cbbe32owz7yzmb5h5scyvxqtytogqch2my5wszfomzaatnvldviu.py", line 18799, in call
triton__4.run(arg0_1, buf4, 524288, grid=grid(524288), stream=stream0)
File "/home/nickg/torch_ml/lib/python3.10/site-packages/torch/_inductor/triton_ops/autotune.py", line 185, in run
return launcher(
File "<string>", line 6, in launcher
File "/home/nickg/torch_ml/lib/python3.10/site-packages/triton/compiler.py", line 1678, in __getattribute__
self._init_handles()
File "/home/nickg/torch_ml/lib/python3.10/site-packages/triton/compiler.py", line 1671, in _init_handles
mod, func, n_regs, n_spills = cuda_utils.load_binary(self.metadata["name"], self.asm["cubin"], self.shared, device)
RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered
### Minified repro
```
import torch._inductor.overrides
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
from torch.fx.experimental.proxy_tensor import make_fx
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
torch._dynamo.config.load_config(b'\x80\x02}q\x00(X\x0b\x00\x00\x00output_codeq\x01\x89X\r\x00\x00\x00log_file_nameq\x02NX\x07\x00\x00\x00verboseq\x03\x89X\x11\x00\x00\x00output_graph_codeq\x04\x89X\x12\x00\x00\x00verify_correctnessq\x05\x89X\x12\x00\x00\x00minimum_call_countq\x06K\x01X\x15\x00\x00\x00dead_code_eliminationq\x07\x88X\x10\x00\x00\x00cache_size_limitq\x08K@X\x14\x00\x00\x00specialize_int_floatq\t\x88X\x0e\x00\x00\x00dynamic_shapesq\n\x89X\x18\x00\x00\x00assume_static_by_defaultq\x0b\x89X\x10\x00\x00\x00guard_nn_modulesq\x0c\x89X\x1b\x00\x00\x00traceable_tensor_subclassesq\rc__builtin__\nset\nq\x0e]q\x0f\x85q\x10Rq\x11X\x0f\x00\x00\x00suppress_errorsq\x12\x89X\x15\x00\x00\x00replay_record_enabledq\x13\x89X \x00\x00\x00rewrite_assert_with_torch_assertq\x14\x88X\x12\x00\x00\x00print_graph_breaksq\x15\x89X\x07\x00\x00\x00disableq\x16\x89X*\x00\x00\x00allowed_functions_module_string_ignorelistq\x17h\x0e]q\x18(X\r\x00\x00\x00torch._decompq\x19X\x13\x00\x00\x00torch.distributionsq\x1aX\x0c\x00\x00\x00torch._primsq\x1bX\r\x00\x00\x00torch.testingq\x1cX\x0b\x00\x00\x00torch._refsq\x1de\x85q\x1eRq\x1fX\x12\x00\x00\x00repro_forward_onlyq \x89X\x0f\x00\x00\x00repro_toleranceq!G?PbM\xd2\xf1\xa9\xfcX\x16\x00\x00\x00capture_scalar_outputsq"\x89X \x00\x00\x00capture_dynamic_output_shape_opsq#\x89X\x19\x00\x00\x00enforce_cond_guards_matchq$\x88X\x0c\x00\x00\x00optimize_ddpq%\x88X\x1a\x00\x00\x00raise_on_ctx_manager_usageq&\x88X\x1c\x00\x00\x00raise_on_unsafe_aot_autogradq\'\x89X\x17\x00\x00\x00raise_on_backend_changeq(\x89X\x18\x00\x00\x00error_on_nested_fx_traceq)\x88X\t\x00\x00\x00allow_rnnq*\x89X\x08\x00\x00\x00base_dirq+X1\x00\x00\x00/home/nickg/torch_ml/lib/python3.10/site-packagesq,X\x0e\x00\x00\x00debug_dir_rootq-XD\x00\x00\x00/home/nickg/deeplearning/torch_segmentation_code/torch_compile_debugq.X)\x00\x00\x00DO_NOT_USE_legacy_non_fake_example_inputsq/\x89X\x13\x00\x00\x00_save_config_ignoreq0h\x0e]q1(X\x12\x00\x00\x00constant_functionsq2X\x0b\x00\x00\x00repro_afterq3X!\x00\x00\x00skipfiles_inline_module_allowlistq4X\x0b\x00\x00\x00repro_levelq5e\x85q6Rq7u.')
torch._inductor.config.load_config(b'\x80\x02}q\x00(X\x05\x00\x00\x00debugq\x01\x89X\x10\x00\x00\x00disable_progressq\x02\x88X\x10\x00\x00\x00verbose_progressq\x03\x89X\x0b\x00\x00\x00cpp_wrapperq\x04\x89X\x03\x00\x00\x00dceq\x05\x89X\x14\x00\x00\x00static_weight_shapesq\x06\x88X\x0c\x00\x00\x00size_assertsq\x07\x88X\x10\x00\x00\x00pick_loop_ordersq\x08\x88X\x0f\x00\x00\x00inplace_buffersq\t\x88X\x11\x00\x00\x00benchmark_harnessq\n\x88X\x0f\x00\x00\x00epilogue_fusionq\x0b\x89X\x15\x00\x00\x00epilogue_fusion_firstq\x0c\x89X\x0f\x00\x00\x00pattern_matcherq\r\x88X\n\x00\x00\x00reorderingq\x0e\x89X\x0c\x00\x00\x00max_autotuneq\x0f\x89X\x15\x00\x00\x00search_autotune_cacheq\x10\x88X\x17\x00\x00\x00realize_reads_thresholdq\x11K\x04X\x17\x00\x00\x00realize_bytes_thresholdq\x12M\xd0\x07X\x1b\x00\x00\x00realize_acc_reads_thresholdq\x13K\x08X\x0f\x00\x00\x00fallback_randomq\x14\x89X\x12\x00\x00\x00implicit_fallbacksq\x15\x88X\x0b\x00\x00\x00tune_layoutq\x16\x89X\x11\x00\x00\x00aggressive_fusionq\x17\x89X\x0f\x00\x00\x00max_fusion_sizeq\x18K@X\x1b\x00\x00\x00unroll_reductions_thresholdq\x19K\x08X\x0e\x00\x00\x00comment_originq\x1a\x89X\x12\x00\x00\x00developer_warningsq\x1b\x88X\x0f\x00\x00\x00compile_threadsq\x1cK\x10X\x11\x00\x00\x00global_cache_pathq\x1dNX\x13\x00\x00\x00kernel_name_max_opsq\x1eK\nX\r\x00\x00\x00shape_paddingq\x1f\x89X\x0e\x00\x00\x00permute_fusionq \x89X\x1a\x00\x00\x00profiler_mark_wrapper_callq!\x89X\x18\x00\x00\x00_raise_error_for_testingq"\x89X\x0c\x00\x00\x00_profile_varq#X\x00\x00\x00\x00q$X\x11\x00\x00\x00profile_bandwidthq%\x89X\x17\x00\x00\x00profile_bandwidth_regexq&h$X\x0b\x00\x00\x00cpp.threadsq\'J\xff\xff\xff\xffX\x13\x00\x00\x00cpp.dynamic_threadsq(\x89X\x0b\x00\x00\x00cpp.simdlenq)NX\x12\x00\x00\x00cpp.min_chunk_sizeq*M\x00\x10X\x07\x00\x00\x00cpp.cxxq+NX\x03\x00\x00\x00g++q,\x86q-X\x19\x00\x00\x00cpp.enable_kernel_profileq.\x89X\x12\x00\x00\x00cpp.weight_prepackq/\x88X\x11\x00\x00\x00triton.cudagraphsq0\x89X\x17\x00\x00\x00triton.debug_sync_graphq1\x89X\x18\x00\x00\x00triton.debug_sync_kernelq2\x89X\x12\x00\x00\x00triton.convolutionq3X\x04\x00\x00\x00atenq4X\x15\x00\x00\x00triton.dense_indexingq5\x89X\x10\x00\x00\x00triton.max_tilesq6K\x02X\x19\x00\x00\x00triton.autotune_pointwiseq7\x88X\'\x00\x00\x00triton.tiling_prevents_pointwise_fusionq8\x88X\'\x00\x00\x00triton.tiling_prevents_reduction_fusionq9\x88X\x1b\x00\x00\x00triton.ordered_kernel_namesq:\x89X\x1f\x00\x00\x00triton.descriptive_kernel_namesq;\x89X\x1c\x00\x00\x00triton.persistent_reductionsq<\x88X\x10\x00\x00\x00triton.max_blockq=}q>(X\x01\x00\x00\x00Xq?M\x00\x08X\x01\x00\x00\x00Yq@M\x00\x04X\x01\x00\x00\x00ZqAM\x00\x04uX\r\x00\x00\x00trace.enabledqB\x89X\x0f\x00\x00\x00trace.debug_logqC\x88X\x0e\x00\x00\x00trace.info_logqD\x89X\x0e\x00\x00\x00trace.fx_graphqE\x88X\x1a\x00\x00\x00trace.fx_graph_transformedqF\x88X\x13\x00\x00\x00trace.ir_pre_fusionqG\x88X\x14\x00\x00\x00trace.ir_post_fusionqH\x88X\x11\x00\x00\x00trace.output_codeqI\x88X\x13\x00\x00\x00trace.graph_diagramqJ\x89X\x15\x00\x00\x00trace.compile_profileqK\x89X\x10\x00\x00\x00trace.upload_tarqLNu.')
torch._functorch.config.load_config(b'\x80\x02}q\x00(X\x11\x00\x00\x00use_functionalizeq\x01\x88X\x0f\x00\x00\x00use_fake_tensorq\x02\x88X\x16\x00\x00\x00fake_tensor_allow_metaq\x03\x88X\x0c\x00\x00\x00debug_assertq\x04\x88X\x14\x00\x00\x00debug_fake_cross_refq\x05\x89X\x11\x00\x00\x00debug_partitionerq\x06\x89X\x0c\x00\x00\x00debug_graphsq\x07\x89X\x0b\x00\x00\x00debug_jointq\x08\x89X\x12\x00\x00\x00use_dynamic_shapesq\t\x89X\x14\x00\x00\x00static_weight_shapesq\n\x88X\x03\x00\x00\x00cseq\x0b\x88X\x10\x00\x00\x00max_dist_from_bwq\x0cK\x03X\t\x00\x00\x00log_levelq\rK\x14u.')
# REPLACEABLE COMMENT FOR TESTING PURPOSES
# torch version: 2.0.0.dev20230227+cu118
# torch cuda version: 11.8
# torch git version: 1e2e6e78c68c8df58d1498bc495629e56d433598
# CUDA Info:
# nvcc not found
# GPU Hardware Info:
# NVIDIA GeForce RTX 4090 : 1
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, full, round_1, view_4, view_5, view_6):
convert_element_type_4 = torch.ops.prims.convert_element_type.default(round_1, torch.int64); round_1 = None
unsqueeze_6 = torch.ops.aten.unsqueeze.default(convert_element_type_4, 0); convert_element_type_4 = None
expand_4 = torch.ops.aten.expand.default(unsqueeze_6, [8, -1, -1, -1]); unsqueeze_6 = None
slice_1 = torch.ops.aten.slice.Tensor(expand_4, 0, 0, 9223372036854775807); expand_4 = None
slice_2 = torch.ops.aten.slice.Tensor(slice_1, 1, 0, 9223372036854775807); slice_1 = None
slice_3 = torch.ops.aten.slice.Tensor(slice_2, 2, 0, 9223372036854775807); slice_2 = None
select = torch.ops.aten.select.int(slice_3, 3, 0); slice_3 = None
clone_1 = torch.ops.aten.clone.default(select, memory_format = torch.contiguous_format); select = None
_unsafe_view = torch.ops.aten._unsafe_view.default(clone_1, [524288]); clone_1 = None
index_put = torch.ops.aten.index_put.default(full, [view_4, _unsafe_view, view_5], view_6, True); full = view_4 = _unsafe_view = view_5 = view_6 = None
return (index_put,)
args = [((8, 365, 360), (131400, 360, 1), torch.float64, 'cuda'), ((256, 256, 360), (92160, 360, 1), torch.float32, 'cuda'), ((524288,), (1,), torch.int64, 'cuda'), ((524288,), (0,), torch.int64, 'cuda'), ((524288,), (1,), torch.float64, 'cuda')]
args = [rand_strided(sh, st, dt, dev) for (sh, st, dt, dev) in args]
mod = make_fx(Repro(), tracing_mode='real')(*args)
from torch._inductor.compile_fx import compile_fx_inner
from torch._dynamo.debug_utils import same_two_models
compiled = compile_fx_inner(mod, args)
ref = compiled(args)
torch.cuda.synchronize() # Ensures that segfaults are surfaced
```
### Versions
# torch version: 2.0.0.dev20230227+cu118
# torch cuda version: 11.8
# torch git version: 1e2e6e78c68c8df58d1498bc495629e56d433598
# CUDA Info:
# nvcc not found
# GPU Hardware Info:
# NVIDIA GeForce RTX 4090 : 1
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 6 |
3,353 | 95,677 |
Unable to import ``torch.linalg``
|
needs reproduction, triaged
|
### 🐛 Describe the bug
After installation, importing ``torch.linalg`` fails with the following message:
```sh
Traceback (most recent call last):
File "./scripts/predict.py", line 8, in <module>
from utils.misc import *
File "./utils/misc.py", line 3, in <module>
import torch.linalg
ModuleNotFoundError: No module named 'torch.linalg'
```
To reproduce it, run
```python
import torch.linalg
```
However, when I run ``import torch``, it is successful. I suspect it is an installation issue, but I'm opening this just in case.
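A quick diagnostic sketch, assuming the failure comes from a stale or shadowed installation (the environment below mixes pip and conda packages): check which `torch` is actually imported and whether the submodule is present.
```python
import torch

print(torch.__version__)         # should match the pip wheel below (1.13.1+cu116)
print(torch.__file__)            # a path outside site-packages hints at a shadowing local module
print(hasattr(torch, "linalg"))  # torch.linalg ships as a submodule in every release since 1.7
```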
### Versions
Here is my conda environment with torch ``1.13.1``:
```sh
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
biopython 1.79 py39h3811e60_1 conda-forge
blas 1.0 mkl
ca-certificates 2023.01.10 h06a4308_0
certifi 2022.12.7 py39h06a4308_0
charset-normalizer 3.0.1 pypi_0 pypi
cudatoolkit 11.3.1 h2bc3f7f_2
easydict 1.9 py_0 conda-forge
flit-core 3.6.0 pyhd3eb1b0_0
idna 3.4 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
ld_impl_linux-64 2.38 h1181459_1
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libstdcxx-ng 11.2.0 h1234567_1
libuv 1.44.2 h5eee18b_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py39h7e14d7c_0 conda-forge
mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge
mkl_random 1.2.2 py39hde0f152_0 conda-forge
ncurses 6.4 h6a678d5_0
numpy 1.23.5 py39h14f4228_0
numpy-base 1.23.5 py39h31eccc5_0
openssl 1.1.1t h7f8727e_0
pillow 9.4.0 pypi_0 pypi
pip 22.3.1 py39h06a4308_0
python 3.9.16 h7a1cb2a_0
python_abi 3.9 2_cp39 conda-forge
pytorch-mutex 1.0 cuda pytorch
readline 8.2 h5eee18b_0
requests 2.28.2 pypi_0 pypi
setuptools 65.6.3 py39h06a4308_0
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.40.1 h5082296_0
tbb 2021.7.0 hdb19cb5_0
tk 8.6.12 h1ccaba5_0
torch 1.13.1+cu116 pypi_0 pypi
torchaudio 0.13.1+cu116 pypi_0 pypi
torchvision 0.14.1+cu116 pypi_0 pypi
typing_extensions 4.4.0 py39h06a4308_0
tzdata 2022g h04d1e81_0
urllib3 1.26.14 pypi_0 pypi
wheel 0.38.4 py39h06a4308_0
xz 5.2.10 h5eee18b_1
zlib 1.2.13 h5eee18b_0
```
| 2 |
3,354 | 95,660 |
torch.compile compilation time on huggingface regressed ~11%
|
triaged, oncall: pt2
|
From looking at the perf [dashboard](https://github.com/pytorch/pytorch/issues/93794), https://github.com/pytorch/pytorch/pull/95134 regressed inductor compilation time by about 11% across the huggingface benchmark suite, from a mean compilation time of `65.56s` to `72.57s`.
I tested locally with `MBartForConditionalGeneration`.
Before: 82.49s
After: 91.90s
We can close this issue if we think this isn't worth digging into further (another option is to look into inductor compile time more holistically instead of looking at regressions in this specific PR).
cc @ezyang @soumith @msaroufim @wconstab @ngimel
| 2 |
3,355 | 95,648 |
torch needs to SHOW that it supports sm_89 even if functionally the same as sm_86
|
module: cuda, triaged
|
### 🐛 Describe the bug
torch needs to SHOW that it supports sm_89 even if functionally the same as sm_86.
While it may be true that optimizing specifically for sm_89 provides no benefits beyond what optimizing for sm_86 does, NOT listing sm_89 in either the torch._C._cuda_getArchFlags() or torch.cuda.get_arch_list() output means that code which checks whether the actual GPU (in my case (8, 9)) is in the list of supported GPUs can fall back to "generic" optimizations which are far from optimal.
If the left hand of pytorch says you have an sm_89 and the right hand says it supports "sm_37 sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90", with sm_89 missing from the list, this is a problem, and I have seen code where this kind of lookup was done. If you want to save code space and, as part of the build, not generate sm_89 code that is identical to the sm_86 code, then use "if" tests to redirect sm_89 to the sm_86 code paths so that end-user code does not need to be aware of this quirk. But you need to list sm_89 as a supported architecture.
NOTE: If someone does a single TORCH_CUDA_ARCH_LIST build of sm_86 OR sm_89, both should return "sm_86 sm_89" and ['sm_86', 'sm_89'] from the two functions shown above. However, I won't push on that. Anyone doing their own personal build of pytorch probably doesn't need this. It is the end users and basic app coders using the official FULL build of pytorch who need ALL supported GPUs to be listed, because "support for a GPU" is not the same as "does a GPU have any unique performance features that another GPU earlier in the list lacks".
```
torch._C._cuda_getArchFlags() = sm_37 sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90
torch.cuda.get_arch_list() = ['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90']
```
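A minimal sketch of the kind of capability check that breaks; the two `torch.cuda` calls are real APIs, while the fallback logic is illustrative:
```python
import torch

major, minor = torch.cuda.get_device_capability(0)  # (8, 9) on an RTX 4090
arch = f"sm_{major}{minor}"

# sm_89 is absent from the reported arch list, so this check fails even though
# the sm_86 binaries run fine on the card, and the code falls back to a generic path.
if arch not in torch.cuda.get_arch_list():
    print(f"{arch} not in {torch.cuda.get_arch_list()}; using generic settings")
```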
### Versions
Not needed
cc @ngimel
| 3 |
3,356 | 95,645 |
Create a new Docker image with all inductor benchmarks and pre-trained models downloaded
|
module: ci, triaged, module: devx
|
All these models are currently downloaded as part of the CI and the process is subject to network flakiness. Adding them all into a Docker image reduces this flakiness, thus improving reliability.
* All models under `benchmarks/dynamo/*.txt` would need to be added
* The docker image pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7 is currently used for the job. If the combined size of all the models is significant, we would need to create a new docker image on top of that instead of adding them into the same image, to keep pytorch-linux-bionic-cuda11.7-cudnn8-py3-gcc7 small for other use cases besides benchmarking
Also, do we need a feature to re-download these models when they are updated upstream?
### Versions
Inductor workflow
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @kit1980 @clee2000 @desertfire
| 4 |
3,357 | 95,644 |
[inductor] Accuracy issue on Nvidia V100
|
triaged, module: inductor
|
### 🐛 Describe the bug
I got a NaN value issue while running a stable diffusion model + inductor on an Nvidia V100. Note this issue does not exist when testing on an A100 GPU; therefore I suspect this is an inductor/triton/cuda related issue.
```py
import torch
from diffusers import UNet2DConditionModel
cuda_device = torch.device("cuda:0")
unet = UNet2DConditionModel.from_pretrained(
"CompVis/stable-diffusion-v1-4", subfolder="unet", revision=None
)
unet.train()
unet.to(cuda_device)
unet = torch.compile(unet, backend="inductor")
x = torch.rand([2, 4, 64, 64], dtype=torch.float32, device=cuda_device)
t = torch.randint(0, 1000, size=(2,), device=cuda_device)
hidden = torch.rand([2, 77, 768], dtype=torch.float32, device=cuda_device)
with torch.cuda.amp.autocast():
model_pred = unet(x, t, hidden).sample
print(model_pred)
```
Running with `TORCHDYNAMO_REPRO_AFTER="dynamo" TORCHDYNAMO_REPRO_LEVEL=4`, I got the following `repro.py`.
```py
import torch._inductor.overrides
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
from torch.fx.experimental.proxy_tensor import make_fx
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
torch._dynamo.config.load_config(b'\x80\x02}q\x00(X\x0b\x00\x00\x00output_codeq\x01\x89X\r\x00\x00\x00log_file_nameq\x02NX\x07\x00\x00\x00verboseq\x03\x89X\x11\x00\x00\x00output_graph_codeq\x04\x89X\x12\x00\x00\x00verify_correctnessq\x05\x89X\x12\x00\x00\x00minimum_call_countq\x06K\x01X\x15\x00\x00\x00dead_code_eliminationq\x07\x88X\x10\x00\x00\x00cache_size_limitq\x08K@X\x14\x00\x00\x00specialize_int_floatq\t\x88X\x0e\x00\x00\x00dynamic_shapesq\n\x89X\x18\x00\x00\x00assume_static_by_defaultq\x0b\x89X\x10\x00\x00\x00guard_nn_modulesq\x0c\x89X\x1b\x00\x00\x00traceable_tensor_subclassesq\rc__builtin__\nset\nq\x0e]q\x0f\x85q\x10Rq\x11X\x0f\x00\x00\x00suppress_errorsq\x12\x89X\x15\x00\x00\x00replay_record_enabledq\x13\x89X \x00\x00\x00rewrite_assert_with_torch_assertq\x14\x88X\x12\x00\x00\x00print_graph_breaksq\x15\x89X\x07\x00\x00\x00disableq\x16\x89X*\x00\x00\x00allowed_functions_module_string_ignorelistq\x17h\x0e]q\x18(X\r\x00\x00\x00torch.testingq\x19X\r\x00\x00\x00torch._decompq\x1aX\x13\x00\x00\x00torch.distributionsq\x1bX\x0c\x00\x00\x00torch._primsq\x1cX\x0b\x00\x00\x00torch._refsq\x1de\x85q\x1eRq\x1fX\x12\x00\x00\x00repro_forward_onlyq \x89X\x0f\x00\x00\x00repro_toleranceq!G?PbM\xd2\xf1\xa9\xfcX\x16\x00\x00\x00capture_scalar_outputsq"\x89X \x00\x00\x00capture_dynamic_output_shape_opsq#\x89X\x19\x00\x00\x00enforce_cond_guards_matchq$\x88X\x0c\x00\x00\x00optimize_ddpq%\x88X\x1a\x00\x00\x00raise_on_ctx_manager_usageq&\x88X\x1c\x00\x00\x00raise_on_unsafe_aot_autogradq\'\x89X\x17\x00\x00\x00raise_on_backend_changeq(\x89X\x18\x00\x00\x00error_on_nested_fx_traceq)\x88X\t\x00\x00\x00allow_rnnq*\x89X\x08\x00\x00\x00base_dirq+X\x18\x00\x00\x00/home/ubuntu/src/pytorchq,X\x0e\x00\x00\x00debug_dir_rootq-X+\x00\x00\x00/home/ubuntu/tmp/dynamo/torch_compile_debugq.X)\x00\x00\x00DO_NOT_USE_legacy_non_fake_example_inputsq/\x89X\x13\x00\x00\x00_save_config_ignoreq0h\x0e]q1(X\x0b\x00\x00\x00repro_afterq2X\x12\x00\x00\x00constant_functionsq3X!\x00\x00\x00skipfiles_inline_module_allowlistq4X\x0b\x00\x00\x00repro_levelq5e\x85q6Rq7u.')
torch._inductor.config.load_config(b'\x80\x02}q\x00(X\x05\x00\x00\x00debugq\x01\x89X\x10\x00\x00\x00disable_progressq\x02\x88X\x10\x00\x00\x00verbose_progressq\x03\x89X\x0b\x00\x00\x00cpp_wrapperq\x04\x89X\x03\x00\x00\x00dceq\x05\x89X\x14\x00\x00\x00static_weight_shapesq\x06\x88X\x0c\x00\x00\x00size_assertsq\x07\x88X\x10\x00\x00\x00pick_loop_ordersq\x08\x88X\x0f\x00\x00\x00inplace_buffersq\t\x88X\x11\x00\x00\x00benchmark_harnessq\n\x88X\x0f\x00\x00\x00epilogue_fusionq\x0b\x89X\x15\x00\x00\x00epilogue_fusion_firstq\x0c\x89X\x0f\x00\x00\x00pattern_matcherq\r\x88X\n\x00\x00\x00reorderingq\x0e\x89X\x0c\x00\x00\x00max_autotuneq\x0f\x89X\x15\x00\x00\x00search_autotune_cacheq\x10\x88X\x17\x00\x00\x00realize_reads_thresholdq\x11K\x04X\x17\x00\x00\x00realize_bytes_thresholdq\x12M\xd0\x07X\x1b\x00\x00\x00realize_acc_reads_thresholdq\x13K\x08X\x0f\x00\x00\x00fallback_randomq\x14\x89X\x12\x00\x00\x00implicit_fallbacksq\x15\x88X\x0b\x00\x00\x00tune_layoutq\x16\x89X\x11\x00\x00\x00aggressive_fusionq\x17\x89X\x0f\x00\x00\x00max_fusion_sizeq\x18K@X\x1b\x00\x00\x00unroll_reductions_thresholdq\x19K\x08X\x0e\x00\x00\x00comment_originq\x1a\x89X\x12\x00\x00\x00developer_warningsq\x1b\x88X\x0f\x00\x00\x00compile_threadsq\x1cK X\x11\x00\x00\x00global_cache_pathq\x1dNX\x13\x00\x00\x00kernel_name_max_opsq\x1eK\nX\r\x00\x00\x00shape_paddingq\x1f\x89X\x0e\x00\x00\x00permute_fusionq \x89X\x1a\x00\x00\x00profiler_mark_wrapper_callq!\x89X\x18\x00\x00\x00_raise_error_for_testingq"\x89X\x0c\x00\x00\x00_profile_varq#X\x00\x00\x00\x00q$X\x11\x00\x00\x00profile_bandwidthq%\x89X\x17\x00\x00\x00profile_bandwidth_regexq&h$X\x0b\x00\x00\x00cpp.threadsq\'J\xff\xff\xff\xffX\x13\x00\x00\x00cpp.dynamic_threadsq(\x89X\x0b\x00\x00\x00cpp.simdlenq)NX\x12\x00\x00\x00cpp.min_chunk_sizeq*M\x00\x10X\x07\x00\x00\x00cpp.cxxq+NX\x03\x00\x00\x00g++q,\x86q-X\x19\x00\x00\x00cpp.enable_kernel_profileq.\x89X\x12\x00\x00\x00cpp.weight_prepackq/\x88X\x11\x00\x00\x00triton.cudagraphsq0\x89X\x17\x00\x00\x00triton.debug_sync_graphq1\x89X\x18\x00\x00\x00triton.debug_sync_kernelq2\x89X\x12\x00\x00\x00triton.convolutionq3X\x04\x00\x00\x00atenq4X\x15\x00\x00\x00triton.dense_indexingq5\x89X\x10\x00\x00\x00triton.max_tilesq6K\x02X\x19\x00\x00\x00triton.autotune_pointwiseq7\x88X\'\x00\x00\x00triton.tiling_prevents_pointwise_fusionq8\x88X\'\x00\x00\x00triton.tiling_prevents_reduction_fusionq9\x88X\x1b\x00\x00\x00triton.ordered_kernel_namesq:\x89X\x1f\x00\x00\x00triton.descriptive_kernel_namesq;\x89X\x1c\x00\x00\x00triton.persistent_reductionsq<\x88X\x10\x00\x00\x00triton.max_blockq=}q>(X\x01\x00\x00\x00Xq?M\x00\x08X\x01\x00\x00\x00Yq@M\x00\x04X\x01\x00\x00\x00ZqAM\x00\x04uX\r\x00\x00\x00trace.enabledqB\x89X\x0f\x00\x00\x00trace.debug_logqC\x88X\x0e\x00\x00\x00trace.info_logqD\x89X\x0e\x00\x00\x00trace.fx_graphqE\x88X\x1a\x00\x00\x00trace.fx_graph_transformedqF\x88X\x13\x00\x00\x00trace.ir_pre_fusionqG\x88X\x14\x00\x00\x00trace.ir_post_fusionqH\x88X\x11\x00\x00\x00trace.output_codeqI\x88X\x13\x00\x00\x00trace.graph_diagramqJ\x89X\x15\x00\x00\x00trace.compile_profileqK\x89X\x10\x00\x00\x00trace.upload_tarqLNu.')
torch._functorch.config.load_config(b'\x80\x02}q\x00(X\x11\x00\x00\x00use_functionalizeq\x01\x88X\x0f\x00\x00\x00use_fake_tensorq\x02\x88X\x16\x00\x00\x00fake_tensor_allow_metaq\x03\x88X\x0c\x00\x00\x00debug_assertq\x04\x88X\x14\x00\x00\x00debug_fake_cross_refq\x05\x89X\x11\x00\x00\x00debug_partitionerq\x06\x89X\x0c\x00\x00\x00debug_graphsq\x07\x89X\x0b\x00\x00\x00debug_jointq\x08\x89X\x12\x00\x00\x00use_dynamic_shapesq\t\x89X\x14\x00\x00\x00static_weight_shapesq\n\x88X\x03\x00\x00\x00cseq\x0b\x88X\x10\x00\x00\x00max_dist_from_bwq\x0cK\x03X\t\x00\x00\x00log_levelq\rK\x14u.')
# REPLACEABLE COMMENT FOR TESTING PURPOSES
# torch version: 2.0.0a0+git9ded087
# torch cuda version: 11.8
# torch git version: 9ded087bac636d361c277dac99e822db5b9863b8
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2022 NVIDIA Corporation
# Built on Wed_Sep_21_10:33:58_PDT_2022
# Cuda compilation tools, release 11.8, V11.8.89
# Build cuda_11.8.r11.8/compiler.31833905_0
# GPU Hardware Info:
# Tesla V100-SXM2-32GB : 8
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, primals_1, primals_2, primals_3, primals_4, primals_5, primals_6, primals_7):
convert_element_type = torch.ops.prims.convert_element_type.default(primals_1, torch.float16); primals_1 = None
convert_element_type_1 = torch.ops.prims.convert_element_type.default(primals_6, torch.float16); primals_6 = None
permute = torch.ops.aten.permute.default(convert_element_type, [1, 0]); convert_element_type = None
view = torch.ops.aten.view.default(convert_element_type_1, [8192, 320]); convert_element_type_1 = None
mm = torch.ops.aten.mm.default(view, permute)
view_1 = torch.ops.aten.view.default(mm, [2, 4096, 320]); mm = None
convert_element_type_2 = torch.ops.prims.convert_element_type.default(primals_2, torch.float16); primals_2 = None
convert_element_type_3 = torch.ops.prims.convert_element_type.default(primals_7, torch.float16); primals_7 = None
permute_1 = torch.ops.aten.permute.default(convert_element_type_2, [1, 0]); convert_element_type_2 = None
view_2 = torch.ops.aten.view.default(convert_element_type_3, [154, 768]); convert_element_type_3 = None
mm_1 = torch.ops.aten.mm.default(view_2, permute_1); permute_1 = None
view_3 = torch.ops.aten.view.default(mm_1, [2, 77, 320]); mm_1 = None
convert_element_type_4 = torch.ops.prims.convert_element_type.default(primals_3, torch.float16); primals_3 = None
permute_2 = torch.ops.aten.permute.default(convert_element_type_4, [1, 0]); convert_element_type_4 = None
mm_2 = torch.ops.aten.mm.default(view_2, permute_2); permute_2 = None
view_5 = torch.ops.aten.view.default(mm_2, [2, 77, 320]); mm_2 = None
view_6 = torch.ops.aten.view.default(view_1, [2, 4096, 8, 40]); view_1 = None
permute_3 = torch.ops.aten.permute.default(view_6, [0, 2, 1, 3]); view_6 = None
clone = torch.ops.aten.clone.default(permute_3, memory_format = torch.contiguous_format); permute_3 = None
_unsafe_view = torch.ops.aten._unsafe_view.default(clone, [16, 4096, 40]); clone = None
view_7 = torch.ops.aten.view.default(view_3, [2, 77, 8, 40]); view_3 = None
permute_4 = torch.ops.aten.permute.default(view_7, [0, 2, 1, 3]); view_7 = None
clone_1 = torch.ops.aten.clone.default(permute_4, memory_format = torch.contiguous_format); permute_4 = None
_unsafe_view_1 = torch.ops.aten._unsafe_view.default(clone_1, [16, 77, 40]); clone_1 = None
view_8 = torch.ops.aten.view.default(view_5, [2, 77, 8, 40]); view_5 = None
permute_5 = torch.ops.aten.permute.default(view_8, [0, 2, 1, 3]); view_8 = None
clone_2 = torch.ops.aten.clone.default(permute_5, memory_format = torch.contiguous_format); permute_5 = None
_unsafe_view_2 = torch.ops.aten._unsafe_view.default(clone_2, [16, 77, 40]); clone_2 = None
empty = torch.ops.aten.empty.memory_format([16, 4096, 77], dtype = torch.float16, device = device(type='cuda', index=0), pin_memory = False)
permute_6 = torch.ops.aten.permute.default(_unsafe_view_1, [0, 2, 1]); _unsafe_view_1 = None
bmm = torch.ops.aten.bmm.default(_unsafe_view, permute_6)
mul = torch.ops.aten.mul.Tensor(bmm, 0.15811388300841897); bmm = None
mul_1 = torch.ops.aten.mul.Tensor(empty, 0); empty = None
add = torch.ops.aten.add.Tensor(mul_1, mul); mul_1 = mul = None
convert_element_type_6 = torch.ops.prims.convert_element_type.default(add, torch.float32); add = None
amax = torch.ops.aten.amax.default(convert_element_type_6, [-1], True)
sub = torch.ops.aten.sub.Tensor(convert_element_type_6, amax); convert_element_type_6 = amax = None
exp = torch.ops.aten.exp.default(sub); sub = None
sum_1 = torch.ops.aten.sum.dim_IntList(exp, [-1], True)
div = torch.ops.aten.div.Tensor(exp, sum_1); exp = sum_1 = None
convert_element_type_7 = torch.ops.prims.convert_element_type.default(div, torch.float16)
bmm_1 = torch.ops.aten.bmm.default(convert_element_type_7, _unsafe_view_2)
view_9 = torch.ops.aten.view.default(bmm_1, [2, 8, 4096, 40]); bmm_1 = None
permute_7 = torch.ops.aten.permute.default(view_9, [0, 2, 1, 3]); view_9 = None
clone_3 = torch.ops.aten.clone.default(permute_7, memory_format = torch.contiguous_format); permute_7 = None
_unsafe_view_3 = torch.ops.aten._unsafe_view.default(clone_3, [2, 4096, 320]); clone_3 = None
convert_element_type_8 = torch.ops.prims.convert_element_type.default(primals_5, torch.float16); primals_5 = None
convert_element_type_9 = torch.ops.prims.convert_element_type.default(primals_4, torch.float16); primals_4 = None
permute_8 = torch.ops.aten.permute.default(convert_element_type_9, [1, 0]); convert_element_type_9 = None
view_10 = torch.ops.aten.view.default(_unsafe_view_3, [8192, 320]); _unsafe_view_3 = None
addmm = torch.ops.aten.addmm.default(convert_element_type_8, view_10, permute_8); convert_element_type_8 = None
view_11 = torch.ops.aten.view.default(addmm, [2, 4096, 320])
permute_9 = torch.ops.aten.permute.default(permute_8, [1, 0]); permute_8 = None
permute_14 = torch.ops.aten.permute.default(convert_element_type_7, [0, 2, 1]); convert_element_type_7 = None
permute_15 = torch.ops.aten.permute.default(_unsafe_view_2, [0, 2, 1]); _unsafe_view_2 = None
permute_16 = torch.ops.aten.permute.default(permute_6, [0, 2, 1]); permute_6 = None
permute_17 = torch.ops.aten.permute.default(_unsafe_view, [0, 2, 1]); _unsafe_view = None
permute_30 = torch.ops.aten.permute.default(permute, [1, 0]); permute = None
return [view_11, addmm, view, view_2, div, view_10, permute_9, permute_14, permute_15, permute_16, permute_17, permute_30]
args = [((320, 320), (320, 1), torch.float32, 'cuda'), ((320, 768), (768, 1), torch.float32, 'cuda'), ((320, 768), (768, 1), torch.float32, 'cuda'), ((320, 320), (320, 1), torch.float32, 'cuda'), ((320,), (1,), torch.float32, 'cuda'), ((2, 4096, 320), (1310720, 320, 1), torch.float32, 'cuda'), ((2, 77, 768), (59136, 768, 1), torch.float32, 'cuda')]
args = [rand_strided(sh, st, dt, dev) for (sh, st, dt, dev) in args]
mod = make_fx(Repro(), tracing_mode='real')(*args)
from torch._inductor.compile_fx import compile_fx_inner
from torch._dynamo.debug_utils import same_two_models
compiled = compile_fx_inner(mod, args)
class AccuracyError(Exception):
pass
if not same_two_models(mod, compiled, args, only_fwd=True):
raise AccuracyError("Bad accuracy detected")
```
```
[2023-02-27 23:08:38,682] torch._dynamo.utils: [ERROR] RMSE (res-fp64): nan, (ref-fp64): nan and shape=torch.Size([2, 4096, 320])
Traceback (most recent call last):
File "repro.py", line 116, in <module>
raise AccuracyError("Bad accuracy detected")
__main__.AccuracyError: Bad accuracy detected
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0a0+git9ded087
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~18.04) 9.4.0
Clang version: 13.0.1-++20220120110844+75e33f71c2da-1~exp1~20220120230854.66
CMake version: version 3.25.2
Libc version: glibc-2.27
Python version: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-1094-aws-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7.6.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz
Stepping: 4
CPU MHz: 1808.897
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 33792K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] mypy==1.0.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.2
[pip3] torch==2.0.0a0+git34617d7
[pip3] torchvision==0.15.0a0+af04819
[conda] blas 1.0 mkl
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] numpy 1.22.2 pypi_0 pypi
[conda] torch 2.0.0a0+git34617d7 dev_0 <develop>
[conda] torchvision 0.15.0a0+af04819 dev_0 <develop>
```
cc @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 0 |
3,358 | 95,628 |
Pytorch Home Page does not specify which version of python it requires
|
module: docs, triaged
|
### 🐛 Describe the bug
When using the pip command to install pytorch suggested by the page https://pytorch.org/, which is
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
pip returns "no matching distribution found". I guess this is because I have installed python 3.11. If I go back to python 3.10, pytorch installs correctly. I suggest that the pytorch home page clearly state that pytorch is not supported under windows/pip for python 3.11. If this is not possible, how can I know when pytorch will be supported for windows under python 3.11?
I am running Windows version 10.

### Versions
C:\Users\fredm>wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
'wget' is not recognized as an internal or external command,
operable program or batch file.
So, that command does not work under Windows. How about one that does?
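As a sketch of a Windows-friendly alternative (assuming Python is already on the PATH), the script can be fetched with the standard library instead of wget:
```python
import urllib.request

url = "https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py"
urllib.request.urlretrieve(url, "collect_env.py")  # then run: python collect_env.py
```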
cc @svekars @carljparker
| 2 |
3,359 | 95,622 |
Testing InvokeAI 2.3.1.post1, using mps, with PyTorch nightly dev20230226 yields RuntimeError (cross-device copies are not allowed!)
|
triaged, module: mps
|
### 🐛 Describe the bug
Testing InvokeAI 2.3.1.post1 with PyTorch nightly dev20230226, using mps, yields `RuntimeError: Attempting to copy from device mps:0 to device meta, but cross-device copies are not allowed!`
Comment: Posted this as [an issue](https://github.com/invoke-ai/InvokeAI/issues/2826) in the InvokeAI repo. A moderator deferred this to a pytorch issue.
To reproduce:
1. Install InvokeAI 2.3.1.post1
2. Install recommended models during configuration (option `r`.)
3. Run invoke.sh
4. Choose web interface (option 2: browser-based UI)
5. See console output.
Traceback:
```
Traceback (most recent call last):
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/ldm/generate.py", line 889, in set_model
model_data = cache.get_model(model_name)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 106, in get_model
requested_model, width, height, hash = self._load_model(model_name)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 341, in _load_model
model, width, height, model_hash = self._load_diffusers_model(mconfig)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 514, in _load_diffusers_model
pipeline = StableDiffusionGeneratorPipeline.from_pretrained(
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 870, in from_pretrained
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2362, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 776, in __init__
self.text_model = CLIPTextTransformer(config)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 681, in __init__
self.embeddings = CLIPTextEmbeddings(config)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 209, in __init__
self.token_embedding = nn.Embedding(config.vocab_size, embed_dim)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 144, in __init__
self.reset_parameters()
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 153, in reset_parameters
init.normal_(self.weight)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/torch/nn/init.py", line 155, in normal_
return _no_grad_normal_(tensor, mean, std)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/torch/nn/init.py", line 19, in _no_grad_normal_
return tensor.normal_(mean, std)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 437, in _fn
return fn(a, *args, out=a, **kwargs)
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 248, in _fn
_safe_copy_out(copy_from=result, copy_to=out, exact_dtype=exact_dtype) # type: ignore[arg-type]
File "/Users/.../invokeai/.venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 168, in _safe_copy_out
raise RuntimeError(msg)
RuntimeError: Attempting to copy from device mps:0 to device meta, but cross-device copies are not allowed!
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0.dev20230226
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.4 (v3.10.4:9d38120e33, Mar 23 2022, 17:29:05) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-13.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.7.7
[pip3] torch==2.0.0.dev20230226
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==2.0.0.dev20230226
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.1
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.0.dev20230226
[conda] No relevant packages
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 5 |
3,360 | 95,613 |
[onnx] sort / argsort with `stable` argument specified cannot be exported to onnx
|
module: onnx, triaged
|
### 🐛 Describe the bug
When using `torch.sort(x)`, onnx export is ok.
But with the `stable` flag, onnx export fails, saying "OnnxExporterError: Unsupported: ONNX export of operator Sort, Out parameter is not supported." Here the `out` argument is actually not provided.
A similar result occurs with `torch.argsort()`.
```python
import torch
from torch import nn
import numpy as np
class Demo(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
v, inds = torch.sort(x, stable=False)
# v, inds = torch.sort(x, stable=True)
# inds = torch.argsort(x, stable=False)
# inds = torch.argsort(x, stable=True)
return inds
if __name__ == "__main__":
input_tensor = torch.range(20, 80)
demo = Demo()
out = demo(input_tensor)
torch.onnx.export(demo, input_tensor, "/tmp/debug.onnx", verbose=True,
input_names=['data'],
opset_version=11,
do_constant_folding=True,
dynamic_axes={'data':{0:'batch'}})
```
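A possible workaround sketch, based on the observation above that plain `torch.sort(x)` exports fine, assuming a stable sort is not required at inference time:
```python
class DemoNoStable(torch.nn.Module):
    def forward(self, x):
        # Dropping the explicit `stable` argument avoids the failing export path.
        _, inds = torch.sort(x)
        return inds

torch.onnx.export(DemoNoStable(), torch.arange(20.0, 80.0), "/tmp/debug_nostable.onnx",
                  input_names=['data'], opset_version=11,
                  dynamic_axes={'data': {0: 'batch'}})
```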
### Versions
[pip3] torch==1.13.1
| 2 |
3,361 | 95,604 |
Performance bugs exist in multiple convolution operations (e.g., `ConvTranspose2d`) when using the `groups` argument
|
module: cudnn, module: docs, module: convolution, triaged, oncall: pt2
|
### 🐛 Describe the bug
This issue should be related to [#70954](https://github.com/pytorch/pytorch/issues/70954).
The convolution performance problem of `Conv2d` mentioned in #70954 actually affects multiple convolution operations, **including `ConvTranspose2d`, `LazyConv2d`, and `LazyConvTranspose2d`**.
As can be seen in the reproducible code provided below, even on version 1.13.0, after setting `groups=4`, the running time of the model is significantly higher than expected.
To make matters worse, even in version 1.13.0, the relevant documentation (e.g., [ConvTranspose2d](https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html?highlight=convtranspose2d#torch.nn.ConvTranspose2d)) has **not added any hints or explanations for this performance problem**.
### To Reproduce
```python
import math
import time
import torch
import numpy as np
import torch.nn as nn
torch.backends.cudnn.benchmark = True
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
steps = 25
batch_size = 16
input_w = 224
dtype= torch.float32#
ch0=128
groups=4
# func_cls=torch.nn.LazyConvTranspose2d
# func_cls=torch.nn.LazyConv2d
func_cls=torch.nn.ConvTranspose2d
# # For LazyConv2d/LazyConvTranspose2d
# def build_model(ch: int, groups: int = 1) -> nn.Module:
# return nn.Sequential(
# func_cls(out_channels=ch, kernel_size=(1, 1)),
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 0
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 1
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 2
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 3
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 4
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 5
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 6
# func_cls(out_channels=ch, kernel_size=(3, 3), padding=1, groups=groups), # 7
# nn.MaxPool2d((input_w, input_w)),
# nn.Flatten(),
# nn.Linear(ch, 2),
# nn.Softmax(1))
def build_model(ch: int, groups: int = 1) -> nn.Module:
return nn.Sequential(
func_cls(3, ch, (1, 1)),
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 0
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 1
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 2
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 3
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 4
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 5
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 6
func_cls(ch, ch, (3, 3), padding=1, groups=groups), # 7
nn.MaxPool2d((input_w, input_w)),
nn.Flatten(),
nn.Linear(ch, 2),
nn.Softmax(1))
def train_model(model: nn.Module, data) -> float:
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
model.to(device)
model.train()
t0 = time.time()
for (X, y) in data:
X, y = X.to(device), y.to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
return time.time() - t0
def perf_test(ch: int, groups: int, dtype) -> float:
model = build_model(ch, groups).to(dtype)
images = torch.from_numpy(np.full((batch_size, 3, input_w, input_w), 0.5)).to(dtype)
labels = torch.from_numpy(np.full((batch_size, 2), 1)).to(dtype)
train_model(model, [(images, labels)] * 2) # Warmup
return train_model(model, [(images, labels)] * steps)
dtype_s = str(dtype).replace("torch.", "")
dt0_last = float("nan")
dt0 = perf_test(ch0, 1, dtype)
dt0_ratio = f"{dt0 / dt0_last:.3f}" if not math.isnan(dt0_last) else ""
dt0_last = dt0
ch = ch0 * groups
dt = perf_test(ch, groups, dtype)
expected = dt0 * groups
ratio = dt / expected
print(f'expected time cost: {expected}')
print(f'real time cost: {dt}')
print(f'radio: {ratio}')
```
### Error Results
PyTorch 1.13.0, testing on `LazyConvTranspose2d`:
```
/opt/conda/lib/python3.9/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment.
warnings.warn('Lazy modules are a new feature under heavy development '
expected time cost: 20.092201232910156
real time cost: 46.74727201461792
radio: 2.3266376577021295
```
PyTorch 1.13.0, testing on `LazyConv2d`:
```
/opt/conda/lib/python3.9/site-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment.
warnings.warn('Lazy modules are a new feature under heavy development '
expected time cost: 20.051240921020508
real time cost: 46.57210874557495
radio: 2.322654689004887
```
PyTorch 1.13.0, testing on `ConvTranspose2d`:
```
expected time cost: 20.15345001220703
real time cost: 46.877477169036865
radio: 2.3260274117157596
```
### Expected behavior
Using the `groups` argument should speed up the models (as shown in many other experiments in [#70954](https://github.com/pytorch/pytorch/issues/70954)).
If it is not possible to speed up for larger channel settings, this unexpected performance bug **should be explained in the documentation of the relevant APIs**.
The current documentation (such as [ConvTranspose2d](https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html)) does not seem to have any clarification of the potential performance problems on the `groups` argument.
### Versions
```
[pip3] numpy==1.22.3
[pip3] torch==1.13.0
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtext 0.14.0 py39 pytorch
[conda] torchvision 0.14.0 py39_cu116 pytorch
```
cc @csarofeen @ptrblck @xwang233 @svekars @carljparker @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 16 |
3,362 | 95,595 |
TorchInductor fails with memoy violations in `test_comprehensive_grid_sampler_2d_cuda_float16` and `test_reflection_pad2d_dynamic_shapes_cuda`
|
high priority, triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
The latest nightly build fails with illegal memory accesses when the caching allocator is disabled as reported by `compute-sanitizer` on an A100:
```
PYTORCH_NO_CUDA_MEMORY_CACHING=1 compute-sanitizer python test_torchinductor_opinfo.py -v -k test_comprehensive_grid_sampler_2d_cuda_float16
========= COMPUTE-SANITIZER
test_comprehensive_grid_sampler_2d_cuda_float16 (__main__.TestInductorOpInfoCUDA) ... ========= Invalid __global__ read of size 2 bytes
========= at 0x17b0 in triton__0d1d2d3d4d5d6d7
========= by thread (23,0,0) in block (0,0,0)
========= Address 0x7f403ba00512 is out of bounds
========= and is 99 bytes after the nearest allocation at 0x7f403ba00000 of size 1200 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x30b492]
========= in /usr/local/cuda/compat/lib.real/libcuda.so.1
========= Host Frame:_launch [0x14e9]
========= in /tmp/torchinductor_root/triton/0/1a11982c9e681e0725af1fbdec50f2bc/triton_.so
========= Host Frame: [0x1d61]
========= in /tmp/torchinductor_root/triton/0/1a11982c9e681e0725af1fbdec50f2bc/triton_.so
========= Host Frame:PyCFunction_Call [0x1f5bda]
========= in /usr/bin/python
root@nvdl-a52-luna02:/workspace/src/libs/pytorch/test/inductor# PYTORCH_NO_CUDA_MEMORY_CACHING=1 compute-sanitizer python test_torchinductor_dynamic_shapes.py -v -k test_reflection_pad2d_dynamic_shapes_cuda
========= COMPUTE-SANITIZER
test_reflection_pad2d_dynamic_shapes_cuda (__main__.DynamicShapesCudaTests) ... ========= Invalid __global__ read of size 4 bytes
========= at 0xb50 in triton__0d1d2
========= by thread (17,0,0) in block (0,0,0)
========= Address 0x7f86cb1fffe4 is out of bounds
========= and is 28 bytes before the nearest allocation at 0x7f86cb200000 of size 256 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x30b492]
========= in /usr/local/cuda/compat/lib.real/libcuda.so.1
========= Host Frame:_launch [0x148d]
========= in /tmp/torchinductor_root/triton/0/b9ee7217453ddfeba9422db9c01a70ef/triton_.so
========= Host Frame: [0x17d2]
========= in /tmp/torchinductor_root/triton/0/b9ee7217453ddfeba9422db9c01a70ef/triton_.so
========= Host Frame:PyCFunction_Call [0x1f5bda]
========= in /usr/bin/python
```
Another IMA is raised in `test_torchinductor.py` but I'm currently unable to isolate the failing test case and will update the issue later once I have a small repro.
CC @ngimel @malfet
### Versions
```
PyTorch version: 2.0.0.dev20230226+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
[removed CPU information as not interesting]
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.0.0.dev20230226+cu118
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0.dev20230226+cu118
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @soumith
| 5 |
3,363 | 95,590 |
Confusing error messages from `torch.nn.LazyLinear` in different versions.
|
module: error checking, triaged, module: lazy
|
### 🐛 Describe the bug
This issue should be related to [#77415](https://github.com/pytorch/pytorch/issues/77415).
In different versions of PyTorch, `LazyLinear` has different error messages for the inconsistency between the model dtype and the input dtype.
In addition, regardless of the version, the raised error messages are unclear, which is somewhat confusing.
The following reproducible code provides an example.
### To Reproduce
```python
import torch
from torch import nn
def test():
tmp_result= torch.nn.LazyLinear(out_features=5)
# tmp_result= torch.nn.LSTMCell(5,5)
return tmp_result
net = test()
t = torch.ones(5, dtype=torch.double)
net(t)
```
### Error Results
For PyTorch 1.12.0, the above code will raise:
```
RuntimeError: expected scalar type Float but found Double
```
For PyTorch 1.11.0/1.13.0, the above code will raise:
```
RuntimeError: expected scalar type Double but found Float
```
This shows that, for the same problem, different versions return different error messages, with the expected scalar type and the found type swapped between versions.
To make matters worse, the information provided by such an error is very limited, which may give the illusion that an input of the wrong type is being passed to the network.
### Expected behavior
I think the error message given by `torch.nn.LSTMCell` in 1.13.0 and later versions is more intuitive (its error message in 1.12.0 is the same as that of `LazyLinear` in that version).
In its RuntimeError, it directly tells users that the dtype of the model and the input must be the same.
```
Traceback (most recent call last):
File "test.py", line 11, in <module>
net(t)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 1194, in forward
ret = _VF.lstm_cell(
RuntimeError: mat1 and mat2 must have the same dtype
```
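For context, a minimal sketch of what the mismatch actually is and how a user can resolve it (the lazily materialized parameters default to float32):
```python
import torch

net = torch.nn.LazyLinear(out_features=5)
t = torch.ones(5, dtype=torch.double)

out = net(t.float())  # option 1: cast the input to the module's default float32 dtype

net = net.double()    # option 2: move the (now materialized) module to double
out = net(t)          # double input now matches double parameters
```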
### Versions
<details>
<summary>pytorch 1.11.0</summary>
<pre><code>
[pip3] numpy==1.21.6
[pip3] pytorch-lightning==1.6.3
[pip3] torch==1.11.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.11.0
[pip3] torchmetrics==0.9.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.19.5 pypi_0 pypi
[conda] numpy-base 1.21.5 py38hf524024_1
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-lightning 1.6.3 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.10.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py38_cu113 pytorch
[conda] torchmetrics 0.9.0 pypi_0 pypi
[conda] torchvision 0.12.0 py38_cu113 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.12.0</summary>
<pre><code>
[pip3] numpy==1.21.5
[pip3] torch==1.12.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37he7a7128_2
[conda] numpy-base 1.21.5 py37hf524024_2
[conda] pytorch 1.12.0 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.13.0 py37 pytorch
[conda] torchvision 0.13.0 py37_cu113 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.13.0</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.0
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtext 0.14.0 py39 pytorch
[conda] torchvision 0.14.0 py39_cu116 pytorch</code></pre>
</details>
| 1 |
3,364 | 95,578 |
[Reproducibility] replication_pad2d_backward_cuda does not have a deterministic implementation
|
triaged, module: determinism, module: padding, oncall: pt2
|
### 🚀 The feature, motivation and pitch
Hi, I am following the instructions from [here](https://pytorch.org/docs/stable/notes/randomness.html#cuda-convolution-determinism) to set `torch.use_deterministic_algorithms(True)`.
However, I encounter the error shown in the title.
### Alternatives
I have tried `torch.use_deterministic_algorithms(True, warn_only=True)` and it works for me.
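For completeness, a minimal sketch of the deterministic setup from the linked note with the warn-only workaround applied (the cuBLAS variable must be set before CUDA is initialized):
```python
import os

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # set before the first CUDA call

import torch

torch.backends.cudnn.benchmark = False
# warn_only=True keeps training running and only warns on ops such as
# replication_pad2d_backward_cuda that lack a deterministic implementation.
torch.use_deterministic_algorithms(True, warn_only=True)
```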
### Additional context
I am not sure why it's happening, but my implementation does involve padding. The main architecture contains convolutions (UNet encoder), a transformer encoder, and a UNet decoder with upsampling.
cc @mruberry @kurtamohler @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 6 |
3,365 | 95,572 |
Support datatype argument for torch.distributed.all_gather() (And the whole distributed module)
|
feature, triaged, module: nccl, module: c10d
|
### 🚀 The feature, motivation and pitch
When I try to train with a customized data loader, gathering ILSVRC2012 sample indices with torch.distributed.all_gather() from multiple GPUs, I find it uses 16 bits by default when using the nccl backend, and the torch API doesn't support specifying a datatype. Therefore, indexes larger than 65535 only keep the lower half of their digits, depending on the default casting.
My workaround is to separate the int32 into high and low 16 bits and then recover from it.
I checked the NCCL documentation; their all_gather supports different datatypes, including their int32 type. I suggest adding a feature for choosing the datatype (especially how many bits to use) for all_gather and other distributed operations, so that customized code can easily implement the desired functionality without adding unnecessary overhead.
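A sketch of the workaround described above, with illustrative names, assuming the process group is already initialized and the indices are non-negative:
```python
import torch
import torch.distributed as dist

def all_gather_index(idx: torch.Tensor, world_size: int) -> torch.Tensor:
    # Split each index into 16-bit halves so no gathered value exceeds 65535.
    low, high = idx & 0xFFFF, idx >> 16
    gathered_low = [torch.zeros_like(low) for _ in range(world_size)]
    gathered_high = [torch.zeros_like(high) for _ in range(world_size)]
    dist.all_gather(gathered_low, low)
    dist.all_gather(gathered_high, high)
    # Recombine the halves after gathering.
    return (torch.cat(gathered_high) << 16) | torch.cat(gathered_low)
```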
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 3 |
3,366 | 95,562 |
test_layer_norm_backward and test_layer_norm_backward_5d run OOM in slow gradcheck
|
triaged, module: nestedtensor
|
### 🐛 Describe the bug
The two new tests test_layer_norm_backward and test_layer_norm_backward_5d in TestNestedTensorAutograd added by https://github.com/pytorch/pytorch/pull/94781 consistently run OOM in slow gradcheck mode. This occurs even after the job has been updated to run the test sequentially in https://github.com/pytorch/pytorch/pull/95494.
Here is an example failure https://github.com/pytorch/pytorch/actions/runs/4268550203/jobs/7431239137:
```
test_layer_norm_backward_5d_size_128_cuda (__main__.TestNestedTensorAutogradCUDA) ... test_layer_norm_backward_5d_size_128_cuda errored - num_retries_left: 3
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2081, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2081, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 414, in instantiated_test
raise rte
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 401, in instantiated_test
result = test(self, **param_kwargs)
File "/var/lib/jenkins/workspace/test/test_nestedtensor.py", line 2405, in test_layer_norm_backward_5d
assert gradcheck(grad_test_func, inputs=data, check_batched_grad=False)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3804, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1476, in gradcheck
return _gradcheck_helper(**args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1490, in _gradcheck_helper
_gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1113, in _gradcheck_real_imag
gradcheck_fn(func, func_out, tupled_inputs, outputs, eps,
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1150, in _slow_gradcheck
numerical = _transpose(_get_numerical_jacobian(func, tupled_inputs, func_out, eps=eps, is_forward_ad=use_forward_ad))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 187, in _get_numerical_jacobian
jacobians += [get_numerical_jacobian_wrt_specific_input(fn, inp_idx, inputs, outputs, eps,
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 330, in get_numerical_jacobian_wrt_specific_input
jacobian_cols[d_idx] = _compute_numerical_jvps_wrt_specific_input(jvp_fn, eps, x.is_complex(), is_forward_ad)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 251, in _compute_numerical_jvps_wrt_specific_input
ds_dx_tup = jvp_fn(delta[0] if isinstance(delta, tuple) else delta)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 446, in jvp_fn
return _compute_numerical_gradient(wrapped_fn, input_to_perturb, delta, eps, nbhd_checks_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 240, in _compute_numerical_gradient
return tuple(compute(a, b) for (a, b) in zip(outa, outb))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 240, in <genexpr>
return tuple(compute(a, b) for (a, b) in zip(outa, outb))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 237, in compute
ret = (b - a) / (2 * norm_v)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 7.44 GiB total capacity; 6.98 GiB already allocated; 10.81 MiB free; 7.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR (2.276s)
```
### Versions
CI periodic slow gradcheck test
cc @seemethere @malfet @pytorch/pytorch-dev-infra @cpuhrsch @jbschlosser @bhosmer @drisspg @mikaylagawarecki
| 2 |
3,367 | 95,560 |
torch.jit.load documentation doesn't specify if it is safe to load untrusted models or not
|
oncall: jit, module: docs, security
|
### 🐛 Describe the bug
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @svekars @carljparker @malfet
### Versions
master
| 0 |
3,368 | 95,548 |
torch.distributions.kumaraswamy.Kumaraswamy generates samples outside its support (0,1)
|
module: distributions, triaged, module: NaNs and Infs
|
### 🐛 Describe the bug
The support of the Kumaraswamy distribution is the open interval (0,1) but torch.distributions.kumaraswamy.Kumaraswamy can generate samples that equal 1. This causes “nan” outputs when computing log_prob.
```
import torch as ch
import torch.distributions as dist
kumaraswamy = dist.kumaraswamy.Kumaraswamy(ch.Tensor([0.2]), ch.Tensor([0.2]))
samples = kumaraswamy.sample(ch.Size([1000000]))
assert len(samples[samples==1.0]) > 0
assert ch.isnan(kumaraswamy.log_prob(samples[samples==1.0]).mean())
```
A possible cause of the issue is the use of the Uniform distribution, whose support is the half-open interval [0,1): https://github.com/pytorch/pytorch/blob/22e2fd554cf370765d4c44fe2b99c8bb6e42b0bb/torch/distributions/kumaraswamy.py#L44-L46.
Issue was first posted here: https://discuss.pytorch.org/t/torch-distributions-kumaraswamy-kumaraswamy-generates-samples-outside-its-support/173404. User 'Kfrank' noted "that floating-point numbers can get closer to 0.0 than to 1.0. I expect that samples from Uniform that are close to 0.0 (but not equal) get transformed
to 1.0 (and Uniform samples that are close to 1.0 get transformed to 0.0)."
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-159-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM3-32GB
Nvidia driver version: 450.142.00
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
Stepping: 4
CPU MHz: 2966.842
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 5400.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 66 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.13.1
[pip3] torchvision==0.14.1
[conda] No relevant packages
cc @fritzo @neerajprad @alicanb @nikitaved
| 0 |
3,369 | 95,538 |
Tensor.all() fails on MPS for tensors with more than 4 dimensions
|
triaged, module: reductions, module: mps
|
### 🐛 Describe the bug
`Tensor.all()` fails on MPS for tensors with more than 4 dimensions
```python
import torch
x = torch.zeros(1, 1, 1, 1, 1, device="mps")
x.all() # crash
x.all(dim=0) # crash
x.all(dim=1) # no crash
x.all(dim=2) # no crash
x.all(dim=3) # no crash
x.all(dim=4) # no crash
```
Output
```bash
Assertion failed: (0 <= mpsAxis && mpsAxis < 4 && "Runtime canonicalization must simplify reduction axes to minor 4 dimensions."), function encodeNDArrayOp, file GPUReductionOps.mm, line 76.
[1] 77353 abort python debug.py
```
I believe the MPS implementation is here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/mps/operations/ReduceOps.mm
Surprisingly, most ops don't have this problem, including `torch.any()`. I'm guessing this is an undocumented limitation of the MPS op `reductionAndWithTensor`.
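As a user-side workaround until the kernel handles this, one can collapse the tensor to at most two dimensions before reducing (sketch below; it assumes a non-negative `dim` and that the extra reshape/copy is acceptable):
```python
import torch

x = torch.zeros(1, 1, 1, 1, 1, device="mps")

# Full reduction: flatten first so the op only ever sees one dimension.
all_elements = x.reshape(-1).all()  # instead of x.all()

def all_along_dim(t, dim):
    # Equivalent of t.all(dim=dim), computed on a 2-D view of the data so the
    # reduction axis stays within the 4-dim limit the assertion complains about.
    out_shape = t.shape[:dim] + t.shape[dim + 1:]
    flat = t.movedim(dim, -1).reshape(-1, t.shape[dim])
    return flat.all(dim=1).reshape(out_shape)

result = all_along_dim(x, 0)  # instead of x.all(dim=0)
```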
### Versions
Collecting environment information...
PyTorch version: 2.0.0.dev20230224
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.0 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.24.0
Libc version: N/A
Python version: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:26:08) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] mypy==1.0.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.0.dev20230224
[conda] numpy 1.24.2 pypi_0 pypi
[conda] pytorch 2.0.0.dev20230224 py3.10_0 pytorch-nightly
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 3 |
3,370 | 95,501 |
dynamo+aot improperly handles dupe args via *args
|
triaged, bug, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
A simple repro is a new test case for `pytorch/test/dynamo/test_aot_autograd.py` based on the existing test `test_arg_dupe_via_dynamo_recompiles_many_args_param`.
Basically, if a user module's `forward` accepts a `*args` signature, we don't deal with duplicate args correctly.
This shows up in real life once we make dynamo call `__call__` instead of `.forward()`, since `__call__` (or rather, `nn.Module._call_impl`) accepts `*args` rather than the typical user-code `.forward`, which accepts an explicit list of args.
Dynamo+AOT may need to handle 'LocalInputSource' a little differently, to account for this.
```
@patch("torch._functorch.config.debug_assert", True)
def test_arg_dupe_via_dynamo_recompiles_star_args(self):
class F(torch.nn.Module):
def __init__(self):
super().__init__()
self.mean = torch.nn.Parameter(torch.randn(3, 3))
def forward(self, *args):
a, b, c, d = args
a.t_()
b.t_()
c.t_()
d.t_()
return a + b + c + d + self.mean
a = torch.randn(3, 3, requires_grad=True)
b = torch.randn(3, 3, requires_grad=True)
a1, a2, a3, a4 = a.clone(), a.clone(), a.clone(), a.clone()
b1, b2, b3, b4 = b.clone(), b.clone(), b.clone(), b.clone()
failure_reason = None
def guard_fail_fn(failure):
nonlocal failure_reason
failure_reason = failure[0]
self.assertTrue(failure_reason is None)
cc = torch._dynamo.testing.CompileCounterWithBackend("aot_eager")
f = torch._dynamo.optimize(cc, guard_fail_fn=guard_fail_fn)(F())
f(a1, a1, a1, a1)
# This raises the AssertionError pasted below
f(a2, b2, b2, b2)
```
AssertionError that is raised:
```
# AssertionError: At compilation time, graph 0 was compiled under the assumption that input 1 would be a duplicate of input 0, but at runtime this was not the case. This indicates a guard bug in AOTAutograd or Dynamo, please file a bug to PyTorch.
```
Expected behavior: dynamo guard failure triggers recompile
### Versions
pytorch @ bc438af6fed4fe1fd0ed80e6d5f5ea17c3ca30bb
cc @ezyang @soumith @msaroufim @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @Chillee
| 1 |
3,371 | 95,497 |
Import parameters from jit
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
Hi, it would be great to have the possibility to import parameters from a jit (TorchScript) module into the same network implemented as a `torch::nn::Module` in C++. This would make it possible to perform fine-tuning on that platform.
### Alternatives
_No response_
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
3,372 | 95,487 |
Torch RPC on multiple nodes with GPU returns a EOF error
|
oncall: distributed, triaged, module: rpc, module: tensorpipe
|
### 🐛 Describe the bug
When running torch RPC on multiple nodes with submitit (through SLURM), I get an EOF error _even if I'm not using the GPUs and I'm not making them available to RPC_.
Here's a script to reproduce:
```python
import os
from torch.distributed import rpc
import torch
import submitit
import socket
import subprocess
import time
MAX_TIME_TO_CONNECT=1000
def rpc_init_node(
rank,
rank0_ip,
tcp_port,
world_size,
):
DEVICES=[] #list(range(torch.cuda.device_count()))
os.environ["MASTER_ADDR"] = str(rank0_ip)
os.environ["MASTER_PORT"] = "29500"
# os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
# os.environ['TP_SOCKET_IFNAME']='lo'
options = rpc.TensorPipeRpcBackendOptions(
num_worker_threads=16,
init_method=f"tcp://{rank0_ip}:{tcp_port}",
rpc_timeout=MAX_TIME_TO_CONNECT,
_transports=["uv"],
# Currently fails when nodes have more than 0 gpus avail,
# even when no device is made visible
devices=DEVICES,
)
print(f"init rpc on {rank}")
rpc.init_rpc(
f"NODE_{rank}",
rank=rank,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=options,
world_size=world_size,
)
rpc.shutdown()
def rpc_init_master(
tcp_port,
world_size,
):
hostname = socket.gethostname()
rank0_ip = socket.gethostbyname(hostname)
DEVICES=[] # list(range(torch.cuda.device_count()))
os.environ["MASTER_ADDR"] = str(rank0_ip)
os.environ["MASTER_PORT"] = "29500"
# os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
# os.environ['TP_SOCKET_IFNAME']='lo'
options = rpc.TensorPipeRpcBackendOptions(
num_worker_threads=16,
init_method=f"tcp://{rank0_ip}:{tcp_port}",
rpc_timeout=MAX_TIME_TO_CONNECT,
_transports=["uv"],
# Currently fails when nodes have more than 0 gpus avail,
# even when no device is made visible
devices=DEVICES,
)
print("init rpc on master")
rpc.init_rpc(
"TRAINER",
rank=0,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=options,
world_size=world_size,
)
# some dummy compute
out = rpc.rpc_sync("NODE_1", torch.add, args=(torch.ones(()), torch.ones(())))
rpc.shutdown()
print("result", out)
return out.item()
if __name__ == "__main__":
slurm_conf = {
"timeout_min": 100,
"slurm_partition": "train",
"slurm_cpus_per_task": 4, # works
#"slurm_gpus_per_task": 1, "slurm_cpus_per_gpu": 8, # does not work
}
master_on_node = True
num_nodes = 2
executor = submitit.AutoExecutor(folder="log_test")
executor.update_parameters(**slurm_conf)
if not master_on_node:
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
else:
job = executor.submit(rpc_init_master, 1234, num_nodes+1)
print("job id", job.job_id)
time.sleep(2.0)
cmd=f"squeue -j {job.job_id} -o %N | tail -1"
node = subprocess.check_output(cmd, shell=True, text=True).strip()
print("node", node)
cmd=f'sinfo -n {node} -O nodeaddr | tail -1'
print(cmd)
IPAddr = subprocess.check_output(cmd, shell=True, text=True).strip()
print("IP addr:", IPAddr)
for i in range(num_nodes):
_job = executor.submit(
rpc_init_node, i + 1, IPAddr, 1234, num_nodes+1)
if not master_on_node:
out = rpc_init_master(1234, num_nodes+1)
else:
out = job.result()
print("result", out)
```
In the script, the line that makes the code break is commented out; to reproduce the failure, uncomment it and comment out the line above it tagged with `# works`.
### What does not matter
- Whether you tell RPC which devices to use via `devices=list_of_devices` or pass `devices=[]`, the effect is the same.
- Whether you launch things from the master node or create a dedicated master node (see the script for an example), the error is the same.
- The code does run when using multiprocessing, presumably because everything stays on the same node (?)
I had to set `_transports` in the TensorPipe options because I'm running on AWS, and without it nothing runs.
Here's the error:
```
Traceback (most recent call last):
File "/data/home/vmoens/dump/dummy.py", line 109, in <module>
out = job.result()
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/submitit/core/core.py", line 266, in result
r = self.results()
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/submitit/core/core.py", line 294, in results
raise job_exception # pylint: disable=raising-bad-type
submitit.core.utils.FailedJobError: Job (task=0) failed during processing with trace:
----------------------
Traceback (most recent call last):
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/submitit/core/submission.py", line 54, in process_job
result = delayed.result()
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/submitit/core/utils.py", line 133, in result
self._result = self.function(*self.args, **self.kwargs)
File "/data/home/vmoens/dump/dummy.py", line 63, in rpc_init_master
rpc.init_rpc(
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 199, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 234, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 104, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 363, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 82, in wrapper
return func(*args, **kwargs)
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 224, in _all_gather
rpc_sync(
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 82, in wrapper
return func(*args, **kwargs)
File "/fsx/users/vmoens/conda/envs/compile/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 809, in rpc_sync
return fut.wait()
RuntimeError: EOF: end of file (this error originated at tensorpipe/transport/uv/connection_impl.cc:132)
```
### Versions
Latest torch nightly, locally built
```
PyTorch version: 2.0.0.dev20230220+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~18.04) 9.4.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.25.0
Libc version: glibc-2.27
Python version: 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 15:55:03) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1 HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1369.306
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+c8bfe3f548
[pip3] torch==2.0.0.dev20230220+cu118
[pip3] torchaudio==2.0.0.dev20230222+cu118
[pip3] torchrl==0.0.4+46ec988
[pip3] torchsnapshot==0.1.0
[pip3] torchvision==0.15.0.dev20230221+cu118
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2022.1.0 hc2b9512_224
[conda] mkl-include 2023.0.0 h84fe81f_26648 conda-forge
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.0.0+c8bfe3f548 pypi_0 pypi
[conda] torch 2.0.0a0+gitd677432 dev_0 <develop>
[conda] torchaudio 2.0.0.dev20230222+cu118 pypi_0 pypi
[conda] torchrl 0.0.4+46ec988 pypi_0 pypi
[conda] torchsnapshot 0.1.0 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230221+cu118 pypi_0 pypi
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @pietern @jjlilley @mrzzd @lw @beauby
| 5 |
3,373 | 95,485 |
Enrich shape operations with nested tensors
|
triaged, module: nestedtensor
|
### 🚀 The feature, motivation and pitch
[TorchRL](https://github.com/pytorch/rl/) and [tensordict](https://github.com/pytorch-labs/tensordict) could use nested tensors extensively, but we're facing some blockers:
- It often happens that we stack tensordicts, and then stack further stacks of tensordicts.
Say that instead of a pure stack we create a nested tensor:
```python
nt0 = torch.nested.nested_tensor((tensor0, tensor1))
nt1 = torch.nested.nested_tensor((tensor2, tensor3))
```
there is currently no way to create a nested tensor out of `nt0` and `nt1` (a sketch of the closest current approximation is shown after this list).
In RL this is a common operation: the first nesting would represent a time dimension, the second a batch dimension.
- It would also be nice to be able to control along which dimension the nesting happens, i.e. not always have it on the leading dimension. The usage would be similar to what I mentioned earlier: given two batches of data, we would like to nest them along the first dimension (e.g. the time dimension).
- Representing shapes should be possible, with some placeholder value for the mismatching dim.
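For reference, a minimal sketch of the closest approximation available today, which flattens one nesting level and therefore loses the outer (batch) structure; the shapes are just illustrative:
```python
import torch

t0, t1 = torch.randn(3, 4), torch.randn(5, 4)
t2, t3 = torch.randn(2, 4), torch.randn(6, 4)

nt0 = torch.nested.nested_tensor([t0, t1])
nt1 = torch.nested.nested_tensor([t2, t3])

# torch.nested.nested_tensor([nt0, nt1]) is what we would like to write.
# Today the best we can do is re-nest the unbound constituents, which keeps
# the ragged inner dimension but drops the time/batch distinction.
flat = torch.nested.nested_tensor(list(nt0.unbind()) + list(nt1.unbind()))
```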
Thanks for the feature!
### Alternatives
_No response_
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @mikaylagawarecki
| 1 |
3,374 | 95,481 |
[BE] Make ActivationWrapper an abstract class
|
oncall: distributed, better-engineering
|
### 🚀 The feature, motivation and pitch
As mentioned in the comments for `ActivationWrapper` in torch.distributed's activation checkpoint code, it is not meant to be instantiated directly, so we should prevent this by making it an abstract class.
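A minimal sketch of the general pattern (the class and attribute names below are illustrative, not necessarily the exact ones used in torch.distributed):
```python
from abc import ABC, abstractmethod

import torch
import torch.nn as nn

class ActivationWrapper(ABC, nn.Module):
    # Shared machinery for activation-checkpoint wrappers; instantiating it
    # directly now fails because `forward` is abstract.
    def __init__(self, mod: nn.Module):
        super().__init__()
        self._wrapped_module = mod

    @abstractmethod
    def forward(self, *args, **kwargs):
        ...

class CheckpointWrapper(ActivationWrapper):
    def forward(self, *args, **kwargs):
        return self._wrapped_module(*args, **kwargs)

# ActivationWrapper(nn.Linear(2, 2))  # raises TypeError: can't instantiate abstract class
wrapped = CheckpointWrapper(nn.Linear(2, 2))
print(wrapped(torch.ones(1, 2)))
```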
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
3,375 | 95,474 |
hf_GPT2_large CPU inference shows random failure on CI
|
triaged, oncall: pt2
|
https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=inductor_torchbench_cpu
https://github.com/pytorch/pytorch/pull/95473 takes hf_GPT2_large off from the CI test, but it still needs investigation.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,376 | 95,463 |
`add/add_` for CSC: errors when trying to access non-existent `crow_indices`.
|
module: sparse, triaged
|
### 🐛 Describe the bug
A simple repro:
```python
In [1]: import torch
In [2]: x = torch.rand(3, 3).to_sparse_csc()
<ipython-input-2-063a8a560ed1>:1: UserWarning: Sparse CSC tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered
internally at /home/nik/git/Quansight/pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:54.)
x = torch.rand(3, 3).to_sparse_csc()
In [3]: x + x
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 x + x
RuntimeError: col_indices expected sparse row compressed tensor layout but got SparseCsc
```
All in all, `add` on CSC tensors is broken in several ways because of missing layout checks/dispatch handling.
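Until that is fixed, a simple (if wasteful) user-side workaround is to route the addition through a layout whose kernels do exist and convert back:
```python
import torch

x = torch.rand(3, 3).to_sparse_csc()

# Going through dense is the least fragile option; for large sparse tensors,
# converting to CSR/COO first may also work and would avoid densifying.
y = (x.to_dense() + x.to_dense()).to_sparse_csc()
assert torch.allclose(y.to_dense(), 2 * x.to_dense())
```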
### Versions
Current master.
cc @alexsamardzic @pearu @cpuhrsch @amjames @bhosmer
| 1 |
3,377 | 95,462 |
Extend docs - Fixing out of memory with python garbage collection
|
module: docs, module: memory usage, triaged
|
### 📚 The doc issue
I suggest to extend https://pytorch.org/docs/stable/notes/faq.html to include a section about the python garbage collector, which is missing.
To be specific, we've now encountered several scenarios where we hit the following case:
Due to other processing during the training loop (in one case the data loader was loading complex data structures, in the other case the main loop created complex tensorboard logs), the garbage collector is triggered for gen0 and gen1, moving a few of those large tensors to gen2 (only about 10-20 objects, but a few hundred megabytes on GPU). Thus, these large tensors will not be released immediately. After adding a `gc.collect()` every nth iteration (or specifically, after each of these costly iterations), the leak was gone. An alternative/addition could be to call `gc.freeze()` before the loop, which will clear the gen2, thus reducing the number of required objects to perform gc gen2 and also reduce the gen2 collection overhead.
Detailed explanation: Because [gen2 gc will only trigger](https://devguide.python.org/internals/garbage-collector/#gc-oldest-generation) if `long_lived_pending / long_lived_total > 0.25` and there are already a few hundred thousand objects in gen2 after startup, it will not trigger before a lot of these large tensors end up in gen2. As an example with numbers: Assuming there are 350.000 objects in `long_lived_total` (which is approx the number I see when in the inference loop), you'd need 87.500 new objects in gen2 before gen2 gc would be triggered, which would need ~5k iterations. This will yield an OOM before ever being triggered.
### Suggest a potential alternative/fix
The suggested addition:
**Python garbage collection**: If you create many python objects during your loop(s), consider calling the Python garbage collector periodically with `gc.collect()` and/or freeze objects to keep generation 2 small by calling `gc.freeze()` after initialization before entering your loop(s). Read more about python garbage collection [here](https://devguide.python.org/internals/garbage-collector).
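A minimal runnable sketch of that pattern (the model/data here are trivial stand-ins, and the collection interval of 100 is an arbitrary, workload-dependent choice):
```python
import gc
import torch

model = torch.nn.Linear(8, 8)
data = [torch.randn(4, 8) for _ in range(1000)]

gc.freeze()  # keep long-lived startup objects out of the collector's generation bookkeeping

for step, batch in enumerate(data):
    loss = model(batch).sum()
    loss.backward()
    model.zero_grad()
    if step % 100 == 0:
        gc.collect()  # release large reference cycles promptly instead of waiting for gen2
```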
Thanks for considering this addition!
cc @svekars @carljparker
| 1 |
3,378 | 95,460 |
torch.profiler.tensorboard_trace_handler Generates an incorrect JSON file
|
triaged, module: tensorboard, oncall: visualization
|
### 🐛 Describe the bug
For me, this minimal script produces a JSON file which is not readable by TensorBoard.
```python
import torch
a = torch.ones((10,10))
with torch.profiler.profile(
on_trace_ready=torch.profiler.tensorboard_trace_handler('./log/')
) as p:
for _ in range(100):
b = a*a
p.step()
```
This script writes a JSON file containing:
```json
[{"name": "aten::mul", "ph": "X", "ts": 633, "dur": 54, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 725, "dur": 7, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 757, "dur": 5, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 772, "dur": 7, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 787, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 797, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 806, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 815, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 824, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 833, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 842, "dur": 5, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 855, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 864, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 874, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 883, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 892, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 901, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 910, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 920, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 929, "dur": 7, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 943, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 953, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 962, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 971, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 980, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 990, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 999, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1008, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1017, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1026, "dur": 5, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1038, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1048, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1057, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1066, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1075, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": 
"aten::mul", "ph": "X", "ts": 1084, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1093, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1103, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1112, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1121, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1130, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1139, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1149, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1158, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1167, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1176, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1185, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1194, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1203, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1212, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1221, "dur": 6, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1234, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1243, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1253, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1262, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1271, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1280, "dur": 5, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1297, "dur": 6, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1316, "dur": 6, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1333, "dur": 4, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1348, "dur": 10, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1369, "dur": 16, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1398, "dur": 4, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1415, "dur": 5, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1430, "dur": 4, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1440, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1450, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1459, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1468, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1477, "dur": 6, "tid": 1, "pid": "CPU functions", 
"args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1490, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1500, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1509, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1518, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1527, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1536, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1545, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1554, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1563, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1572, "dur": 5, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1584, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1594, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1603, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1612, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1621, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1630, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1639, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1648, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1657, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1666, "dur": 5, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1678, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1688, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1697, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1706, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1715, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1724, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1733, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1742, "dur": 2, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1750, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}, {"name": "aten::mul", "ph": "X", "ts": 1759, "dur": 3, "tid": 1, "pid": "CPU functions", "args": {}}]
```
TensorBoard complains when loading this file, since it expects it to contain something like:
```json
["metadata":"etc", "traceEvents":[{"name": "aten::zeros", "ph": "X", "ts": 516, "dur": 11, "tid": 1, "pid": "CPU functions", "args": {}}, {...}, ...]])
```
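As a stop-gap, post-processing the emitted files and wrapping the bare list in the object form may be enough for the trace to load. The file-name pattern below is a guess at what `tensorboard_trace_handler` produces; adjust it to your output:
```python
import glob
import json

for path in glob.glob("./log/*.pt.trace.json"):
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, list):  # only rewrite files with the broken layout
        with open(path, "w") as f:
            json.dump({"traceEvents": data}, f)
```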
### Versions
```
Versions of relevant libraries:
Python 3.9.16
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] magma 2.5.4 hc72dce7_4 conda-forge
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.24.2 py39h7360e5f_0 conda-forge
[conda] pytorch 1.12.1 cuda112py39hb0b7ed5_201 conda-forge
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchmetrics 0.8.2 pyhd8ed1ab_0 conda-forge
```
| 0 |
3,379 | 95,434 |
It seems that `torch.Tensor.addmv` and `torch.Tensor.addr` will check some inputs' dtype if and only if in `backward()`
|
module: autograd, triaged, module: complex, module: type promotion, module: linear algebra, actionable, complex_autograd
|
### 🐛 Describe the bug
For some inputs whose dtypes differ, `torch.Tensor.addr` and `torch.Tensor.addmv` do not seem to detect the dtype inconsistency in forward, but they raise an error in backward.
It is unclear whether this forward behavior can introduce dtype-related problems in subsequent calculations.
I'm wondering if this is an intended feature of these functions or if the dtype-consistency checks need to be tightened up in the forward pass.
This issue should be related to [#76785](https://github.com/pytorch/pytorch/issues/76785).
### To Reproduce
```python
import torch
input0 = torch.rand([3, 2], dtype=torch.complex128, requires_grad=True)
input1 = torch.rand([3], dtype=torch.float64, requires_grad=True)
input2 = torch.rand([2], dtype=torch.complex128, requires_grad=True)
def test():
tmp_result= torch.Tensor.addr(input0, input1, input2)
# tmp_result= torch.Tensor.addmv(input1, input0, input2)
return tmp_result
res = test()
# res.backward()
# Enable the above line:
# addr returns: RuntimeError: expected scalar type ComplexDouble but found Double
# addmv returns: RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false.
```
### Error Results
When using `torch.Tensor.addr`, `res.backward()` will return:
```
RuntimeError: expected scalar type ComplexDouble but found Double
```
When using `torch.Tensor.addmv`, `res.backward()` will return:
```
RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false.
```
### Expected behavior
Perhaps these ops could handle the dtype inconsistency inside the backend computation, like `torch.nn.functional.poisson_nll_loss` or `torch.nn.functional.l1_loss` mentioned in [#76785](https://github.com/pytorch/pytorch/issues/76785).
Alternatively, a check for dtype consistency could be added directly to the forward pass.
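For completeness, a user-side workaround sketch is to promote all operands to a common dtype up front, so forward and backward see consistent types (gradients still reach `input1` through the differentiable cast):
```python
import torch

input0 = torch.rand([3, 2], dtype=torch.complex128, requires_grad=True)
input1 = torch.rand([3], dtype=torch.float64, requires_grad=True)
input2 = torch.rand([2], dtype=torch.complex128, requires_grad=True)

common = torch.promote_types(input0.dtype, input1.dtype)  # complex128 here
res = torch.addr(input0, input1.to(common), input2)
res.real.sum().backward()  # real scalar output, so backward() needs no explicit grad
assert input1.grad is not None and input1.grad.dtype == torch.float64
```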
### Versions
<details>
<summary>pytorch 1.13.1</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.1
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.13.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.14.1 py310 pytorch
[conda] torchvision 0.14.1 py310_cu116 pytorch</code></pre>
</details>
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @anjali411 @dylanbespalko @mruberry @nairbv @jianyuh @walterddr @IvanYashchuk @xwang233
| 2 |
3,380 | 95,432 |
Regression bug in `torch.nn.ReLU6` and `torch.nn.Hardtanh` that `inplace=True` doesn't work in PyTorch 1.10.0~1.13.1
|
high priority, module: nn, module: memory usage, triaged, actionable
|
### 🐛 Describe the bug
From version 1.10.0 to version 1.13.1 of PyTorch, the `inplace` argument does not seem to work well on `ReLU6` and `Hardtanh`.
Switching between `inplace=True` and `inplace=False` does not affect the memory usage of the model.
However, in versions 1.8 and 1.9, setting `inplace` to False effectively reduces the GPU memory usage of the model.
The root causes of these bugs should be related to [#81548](https://github.com/pytorch/pytorch/issues/81548)
### To Reproduce
```python
import time
from torch import nn
import torch
torch.manual_seed(0)
func_cls=nn.Hardtanh
# nn.Hardsigmoid
# nn.Hardtanh
# nn.ReLU6
# nn.Hardswish
# replace `inplace=True` with `inplace=False` and run again to observe the results
class EasyModel(nn.Module):
def __init__(self, num_classes=10):
super(EasyModel, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, groups=1, bias=False, dilation=1)
self.relu1= func_cls(inplace=True)
self.conv2 = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1, groups=1, bias=False, dilation=1)
self.relu2= func_cls(inplace=True)
self.deconv1 = nn.ConvTranspose2d(64, 64, kernel_size=3, stride=2, padding=1, groups=1, bias=False, dilation=1)
self.derelu1= func_cls(inplace=True)
self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1, groups=1, bias=False, dilation=1)
self.relu5 = func_cls(inplace=True)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(64, num_classes)
self.model = nn.Sequential(self.conv1, self.relu1, self.conv2, self.relu2, self.deconv1, self.derelu1,
self.conv3, self.relu5, self.avgpool)
def forward(self, x):
x = self.model(x)
x = torch.flatten(x, 1)
x = self.fc(x)
return x
model = EasyModel()
model.cuda()
x = torch.randn(2,3,128,128).cuda()
y = model(x)
after_forward = torch.cuda.memory_allocated()
# 42659328 for nn.ReLU6\nn.Hardtanh\nn.Hardswish\nn.Hardsigmoid, no matter inplace=True or False
# 40431616 for nn.Hardtanh\nn.ReLU6 in 8,9+True; 21818368 in 8,9+False.
# 40431616 for nn.Hardsigmoid\nn.Hardswish, no matter inplace=True or False in 8-13.1
print(after_forward)
```
### Error Results
We show some erroneous results from the above code:
1. Version 1.13.1, using `ReLU6` and `Hardtanh`, `inplace` is True or False, `torch.cuda.memory_allocated` returns **40562176**.
2. Version 1.8.0/1.9.0, using `ReLU6` and `Hardtanh`, `inplace=True`, `torch.cuda.memory_allocated` returns **40431616**; and `inplace=False` returns **21818368**.
In addition, we also found that `Hardsigmoid` and `Hardswish` have similar interesting behavior.
3. Version 1.8.0/1.9.0/1.10.0/1.11.0/1.12.0/1.13.0/1.13.1, using `Hardsigmoid`and `Hardswish`, `inplace` is True or False, `torch.cuda.memory_allocated` returns **40431616**.
But we are not yet sure whether this is an expected behavior.
Considering that the implementations of these two functions can also optionally operate in-place, they may have bugs similar to those of `ReLU6` and `Hardtanh`.
### Expected behavior
`inplace=True` should provide a way to save memory overhead (by selectively doing operations in-place, like what `ReLU6` and `Hardtanh` do in PyTorch 1.8.0), but the current behavior does not seem to selectively execute in-place and save GPU memory.
I want to confirm whether this is intended behavior or an implementation bug.
### Versions
<details>
<summary>pytorch 1.8.0</summary>
<pre><code>
[pip3] numpy==1.19.2
[pip3] torch==1.8.0
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.9.0
[pip3] torchvision==0.9.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.3.0 py38h54f3939_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch 1.8.0 py3.8_cuda11.1_cudnn8.0.5_0 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.9.0 py38 pytorch
[conda] torchvision 0.9.0 py38_cu111 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.9.0</summary>
<pre><code>
[pip3] numpy==1.20.2
[pip3] torch==1.9.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.10.0
[pip3] torchvision==0.10.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py37h27cfd23_1
[conda] mkl_fft 1.3.0 py37h42c9631_2
[conda] mkl_random 1.2.1 py37ha9443f7_2
[conda] numpy 1.20.2 py37h2d18471_0
[conda] numpy-base 1.20.2 py37hfae3a4d_0
[conda] pytorch 1.9.0 py3.7_cuda11.1_cudnn8.0.5_0 pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.10.0 py37 pytorch
[conda] torchvision 0.10.0 py37_cu111 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.10.0</summary>
<pre><code>
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.11.0
[pip3] torchvision==0.11.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.3.0 h06a4308_520
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.2 py37h20f2e39_0
[conda] numpy-base 1.21.2 py37h79a1101_0
[conda] pytorch 1.10.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.11.0 py37 pytorch
[conda] torchvision 0.11.0 py37_cu113 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.11.0</summary>
<pre><code>
[pip3] numpy==1.21.6
[pip3] pytorch-lightning==1.6.3
[pip3] torch==1.11.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.11.0
[pip3] torchmetrics==0.9.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.19.5 pypi_0 pypi
[conda] numpy-base 1.21.5 py38hf524024_1
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-lightning 1.6.3 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.10.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py38_cu113 pytorch
[conda] torchmetrics 0.9.0 pypi_0 pypi
[conda] torchvision 0.12.0 py38_cu113 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.12.0</summary>
<pre><code>
[pip3] numpy==1.21.5
[pip3] torch==1.12.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37he7a7128_2
[conda] numpy-base 1.21.5 py37hf524024_2
[conda] pytorch 1.12.0 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.13.0 py37 pytorch
[conda] torchvision 0.13.0 py37_cu113 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.13.0</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.0
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtext 0.14.0 py39 pytorch
[conda] torchvision 0.14.0 py39_cu116 pytorch</code></pre>
</details>
<details>
<summary>pytorch 1.13.1</summary>
<pre><code>
[pip3] numpy==1.22.3
[pip3] torch==1.13.1
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.13.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.14.1 py310 pytorch
[conda] torchvision 0.14.1 py310_cu116 pytorch</code></pre>
</details>
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @saketh-are
| 3 |
3,381 | 95,424 |
Dynamo or FakeTensor bug: reshape(): argument 'shape' (position 1) must be tuple of ints, but found element of type FakeTensor at pos 0
|
triaged, oncall: pt2, module: fakeTensor, module: dynamo
|
### 🐛 Describe the bug
Repo
```
import torch
import torch._dynamo
import numpy as np
class MyModule(torch.nn.Linear):
def __init__(self):
super().__init__(np.array([4, 4, 1]).prod(), np.array([4, 4, 1]).prod())
def forward(self, x):
# return x.reshape(1, self.in_features). # This line passed
return x.reshape(self.in_features)
x = torch.rand([4, 4])
model = MyModule()
print(model(x))
opt_model = torch._dynamo.optimize("eager")(model)
print(opt_model(x))
```
I think this is because we wrap `self.in_features` as an `UnspecializedPythonVariable`, and it becomes a `FakeTensor`. We then try to dispatch to a `reshape` overload with a `FakeTensor` argument and fail. But the weird thing is, it passes if I switch to the commented line.
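A possible user-side workaround, assuming the numpy scalar really is what triggers the `UnspecializedPythonVariable` path, is to cast the size to a plain Python int before handing it to the module:
```python
import numpy as np
import torch
import torch._dynamo

class MyModuleFixed(torch.nn.Linear):
    def __init__(self):
        n = int(np.array([4, 4, 1]).prod())  # plain int instead of np.int64
        super().__init__(n, n)

    def forward(self, x):
        return x.reshape(self.in_features)  # in_features is now a Python int

x = torch.rand([4, 4])
opt_model = torch._dynamo.optimize("eager")(MyModuleFixed())
print(opt_model(x))
```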
Error
```
Traceback (most recent call last):
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1196, in run_node
return getattr(args[0], node.target)(*args[1:], **kwargs)
TypeError: reshape(): argument 'shape' (position 1) must be tuple of ints, but found element of type FakeTensor at pos 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1152, in get_fake_value
return wrap_fake_exception(
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 808, in wrap_fake_exception
return fn()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1153, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1206, in run_node
raise RuntimeError(
RuntimeError: Failed running call_method reshape(*(FakeTensor(FakeTensor(..., device='meta', size=(4, 4)), cpu), FakeTensor(FakeTensor(..., device='meta', size=(), dtype=torch.int64), cpu)), **{}):
reshape(): argument 'shape' (position 1) must be tuple of ints, but found element of type FakeTensor at pos 0
(scroll up for backtrace)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/scratch/ybliang/work/repos/pytorch/debug/debug1.py", line 51, in <module>
print(opt_model(x))
File "/scratch/ybliang/work/repos/pytorch/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 1862, in run
super().run()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 1014, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/misc.py", line 744, in call_function
return self.obj.call_method(tx, self.name, args, kwargs).add_options(self)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/tensor.py", line 424, in call_method
return wrap_fx_proxy(
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/builder.py", line 754, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/builder.py", line 789, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1173, in get_fake_value
raise TorchRuntimeError() from e
torch._dynamo.exc.TorchRuntimeError:
from user code:
File "/scratch/ybliang/work/repos/pytorch/debug/debug1.py", line 43, in forward
return x.reshape(self.in_features)
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
### Versions
N/A
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 8 |
3,382 | 95,423 |
PT2 Computes Multi Device Backward in a Single Thread
|
feature, low priority, triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
In PyTorch eager, when backward is invoked there is a separate thread per device that invokes nodes in the backward graph. However, in PT2 this parallelism is lost: we compile a backward that spans multiple devices into one single compiled function and invoke it in one thread.
### Versions
master
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @soumith
| 5 |
3,383 | 95,412 |
DISABLED test_variant_consistency_jit_linalg_lstsq_cpu_complex64 (__main__.TestJitCPU)
|
triaged, module: flaky-tests, skipped, module: unknown
|
Platforms: win, windows, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_variant_consistency_jit_linalg_lstsq_cpu_complex64&suite=TestJitCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/11557283803).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_variant_consistency_jit_linalg_lstsq_cpu_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_ops_jit.py`
| 10 |
3,384 | 95,408 |
Parallel Associative Scan
|
high priority, triaged, oncall: pt2, module: functorch
|
### 🚀 The feature, motivation and pitch
It would be great to have a general parallel prefix sum (associative scan) operation in PyTorch, something like [associative_scan](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.associative_scan.html) in JAX or [scan_associative](https://www.tensorflow.org/probability/api_docs/python/tfp/math/scan_associative) in TensorFlow Probability. This operation is key for the parallelization of some algorithms in CRFs, [filtering/smoothing in state space models](https://github.com/EEA-sensors/sequential-parallelization-examples/blob/main/python/temporal-parallelization-bayes-smoothers/parallel_kalman_jax.ipynb), etc.
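For context, a sequential reference implementation that accepts an arbitrary associative binary op is only a few lines; the feature request is for the O(log n)-step parallel formulation of the same contract:
```python
import torch

def associative_scan(fn, x, dim=0):
    # Inclusive scan along `dim`: out[i] = fn(out[i-1], x[i]).  Sequential,
    # O(n)-step reference, useful as a correctness baseline for a parallel kernel.
    slices = list(x.unbind(dim))
    out = [slices[0]]
    for s in slices[1:]:
        out.append(fn(out[-1], s))
    return torch.stack(out, dim)

x = torch.randn(8)
assert torch.allclose(associative_scan(torch.add, x), torch.cumsum(x, 0))
```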
### Alternatives
I found [this implementation](https://github.com/lxxue/prefix_sum) but it's only for computing the prefix sum and not for general associative binary operations. It would be great to have native support for arbitrary binary operators.
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @Chillee @samdow @kshitij12345 @janeyx99
| 9 |
3,385 | 95,403 |
test_ddp_apply_optim_in_backward in distributed_test.py fails for gloo backend
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
Command:
`BACKEND=gloo WORLD_SIZE=2 /usr/bin/python distributed/test_distributed_spawn.py -k cuda -k Cuda -k CUDA -k gpu -k nccl -k DistributedDataParallel -k DDP -v TestDistBackendWithSpawn.test_ddp_apply_optim_in_backward `
`BACKEND=gloo WORLD_SIZE=2 /usr/bin/python distributed/test_distributed_spawn.py -k cuda -k Cuda -k CUDA -k gpu -k nccl -k DistributedDataParallel -k DDP -v TestDistBackendWithSpawn.test_ddp_apply_optim_in_backward_grad_as_bucket_view_false`
View mismatch:
Mismatched elements: 57026 / 147456 (38.7%)
Greatest absolute difference: 0.058268263936042786 at index (120, 88, 0, 2) (up to 1e-05 allowed)
Greatest relative difference: 13452.156564975463 at index (101, 97, 1, 1) (up to 1.3e-06 allowed)
Params not equal at iteration 0
### Versions
pytorch commit: bc438af6fed4fe1fd0ed80e6d5f5ea17c3ca30bb
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
3,386 | 95,394 |
Add log highlights to Dr. CI's failed jobs
|
triaged
|
When a job fails, the Dr CI message should call out the specific log lines that show the failure, showing the text that HUD shows today:

(The old Dr CI had a similar feature)
Expected benefits:
- Devs can see the exact failure faster
- Devs are less likely to incorrectly assume that a failure is unrelated based just on the job name
Feature inspired by https://github.com/pytorch/pytorch/pull/95232#issuecomment-1442089616
| 0 |
3,387 | 95,380 |
Investigate/add Windows Arm64 support for cpuinfo
|
module: windows, triaged, module: arm
|
pytorch/cpuinfo arm64 support should be investigated and added if not already present.
- [ ] Support for M1 and M2 needs to be merged to pytorch/cpuinfo
A PR to pytorch/pytorch (updating cpuinfo submodule) will be created after the tasks are completed.
| 3 |
3,388 | 95,374 |
Add oscillating activation functions to PyTorch.
|
module: loss, triaged, needs research
|
### 🚀 The feature, motivation and pitch
Oscillating activation functions enable neurons to learn the XOR function without manual feature engineering. With this kind of activation function, a single neuron can learn to approximate an XOR-like function; otherwise, multiple hyperplanes from multiple neurons are needed. These activation functions also reduce training time and allow problems to be solved with smaller networks. Experimental results indicate that they are computationally cheaper than state-of-the-art activation functions.
There are four oscillating activation functions to be added (a minimal sketch of one is given after the list):
* Growing Cosine Unit (GCU): $f(z) = z\cos(z)$
* Shifted Quadratic Unit (SQU): $f(z) = z^2 + z$
* Decaying Sine Unit (DSU): $f(z) = \frac{\pi}{2}\left(\operatorname{sinc}(z - \pi) - \operatorname{sinc}(z + \pi)\right)$
* Non-Monotonic Cubic Unit (NCU): $f(z) = z - z^3$
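As a rough illustration of how lightweight such a contribution would be, here is a sketch of the GCU as a module; the class name is an assumption for illustration, not an existing PyTorch API, and the other three units would follow the same pattern:
```python
import torch

class GrowingCosineUnit(torch.nn.Module):
    # Elementwise f(z) = z * cos(z).
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return z * torch.cos(z)

x = torch.linspace(-3.0, 3.0, steps=5)
print(GrowingCosineUnit()(x))  # ≈ tensor([ 2.9700, -0.1061,  0.0000,  0.1061, -2.9700])
```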
### Alternatives
_No response_
### Additional context
## **Prior Art**
* Noel, Mathew Mithra, Advait Trivedi, and Praneet Dutta. "Growing cosine unit: A novel oscillatory activation function that can speedup training and reduce parameters in convolutional neural networks." arXiv preprint arXiv:2108.12943 (2021).
* Noel, Matthew Mithra, Shubham Bharadwaj, Venkataraman Muthiah-Nakarajan, Praneet Dutta, and Geraldine Bessie Amali. "Biologically inspired oscillating activation functions can bridge the performance gap between biological and artificial neurons." arXiv preprint arXiv:2111.04020 (2021).
* https://analyticsindiamag.com/how-oscillatory-activation-function-overcomes-problems-with-gradient-descent-and-xor/
* https://twitter.com/martin_gorner/status/1463294997436899332?lang=en
The most interesting part is that a single neuron activated by one of these oscillating units can learn to represent the XOR logical function, something known to be impossible for other artificial neuron designs (which require multiple layers to do so).
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
3,389 | 95,370 |
`argmin` + `view` Trigger Exception in compile mode
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
The following program works fine in eager mode but triggers an exception in compile mode. See the comments in the code for some reproduction details.
```python
import torch
def fn(input):
v = input.argmin(1) # v: (0, )
return v.view([0, 3])
x = torch.rand([0, 3])
# if the shape is changed to [0, 1], torch.compile works fine.
# if we directly pass a tensor with shape [] to fn, torch.compile works fine.
ret_eager = fn(x)
print('==== Eager mode OK! ====')
compiled = torch.compile(fn)
print('==== torchcomp compilation OK! ====')
ret_compiled = compiled(x)
print('==== torchcomp mode OK! ====')
"""
==== Eager mode OK! ====
==== torchcomp compilation OK! ====
[2023-02-23 03:22:03,608] torch._inductor.graph: [ERROR] Error from lowering
Traceback (most recent call last):
File "python3.10/site-packages/torch/_inductor/ir.py", line 1389, in dynamic_reshape_indexer
reindex = cls._dynamic_reshape_indexer(old_size, new_size)
File "python3.10/site-packages/torch/_inductor/ir.py", line 1434, in _dynamic_reshape_indexer
modulus = stack_old.pop()
IndexError: pop from empty list
"""
```
### Versions
<details><summary><b>Environment</b> <i>[Click to expand]</i></summary>
```
PyTorch version: 2.0.0.dev20230220+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.78.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] pytorch-triton==2.0.0+c8bfe3f548
[pip3] torch==2.0.0.dev20230220+cu117
[pip3] torchaudio==2.0.0.dev20230223+cu117
[pip3] torchvision==0.15.0.dev20230222+cu117
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
```
</details>
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 1 |
3,390 | 95,369 |
build failed when strictly following the guidelines
|
module: build, triaged
|
### 🐛 Describe the bug
Fresh install of Ubuntu 20.10 and Anaconda3-2022.10
```
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt
conda activate
export USE_CUDA=0
export USE_NNPACK=0
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop
```
then I get the errors below
### Error logs
```
[3735/6217] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark.cc.o
FAILED: third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark.cc.o
/usr/bin/c++ -DHAVE_POSIX_REGEX -DHAVE_STD_REGEX -DHAVE_STEADY_CLOCK -I/root/anaconda3/include -I/root/pytorch/third_party/benchmark/include -I/root/pytorch/third_party/benchmark/src -isystem /root/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /root/pytorch/cmake/../third_party/googletest/googletest/include -isystem /root/pytorch/third_party/protobuf/src -isystem /root/pytorch/third_party/gemmlowp -isystem /root/pytorch/third_party/neon2sse -isystem /root/pytorch/third_party/XNNPACK/include -D_GLIBCXX_USE_CXX11_ABI=1 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -std=c++11 -Wall -Wextra -Wshadow -Wsuggest-override -pedantic -pedantic-errors -fstrict-aliasing -Wno-deprecated-declarations -Wstrict-aliasing -O3 -DNDEBUG -Werror -Wno-deprecated -MD -MT third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark.cc.o -MF third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark.cc.o.d -o third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark.cc.o -c /root/pytorch/third_party/benchmark/src/benchmark.cc
In file included from /root/pytorch/third_party/benchmark/src/benchmark.cc:15:
/root/anaconda3/include/benchmark/benchmark.h:1011:16: error: ‘virtual void benchmark::internal::FunctionBenchmark::Run(benchmark::State&)’ can be marked override [-Werror=suggest-override]
1011 | virtual void Run(State& st);
| ^~~
/root/anaconda3/include/benchmark/benchmark.h:1073:16: error: ‘virtual void benchmark::Fixture::Run(benchmark::State&)’ can be marked override [-Werror=suggest-override]
1073 | virtual void Run(State& st) {
| ^~~
/root/anaconda3/include/benchmark/benchmark.h:1514:16: error: ‘virtual bool benchmark::ConsoleReporter::ReportContext(const benchmark::BenchmarkReporter::Context&)’ can be marked override [-Werror=suggest-override]
1514 | virtual bool ReportContext(const Context& context);
| ^~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:1515:16: error: ‘virtual void benchmark::ConsoleReporter::ReportRuns(const std::vector<benchmark::BenchmarkReporter::Run>&)’ can be marked override [-Werror=suggest-override]
1515 | virtual void ReportRuns(const std::vector<Run>& reports);
| ^~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:1530:16: error: ‘virtual bool benchmark::JSONReporter::ReportContext(const benchmark::BenchmarkReporter::Context&)’ can be marked override [-Werror=suggest-override]
1530 | virtual bool ReportContext(const Context& context);
| ^~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:1531:16: error: ‘virtual void benchmark::JSONReporter::ReportRuns(const std::vector<benchmark::BenchmarkReporter::Run>&)’ can be marked override [-Werror=suggest-override]
1531 | virtual void ReportRuns(const std::vector<Run>& reports);
| ^~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:1532:16: error: ‘virtual void benchmark::JSONReporter::Finalize()’ can be marked override [-Werror=suggest-override]
1532 | virtual void Finalize();
| ^~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:1545:16: error: ‘virtual bool benchmark::CSVReporter::ReportContext(const benchmark::BenchmarkReporter::Context&)’ can be marked override [-Werror=suggest-override]
1545 | virtual bool ReportContext(const Context& context);
| ^~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:1546:16: error: ‘virtual void benchmark::CSVReporter::ReportRuns(const std::vector<benchmark::BenchmarkReporter::Run>&)’ can be marked override [-Werror=suggest-override]
1546 | virtual void ReportRuns(const std::vector<Run>& reports);
| ^~~~~~~~~~
In file included from /root/pytorch/third_party/benchmark/src/benchmark.cc:17:
/root/pytorch/third_party/benchmark/src/benchmark_api_internal.h:46:23: error: ‘benchmark::internal::PerfCountersMeasurement’ has not been declared
46 | internal::PerfCountersMeasurement* perf_counters_measurement) const;
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from /root/pytorch/third_party/benchmark/src/benchmark.cc:18:
/root/pytorch/third_party/benchmark/src/benchmark_runner.h:49:38: error: ‘benchmark::BenchmarkReporter::PerFamilyRunReports’ has not been declared
49 | BenchmarkReporter::PerFamilyRunReports* reports_for_family);
| ^~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark_runner.h:61:22: error: ‘PerFamilyRunReports’ in ‘class benchmark::BenchmarkReporter’ does not name a type
61 | BenchmarkReporter::PerFamilyRunReports* GetReportsForFamily() const {
| ^~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark_runner.h:69:22: error: ‘PerFamilyRunReports’ in ‘class benchmark::BenchmarkReporter’ does not name a type
69 | BenchmarkReporter::PerFamilyRunReports* reports_for_family;
| ^~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:136:1: error: no declaration matches ‘benchmark::State::State(benchmark::IterationCount, const std::vector<long int>&, int, int, benchmark::internal::ThreadTimer*, benchmark::internal::ThreadManager*, benchmark::internal::PerfCountersMeasurement*)’
136 | State::State(IterationCount max_iters, const std::vector<int64_t>& ranges,
| ^~~~~
/root/anaconda3/include/benchmark/benchmark.h:465:7: note: candidates are: ‘benchmark::State::State(benchmark::State&&)’
465 | class State {
| ^~~~~
/root/anaconda3/include/benchmark/benchmark.h:465:7: note: ‘benchmark::State::State(const benchmark::State&)’
/root/anaconda3/include/benchmark/benchmark.h:677:3: note: ‘benchmark::State::State(benchmark::IterationCount, const std::vector<long int>&, int, int, benchmark::internal::ThreadTimer*, benchmark::internal::ThreadManager*)’
677 | State(IterationCount max_iters, const std::vector<int64_t>& ranges,
| ^~~~~
/root/anaconda3/include/benchmark/benchmark.h:465:7: note: ‘class benchmark::State’ defined here
465 | class State {
| ^~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc: In member function ‘void benchmark::State::PauseTiming()’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:186:7: error: ‘perf_counters_measurement_’ was not declared in this scope
186 | if (perf_counters_measurement_) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc: In member function ‘void benchmark::State::ResumeTiming()’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:200:7: error: ‘perf_counters_measurement_’ was not declared in this scope
200 | if (perf_counters_measurement_) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc: In function ‘void benchmark::internal::{anonymous}::RunBenchmarks(const std::vector<benchmark::internal::BenchmarkInstance>&, benchmark::BenchmarkReporter*, benchmark::BenchmarkReporter*)’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:306:53: error: ‘PerFamilyRunReports’ is not a member of ‘benchmark::BenchmarkReporter’
306 | std::map<int /*family_index*/, BenchmarkReporter::PerFamilyRunReports>
| ^~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:306:53: error: ‘PerFamilyRunReports’ is not a member of ‘benchmark::BenchmarkReporter’
/root/pytorch/third_party/benchmark/src/benchmark.cc:306:72: error: template argument 2 is invalid
306 | std::map<int /*family_index*/, BenchmarkReporter::PerFamilyRunReports>
| ^
/root/pytorch/third_party/benchmark/src/benchmark.cc:306:72: error: template argument 4 is invalid
/root/pytorch/third_party/benchmark/src/benchmark.cc:319:26: error: ‘PerFamilyRunReports’ is not a member of ‘benchmark::BenchmarkReporter’
319 | BenchmarkReporter::PerFamilyRunReports* reports_for_family = nullptr;
| ^~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:319:47: error: ‘reports_for_family’ was not declared in this scope
319 | BenchmarkReporter::PerFamilyRunReports* reports_for_family = nullptr;
| ^~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:321:49: error: invalid types ‘int[int]’ for array subscript
321 | reports_for_family = &per_family_reports[benchmark.family_index()];
| ^
/root/pytorch/third_party/benchmark/src/benchmark.cc:357:51: error: ‘class benchmark::internal::BenchmarkRunner’ has no member named ‘GetReportsForFamily’
357 | if (const auto* reports_for_family = runner.GetReportsForFamily()) {
| ^~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:364:30: error: request for member ‘erase’ in ‘per_family_reports’, which is of non-class type ‘int’
364 | per_family_reports.erase(
| ^~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc: At global scope:
/root/pytorch/third_party/benchmark/src/benchmark.cc:380:1: error: ‘BENCHMARK_DISABLE_DEPRECATED_WARNING’ does not name a type
380 | BENCHMARK_DISABLE_DEPRECATED_WARNING
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:397:1: error: ‘BENCHMARK_RESTORE_DEPRECATED_WARNING’ does not name a type
397 | BENCHMARK_RESTORE_DEPRECATED_WARNING
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc: In function ‘size_t benchmark::RunSpecifiedBenchmarks()’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:432:32: error: no matching function for call to ‘RunSpecifiedBenchmarks(std::nullptr_t, std::nullptr_t, std::string&)’
432 | return RunSpecifiedBenchmarks(nullptr, nullptr, FLAGS_benchmark_filter);
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks()’
431 | size_t RunSpecifiedBenchmarks() {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate expects 0 arguments, 3 provided
/root/anaconda3/include/benchmark/benchmark.h:278:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*)’
278 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter);
| ^~~~~~~~~~~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:278:8: note: candidate expects 1 argument, 3 provided
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, BenchmarkReporter*)’
279 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter,
| ^~~~~~~~~~~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate expects 2 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc: In function ‘size_t benchmark::RunSpecifiedBenchmarks(std::string)’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:436:32: error: no matching function for call to ‘RunSpecifiedBenchmarks(std::nullptr_t, std::nullptr_t, std::remove_reference<std::__cxx11::basic_string<char>&>::type)’
436 | return RunSpecifiedBenchmarks(nullptr, nullptr, std::move(spec));
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks()’
431 | size_t RunSpecifiedBenchmarks() {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate expects 0 arguments, 3 provided
/root/anaconda3/include/benchmark/benchmark.h:278:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*)’
278 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter);
| ^~~~~~~~~~~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:278:8: note: candidate expects 1 argument, 3 provided
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, BenchmarkReporter*)’
279 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter,
| ^~~~~~~~~~~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate expects 2 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(std::string)’
435 | size_t RunSpecifiedBenchmarks(std::string spec) {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate expects 1 argument, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc: In function ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*)’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:440:32: error: no matching function for call to ‘RunSpecifiedBenchmarks(benchmark::BenchmarkReporter*&, std::nullptr_t, std::string&)’
440 | return RunSpecifiedBenchmarks(display_reporter, nullptr,
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
441 | FLAGS_benchmark_filter);
| ~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks()’
431 | size_t RunSpecifiedBenchmarks() {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate expects 0 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:439:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*)’
439 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter) {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:439:8: note: candidate expects 1 argument, 3 provided
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, BenchmarkReporter*)’
279 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter,
| ^~~~~~~~~~~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate expects 2 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(std::string)’
435 | size_t RunSpecifiedBenchmarks(std::string spec) {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate expects 1 argument, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc: In function ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, std::string)’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:446:32: error: no matching function for call to ‘RunSpecifiedBenchmarks(benchmark::BenchmarkReporter*&, std::nullptr_t, std::remove_reference<std::__cxx11::basic_string<char>&>::type)’
446 | return RunSpecifiedBenchmarks(display_reporter, nullptr, std::move(spec));
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks()’
431 | size_t RunSpecifiedBenchmarks() {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate expects 0 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:439:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*)’
439 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter) {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:439:8: note: candidate expects 1 argument, 3 provided
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, BenchmarkReporter*)’
279 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter,
| ^~~~~~~~~~~~~~~~~~~~~~
/root/anaconda3/include/benchmark/benchmark.h:279:8: note: candidate expects 2 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(std::string)’
435 | size_t RunSpecifiedBenchmarks(std::string spec) {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate expects 1 argument, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:444:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, std::string)’
444 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter,
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:444:8: note: candidate expects 2 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc: In function ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, BenchmarkReporter*)’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:451:32: error: no matching function for call to ‘RunSpecifiedBenchmarks(benchmark::BenchmarkReporter*&, benchmark::BenchmarkReporter*&, std::string&)’
451 | return RunSpecifiedBenchmarks(display_reporter, file_reporter,
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
452 | FLAGS_benchmark_filter);
| ~~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks()’
431 | size_t RunSpecifiedBenchmarks() {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:431:8: note: candidate expects 0 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:439:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*)’
439 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter) {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:439:8: note: candidate expects 1 argument, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:449:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, BenchmarkReporter*)’
449 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter,
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:449:8: note: candidate expects 2 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(std::string)’
435 | size_t RunSpecifiedBenchmarks(std::string spec) {
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:435:8: note: candidate expects 1 argument, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc:444:8: note: candidate: ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, std::string)’
444 | size_t RunSpecifiedBenchmarks(BenchmarkReporter* display_reporter,
| ^~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:444:8: note: candidate expects 2 arguments, 3 provided
/root/pytorch/third_party/benchmark/src/benchmark.cc: In function ‘size_t benchmark::RunSpecifiedBenchmarks(BenchmarkReporter*, BenchmarkReporter*, std::string)’:
/root/pytorch/third_party/benchmark/src/benchmark.cc:466:42: error: ‘CreateReporter’ is not a member of ‘benchmark::internal’
466 | default_display_reporter = internal::CreateReporter(
| ^~~~~~~~~~~~~~
/root/pytorch/third_party/benchmark/src/benchmark.cc:487:41: error: ‘CreateReporter’ is not a member of ‘benchmark::internal’
487 | default_file_reporter = internal::CreateReporter(
| ^~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
[3738/6217] Building CXX object third_party/googletest/googletest/CMakeFiles/gtest.dir/src/gtest-all.cc.o
ninja: build stopped: subcommand failed.
```
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.10 (x86_64)
GCC version: (Ubuntu 12.2.0-3ubuntu1) 12.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.36
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-21-generic-x86_64-with-glibc2.36
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
BIOS Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz
BIOS Model name: Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz CPU @ 1.7GHz
BIOS CPU family: 2
CPU family: 6
Model: 45
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 16
Stepping: 7
BogoMIPS: 3591.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx hypervisor lahf_lm pti ssbd ibrs ibpb stibp tsc_adjust arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 4 MiB (16 instances)
L3 cache: 320 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.4.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39h6c91a56_3
[conda] numpy-base 1.21.5 py39ha15fc14_3
[conda] numpydoc 1.4.0 py39h06a4308_0
cc @malfet @seemethere @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 6 |
3,391 | 95,337 |
Changing behavior of module.to() to better support mixed real- and complex-valued parameters
|
module: nn, triaged, module: complex, needs design
|
### 🚀 The feature, motivation and pitch
Motivation
--------------
It is incredibly common for modules which use complex-valued parameters to also contain real-valued parameters. This causes problems wherever module.to() is called.
Consider the example below:
```python
real = torch.ones([10,10], dtype=torch.float32)
# returns torch.complex64
print(real.to(dtype=torch.complex64).dtype)
complex = torch.ones([10,10], dtype=torch.complex64)
# returns torch.float32 and raises UserWarning for casting complex to real
print(complex.to(dtype=torch.float32).dtype)
```
This behavior seems obviously correct, but a problem appears at the module level:
```python
class MixedModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.complex_param = torch.nn.Parameter(torch.ones([10,10], dtype=torch.complex64))
self.real_param = torch.nn.Parameter(torch.ones([10,10], dtype=torch.float32))
mm = MixedModule()
mm.to(dtype=torch.float64)
# returns torch.float64
print(mm.real_param.dtype)
# returns torch.float64
print(mm.complex_param.dtype)
```
This makes it a dangerous game to use `module.to()` on any module which has mixed real- and complex-valued parameters, for example to ensure that all parameters loaded from some external data source are stored at the proper precision before the module is run.
This issue also exists when modules have mixed integer and floating-point variables, which is why [module.to](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.to) only accepts floating-point dtypes, and will not update the dtype of any integer variables.
Proposed Feature
-----------------------
I propose adding a new kwarg, `precision`, to both `Tensor.to` and `Module.to`, which updates the precision but does not affect the real-ness or complex-ness of the tensor(s) being updated. Here is a brief proposal for its behavior (a rough workaround sketch follows the list):
1) It accepts as input a `torch.dtype`, whose precision it will attempt to match, e.g.:
```python
real = torch.ones([10,10], dtype=torch.float32)
# returns torch.float32
print(real.to(precision=torch.complex64).dtype)
# returns torch.float64
print(real.to(precision=torch.float64).dtype)
# returns torch.float64
print(real.to(precision=torch.complex128).dtype)
```
2) If both a dtype and precision are input, it chooses dtype over precision and raises a UserWarning
3) It raises a TypeError when asked to convert a complex tensor to a non-supported real dtype, e.g.:
```python
complex = torch.ones([10,10], dtype=torch.complex64)
# raises TypeError
complex.to(precision=torch.float16)
```
4) In all other respects, it acts just like the dtype argument. For example, in `Tensor.to`, it would also accept integer dtypes and happily operate on integer dtypes. In `Module.to`, it would treat integers just like they are treated by the `dtype` argument in `Module.to`. The value of this choice is that it enables other code which wants the "precision"-style behavior to make one change everywhere.
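For concreteness, here is a rough workaround sketch of the proposed behavior, reusing the `MixedModule` defined above. The helper name is hypothetical, only the single/double precision pair is handled, and it relies on the private `Module._apply` hook, so it is illustrative rather than a proposed implementation:
```python
import torch

def to_precision(module: torch.nn.Module, precision: torch.dtype) -> torch.nn.Module:
    # Map the requested precision onto matching real and complex dtypes.
    double = precision in (torch.float64, torch.complex128)
    real_dtype = torch.float64 if double else torch.float32
    complex_dtype = torch.complex128 if double else torch.complex64

    def convert(t):
        if t.is_complex():
            return t.to(complex_dtype)
        if t.is_floating_point():
            return t.to(real_dtype)
        return t  # integer/bool tensors are left untouched

    return module._apply(convert)

mm = MixedModule()
to_precision(mm, torch.float64)
print(mm.real_param.dtype)     # torch.float64
print(mm.complex_param.dtype)  # torch.complex128
```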
Notes
--------
It is possible that other code within PyTorch which uses `Module.to` to update dtypes would want its default behavior changed to use the `precision` kwarg, in which case this feature addition would require making backwards-incompatible changes to those functions.
### Alternatives
Alternative 1
----------------
Instead of introducing a `precision` kwarg to `Tensor.to` and `Module.to`, we could introduce a boolean `preserve_realness` kwarg which would switch the `dtype` kwarg to have the behaviour outlined above. Is there a better name for this kwarg? In this case, ideally the default setting for `Tensor.to` would be `False` but the default setting for `Module.to` would be `True`.
The advantage is that other functions which use `Module.to` do not need to be explicitly updated, but the disadvantage is that `Module.to` itself becomes explicitly backwards-incompatible.
Alternative 2
----------------
Introduce two new functions, `Tensor.change_precision` and `Module.change_precision`. I think this is much less clean, because pytorch already chooses to combine device changes and dtype changes into one function - why introduce a new function just for this one weird thing?
Alternative 3
----------------
The exact behavior of the `precision` kwarg could also be different in a few ways:
1) It could accept integer (e.g. 32, 64) or string (e.g. 'float', 'double') input, or multiple different input styles
2) It could only accept `t.float32`, `t.float64`, `t.complex64`, and `t.complex128`, rather than replicating the full functionality of the `dtype` arg.
3) It could raise an error instead of a warning if both a dtype and precision argument are input
### Additional context
Let's all remember that complex numbers and real numbers are friends!!!
cc @albanD @mruberry @jbschlosser @walterddr @saketh-are @ezyang @anjali411 @dylanbespalko @Lezcano @nikitaved
| 15 |
3,392 | 95,320 |
Circular padding error for 3D arrays
|
triaged, module: padding
|
### 🐛 Describe the bug
An error is raised when trying to do circular padding for a 3D array
```
import torch
import torch.nn.functional as F
F.pad(torch.arange(9).reshape(1, 3,3), (0,1,1,0), mode='circular')
```
Error:
```
RuntimeError: Invalid padding size, expected 2 but got 4
```
A current workaround is to get a view of the array with an extra dimension:
```
import torch
import torch.nn.functional as F
F.pad(torch.arange(9).reshape(1, 1, 3,3), (0,1,1,0), mode='circular')
```
which runs fine.
### Versions
PyTorch version: 1.13.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:43:00) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=1805
DeviceID=CPU0
Family=198
L2CacheSize=5120
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=1805
Name=11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] torch==1.13.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchvision==0.14.0
[pip3] opencv==4.6.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py38h2bbff1b_0
[conda] mkl_fft 1.3.1 py38h277e83a_0
[conda] mkl_random 1.2.2 py38hf11a4ad_0
[conda] numpy 1.23.0 pypi_0 pypi
[conda] numpy-base 1.23.5 py38h4da318b_0
[conda] torch 1.13.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchvision 0.14.0 pypi_0 pypi
| 1 |
3,393 | 95,309 |
`torch.distributed.Store` triggers INTERNAL ASSERT FAILED when setting
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
`torch.distributed.Store` triggers INTERNAL ASSERT FAILED when setting
```py
import torch
torch.manual_seed(420)
store = torch.distributed.Store()
key = "key"
value = "value"
for i in range(100):
store.set(key+str(i), value+str(i))
# RuntimeError: fn INTERNAL ASSERT FAILED at
# "/opt/conda/conda-bld/pytorch_1672906354936/work/torch/csrc/distributed/c10d/init.cpp":138,
# please report a bug to PyTorch.
```
### Versions
```
PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly
[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,394 | 95,304 |
`torch.cartesian_prod` returns inconsistent dimensions with only one input
|
triaged, module: linear algebra
|
### 🐛 Describe the bug
When calling `torch.cartesian_prod` with a single tensor, the result is a 1D tensor. Conversely, when calling it with multiple tensors, the result is 2D. I would expect that `torch.cartesian_prod` always returns a 2D tensor for consistency.
Further, this is what `itertools.product` would do, which this function is modelled on.
## MWE
```python
import itertools
import torch
torch.cartesian_prod(
torch.tensor([1,2,3])
)
# Output: tensor([1,2,3])
# Expected: tensor([[1], [2], [3]])
list(itertools.product([1, 2, 3]))
# Output: [(1,), (2,), (3,)]
```
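A small workaround wrapper that restores the expected 2D shape in the single-input case might look like this (the wrapper name is illustrative, not an existing API):
```python
import torch

def cartesian_prod_2d(*tensors):
    # torch.cartesian_prod returns a 1D tensor for a single input;
    # unsqueeze restores the (N, 1) shape that itertools.product implies.
    out = torch.cartesian_prod(*tensors)
    return out.unsqueeze(-1) if len(tensors) == 1 else out

print(cartesian_prod_2d(torch.tensor([1, 2, 3])))
# tensor([[1], [2], [3]])
print(cartesian_prod_2d(torch.tensor([1, 2]), torch.tensor([4, 5])))
# tensor([[1, 4], [1, 5], [2, 4], [2, 5]])
```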
### Versions
```
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Versions of relevant libraries:
[pip3] botorch==0.8.1
[pip3] gpytorch==1.9.1
[pip3] numpy==1.24.2
[pip3] torch==1.13.1+cpu
[conda] botorch 0.8.1 pypi_0 pypi
[conda] gpytorch 1.9.1 pypi_0 pypi
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 1.13.1+cpu pypi_0 pypi
```
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 0 |
3,395 | 95,290 |
Continuous dropout layer
|
module: nn, triaged, enhancement
|
### 🚀 The feature, motivation and pitch
Hello!
I am working on an information theory application for neural networks [(here)](https://openreview.net/forum?id=bQB6qozaBw).
My research shows that continuous dropout can be applied for valid mutual information measurements. While continuous dropout was already considered in the original paper introducing dropout, its implementation is not unified and has not been added to the library. From my perspective, it would be a large benefit to add a class for Gaussian dropout, for example, or perhaps a dropout with noise sampled from any custom distribution.
Since I implemented it for my project, I am willing to create this contribution if it is considered interesting.
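As a minimal sketch of what such a contribution could look like, here is a Gaussian dropout module; the class name and the p / (1 - p) variance convention from the original dropout paper are assumptions for illustration, not an existing PyTorch API:
```python
import torch

class GaussianDropout(torch.nn.Module):
    # Multiplies activations by Gaussian noise with mean 1 and
    # variance p / (1 - p) during training; identity at eval time.
    def __init__(self, p: float = 0.5):
        super().__init__()
        if not 0.0 <= p < 1.0:
            raise ValueError("p must be in [0, 1)")
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return x
        std = (self.p / (1.0 - self.p)) ** 0.5
        return x * (torch.randn_like(x) * std + 1.0)
```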
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @saketh-are
| 3 |
3,396 | 95,276 |
tabulate is used by `torch.fx.graph_module.GraphModule.print_tabular` but is not installed when installing pytorch
|
triaged, module: fx
|
### 🐛 Describe the bug
The sample code from https://pytorch.org/docs/master/dynamo/custom-backends.html#debugging-backend fails on a clean install of pytorch 2.0 rc1 with this command:
```console
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu117
```
The command was run in a clean virtual environment. The exception stack trace is:
```console
my_compiler() called with FX graph:
`print_tabular` relies on the library `tabulate`, which could not be found on this machine. Run `pip install tabulate` to install the library.
Traceback (most recent call last):
File "/envs/pt2/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/envs/pt2/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "pt2.py", line 5, in my_compiler
gm.graph.print_tabular()
File "/envs/pt2/lib/python3.10/site-packages/torch/fx/graph.py", line 1302, in print_tabular
print(tabulate(node_specs,
UnboundLocalError: local variable 'tabulate' referenced before assignment
```
Below is the code:
```python
from typing import List
import torch
def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
print("my_compiler() called with FX graph:")
gm.graph.print_tabular()
return gm.forward # return a python callable
@torch.compile(backend=my_compiler)
def fn(x, y):
a = torch.cos(x)
b = torch.sin(y)
return a + b
fn(torch.randn(10), torch.randn(10))
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 9.0.0 (tags/RELEASE_900/final)
CMake version: version 3.25.0
Libc version: glibc-2.27
Python version: 3.10.6 (main, Aug 30 2022, 16:00:07) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.87-051587-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
Nvidia driver version: 525.78.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7502 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
BogoMIPS: 5000.28
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+c8bfe3f548
[pip3] torch==2.0.0+cu117
[pip3] torchaudio==2.0.0+cu117
[pip3] torchvision==0.15.0+cu117
[conda] Could not collect
cc @ezyang @SherlockNoMad @soumith @EikanWang @jgong5 @wenzhe-nrv @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,397 | 95,262 |
`Tensor.copy_` + `moveaxis` Trigger Exception in Compile Mode
|
triaged, oncall: pt2, module: aotdispatch, module: inductor
|
### 🐛 Describe the bug
The following program works fine in eager mode but triggers an exception in compile mode:
```python
import torch
def fn(x, y):
_ = y.copy_(x)
return torch.moveaxis(y, source=0, destination=1)
x = torch.rand([2, 3], dtype=torch.float16)
y = torch.rand([2, 3], dtype=torch.float32) # works fine if x&y has the same type
ret_eager = fn(x, y)
print('==== Eager mode OK! ====')
compiled = torch.compile(fn)
print('==== torchcomp compilation OK! ====')
ret_compiled = compiled(x, y)
print('==== torchcomp mode OK! ====')
"""
==== Eager mode OK! ====
==== torchcomp compilation OK! ====
Traceback (most recent call last):
File "repro.py", line 15, in <module>
ret_compiled = compiled(x, y)
File "python3.10/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "repro.py", line 3, in fn
def fn(x, y):
File "python3.10/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2812, in forward
return compiled_fn(full_args)
File "python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1222, in g
return f(*args)
File "python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1895, in runtime_wrapper
all_outs = call_func_with_args(
File "python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1247, in call_func_with_args
out = normalize_as_list(f(args))
File "/tmp/torchinductor/75/c75w2gc5qdvwzlouzm2j2qf2kow35v3nlkpbq2tiunw6kht3swv7.py", line 47, in call
return (as_strided(buf0, (2, 2), (1, 2)), )
NameError: name 'buf0' is not defined
"""
```
It's worth noting that the error occurs only when the two input tensors have different `dtype`. Also, the two operators are both necessary for triggering this issue.
### Versions
<details><summary><b>Environment</b> <i>[Click to expand]</i></summary>
```
PyTorch version: 2.0.0.dev20230220+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.78.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] pytorch-triton==2.0.0+c8bfe3f548
[pip3] torch==2.0.0.dev20230220+cu117
[pip3] torchaudio==2.0.0.dev20230222+cu117
[pip3] torchvision==0.15.0.dev20230221+cu117
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
```
</details>
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 1 |
3,398 | 95,244 |
Make this ridiculously long error message more user friendly
|
triaged, module: infra
|
Via @janeyx99: "highlighting the longest merge rejection message i've seen https://github.com/pytorch/pytorch/pull/92625#issuecomment-1439109986"
It's about 3 pages long, listing the full GQL query that failed:

| 1 |
3,399 | 95,238 |
Pytorch profiler stack exporting does not work
|
oncall: profiler
|
### 🐛 Description
While trying to profile CUDA kernel runtime performance by exporting stacks directly, following the [public documentation](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html#examining-stack-traces), one easily ends up with an empty stacks file, which is unexpected. To be more specific, the following code
```
with torch.profiler.profile(activities=[torch.profiler.ProfilerActivity.CUDA], with_stack=True) as prof:
self._training_step(...)
prof.export_stacks("stacks.txt", "self_cuda_time_total")
```
saves nothing
```
cat stacks.txt | wc -l
0
```
The `_training_step` above is an arbitrary model's regular backpropagation step, like:
```
...
loss = model(**X, **y).loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
...
```
Notes:
- Nonetheless, the profiler does appear to be working; e.g. calling `print(prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=10))` results in:
<img width="1201" alt="vanilla_attn_bf_16_100fwdbkwd_steps" src="https://user-images.githubusercontent.com/14129555/220458397-51e62a9b-9881-4c6a-8d21-ba58fc9a2380.png">
- [Similar issue reported just recently.](https://discuss.pytorch.org/t/pytorch-profiler-not-exporting-any-stack-information/167477)
### Versions
- Torch version: 1.13.1
- GPU: A100
- CUDA: 11.7 / Driver Version: 515.65.01
- OS: Ubuntu XX.YY
cc @robieta @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
| 0 |
3,400 | 95,237 |
test_foreach failing cuda memory leak check
|
module: cuda, triaged, module: mta
|
### 🐛 Describe the bug
Tests in test_foreach are failing the cuda memory leak check.
Examples:
```
2023-02-21T09:30:46.6234306Z ======================================================================
2023-02-21T09:30:46.6234581Z ERROR [1.728s]: test_binary_op__foreach_pow_is_fastpath_False_cuda_float64 (__main__.TestForeachCUDA)
2023-02-21T09:30:46.6234852Z ----------------------------------------------------------------------
2023-02-21T09:30:46.6234986Z Traceback (most recent call last):
2023-02-21T09:30:46.6235370Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2069, in wrapper
2023-02-21T09:30:46.6235490Z method(*args, **kwargs)
2023-02-21T09:30:46.6235862Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2068, in wrapper
2023-02-21T09:30:46.6235972Z with policy():
2023-02-21T09:30:46.6236346Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1608, in __exit__
2023-02-21T09:30:46.6236471Z raise RuntimeError(msg)
2023-02-21T09:30:46.6236913Z RuntimeError: CUDA driver API confirmed a leak in __main__.TestForeachCUDA.test_binary_op__foreach_pow_is_fastpath_False_cuda_float64! Caching allocator allocated memory was 4407296 and is now reported as 5606912 on device 0. CUDA driver allocated memory was 454033408 and is now 456130560.
2023-02-21T09:30:46.6236948Z
2023-02-21T09:30:46.6237074Z ======================================================================
2023-02-21T09:30:46.6237307Z ERROR [1.419s]: test_binary_op__foreach_pow_is_fastpath_True_cuda_float32 (__main__.TestForeachCUDA)
2023-02-21T09:30:46.6237574Z ----------------------------------------------------------------------
2023-02-21T09:30:46.6237708Z Traceback (most recent call last):
2023-02-21T09:30:46.6238126Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2069, in wrapper
2023-02-21T09:30:46.6238244Z method(*args, **kwargs)
2023-02-21T09:30:46.6238602Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2068, in wrapper
2023-02-21T09:30:46.6238715Z with policy():
2023-02-21T09:30:46.6239069Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1608, in __exit__
2023-02-21T09:30:46.6239194Z raise RuntimeError(msg)
2023-02-21T09:30:46.6239650Z RuntimeError: CUDA driver API confirmed a leak in __main__.TestForeachCUDA.test_binary_op__foreach_pow_is_fastpath_True_cuda_float32! Caching allocator allocated memory was 7473152 and is now reported as 8095232 on device 0. CUDA driver allocated memory was 462422016 and is now 464519168.
2023-02-21T09:30:46.6239672Z
2023-02-21T09:30:46.6239808Z ======================================================================
2023-02-21T09:30:46.6240044Z ERROR [1.346s]: test_binary_op__foreach_pow_is_fastpath_True_cuda_float64 (__main__.TestForeachCUDA)
2023-02-21T09:30:46.6240306Z ----------------------------------------------------------------------
2023-02-21T09:30:46.6240442Z Traceback (most recent call last):
2023-02-21T09:30:46.6240820Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2069, in wrapper
2023-02-21T09:30:46.6240936Z method(*args, **kwargs)
2023-02-21T09:30:46.6241332Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2068, in wrapper
2023-02-21T09:30:46.6241447Z with policy():
2023-02-21T09:30:46.6241819Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1608, in __exit__
2023-02-21T09:30:46.6241941Z raise RuntimeError(msg)
2023-02-21T09:30:46.6242398Z RuntimeError: CUDA driver API confirmed a leak in __main__.TestForeachCUDA.test_binary_op__foreach_pow_is_fastpath_True_cuda_float64! Caching allocator allocated memory was 10760192 and is now reported as 11648512 on device 0. CUDA driver allocated memory was 470810624 and is now 472907776.
2023-02-21T09:30:46.6242418Z
2023-02-21T09:30:46.6242683Z ----------------------------------------------------------------------
2023-02-21T09:30:46.6242845Z Ran 1166 tests in 542.496s
2023-02-21T09:30:46.6242864Z
2023-02-21T09:30:46.6243001Z FAILED (errors=3, expected failures=9)
2023-02-21T09:30:46.6243021Z
2023-02-21T09:30:46.6243145Z Generating XML reports...
2023-02-21T09:30:46.6243511Z Generated XML report: test-reports/python-unittest/test_foreach/TEST-TestForeachCUDA-20230221092140.xml
2023-02-21T09:30:46.6243883Z FINISHED PRINTING LOG FILE of test_foreach (/var/lib/jenkins/workspace/test/test-reports/test_foreach_81jyujhz.log)
2023-02-21T09:30:46.6243903Z
2023-02-21T09:30:46.6244016Z test_foreach failed!
2023-02-21T09:30:46.6244148Z Traceback (most recent call last):
2023-02-21T09:30:46.6244342Z File "/var/lib/jenkins/workspace/test/run_test.py", line 1394, in <module>
2023-02-21T09:30:46.6244439Z main()
2023-02-21T09:30:46.6244627Z File "/var/lib/jenkins/workspace/test/run_test.py", line 1352, in main
2023-02-21T09:30:46.6244746Z raise RuntimeError(
2023-02-21T09:30:46.6244869Z RuntimeError: test_foreach failed!
2023-02-21T09:30:46.6244888Z
```
https://hud.pytorch.org/pytorch/pytorch/commit/1ab112cfab5e9e5b3ec2521f0b4e6b93b6ff90d9
https://github.com/pytorch/pytorch/actions/runs/4230722115/jobs/7348826802
https://github.com/pytorch/pytorch/actions/runs/4230706718/jobs/7348783980
https://github.com/pytorch/pytorch/actions/runs/4230706718/jobs/7348788323
https://github.com/pytorch/pytorch/actions/runs/4230706718/jobs/7348957399
https://github.com/pytorch/pytorch/actions/runs/4230720847/jobs/7348827635
Last known success: 4d753b50451607b3314f827993df7e5527f0c0a7
First failure: 1ab112cfab5e9e5b3ec2521f0b4e6b93b6ff90d9
### Versions
CI
cc @crcrpar @mcarilli @ngimel
| 0 |